The buzz around “vibe coding” is impossible to ignore. Social media influencers promise you can ship entire SaaS applications in minutes with nothing but prompts and vibes. But if you’ve tried it yourself, you know the reality is more nuanced.
Sure, cloning a landing page or scaffolding a basic CRUD app works well enough. But building something genuinely complex—a production-ready SaaS or internal tool with real business logic? That requires more than wishful prompting.
Here’s the good news: there are workflows that genuinely amplify your development capabilities. After weeks of experimenting, building, and refining, I’ve developed a structured approach that delivers real results using Cursor IDE with Google’s Gemini 2.5 Pro and robust UI templates.
This workflow emerged from building a full-featured personal finance application from scratch—twice. The first iteration was pure exploration, testing techniques and adjusting as I went. The second was a complete rebuild using the refined workflow, recorded in a comprehensive three-hour tutorial that documents every step from initial concept to deployment.
What follows is an honest breakdown of the approach that actually works.
Step 1: Build on Solid Ground
Modern full-stack applications involve countless moving pieces. Expecting an LLM to orchestrate everything from scratch is setting yourself up for frustration.
The smarter play? Give your AI assistant a head start by establishing a robust foundation with proven tools:
Essential Building Blocks:
- UI Component Libraries (Shadcn, Material-UI, etc.)
- Production-ready boilerplate templates
- Full-stack frameworks with batteries included
Component libraries and templates provide the LLM with a known foundation to build upon. This eliminates styling guesswork and ensures consistency as your application scales.
Full-stack frameworks like Wasp (for JavaScript with React, Node.js, and Prisma) or Laravel (for PHP) handle the heavy lifting of stack integration. These opinionated frameworks have already made architectural decisions about which tools work well together. They handle configuration and boilerplate under the hood, allowing the AI to focus purely on business logic.
Consider Wasp’s declarative configuration file. You or the LLM simply define backend operations, and the framework manages server setup automatically. This configuration serves as a central “source of truth” that the LLM can reference to understand your application’s structure as it builds new features.
```wasp
app vibeCodeWasp {
  wasp: { version: "^0.16.3" },
  title: "Vibe Code Workflow",
  auth: {
    userEntity: User,
    methods: {
      email: {},
      google: {},
      github: {},
    },
  },
  client: {
    rootComponent: import Main from "@src/main",
    setupFn: import QuerySetup from "@src/config/querySetup",
  },
}

route LoginRoute { path: "/login", to: Login }
page Login {
  component: import { Login } from "@src/features/auth/login"
}

route EnvelopesRoute { path: "/envelopes", to: EnvelopesPage }
page EnvelopesPage {
  authRequired: true,
  component: import { EnvelopesPage } from "@src/features/envelopes/EnvelopesPage.tsx"
}

query getEnvelopes {
  fn: import { getEnvelopes } from "@src/features/envelopes/operations.ts",
  entities: [Envelope, BudgetProfile, UserBudgetProfile]
}

action createEnvelope {
  fn: import { createEnvelope } from "@src/features/envelopes/operations.ts",
  entities: [Envelope, BudgetProfile, UserBudgetProfile]
}
```
Step 2: Teach Your AI Assistant the Rules
Once your foundation is set, the next crucial step is establishing comprehensive guidelines for your editor and LLM to follow.
Developing effective rules is an iterative process:
- Start building something real
- Identify patterns where the LLM consistently misses your expectations
- Leverage the LLM itself to refine your workflow
Creating Effective Rules
Different IDEs use different naming conventions, but the concept remains consistent. In Cursor, rules have evolved from a single .cursorrules file to a more organized .cursor/rules/ directory supporting multiple files.
Within this rules structure, you can define:
- General coding style preferences
- Project-specific conventions
- Common patterns and operations
- Authentication approaches
The key is providing structured context so the LLM doesn’t have to rely on broad general knowledge. This means explicitly documenting:
- Your current project and template foundation
- Conventions the LLM should follow
- How to handle common issues
You can also create reusable strategy rules. For instance, I often want the LLM to “consider three different approaches, select the best one, and explain your reasoning.” Rather than typing this repeatedly, I created a rule file (7-possible-solutions-thinking.mdc) that I can reference whenever needed.
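As an illustration, a strategy rule of that kind might look something like this. The exact contents of my 7-possible-solutions-thinking.mdc aren't reproduced here, so treat this as a sketch of the format (Cursor's .mdc rule files start with a short frontmatter block):

```markdown
---
description: Explore multiple solutions before implementing
---

When this rule is referenced:

1. Propose three distinct approaches to the problem at hand.
2. Compare their trade-offs: complexity, maintainability, and fit with
   the existing stack and project conventions.
3. Select the best approach and explain the reasoning before writing
   any code.
```

Referencing the rule by name in a prompt is then enough to trigger the whole routine.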
Continuous Improvement Through AI Feedback
Treat your rules as a living document. As you develop, continuously refine them to address recurring issues or project-specific challenges.
Regularly ask the LLM to critique your workflow itself. Pass your rules files, plans, and READMEs into context and request improvements. Use past chat sessions as additional context.
A simple but powerful prompt:
“Review these documents for breadth and clarity. Suggest improvements considering they’re used as context for AI-assisted coding workflows.”
This meta-analysis often reveals redundancies, gaps, or opportunities to make your rules more effective.
Step 3: Define the “What” and the “How” (PRD & Plan)
Your initial prompts for generating the Product Requirements Document (PRD) and actionable implementation plan are critical to success.
The PRD serves as a detailed specification for how your application should look, behave, and be implemented at a high level.
From this PRD, instruct the LLM to generate a step-by-step actionable plan using a modified vertical slice methodology optimized for LLM-assisted development.
Vertical slice implementation is crucial because it instructs the LLM to develop features in full-stack “slices”—from database to UI—with incrementally increasing complexity. You might implement a minimal version of a feature in an early phase, then add sophistication in later iterations.
This embodies a recurring theme throughout this workflow: establish a simple, solid foundation, then systematically add complexity in focused increments.
After generating each document, ask the LLM to self-review for potential improvements based on the project structure and its intended use in assisted coding. Sometimes it identifies valuable enhancements; at minimum, it eliminates redundant information.
Here’s an example prompt for plan generation:
“From this PRD, create an actionable, step-by-step plan using a modified vertical slice implementation approach suitable for LLM-assisted coding. First, consider several different plan styles that would work for this project and implementation approach. Select the best one and explain your reasoning. Remember, we’ll constantly reference this plan to guide implementation, so it should be well-structured, concise, and actionable while providing sufficient guidance for the LLM.”
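For a sense of the shape such a plan takes, the opening phases might look like this. These phases are invented for illustration; the actual plan generated in the tutorial will differ:

```markdown
## Phase 1: Foundation & Auth (vertical slice)
- [ ] Define the User entity in schema.prisma
- [ ] Configure email auth in main.wasp
- [ ] Build minimal Login/Signup pages

## Phase 2: Envelopes (minimal slice)
- [ ] Add Envelope and BudgetProfile entities
- [ ] Declare getEnvelopes query and createEnvelope action in main.wasp
- [ ] Implement operations.ts logic with auth checks
- [ ] Render a basic envelope list page

## Phase 3: Envelopes (enhancements)
- [ ] Edit/delete envelopes, allocation amounts, charts with recharts
```

Each phase is a complete database-to-UI slice, which is exactly what makes it easy to hand to the LLM one chunk at a time.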
Step 4: Building End-to-End with Vertical Slices
The vertical slice approach pairs beautifully with full-stack frameworks because of the heavy lifting they handle for both you and the LLM.
Rather than attempting to define your entire database schema upfront, this approach tackles the simplest complete implementation of each full-stack feature individually, then builds upon it in subsequent phases. An early phase might define only the database models needed for Authentication, along with related server-side operations and UI components like login forms.
In my Wasp project, implementing a phase typically followed this flow:
- Define necessary database entities in schema.prisma for that specific feature
- Define operations in the main.wasp configuration file
- Write server-side operation logic
- Define pages and routes in main.wasp
- Build UI components in src/features or src/components
- Connect everything using Wasp hooks and other library modules (react-router-dom, recharts, tanstack-table)
This approach gave us a massive advantage: building incrementally without drowning in complexity. Once the foundation for these features worked smoothly, we could enhance their sophistication and add sub-features with minimal friction.
Another benefit: when realizing I wanted to add a feature not originally in the plan, I could ask the LLM to review the plan and identify the optimal phase for implementation. Sometimes that meant “right now,” other times it provided excellent recommendations for deferring the feature. We’d update the plan accordingly.
Step 5: Closing the Loop with AI-Assisted Documentation
Documentation typically gets deprioritized. But in an AI-assisted workflow, maintaining records of implementation decisions and current architecture becomes even more critical.
The AI doesn’t inherently “remember” context from three phases ago unless you provide it. So we get the LLM to document for itself.
After completing a significant phase or feature slice defined in our plan, I made it standard practice to have the AI document what we just built. I even created a rule file for this specific task.
The documentation process looked like:
- Gather key files related to the implemented feature (relevant sections of main.wasp, schema.prisma, operations.ts, UI components)
- Provide relevant PRD and Plan sections describing the feature
- Reference the documentation creation rule file
- Have the LLM review its documentation for breadth and clarity
The focus should be on core logic, how different parts connect (Database → Server → Client), and key decisions made, with references to specific implementation files.
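The resulting file can follow a simple template like this. This is a sketch of the structure, not the actual output of my documentation rule:

```markdown
# Feature: Envelopes

## What it does
Users create budget envelopes and allocate funds to them.

## How it connects (Database → Server → Client)
- schema.prisma: Envelope, BudgetProfile entities
- main.wasp: getEnvelopes query, createEnvelope action
- src/features/envelopes/operations.ts: auth checks and Prisma queries
- src/features/envelopes/EnvelopesPage.tsx: list UI via the useQuery hook

## Key decisions
- Envelopes belong to a BudgetProfile rather than directly to a User,
  so budgets can be shared later.
```

A file this short is cheap to generate and surprisingly valuable when fed back into context phases later.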
The AI would generate or update a markdown file in the ai/docs/ directory. This serves two valuable purposes:
For Humans: Clear, readable records for onboarding or future development
For the AI: A growing knowledge base within the project that can be fed back into context in later stages, maintaining consistency and reducing the chance of forgetting previous decisions
This “closing the loop” step transforms documentation from a burden into an elegant mechanism for workflow sustainability.
Conclusion: The Reality Beyond the Hype
Can you “vibe code” a complex SaaS application in hours? Sort of—but it probably won’t be particularly interesting.
What you absolutely can do is leverage AI to significantly enhance your development process, build faster, manage complexity more effectively, and maintain better structure in your full-stack projects.
The vibe coding workflow I developed after weeks of testing distills down to these core principles:
Start Strong: Use solid foundations like full-stack frameworks (Wasp) and UI libraries (Shadcn-admin) to reduce boilerplate and constrain the problem space for AI.
Teach Your AI: Create explicit, detailed rules (.cursor/rules/) to guide AI on project conventions, specific technologies, and common pitfalls. Don’t rely solely on its general knowledge.
Structure the Dialogue: Use shared artifacts like a PRD and step-by-step Plan (developed collaboratively with AI) to align intent and decompose work.
Slice Vertically: Implement features end-to-end in manageable, incremental slices, adding complexity gradually.
Document Continuously: Use AI to help document features as you build them, maintaining project knowledge for both human and AI collaborators.
Iterate and Refine: Treat rules, plans, and the workflow itself as living documents, using AI to help critique and improve the process.
Following this structured approach delivered exceptional results. I could implement features in record time—genuinely building complex applications 20-50x faster than before.
Having an AI companion with vast knowledge that helps refine ideas and test assumptions is transformative.
While you can accomplish a tremendous amount without directly touching code, it still requires you—the developer—to guide, review, and understand the implementation. But it represents a realistic, effective way to collaborate with AI assistants like Gemini 2.5 Pro in Cursor, moving beyond simple prompts to efficiently build full-featured applications.
Want to see this workflow in action from start to finish? Check out the complete tutorial video and template repository referenced throughout this guide.