Agentic AI Prompting: Best Practices for Smarter Vibe Coding
- Ran Isenberg
- Jun 17
- 10 min read

Agentic AI and Vibe Coding are no longer buzzwords — they're reshaping how we build software. We're not just writing code or prompts anymore; we're collaborating with AI agents that plan, reason, and build alongside us in our IDEs.
The developers who learn to guide these agents — by structuring prompts, setting clear goals, and turning outputs into reusable tools — will ship faster, cleaner, and smarter.
You don't want to be left behind.
In this post, I'll share practical tips from my experience vibe coding features all the way to production in my IDE — how to craft effective prompts, refine them, share them across the organization, and what pitfalls to avoid along the way.
Table of Contents
My Six Steps to Vibe Coding Nirvana
Building the Perfect Prompt with a Real World Example
Step 1: Define the Persona
Step 2: Clearly State the Problem
Step 3: Provide Context
Step 4: Ask for a Plan First
Step 5: Your Prompt Isn’t One-Size-Fits-All
Step 6: Optional - Share the Prompt
Vibe Coding Risks & Best Practices
Understand What You Commit
AI Is a Tool, Not a Crutch
Use Chained Prompting
Break the Loop
Keep It Simple
Tests and Automagically Fixing Code
Build Guardrails into Your Platform
Agentic AI & Vibe Coding: The Next Dev Revolution
The last couple of years have been quite spectacular for us developers.
I was skeptical at first. I’ve been writing code for over a decade — I know how to build things. But gradually, I started adapting. First, I used ChatGPT for quick consultations. Then, I let it generate small snippets and copied them into my IDE. Eventually, I embraced GitHub Copilot and its autocomplete suggestions — and suddenly, I felt like a super developer.
Now, we’ve entered the next phase of the revolution: Vibe Coding.
Vibe coding (or vibecoding) is an approach to producing software by using artificial intelligence (AI), where a person describes a problem in a few sentences as a prompt to a large language model (LLM) tuned for coding. The LLM generates software based on the description, shifting the programmer's role from manual coding to guiding, testing, and refining the AI-generated source code. - Wikipedia
Lately, I’ve been doing a lot of Vibe Coding — and to my surprise, it works pretty well.
It’s a new way of building, and while it’s powerful, it’s far from perfect. You must adapt but also remain critical. The key is to guide the agent — not let it guide you.
My Six Steps to Vibe Coding Nirvana
Let's walk through some of my practical tips for getting the most out of agentic AI — and saving serious time in the process.
I'm assuming you're already using tools like Cursor, VS Code with Copilot, Q agent mode, Windsurf, or any other agent-based AI platform.
Before you write a single prompt, make sure you clearly understand the problem you're trying to solve. Gather as much context, requirements, and definitions of done as possible — the more specific, the better. Linking to relevant documentation is also helpful.
To boost efficiency, make sure your agent is connected, via MCP servers, to your internal sources, such as Jira, Confluence, Notion, or GitHub. The tighter the integration, the smarter your agent becomes.
Not sure what MCP is? Check out Anton's blog post — it's a great starting point.
Building the Perfect Prompt with a Real World Example
Let's start with a problem I had at work. We'll then review the prompt generation steps and craft the final prompt.
My JIRA ticket required me to take an existing serverless service with dozens of Lambda functions and ensure it adheres to the platform engineering team's best practices for observability. More specifically - utilizing the correlation-id library we developed at work.
This library parses the incoming Lambda function's event, looks for any correlation/session/request id HTTP headers, and injects their values into all of the logs of the current invocation. Quite handy! When all services adopt this library, we will be able to follow a single customer request across multiple service logs with ease.
The second requirement is that if my service sends any SNS or HTTP request to other services, we pass along those correlation-id headers so those services can log those IDs upon parsing.
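To make these requirements concrete, here's a minimal sketch of what using such a library might look like. The real internal library is different; every name below (the decorator, the helper, the header list) is hypothetical and for illustration only.

```python
# Hypothetical sketch of a correlation-id library; all names are illustrative.
import functools
import logging
import uuid

logger = logging.getLogger("service")
_CORRELATION_HEADERS = ("x-correlation-id", "x-session-id", "x-request-id")
_current_ids: dict = {}


def inject_correlation_ids(handler):
    """Decorator: pull correlation headers from the incoming event so every log line can carry them."""

    @functools.wraps(handler)
    def wrapper(event, context):
        headers = {k.lower(): v for k, v in (event.get("headers") or {}).items()}
        _current_ids.clear()
        for name in _CORRELATION_HEADERS:
            # Fall back to a fresh ID when the caller didn't send one
            _current_ids[name] = headers.get(name, str(uuid.uuid4()))
        logger.info("invocation started, ids=%s", _current_ids)
        return handler(event, context)

    return wrapper


def outgoing_headers() -> dict:
    """Headers to attach to outbound HTTP/SNS calls so downstream services log the same IDs."""
    return dict(_current_ids)
```

In a handler, you decorate the entry function; in the logic layer, you pass something like requests.post(url, headers=outgoing_headers()) or add the IDs as SNS message attributes.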
From a developer's standpoint, this is quite a tedious task that requires going over all the code or crafting regular expressions to find all the relevant places to add the relatively simple code.
This is a classic task for vibe coding. Vibe coding can be used for crafting new features, too, and the prompt engineering methods remain the same. However, the context and task definitions will be different. I'll touch on that in the risks and best practices section.
Let's go over the six steps for crafting our winning prompt. By winning, I mean that by following these principles, you will get better results quickly and with fewer refinements and iterations.
Step 1: Define the Persona
Start by setting the stage. Help the agent understand the overall problem domain and the expertise it should bring to the task.
And in the context of our problem:
You're a senior AWS Serverless Python developer specializing in building production-grade and secure multi-tenant SaaS solutions
Step 2: Clearly State the Problem
Be precise about the task. Include both functional and non-functional requirements.
Give as many details as you can to describe the definition of done; the more detailed and specific, the better. Help the agent understand the scope of the task. Don't make assumptions or expect it to figure things out by itself; it might get it right, but most likely, it won't.
In addition, stay positive - tell it what you want and expect; don't tell it what you don't want.
And in the context of our problem:
You're a senior AWS Serverless Python developer specializing in building production-grade and secure multi-tenant SaaS solutions. Your task is to integrate the platform engineering team's correlation-id library into the Lambda functions in this service. The Lambda handlers' entry functions reside in the 'service/handlers' folder. Make the changes to adopt the library in the handlers and also in the business domain logic ('service/logic' folder) where the code sends either HTTP or SNS messages using the boto3 or requests libraries.
Step 3: Provide Context
Our current prompt assumes the agent will magically figure out by itself what the correlation-id library is and, even worse, how to use it. It's very similar to writing code according to your organization's best practices and standards: the agent draws on examples from the dataset it was trained on, which doesn't mean they match your standards or security requirements (let me save you the effort: they don't).
We need to point the agent in the right direction, whether to the correlation-id library's documentation on GitHub Pages or Confluence, or to your coding-style best practices. Agents work well with direct HTTP pages (if you know where to point them) or with local rules and configuration files placed in your repository. I highly recommend checking out JIT's Vibe Coding open-source repository, which contains many Vibe Coding tips.
MCP servers can also help us here.
We can connect our organization's data sources and expose them to the agents as context with MCP servers. Every agentic AI IDE or tool supports adding MCP servers via configuration files.
For example, Confluence has an official MCP server to facilitate searching. This can help the agent find best practices, style guides, design documents, and other relevant resources. You are limited by your imagination here.
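For illustration, a Cursor-style MCP configuration might look like the snippet below. I'm assuming the mcp-remote bridge and Atlassian's remote MCP endpoint here; check your IDE's and the server's documentation for the exact file location, shape, and URL.

```json
{
  "mcpServers": {
    "atlassian": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://mcp.atlassian.com/v1/sse"]
    }
  }
}
```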
You can also build your own MCP server; I recently shared my experience building such a server with a Lambda function, and I'm working on an MCP blueprint that I'll share soon.
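For reference, a bare-bones local MCP server built with the official MCP Python SDK could look like the sketch below. The tool and its contents are made up, and my actual server runs on Lambda, which is a different setup.

```python
# Minimal local (stdio) MCP server sketch using the official `mcp` Python SDK.
# The tool below is hypothetical; expose whatever your organization's agents need.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("org-best-practices")


@mcp.tool()
def get_style_guide(topic: str) -> str:
    """Return the team's coding guideline for a given topic."""
    guides = {
        "observability": "Every handler must use the correlation-id library.",
    }
    return guides.get(topic, "No guideline found for this topic.")


if __name__ == "__main__":
    mcp.run()  # serves over stdio; point your IDE's MCP configuration at this script
```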
Let's add context to our correlation-id prompt:
You're a senior AWS Serverless Python developer specializing in building production-grade and secure multi-tenant SaaS solutions. Your task is to integrate the platform engineering team's correlation-id library into the Lambda functions in this service. The Lambda handlers' entry functions reside in the 'service/handlers' folder. Make the changes to adopt the library in the handlers (see example at https://<library documentation example>) and also in the business domain logic ('service/logic' folder) where the code sends either HTTP or SNS messages using the boto3 or requests libraries (see example at https://<library second documentation example>)
Step 4: Ask for a Plan First
The longer and more complex a prompt, the more time the agent needs to get it right. Before it starts coding, ask it to explain its plan — this lets you steer the direction, apply your domain knowledge, and avoid hallucinations.
And in the context of our problem:
You're a senior AWS Serverless Python developer specializing in building production-grade and secure multi-tenant SaaS solutions. Your task is to integrate the platform engineering team's correlation-id library into the Lambda functions in this service. The Lambda handlers' entry functions reside in the 'service/handlers' folder. Make the changes to adopt the library in the handlers (see example at https://<library documentation example>) and also in the business domain logic ('service/logic' folder) where the code sends either HTTP or SNS messages using the boto3 or requests libraries (see example at https://<library second documentation example>). Before making changes, list your assumptions, the plan, and any potential risks.
Step 5: Your Prompt Isn’t One-Size-Fits-All
Don’t stop at the first response — even with a solid prompt, results can vary. Different models (like GPT-4o, Claude, or Gemini) bring different reasoning styles, strengths, and trade-offs. Switching between them can surface better explanations, stronger code, or help you break out of a dead end.
In addition, as you rely more on agentic workflows, you may hit rate limits or throttling, especially during intense coding sessions. That’s why it’s smart to stay flexible and switch models when needed — and always use the latest versions to take advantage of speed, context length, and accuracy improvements.

Step 6: Optional - Share the Prompt
If the problem you solved is relevant to other services or could benefit other developers, don’t keep it to yourself. Once you’ve crafted a solid prompt, the next step is sharing it across the organization.
Reusable, secure prompts that align with your team’s standards are key to scaling AI-driven development — and that’s where platform engineering plays a vital role.
In my blog post on Agentic AI and platform engineering, I explored this approach, demonstrating how to manage prompts in a shared repository and use a CLI to instantly copy them to the clipboard for fast and consistent reuse.
By versioning prompts, wrapping them in internal tools, and documenting them as blueprints, you lay the groundwork for reliable, efficient, and scalable AI workflows across teams.
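As a rough illustration of that CLI approach, a script along these lines does the job. The prompts/ folder layout and the pyperclip dependency are my assumptions for this sketch, not the exact tool from that post.

```python
# Hypothetical prompt-sharing CLI: copies a versioned prompt file to the clipboard.
import argparse
from pathlib import Path

import pyperclip  # third-party: pip install pyperclip

PROMPTS_DIR = Path("prompts")  # e.g., a clone of the team's shared prompts repository


def main() -> None:
    parser = argparse.ArgumentParser(description="Copy a shared prompt to the clipboard")
    parser.add_argument("name", help="prompt file name without extension, e.g., correlation-id")
    args = parser.parse_args()

    prompt = (PROMPTS_DIR / f"{args.name}.md").read_text()
    pyperclip.copy(prompt)
    print(f"Copied '{args.name}' ({len(prompt)} chars) to the clipboard")


if __name__ == "__main__":
    main()
```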
The final prompt:
You're a senior AWS Serverless Python developer specializing in building production-grade and secure multi-tenant SaaS solutions. Your task is to integrate the platform engineering team's correlation-id library into the Lambda functions in this service. The Lambda handlers' entry functions reside in the 'service/handlers' folder. Make the changes to adopt the library in the handlers (see example at https://<library documentation example>) and also in the business domain logic ('service/logic' folder) where the code sends either HTTP or SNS messages using the boto3 or requests libraries (see example at https://<library second use case documentation example URL>). Before making changes, list your assumptions, the plan, and any potential risks.
Vibe Coding Risks & Best Practices
Agentic AI can accelerate you — or create chaos. To truly benefit from agentic AI, you need more than just good prompts — you need guardrails. Without the right structure, AI can just as easily slow you down with technical debt, security gaps, or bloated code.
Let's go over some additional tips for doing vibe coding the "right way."
Understand What You Commit
This may seem trivial, but it remains relevant and important nonetheless.
Never ship code you don't fully understand. Agentic AI is excellent at generating solutions, but it doesn't always know your context. Always review, refactor, and test everything it produces. Just because it compiles — or even passes a test — doesn't mean it's correct, secure, or straightforward.
AI Is a Tool, Not a Crutch
Tools should amplify your abilities, not replace them. Use AI to move faster, not to stop thinking. That’s how you stay sharp — and employed.
As engineers, our greatest strengths are our minds, creativity, and the hard-earned experience that comes with it. Don’t trade them away for the convenience of vibe coding.
I’ve seen developers rely on agentic AI for 100% of their work — and while it’s tempting, I don’t think that’s the right path. Over-reliance can dull your instincts, erode your problem-solving skills, and make you passive in your craft. When you catch yourself thinking, “Why bother understanding the test failure? Let Claude fix it, I’ll just approve it,” — that’s a warning sign.
Use Chained Prompting
If the prompt gets too long or the task too complex (it might even require a design), break it into smaller prompts that build upon the changes of the previous prompt. This helps the agent stay focused and gives you more control over each step. Structure your prompts in steps, and check out Claude's chained prompting examples.
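For example, the correlation-id task could be chained into three prompts instead of one: first, "List every Lambda handler in 'service/handlers' that doesn't use the correlation-id library yet, and every outbound HTTP or SNS call in 'service/logic'"; then, "Update the handlers you listed according to the documentation example"; and finally, "Propagate the correlation-id headers in the outbound calls you listed." Each prompt builds on the output of the previous one and keeps the agent focused.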
Break the Loop
Sometimes, your prompt session doesn't work as expected. You are not happy with the results, or the agent generates bad code in a loop or hallucinates. It can happen.
When that happens, start a new chat session and actively steer the agent: instruct it not to retry the failed approach (the negative) and suggest another way for it to follow (the positive).
Keep It Simple
Ask the agent to simplify the code, remove unnecessary boilerplate, and provide clear comments on key logic. AI tends to abstract or overcomplicate. Keep an eye out for unnecessary layers, extra files, or overly complex patterns that don't add value.
Tests and Automagically Fixing Code
You can ask the agent to add tests to the feature it develops. You can then ask it to run them (explain how first, e.g., with pytest or any other command); it will parse the test output, look for failures, and iterate to fix the problems.
From my experience, it's not great yet, but it works!
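For illustration, here's the kind of test the agent might generate for the correlation-id change, reusing the hypothetical names from the library sketch above; you'd tell it to run something like pytest tests/ -v and fix any failures.

```python
# Hypothetical test for the correlation-id integration; names match the earlier sketch.
from service.correlation import inject_correlation_ids, outgoing_headers  # illustrative module path


def test_outgoing_headers_propagate_incoming_correlation_id():
    event = {"headers": {"x-correlation-id": "abc-123"}}

    @inject_correlation_ids
    def handler(event, context):
        # The business logic would pass these headers to its requests/SNS calls
        return outgoing_headers()

    assert handler(event, context=None)["x-correlation-id"] == "abc-123"
```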
Build Guardrails into Your Platform
Shorten the AI iteration loop by embedding your team's standards and guidelines.
Expose coding best practices, CDK patterns, and testing strategies to your agent via MCP.
You can guide agents by pointing them to your MCP setup for coding conventions or wrap best practices into an internal CLI — as I described in this blog post.
In addition, MCP servers are not secure (what a shocker!); according to CyberArk's research, "Anthropic's Model Context Protocol (MCP) has severe vulnerabilities extending far beyond known tool poisoning attacks." That's why you should use only platform-approved, secure MCP servers.
Summary
In this blog post, we covered practical techniques for getting the most out of Agentic AI and Vibe Coding — from crafting high-quality prompts and integrating organizational context with MCP to sharing and reusing prompts across teams using internal tools.
We also discussed the risks and pain points of vibe coding, including over-reliance on AI, loss of critical thinking, security blind spots, and bloated code. With the right structure, mindset, and platform support, you can harness the power of Agentic AI to build faster, smarter, and safer — without losing your engineering edge.