
how i code with AI

january 2025

I have a few friends who aren't keen on using AI to code, partly, I believe, because they don't know how to do it properly, so their perception of it is flawed. I use it quite often, and while I'm not excellent, I think I'm getting the hang of it. So this is a short essay on how I code with AI.

Most of the content in this essay is inspired by Dex Horthy's AI engineering lecture, a 20-minute watch, which you can view here.

To start, it's important to note that everyone has a different way of coding, so my approach might not work for you. I do, however, think it's a good place to start. I also don't think people need to depend on a really good model, just really good prompts. For all the prompts below, you'd get reasonable results using Claude 3.5 Sonnet, but I'll also specify which model works best for each (for when you haven't hit your Cursor limits).

Context engineering is how you manage the AI's mental state: what it knows. In my own use case, I keep it stateless, so it starts out knowing nothing about the codebase; it's my responsibility to feed it everything it needs. A lot of tokens are saved this way. Additionally, I try not to go above 40% of my context window (with a 200k-token window, that's roughly 80k tokens), as that's when the AI starts to dumb down.

I use the RPI method: research, plan, and implement. Cursor has commands, so each of these steps can be made into its own command. For each of these, it's best to open a new chat.
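A command in Cursor is just a markdown file holding the prompt. The exact file names are up to you; only research.md and plan.md are referenced by name later in this essay, and the rest are my guesses at a sensible layout:

```
.cursor/
  commands/
    research.md    # the research prompt below
    plan.md        # the plan prompt below
    implement.md   # the implementation prompt below
    summary.md     # the /summary prompt, covered near the end
    pr.md          # the /pr prompt, also covered near the end
```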

When implementing a feature or solving a bug, you first describe what it is using the /research command. Claude Sonnet 4.5 works best for this.

Research Prompt

I have a task to [DESCRIBE FEATURE/BUG]. Goal: Provide all context needed for a junior developer to implement this without asking further questions. Instructions:
1. Do not write any code or modify the codebase yet.
2. Explore the codebase to understand existing systems, data models, and patterns relevant to this task.
3. Return your findings in a markdown document with these sections:
    * Files: List of relevant files and their roles
    * Data Structures: Key types, interfaces, database schemas involved
    * Patterns: How similar features are implemented (e.g., 'React Query for fetching', 'errors via middleware')
    * Strategy: High-level implementation approach
    * Unknowns: Ambiguities needing resolution
Be brutally concise. Use bullet points. Verify by reading code before stating anything.
Constraints: **Do not modify .cursor/commands/research.md; make your own RESEARCH.MD in the root directory**

This command collects all the parts of the code a junior-level dev would need to implement the task. Notice the Unknowns section: it prevents the AI from hallucinating. These unknowns are saved as questions in the RESEARCH.md file it creates, which you can then answer. Depending on the model you used and the feature you're implementing, the research can sometimes include unnecessary information, so it's important to proofread. For example, if I'm implementing a system to send notifications on the frontend of the app, and the API is already implemented and working, I don't need to know how the API works internally; I just need its input and response types. The more concise this file is, the better.
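Sticking with that notification example, here's an invented sketch of what a good RESEARCH.md might look like (all file names and types are hypothetical):

```markdown
# RESEARCH: frontend notifications

## Files
- src/components/Toast.tsx: existing toast component, reusable here
- src/api/notifications.ts: wrapper around the already-working notifications API

## Data Structures
- Notification: { id: string, message: string, level: 'info' | 'error' }

## Patterns
- Data fetching goes through React Query hooks in src/hooks/

## Strategy
- Poll the notifications endpoint and render unread notifications as toasts

## Unknowns
- Should notifications persist across page reloads?
```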

Next is /plan. For this, you ideally want a reasoning model like Gemini 3 or Opus 4.5 (in my experience, Opus is faster). This part requires the most critical thinking:

Plan Prompt

We are working on [FEATURE/BUG]. Research has been completed above.
Goal: Provide an implementation checklist in a markdown document.
Instructions:
1. Create a checklist with atomic implementation steps.
2. Each step must be:
    * Atomic: One clear action (e.g., 'Create file X', 'Add function Y to file Z')
    * Verifiable: Include a check to ensure it works (e.g., 'Run test X', 'Verify log output')
    * No Code: Instructions only, not actual code
Format:

[ ] Step 1: [Action] - [Verification]
[ ] Step 2: [Action] - [Verification]

Create a PLAN.MD file in the root directory. Do not implement yet.
Constraints: **Do not modify the .cursor/commands/plan.md. Create your own PLAN.MD in the root directory**

When prompting it in Cursor, you tag @RESEARCH.MD and describe the feature you want to add or the bug you're solving. What's crucial here is that the resulting steps are concise and clear, so that the AI does more doing and less thinking: the plan tells it exactly what to do, making it very easy to follow. This is the most important part of the process to review.
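Continuing the made-up notification example, a PLAN.md in this format might look like (steps and file names hypothetical):

```markdown
# PLAN: frontend notifications

[ ] Step 1: Create src/hooks/useNotifications.ts with a React Query hook polling the notifications endpoint - Verify the request appears in the network tab
[ ] Step 2: Render unread notifications as toasts via src/components/Toast.tsx - Verify a mocked notification shows up on screen
[ ] Step 3: Mark notifications as read on dismiss - Verify dismissed toasts don't reappear on refresh
```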

So now we have the relevant RESEARCH.md and PLAN.md files.

We then have the implementation command. Cursor's Composer model works very fast for this, so that's what I prefer to use:

Implementation Prompt

I want to implement the feature. Context: RESEARCH.MD and PLAN.MD have been provided above. Instructions:
1. Execute Step 1 ONLY.
2. Verify it works as described.
3. Mark Step 1 as completed (change [ ] to [x]).
4. STOP and ask for confirmation before proceeding to Step 2.
Do not deviate from the plan. If the plan is wrong, stop and tell me.
**EXCEPTION: If the user says 'do all' you may do all steps at once**.

When you start implementation, the LLM follows the plan step by step. For this, you also open a new chat, just tagging both the research and the plan. You don't really need to say anything else.
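Assuming you've saved the implementation prompt as .cursor/commands/implement.md (my guess at a name), the entire kickoff message is something like:

```
/implement @RESEARCH.MD @PLAN.MD
```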

Since the steps are 'atomic' (as specified in the planning prompt), the AI makes minimal changes at each step, so it's easy to see what it's doing and stop it if it starts to go off track. You can also test after each small step, which makes it a lot easier to find exactly where an issue was introduced and debug it.

The 'do all' exception in the prompt is for when I'm feeling extremely lazy. I don't recommend using it unless it's a really basic feature.

This is the main flow of how I code. The other prompts I use are explained below; you can find all of them on my prompts page.

Sometimes, when the chat gets too big (or when I cross 40% of the context window), I use the /summary prompt. It's useful when starting another implementation chat to continue the same feature, and also when I want to write one message covering several bug fixes in a single PR.
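The real prompt is on my prompts page; an illustrative minimal version, not my exact wording, looks something like:

```
Summarize this chat into a markdown document I can paste into a new chat:
1. What we set out to do, and which PLAN.MD steps are done.
2. Key decisions made and why.
3. What remains, and any known issues.
Be brutally concise. Include code only if it's essential context.
```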

After implementation, I use the /pr command. It's specifically written so it's easier for bots (like CodeRabbit, Mesa, Greptile, etc.) to code review.
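Again, the exact prompt is on the prompts page, but the gist is something like this sketch:

```
Write a PR description for the changes in this branch:
1. Summary: one paragraph on what changed and why.
2. Changes: bullet list, grouped by file or module.
3. Testing: how each change was verified.
Keep it structured and explicit; review bots read this, not just humans.
```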

Besides that, I also have two other commands: /initialize and /find. I use /initialize when I want to ask a series of questions about a certain feature or part of the codebase. It can also sometimes be faster to initialize first and then do research on a more niche part of the codebase. Sonnet 4.5 works best for both of these.
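These two are also on the prompts page; purely as an illustration (not my actual prompt), /initialize could look something like:

```
Explore the codebase and give me a high-level map: entry points, main modules, and how data flows between them. Do not modify anything. Be brutally concise; I'll ask follow-up questions from here.
```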

Happy coding.