How I build products, apps and features with Claude, ChatGPT and AI dev tools


The DPPD process — Discovery, Proposal, Plan, Delivery

Do you want to successfully build new features or even entire products with AI tools like Gemini, Claude, Cursor or Warp? I’m going to share what I’ve learnt from building an entire shippable product, from scratch, with a selection of AI tools, and give you all the processes, prompts and templates I built on the journey. Best of all? Everything that works with AI works for human teams, too.


Before we start

I’m explicitly going to show you how to build real applications, with real code (not Lovable or Replit). The approach is designed so that product engineering teams can swap AI tools in and out at any point. Whether it’s Claude, Cursor, or a skilled developer writing the code, this process will still work, and in many cases it will improve life for the whole team.

As a little background, the product I’ve been building has been designed to pass ‘technical due diligence’ from startup investors. In my real job, I’m regularly asked to review the product, tech stack and development processes of startups going through an investment cycle or acquisition. I run the process to make sure that the product is safe, secure, and documented. This process will help you pass due diligence — no tricks, no shortcuts.

Quality is everything in what I describe here. This isn’t vibe coding. If a focus on quality was important before we had AI agents writing code, it’s twice as important now. 10x more important, even.

AI tools hold up a mirror to your commitment to quality and then magnify where you’re not paying enough attention. Any error that might have been caught when you’re working at ‘human speed’, with human care and attention, will zip past unnoticed when moving at AI speed. The ONLY approach to shipping good quality products is to automate quality checks that are as fast as AI. So, no skimping — TDD, documentation, a solid test pyramid, a good devops flow, great observability — all the things you were told you needed before are no longer negotiable. If you think you can use AI tools to build products while relying on manual regression testing, you should probably stop now. I’m not kidding. The pain will be real.

Communication is your critical focus. Fortunately for us, LLMs are really very good at responding to plain language, and so we can bounce ideas around and interrogate solutions just with narrative text. Text becomes the lowest common denominator in this process: when we work with the LLM we’re communicating in text. When we take an output we take it as text. Text forms all of our documentation, and all of our instructions. You will need to focus on saving and moving instructions and context around as text.

In fact, you should think of communication — especially with AI agents — as coming in two primary types, both of which are also critical in human teams. Evergreen documentation — the docs that explain how a feature works, or an architecture diagram — forms the basis of common ground and shared mental models. For humans, these are useful when onboarding into a new company, or into an area they haven’t worked in before. The second type of communication is task allocation — things like Jira tickets or work items. This communication expressly conveys what to work on and when. LLMs benefit from both these types of information — well-documented context sharing and explicit task allocation.
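
To make the distinction concrete, here’s one way it can look inside a repository. The planning/ paths are the ones I use later in this article; the rest is just an illustration:

  docs/
    architecture/        # evergreen: diagrams and records of how the system works
    features/            # evergreen: how each shipped feature behaves
    planning/
      proposals/         # task allocation inputs: one Proposal per initiative
      plans/             # task allocation: the detailed Plans derived from them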

This process explains how you create both of these, and deliver outstanding products with AI tools.


First: Do Discovery

‘Discovery’ is a bit overloaded with meaning these days, and people often nod and groan when I talk about it. It’s become a little bit like other agile ceremonies — a misunderstood blocker that gets in the way of teams doing good work, just another step in the waterfall. The way I introduce people to ‘doing discovery’ is as a mindset, not a fancy process you’re forced to follow. Put simply, discovery is the thinking you do to help you understand the challenge you’re facing.

A great piece of discovery asks questions like, “What’s the real thing I need to do here?”, “How do other people solve this?”, “Are there any other ways to approach this?”. With huge respect to Teresa Torres, whose work on product discovery is seminal, the discovery we’re talking about doesn’t need set team roles, or indeed a product at all. You can use it on a bugfix, a feature, a technical spike or a refactor, as well as an entire new product (in which case, do read Teresa’s book). You can do it as a team of one and, now, you can do it as a team of two, with ChatGPT or Claude.

Discovery is the time where you sit and think about the nature of the challenge. You don’t need fancy tools or processes to do discovery, but you do need to allow yourself time before committing to a solution. A quote, sometimes misattributed to Einstein (because all quotes are either Einstein or Sun Tzu, right?) that still fits the purpose of discovery is:

“If I had an hour to solve a problem, I’d spend 55 minutes attempting to define the problem, and 5 minutes solving it.”

So, not a real Einstein quote, but the fact remains that this is a great definition of what you’re doing with discovery.

When building a product or a feature, I spend hours, if not days, doing discovery with an LLM, intending to really define the problem before solving it. One of my most recent feature builds was a new user referral process, to invite a limited group of beta users through recommendations. What I didn’t ask was:

“What feature should we build?” or, “How should we build it?”

Instead, I often start with a frame, in this case “Could we talk about some of the most successful early access, B2C, viral product launches please?”. This led to a conversation (with ChatGPT) about the history and types of viral, referral-led launches.

In this conversation, I would prompt with versions of:

“Does this fit with what we know about our brand?”

“What behaviour do we want to reward or discourage?”

I also asked questions like,

“How will different people interpret this invitation?”

“What might confuse or reassure them?”

Within a discovery conversation — or workshop — we’re trying to use cycles of divergent and convergent thinking where we open ourselves up to many opportunities and perspectives and then compress them back down into a reasonable number of options. That process — wide, then narrow, then wide again — continues through the conversation. The intent is to go through multiple, nested divergent–convergent cycles, challenging and reframing, so the outcome feels settled rather than merely decided.

As a first pass on the referral feature, we (ChatGPT and I) explored:

  • Referral-only vs open sign-up
  • Trust vs growth
  • Beta users vs future users
  • Freemium vs paid expectations
  • Cost, security, feedback loops

We were exploring the tensions, and trying to understand the compromises of our options.

This was then slowly consolidated through these divergent/convergent phases into some principles:

  • Growth must be referral-driven
  • Trust and security are non-negotiable
  • What our unit of growth is
  • Freemium is actively dangerous
  • Learning speed matters more than volume

As we approached a solid sketch of the type of solution we were settling on, I used one of my new favourite LLM tricks: I keep a doc of three user personas, and I asked the LLM to take the personas as a prompt and review the solution through each of their lenses.

“Can we pressure test this against our user personas? I’ll give you the personas, and now imagine each persona being a recipient of our invite. How will they understand and react if they were to receive it?”

This was intended to poke early holes in our plans and assumptions, testing without my own bias on how users would respond to the feature we were sketching out.
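
If you don’t already keep a persona doc, an entry can be very short. Here’s an invented example of the kind of thing I mean (not one of my real personas):

  Persona: Priya, the time-poor early adopter
  - Context: senior product manager; tries new tools weekly and abandons most of them fast
  - Goals: wants to look smart when recommending tools to her team
  - Anxieties: hates spammy referral mechanics; protective of her reputation
  - Likely reaction to an invite: checks who sent it before anything else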

After more discussion about internal requirements like analytics, compliance and security — but still without discussing any implementation details — I had spent 1–2 hours in conversation about the scope and shape of the solution. This might sound like a frustratingly long time to those of you who would rather get straight on with the build, but it’s always cheaper to test and change earlier than later — we want to avoid changing our mind after code has been built. The discovery lets us poke, prod and challenge for very little cost, and LLMs are capable partners in the thinking process.

Discovery was complete. It was time to create the artefact of a completed discovery, the Proposal.

Writing the proposal

The Proposal template and detailed guidance

The discovery process can be freeform — it should be driven by curiosity and exploration. The proposal is a point of convergence. Although I have used specific templates for proposals in the past — like Amazon’s PR-FAQs or product requirements documents (PRDs) — I’ve become much less keen on enforcing a template.

Remember what I said about communication? The proposal is how we carry all of that discussion into the next stage. My usual approach is simply to ask the LLM to summarise the conversation down until I’m happy that it represents the discussion and decision points. The one rule I’ve stuck by is that the proposal document must NOT include any implementation details — that’s for the ‘team’ at the next stage (the Plan) to decide.

When I ran through this process with a few real-life human teams, they quite rightly asked me for a template to use. A template can be important when first getting started — it encourages consistency and hopefully replicates what works. But templates can also be onerous, time-consuming without good reason, and apt to encourage similarity of thinking. So, for this article I’ve developed a principle-led template which you can use after your discovery conversations. Critically though, even the template I offer is not a form to complete, but a set of principles for recording and communicating all of the effort that went into discovery.

The proposal is intended for use by anyone involved in creating a solution, especially those who took part in the discovery session. It can be used directly as a prompt for LLMs, but works equally well as a guide for actual human teams. It is the bridge between deeply considering a problem and specifying a detailed solution. It captures intent, not instructions.

To be clear about what a proposal is, and is not:

A Proposal is:

  • The output of discovery
  • A decision artefact
  • A record of shared understanding
  • A statement of intent, boundaries, and success
  • Durable across changes in implementation

A Proposal is not:

  • A task breakdown
  • A technical design
  • A Jira epic
  • A requirements checklist
  • A guarantee of scope or timing

The simple test for knowing whether you’re still writing a Proposal is this: if you are describing how something will be built, you are already writing the Plan, not the Proposal.

A great proposal ensures that teams (especially those who will create the plan, which might just be you and Claude) agree on the problem they are solving. It should ensure that compromises are explicit, rather than accidental. It ensures that the team writing the plan has full authority to make decisions about implementation, because they understand why decisions about the solution were made. It also serves as a historic record of exactly what the team thought they were delivering, in non-technical terms.

I’ve said that the template isn’t just a form-filling exercise. Some things to know about the template:

  • There is no required length.
  • There is no required order.
  • Not every Proposal needs to answer every prompt explicitly.
  • Clarity matters more than completeness.
  • Two Proposals can be equally ‘good’, but look completely different. That’s the system working.

The complete guidance and LLM instructions are available here (please make your own copy; don’t try to edit this one)


The Proposal Guide in brief

The linked document goes through this in more detail, but the questions you are trying to answer include the following:

1. What problem or tension are we addressing?

Make sure the Proposal is grounded in reality and reflects a real need. Answer why this solution matters right now, and also what happens if we do nothing.

2. What decision are we trying to make?

This is the most important question. If a reader cannot identify the decision, the Proposal is incomplete. We should leave a record of what we are choosing to do and what we are choosing NOT to do, as well as the reasons we are making those choices (so they are not re-litigated later in planning).

3. What does “better” look like?

Take care to describe the outcomes — we’re not interested in a list of metrics or dashboards unless there is real value to them. Don’t just pick targets because they feel necessary. Be comfortable with ambiguity and uncertainty — if you really can’t describe a quantifiable outcome, at least explain why there is a belief that it will be beneficial.

4. What principles or constraints shape the solution?

This is where the commander’s intent lives. The proposal should still make sense even if the implementation changes. Describe what is currently painful, risky or limiting, as well as what must NOT be compromised. Answer what constraints (technical, ethical, organisational) matter.

5. What is non-negotiable?

If it’s relevant, a Proposal should explicitly name any non-negotiables — decisions or constraints that must be preserved regardless of how the Planning team define their implementation. These aren’t preferences or design suggestions, they are boundaries the Plan must respect.

How do I create a Proposal?

Here’s the good news. If you’ve been through a thorough Discovery conversation and explored your challenge from a number of perspectives, creating the Proposal is easy. Just give an LLM the conversation and ask it to summarise it for you!

“Can you summarise this conversation as a Proposal.”

If you want to give more guidance — and I really encourage you to use the full template I shared here — then you can use this prompt:

[start prompt]

Dear LLM. Please summarise the [following/previous/attached] discovery conversation into a Proposal document.

DO:

  • Treat the conversation as authoritative
  • Record decisions, trade-offs, and constraints exactly as expressed
  • Preserve uncertainty where it exists

DO NOT:

  • Resolve disagreements that were not resolved in this conversation
  • Introduce new ideas or solutions
  • Optimise for best practice or completeness
  • Smooth away uncertainty for the sake of neatness

Tone and Style Guidance:

  • Write in plain language
  • Assume a thoughtful, capable reader
  • Prefer clarity over completeness
  • Avoid jargon where possible
  • Be explicit about judgement and trade-offs

Output: A Proposal document that clearly states:

  • The decision being made and why
  • The problem or tension being addressed
  • Non-negotiables and boundaries
  • What “better” would look like
  • Open questions, risks, and assumptions

The goal is shared understanding, not a perfect answer.

[end prompt]

The Discovery did the hard work for you; the Proposal is just the outcome of your curiosity and conversation.

When is my Proposal ready?

After your team — or your LLM — has written the Proposal, you may still want to manually adjust it or refine it down. You can tell it’s ready when:

  • Recipients of the Proposal — including brand new team members — could explain the intent back to you with confidence
  • Reasonable people might disagree with the decision, but would understand it
  • The document would still make sense in six months’ time
  • A planner can list the non-negotiables without guessing
  • A planner knows what must not be built, even if it’s tempting
  • The solution could survive a different team delivering it
  • The Plan can change without requiring the Proposal to be rewritten

The next step of the process is to take the Proposal — this carefully considered document that preserves all the goodness of the discovery session — and turn it into a list of actual tasks. That’s the Plan.


Writing a Plan

The Plan template and detailed guidance

The Plan is where we cross from intent and outcomes into dependencies and details. Often I’ll work on my proposal in a web-based LLM (ChatGPT or Claude Web, or even Gemini), but the Plan requires me to get close to the code and the existing details.

The Plan — from a product engineering team perspective — is a document that is principally owned by the engineers (ideally, in a cross-functional delivery team). For AI driven development, it is going to be created with the agent that you use to write code, ideally with access to your codebase — for me, that’s Claude Code.

The agent requires the Proposal as an input, and is going to turn that into actual tasks for a coding agent (or real engineers) to deliver. I usually copy the markdown of the Proposal into the repository that Claude Code is working in (usually docs/planning/proposals) and ask Claude to read the markdown.
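
Mechanically, that can be as simple as the following. The file name is just an example, and pbpaste is the macOS clipboard command; use whatever suits your setup:

  mkdir -p docs/planning/proposals
  # paste the Proposal markdown exported from the web LLM session
  pbpaste > docs/planning/proposals/referral-invites.md
  # then, in Claude Code:
  #   "Please read docs/planning/proposals/referral-invites.md and treat it
  #    as the authoritative Proposal for the Plan we're about to write."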

The Plan determines exactly how the Proposal is implemented, and it makes sequencing, dependencies, and trade-offs explicit. However, the Plan isn’t a thoughtless translation of a requirements document into Jira tickets. It’s a creative process, just like Discovery, but one that focuses more on technical architecture than on user requirements. It’s expected to evolve through conversation, until it’s agreed and complete.

A Plan:

  • Takes a Proposal as its main input without re-opening existing decisions
  • Is built with an understanding of data, code, architecture and dependencies
  • Is usable whether you’re building a product, a platform, a feature, a bugfix or even an analytics dashboard
  • Makes delivery predictable, but never encourages false certainty

The LLM requires a substantially more formal prompt to create a Plan that can be turned into work items. I strongly recommend reading the full guidance here: The Plan template and detailed guidance, but in summary, our prompt for the LLM encourages it to review the proposal and turn it into actual implementation instructions.

That prompt includes these sections for the LLM to complete:

  • The Brief: write enough for a new team to understand the context from the Proposal you are reviewing.
  • Constraints: restate the Proposal’s non-negotiables as implementation guardrails.
  • Unknowns requiring decision: include an explicit list of decisions that must be made before (or during) delivery.
  • Chosen approach, workstreams & responsibilities: this is narrative, not schematic, describing the intended implementation approach.
  • Risks, assumptions, and mitigations: detail risks so that a team coming to the plan later can quickly understand where the risks lie and what mitigations we might have made.
  • Detail the work: break the work down into groups of significant effort (Epics) and specific tasks that must be completed (Tasks). For each Epic, include a Testing approach section and Validation criteria (see the sketch after this list).
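
To make that last section concrete, here’s a minimal sketch of a single Epic as it might appear in the Plan markdown. The content is invented for illustration, not taken from a real Plan:

  ## Epic 2: Invite issuance and redemption
  Brief: implement creation, delivery and redemption of referral invites,
  respecting the Proposal's non-negotiables on trust and security.

  Testing approach: TDD throughout; unit tests for invite token generation
  and expiry, plus an E2E test covering the full invite-to-sign-up journey.

  Validation criteria: an invited user can sign up with a valid invite;
  expired or reused invites are rejected and logged.

  Tasks:
  - Task 2.1: Invite token model and generation
  - Task 2.2: Redemption endpoint with rate limiting
  - Task 2.3: Analytics events for the invite lifecycle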

When the LLM has nearly completed the plan and is seeking guidance, it’s time to check over the document to see if the detail of the work in epics and tickets is ready for creation in your chosen planning tool. Ask what, if anything, needs clarification, and whether there are any missing guardrails.

The plan defines the handoff to your planning tool (eg Jira/Linear). The planning tool is what tracks execution, and links with DevOps processes (eg DORA metrics, and integrations with source code and deployment tools).

A Plan is ready to convert into Jira/Linear when:

  • Fixed constraints are stated and referenced
  • Epics are dependency-shaped and sequenced
  • Unknowns requiring decision are listed (and owners identified)
  • Each epic has validation criteria
  • Early de-risking work is identified
  • Tasks are coherent, named, and independently understandable

Between the three sources of information, the Proposal, the Plan and the Planning tool, we have everything we need to start development. The Proposal answers “What are we choosing to do, and why?” The Plan answers “How do we intend to do this, given reality?”

The planning tool (eg Jira/Linear) answers “What have we just done? What are we doing next?”

Each has a different audience and a different useful life.

Once the Plan has been finalised, you can instruct the LLM to create epics/tasks/workitems in your planning tool of choice. With Claude and Jira, I use the Atlassian CLI (acli), and also the Github CLI for working with remote code repositories. I typically create the detailed Plan in markdown (and save it in docs/planning/plans) so the LLM can access it locally. When the tickets have been created in Jira, I retrospectively have Claude add the IDs so that I can create commits, feature branches and PRs that reference the IDs, for full visibility across the development ecosystem.
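
To illustrate the git side of that, here’s roughly what those references look like. PROJ-123 is a placeholder ID, and the branch, commit and PR names are invented examples:

  # PROJ-123 is a placeholder Jira ID that Claude has added back into the Plan
  git checkout -b feature/PROJ-123-invite-tokens
  git commit -m "PROJ-123: add invite token model and generation"
  gh pr create --title "PROJ-123: Invite token model" \
    --body "Implements Task 2.1 of the referral Plan (PROJ-123)"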


What about actual Delivery?

We’ll get into the tech stuff some other time — suffice it to say that by the time you’ve got a detailed plan and a list of actual tickets, it’s much easier to break tasks down for LLMs and have them work through them sequentially.

In development, specifying quality gates, following Test Driven Development (TDD) and the red/green/refactor process, and ensuring a solid, well-designed test pyramid are crucial to stopping hallucinations slipping through. Committing to git regularly makes rollbacks when hallucinations occur far simpler than trying to fix forward from inscrutable errors, and high quality CI/CD processes and tooling are invaluable when moving at this speed. I use trunk-based development with Github, and enforce a number of automated checks pre-commit and pre-push (with Husky) — including both unit and E2E tests close to the development environment, but also a significant number of deterministic quality tools, like jscpd, madge and Semgrep, to catch issues that my hallucinating little buddies might introduce along the way. If you’ve got access to Sonarqube, Github Advanced Security or Aikido (among many others), you should plug them ALL in.
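
Pulling those hooks together, here’s a minimal sketch of a pre-push hook, assuming a JS/TS project with Husky, jscpd, madge and Semgrep installed. The thresholds and configs are illustrative assumptions; check each tool’s docs before copying:

  #!/bin/sh
  # .husky/pre-push: a minimal sketch with illustrative thresholds
  set -e                               # stop at the first failing check

  npm test                             # unit tests must pass before pushing
  npx jscpd src --threshold 1          # fail if copy-paste duplication exceeds 1%
  npx madge --circular src             # fail if any circular dependencies exist
  semgrep scan --config auto --error   # static analysis; exit non-zero on findings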

We’ll talk about the specifics of AI-assisted engineering another time — but if you get the impression that producing features and products with AI tools still needs a specific set of skills, you’d be right. AI tools can speed us up, but skilled engineers and architects still need to be in the loop; the AI just helps take away some of the grunt work that no-one enjoyed, like keeping docs up to date or writing tests for other people’s code.


The secret here is that AI tools are just tools. They’re not magic, they don’t have mystical powers and they don’t replace you having to think. They do, however, provide a useful interface that can help with our thinking, and make it much easier and faster to ship value to customers.
