

---
title: One Shot Prompting is Dead
description: Practical steps and mental models for building context engineered workflows instead of clever prompts.
image: /img/blog/context-engineering-blogbanner.png
authors:
  - ebony
---


I attended one-shot prompting's funeral.

There were no tears. Just a room full of developers quietly pretending they weren't taking shots the night before. Because if we're being honest, everyone saw this coming and couldn't be happier it was over.

Saying "one-shot prompting is dead" isn't revolutionary. It's just catching up to what builders have been experiencing for months.


## The blog post that aged faster than oat milk

Last year, I wrote a post about how to prompt better. I shared tricks and phrasing tips, and even said that adding a few "pleases" and "thank yous" would make your AI agent give you the world. At the time it felt cutting edge, because it was. There were livestreams and conference talks entirely about how to prompt better.

Less than a year later, it feels… quaint. Not because prompting stopped mattering, but because prompting stopped being the main character.

The conversation shifted from:

“How do I coach the model better?”

to

“What environment am I dropping this model into?”

That's a completely different problem, and now it has a name: context engineering.


## The abstraction that broke

One-shot prompting worked when agents were party tricks. You crafted a clever prompt, you got a clever answer, and by "clever answer" I mean a fully "working" app, so everyone clapped. But the moment we asked agents to plan, remember, call tools, and operate across multiple steps, the definition of "worked" fell apart.

A single prompt stopped being a solution and became a bottleneck. What matters now isn't the sentence you type. It's the system that surrounds it. Prompts didn't disappear, but they were demoted to one step inside a larger pipeline designed to hold state, plan ahead, and enforce guardrails.
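To make that concrete, here's a minimal sketch (hypothetical, not any real framework's API) of the structural difference: in a one-shot call, everything the model knows must fit in one string, while a context-engineered session assembles each call from state that persists across steps.

```python
def one_shot(model, prompt):
    # Everything the model will ever know must fit in this single string.
    return model(prompt)


class ContextEngineeredSession:
    """Holds the goal, constraints, and accumulated facts across steps."""

    def __init__(self, goal, constraints):
        self.goal = goal
        self.constraints = list(constraints)
        self.facts = []  # decisions and research that must outlive any one call

    def remember(self, fact):
        # Context lives in the system, not in the human's head.
        self.facts.append(fact)

    def step(self, model, task):
        # Each call is assembled from persistent state, never retyped by hand.
        prompt = "\n".join(
            [
                f"Goal: {self.goal}",
                *(f"Constraint: {c}" for c in self.constraints),
                *(f"Known: {f}" for f in self.facts),
                f"Task: {task}",
            ]
        )
        return model(prompt)
```

The `model` parameter here is just any callable that takes a prompt string; the point is where the state lives, not the model call itself.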

As someone put it in a thread I recently came across:

“The best model with bad context loses to an average model with great context.”

That line explains the shift. Context is now the advantage.

And this isn't theoretical. You can see it in how serious agent systems are being built. Projects like OpenClaw and the Ralph Wiggum loop aren't chasing clever phrasing. They're designing environments where context persists, decisions accumulate, and agents can operate across time without resetting every session.

The excitement around these systems isn't just hype either. It's relief. Builders have been hungry for real, working examples that behave predictably over time.

Which leads to the only question that matters...


## How do I actually do this?

When I started building our skills marketplace, one-shot prompting alone couldn't cut it. My normal workflow involved researching in one place and implementing in another, and every time I switched tools I had to re-explain the same decisions. Context wasn't living inside the system. It was living in my head. The agent would forget, I would remember, and the entire session became an exercise in rehydration instead of progress.

Here's what that loop looked like in practice:

{/* Video Player */}

Even this demo is powered by persistent context.

That was the moment I experimented with RPI. Not because it was trendy, but because the alternative had become tedious.

You don't have to adopt RPI, or any new pattern, tomorrow to benefit from this. You can simulate the shift in your next session with a small change in how you start.

Before you execute anything, put your agent in chat-only mode and run this handoff.

### Step 1: Align on the finish line

Tell the agent exactly what counts as done.

“We are shipping: ___
Success looks like: ___”

If the finish line feels fuzzy to you, this is the time to flesh it out with your agent; if you don't, your session will drift.

### Step 2: Lock in non-negotiables

Define what is not up for debate.

“Constraints: ___
Architecture we are committing to: ___ ”

This prevents the classic agent spiral where it keeps trying to overengineer the project instead of building it.

### Step 3: Capture persistent context

Write down the facts that must survive the session.

“Context that must persist:
___
___
___”

This is research, assumptions, domain knowledge, edge cases, terminology, anything your agent will need to pick up exactly where it left off.

Now save it somewhere accessible:

- a file in the project
- a context file (goosehints, Cursor rules, etc.)
- a memory extension

Anything that outlives the chat window.

The rule is simple. Context should live in the system, not in your head.
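For example, the three steps above could land in a project-level context file. This is a hypothetical sketch (the filename, section names, and every fact in it are invented for illustration; the exact format depends on your tool):

```markdown
<!-- .goosehints — persistent context for this project (hypothetical example) -->

## We are shipping
A skills marketplace page with search and one-click install.
Success looks like: a user can find and install a skill in under a minute.

## Non-negotiables
- No new runtime dependencies
- Reuse the existing design system components

## Context that must persist
- Skill metadata lives in a single registry file; do not split it
- "Skill" and "extension" are distinct terms in this codebase; never conflate them
- Edge case: skills with no version field are treated as drafts
```

Because the file lives in the repo, every new session starts already rehydrated instead of starting from your memory.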


## This is good news for people who think beyond code

The interesting part is that this shift isn't just technical. It has a quiet career implication hiding inside it. AI isn't replacing engineers. It's replacing workflows that stop at "my code runs, so I'm done." Context engineering rewards a different mindset: the ability to pick up these patterns and apply them by thinking about how decisions propagate through a system, what persists, and what the downstream effects look like over time.

That's a muscle I'm actively working on too. And the more I lean into it, the clearer the direction becomes.


## The real skill is orchestration

We attended its funeral, but as you can see, prompting isn't really gone. It just stopped being the workflow.

One-shot prompting is still great for demos and exploration. But when the goal is building systems that last longer than a single session, the advantage shifts to how well you design the environment around the model.

The people who thrive in this era won't be the ones with the cleverest phrasing. They'll be the ones who know how to orchestrate context so intelligence accumulates instead of resetting.

And honestly, that's progress.

<head> </head>