Intent-Aware, or Nowhere: Building for Cognition in the Age of AI

By Derek Neighbors on June 1, 2025

Everyone’s racing to build their Model Context Protocol (MCP) servers.

Specs. Registries. Tool graphs. Agent workflows.
It looks clean. It demos well.
And it solves absolutely the wrong problem.

We’re over-engineering control panels for a world that doesn’t need them.
Because here’s the truth:

You’re not building for operators.
You’re building for agents that interpret, decide, and act.

⚠️ What’s Going Wrong

Most MCPs today are making the same mistakes we made 20 years ago, just with better branding and JSON.

They’re shipping:

  • 📦 Tool specs without examples
  • 📚 Registries without relevance
  • ⚙️ Orchestration without adaptability

That’s not how cognition works.
That’s how bureaucracy works.

Let’s break each one down and explore what building for intelligence really looks like.

📦 Tool Specs Without Examples

“Here’s the schema. Good luck.”

What LLMs get:

{
  "name": "get_user_profile",
  "description": "Retrieves a user profile.",
  "parameters": {
    "user_id": "string"
  }
}

This is technically correct and functionally useless.
No usage pattern. No framing. No context. No affordance.

LLMs don’t learn by schema. They learn by pattern.
No examples = no generalization.

What building for cognition looks like:

{
  "name": "get_user_profile",
  "description": "Use this when you need to display or analyze a user's information, like on a dashboard or notification.",
  "parameters": {
    "user_id": "string (e.g., 'usr_1234')"
  },
  "examples": [
    {
      "input": {"user_id": "usr_1234"},
      "output": {
        "name": "Jane Doe",
        "email": "jane@example.com",
        "created_at": "2022-01-15"
      }
    }
  ]
}

LLMs don’t want an index. They want intuition.
Build for that.
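As a sketch of how a host might put those examples to work, it can render them into the model’s context as few-shot patterns instead of leaving them buried in the spec. The spec shape mirrors the JSON above; render_tool_context is an illustrative helper, not part of any MCP SDK.

import json


def render_tool_context(spec: dict) -> str:
    """Build a prompt fragment that teaches the tool by pattern, not just by schema."""
    lines = [
        f"Tool: {spec['name']}",
        f"When to use: {spec['description']}",
        f"Parameters: {json.dumps(spec['parameters'])}",
    ]
    # Each example becomes a few-shot pair the model can generalize from.
    for example in spec.get("examples", []):
        lines.append(f"Example call: {json.dumps(example['input'])}")
        lines.append(f"Example result: {json.dumps(example['output'])}")
    return "\n".join(lines)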

📚 Registries Without Relevance

“Here’s 200 tools. Figure it out.”

Today’s MCPs often send a firehose of tools to the LLM, regardless of user intent, history, or state.

It’s exhaustively complete and functionally noisy.

LLMs now spend compute on figuring out what not to call.

What building for cognition looks like:

  • Load tools based on intent, not inventory
  • Use semantic memory: “You used summarize_feedback_v2 last time. Want that again?”
  • Score tools based on confidence, not presence

The goal isn’t breadth.
It’s relevance in the moment.

You don’t need a registry. You need situational awareness.
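Here is a minimal sketch of intent-based surfacing, assuming tool specs are plain dicts shaped like the one above; the token-overlap score is a stand-in for embedding similarity or past-usage signals, and the function names are illustrative:

def score_tool(intent: str, tool: dict) -> float:
    """Crude relevance score: word overlap between the user's intent and the tool description."""
    intent_words = set(intent.lower().split())
    tool_words = set(tool["description"].lower().split())
    return len(intent_words & tool_words) / max(len(intent_words), 1)


def surface_tools(intent: str, registry: list[dict], k: int = 3, min_score: float = 0.1) -> list[dict]:
    """Expose only the handful of tools confident enough to deserve the model's attention."""
    scored = sorted(((score_tool(intent, t), t) for t in registry), key=lambda pair: pair[0], reverse=True)
    return [tool for score, tool in scored[:k] if score >= min_score]

Swap the scorer for embeddings or usage history and the shape stays the same: score, rank, and hand the model a short list instead of a registry dump.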

⚙️ Orchestration Without Adaptability

“Call A → then B → then C. No deviation allowed.”

MCPs are often designed like brittle pipelines.
One tool fails, the whole chain dies.
No fallback. No deviation. No self-correction.

That’s not AI. That’s a flowchart.

What building for cognition looks like:

  • Dynamic sequencing based on input/output state
  • Reflective retries (“That didn’t work. Let me try a different approach.”)
  • Conditional adaptation: skip unnecessary steps, recover gracefully

Think of it this way:

Hardcoded flows are for robots.
Adaptive plans are for agents.
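A minimal sketch of that kind of plan, assuming a call_tool function that invokes a named tool, and steps described as plain dicts with a tool, an input, an optional revised_input and fallback, and the key they produce; every name here is illustrative:

def run_step(call_tool, step: dict, state: dict) -> dict | None:
    """Run one step: retry with a revised input on failure, then hand off to a fallback if one exists."""
    for payload in (step["input"], step.get("revised_input")):
        if payload is None:
            continue
        try:
            return call_tool(step["tool"], {**state, **payload})
        except Exception:
            continue  # reflective retry: that didn't work, try a different approach
    fallback = step.get("fallback")
    return run_step(call_tool, fallback, state) if fallback else None


def run_plan(call_tool, steps: list[dict]) -> dict:
    """Sequence steps off the evolving state instead of a fixed chain."""
    state: dict = {}
    for step in steps:
        if step.get("produces") and step["produces"] in state:
            continue  # conditional adaptation: skip steps whose output already exists
        result = run_step(call_tool, step, state) or {}
        state.update(result)
    return state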

🔁 Rethink the Foundation

Here’s the shift we need:

Old Paradigm (Tool-Aware) → New Paradigm (Intent-Aware)
Static specs → Example-rich affordances
Full registry loads → Contextual tool surfacing
Fixed orchestration → Reflective, adaptive planning
Logs for humans → Feedback for the system itself
Control-focused → Cognition-focused

🛠️ What to Build Instead

If you want your MCP to scale with intelligence, not against it:

  1. Tools as affordances, not just endpoints
  2. Examples as first-class citizens
  3. Relevance surfaced dynamically, not globally
  4. Orchestration that emerges from intent, not control logic
  5. Feedback loops baked in from the beginning

Example of a feedback loop: after calling generate_summary, the agent checks how the user interacted with the result. Did they expand it, copy it, or regenerate it? Based on that, it adjusts its prompt next time. The feedback isn’t just logged; it’s used.
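A minimal sketch of that loop, assuming the host records the interaction as simple boolean signals; adjust_summary_prompt is a hypothetical helper, and the point is only that the signal changes the next call instead of sitting in a log:

def adjust_summary_prompt(prompt: str, interaction: dict) -> str:
    """Steer the next generate_summary call with what the user did with the last result."""
    if interaction.get("regenerated"):
        return prompt + "\nThe previous summary missed the mark; change the emphasis and structure."
    if interaction.get("expanded"):
        return prompt + "\nThe reader wanted more detail; include supporting specifics."
    return prompt  # copied or left alone: treat as success and keep the current approach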

Don’t just wire things together.
Teach the system how to think about doing it.

🤖 So How Do We Get to AGI?

Let’s be honest:

How do you see us getting anywhere near AGI without this?
We don’t.

AGI isn’t built from graphs of callable tools.
It emerges from systems that reason across tools, adapt to context, and learn from outcomes.

That means designing for cognition. Not coverage.
For goals. Not flows.

You’re not building a dashboard.
You’re building a decision-making partner.

🧭 Final Thought

The future isn’t tool-aware AI.
It’s intent-aware systems that happen to use tools.

If we want to push the boundary, we have to stop solving for control and start solving for understanding.

Because AGI won’t emerge from wiring more tools into an LLM.
It will emerge when the system learns why it’s using them.

Further Reading

The Alignment Problem by Brian Christian
How machine learning systems can be aligned with human values and intentions.

Human Compatible by Stuart Russell
A leading AI researcher's vision for creating beneficial artificial intelligence.

The Hundred-Page Machine Learning Book by Andriy Burkov
A concise overview of machine learning concepts and practical implementation.

Building Intelligent Systems by Geoff Hulten
A guide to creating machine learning systems that work in the real world.