
Claude did not say "don't use MCP because it pollutes the context."

What they said was: don't pollute your context with lots of tool definitions, whether they come from MCP or not. You'll see the same problem if you have hundreds of skills, with their names and descriptions chewing up tokens.

Their solution is to let the agent search for and discover tools as needed; it's a general concept that applies across tool types (MCP, functions, code use, skills).

https://www.anthropic.com/engineering/advanced-tool-use
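
A minimal sketch of the search-and-discover idea (all names hypothetical; keyword overlap stands in for whatever real index, embeddings or BM25, you'd actually use): expose a single search tool to the model and inject only the few matching definitions into context, instead of all of them up front.

    # Catalog of tool definitions (from MCP servers, functions, skills, ...);
    # in practice this would be loaded from your registries.
    CATALOG = {
        "github_create_pr": "Open a pull request on a GitHub repository",
        "jira_create_issue": "File a ticket in a Jira project",
        "sql_run_query": "Run a read-only SQL query against the warehouse",
        # ... hundreds more in a real deployment
    }

    def search_tools(query: str, limit: int = 3) -> list[dict]:
        """The one tool the agent always sees; everything else is discovered."""
        words = set(query.lower().split())
        scored = sorted(
            ((len(words & set(desc.lower().split())), name, desc)
             for name, desc in CATALOG.items()),
            reverse=True,
        )
        return [{"name": n, "description": d} for s, n, d in scored if s > 0][:limit]

The agent calls search_tools("open a PR for this branch") and only then gets the github_create_pr definition loaded into its context.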


That's a context pollution problem, not an MCP problem.

https://www.anthropic.com/engineering/advanced-tool-use


Building a RAG to search for the correct MCP tools is a band-aid.

1. It's not just about MCP; if you have hundreds of skills, you are going to have the same context issues.

2. What they described was delegation to a subagent that selects the tools to be made available, which sounded like it got the whole list and did "RAG" on the fly, like any model would.

You're going to want to provide your agent with search, RAG, and subagent context gathering (plus pruning/compaction/management) that can work across the internet, code bases, large tool/skill sets, and past interaction history. All of that can be presented to your main agent as a single tool or a few tools; that is the broader meta-pattern emerging.
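
A sketch of the subagent variant (select_tools and the complete callback are hypothetical; complete is any text-in/text-out model call): the subagent burns its own context window on the full list, and the main agent only ever sees the shortlist.

    import json

    def select_tools(task: str, catalog: dict[str, str], complete) -> list[str]:
        """Subagent pass: read the full tool list, return a shortlist of names.

        `complete` is any text-in/text-out model call (hypothetical here).
        """
        listing = "\n".join(f"- {name}: {desc}" for name, desc in catalog.items())
        prompt = (
            "Pick at most 5 tools needed for this task. "
            "Reply with a JSON array of tool names only.\n\n"
            f"Task: {task}\n\nTools:\n{listing}"
        )
        return json.loads(complete(prompt))

    # The main agent is then handed only the definitions for the returned
    # names, keeping the hundreds of other tool defs out of its context.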


Or what if we wrote these things in a language with real imports and modules?

I'm authoring equivalents in CUE, and assimilating the "standard" provider ones into CUE on the fly so my agent can work with all the shenanigans out there.


> Standardizing a patch isn’t something I’d expect from Anthropic

This is not the first time; perhaps an expectation adjustment is in order. This is also the same company with an exec telling people in his Discord (15 minutes of fame recently) that Claude has emotions.


IETF maintains the HTTP standard

https://httpwg.org/

The IETF is involved in protocol standards, and MCP/A2A certainly fall in that category; skills, less so.


It's not pessimism, but actual compatibility issues,

like the Deno vs npm package ecosystems, which didn't work together for many years.

There are multiple intermixed and inconsistent concepts, all out in the wild: AGENTS vs CLAUDE vs .github/instructions; skills vs commands; ...

When I work on a project, do all the files align? If I work in an org where developers have agent choice, how many of these instruction and skill "distros" do I need to put in (pollute?) my repo?
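
Concretely, a repo trying to accommodate everyone's agent today can end up carrying something like this (a hypothetical but representative layout):

    repo/
      AGENTS.md                     # agent instructions (Codex and others)
      CLAUDE.md                     # Claude Code instructions
      .github/
        copilot-instructions.md     # GitHub Copilot instructions
        instructions/               # per-path Copilot instructions
      .claude/
        commands/
        skills/
      .cursor/
        rules/

Same intent, several formats, and any of them can drift out of sync with the others.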


Skills have been really helpful on my team as we've been encoding tribal knowledge into something that other developers can easily take advantage of. For example, our backend architecture has hidden patterns that, once encoded in a skill, can be followed by full-stack devs doing work there, saving a ton of time in coding and PR review.

We then hit the problem of how best to share these and keep them up to date, especially across multiple repositories. It led us to build sx - https://github.com/sleuth-io/sx - a package manager for AI tools.
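
For reference, a skill in this sense is just a directory with a SKILL.md whose front matter carries the name and description the agent matches on. A stripped-down, hypothetical example of encoding one of those hidden backend patterns:

    ---
    name: backend-endpoint-patterns
    description: Conventions to follow when adding endpoints to our backend services
    ---

    When adding a new endpoint:

    1. Define the request/response types in the shared schema package first.
    2. Keep route handlers thin; business logic lives in the service layer.
    3. Add an integration test that exercises the handler through the router.

The name and description are what cost tokens up front; the body is only loaded when the skill is actually used.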


Depending on your workflow, none.

While I do a lot of agentic development in personal projects at this point, at work it's super rare beyond quick lookups for things I should already know but can't be arsed to remember exactly (like writing one-off SQL scripts that do batched mutations and similar).


It's a "standard" though! /s

Skills are specific, contextual, and persistent (stateful) whereas LLMs are not

It isn't between LLM and skill; it's between agent and skill. Orgs that invest in skills will duplicate what they could do once in an agent. Orgs that "buy" skills from a provider will need to tweak them endlessly. Multi-skill workflows will have semantic-layer mismatches.

Skills are a great sleight of hand by Anthropic to get people to think Claude Code is a platform. There is no there there. Orgs will figure this out.

Cheers.


Have you checked out Dagger?

It's what the people who created OG Docker are building now


Dagger is one of those things I want to like, but find incredibly painful to use in practice.

I tried it but wasn't a fan. I attempted to convert one of our Actions workflows, and that proved to be a PITA I gave up on. It seems the project is now pivoting into AI stuff.

Well, one of them.

Am I reading step (7) of the data flow correctly?

1. Establish SSE connection

... user event

7. send updates over origin SSE connection

So the client is required to maintain an SSE-capable connection for the entire chat session? What if my network drops, or I switch to another agent?

Seems an onerous requirement to maintain a connection for the lifetime of a session, which can span days (as some people have told us they have done with agents).
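
For what it's worth, SSE does have a reconnection story: servers can assign event ids, and a client that drops can resume by sending the last id it saw in a Last-Event-ID header, assuming the server can replay from there. A minimal client sketch (hypothetical endpoint; requests is the third-party HTTP library):

    import time
    import requests

    def stream_events(url: str):
        """Consume an SSE stream, resuming after drops via Last-Event-ID."""
        last_event_id = None
        while True:
            headers = {"Accept": "text/event-stream"}
            if last_event_id:
                # Ask the server to replay anything we missed (if supported)
                headers["Last-Event-ID"] = last_event_id
            try:
                with requests.get(url, headers=headers, stream=True,
                                  timeout=(5, None)) as resp:
                    resp.raise_for_status()
                    for line in resp.iter_lines(decode_unicode=True):
                        if not line:
                            continue  # blank lines delimit events
                        if line.startswith("id:"):
                            last_event_id = line[3:].strip()
                        elif line.startswith("data:"):
                            yield line[5:].strip()
            except requests.RequestException:
                time.sleep(2)  # network dropped; back off and reconnect

Whether that helps here depends on the spec requiring servers to buffer and replay events across a session that spans days, which is its own onerous requirement.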

