from Hacker News

MCP Specification – version 2025-06-18 changes

by owebmaster on 6/18/25, 11:59 PM with 117 comments

  • by neya on 6/19/25, 3:21 AM

    One of the biggest lessons for me while riding the MCP hype was that if you're writing backend software, you don't actually need MCP. Architecturally, it doesn't make sense. At least not in Elixir, anyway. One server per API? That sounds crazy if you're doing backend work: that's 500 different microservices for 500 APIs. After working with 20 different MCP servers, it finally dawned on me that good ole' function calling (which is what MCP is under the hood) works just fine. And each API can be its own module instead of a server. So there's no need to keep up with the latest MCP spec, nor to update hundreds of microservices because the spec changed. Needless complexity.
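A minimal sketch of the commenter's alternative: instead of one MCP server per API, each API is just a module of plain functions, and the model's native function/tool calling dispatches to them by name. All names here (`get_weather`, `create_ticket`, `TOOLS`) are illustrative, not from any real codebase.

```python
def get_weather(city: str) -> dict:
    """One 'tool' per function; in practice this would call a real API."""
    return {"city": city, "temp_c": 21}

def create_ticket(title: str) -> dict:
    return {"id": 1, "title": title}

# A flat registry replaces N microservices: dispatch by tool name.
TOOLS = {
    "get_weather": get_weather,
    "create_ticket": create_ticket,
}

def dispatch(tool_call: dict) -> dict:
    """tool_call mimics the {name, arguments} shape model APIs emit."""
    fn = TOOLS[tool_call["name"]]
    return fn(**tool_call["arguments"])

result = dispatch({"name": "get_weather", "arguments": {"city": "Oslo"}})
```

Adding a new API is then a new module plus a registry entry, with no new process to deploy or spec version to track.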
  • by dend on 6/19/25, 1:22 AM

    I am just glad that we now have a simple path to authorized MCP servers. Massive shout-out to the MCP community and folks at Anthropic for corralling all the changes here.
  • by elliotbnvl on 6/19/25, 1:36 AM

    Fascinated to see that the core spec is written in TypeScript and not, say, an OpenAPI spec or something. I suppose it makes sense, but it’s still surprising to see.
  • by eadmund on 6/19/25, 2:51 AM

    It is mostly pointless complexity, but I’m going to miss batching. It was kind of neat to be able to say ‘do all these things, then respond,’ even if the client can batch the responses itself if it wants to.
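For context, the batching being removed was plain JSON-RPC 2.0 batching: an array of request objects sent as one message, with responses matched back by `id`. A sketch of what such a batch looked like (method names are real MCP methods; the surrounding code is illustrative):

```python
import json

# A JSON-RPC 2.0 batch is just an array of request objects; the server
# could reply with an array of responses in any order, matched by id.
batch = [
    {"jsonrpc": "2.0", "id": 1, "method": "tools/list"},
    {"jsonrpc": "2.0", "id": 2, "method": "resources/list"},
]
payload = json.dumps(batch)
```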
  • by disintegrator on 6/19/25, 11:21 AM

    The introduction of the WWW-Authenticate challenge is so welcome. Now it's much clearer that the MCP server can punt the client to the resource provider's OAuth flow and sit back waiting for an `Authorization: Bearer ...` header.
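A rough sketch of that challenge flow, assuming a server that returns 401 with a `WWW-Authenticate` header pointing the client at its protected-resource metadata, then accepts a Bearer token on retry. The URL and handler shape are illustrative:

```python
def handle(request_headers: dict) -> tuple:
    """Return (status, response_headers) for an incoming MCP request."""
    auth = request_headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        # Challenge: tell the client where to discover the OAuth details,
        # then sit back and wait for it to come back with a token.
        return (401, {"WWW-Authenticate":
            'Bearer resource_metadata='
            '"https://mcp.example.com/.well-known/oauth-protected-resource"'})
    return (200, {})

status, headers = handle({})  # no token yet -> 401 challenge
```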
  • by swalsh on 6/19/25, 4:24 AM

    Elicitation is a big win. One of my favorite MCP servers is an SSH server I built; it lets me automate 90% of the server tasks I need done. I handled authentication via a config file, but that's kind of a pain to manage when I want to access a new server.
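Elicitation lets a server ask the user for structured input mid-interaction, which fits this use case: instead of pre-configuring credentials in a file, the SSH server could request them on demand. A rough sketch of the request shape (the message and schema fields are illustrative; check the spec for the exact format):

```python
# Approximate shape of an elicitation request from server to client.
elicitation = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "elicitation/create",
    "params": {
        "message": "Enter connection details for the new host",
        "requestedSchema": {
            "type": "object",
            "properties": {
                "host": {"type": "string"},
                "username": {"type": "string"},
            },
            "required": ["host", "username"],
        },
    },
}
```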
  • by martinald on 6/19/25, 12:28 PM

    I spent the last few days playing with MCP building some wrappers for some datasets. My thoughts:

    1) Probably the coolest thing to happen with LLMs. While you could do all this with tool calling and APIs, being able to send less technical friends an MCP URL for Claude and see them get going with it in a few clicks is amazing.

    2) I'm using the C# SDK, which only has auth in a branch - so very bleeding edge. I had a lot of problems implementing that - 95% of my time was spent on auth (which is required for Claude MCP integrations if you are not building locally). I'm sure this will get easier with docs, but it's pretty involved.

    3) Related to that, Claude doesn't, AFAIK, expose much in terms of developer logs for what their web (not desktop) app is sending and what is going wrong. It would be super helpful to have a developer mode that showed you the request/response of errors. I had real problems with refresh on auth, and it turned out I was logging the wrong endpoint on my side. Operator error for sure, but I would have fixed it in a couple of minutes if they had better MCP logging somewhere in the web UI. It all worked fine with stdio in desktop and MCP Inspector.

    4) My main question/issue is handling longer-running tasks. The dataset I'm exposing is effectively a load of PDF documents, as I can't get Claude to handle the PDF files itself (I am all ears if there is a way!). What I'm currently doing is sending them through Gemini to get the text, then sending that to the user via MCP. This works fine for short/simple documents, but for longer documents (which can take some time to process) I return a message saying it is processing and to retry later.

    While I'm aware there is a progress API, it still requires keeping the connection open to the server (which times out after a while, with Cloudflare at least - could be wrong here though). It would be much better to be able to tell the LLM to check back in x seconds, when you predict the job will be done, and have the LLM do other stuff in the background (which it will do) but then sort of 'pause execution' until the timer is hit.

    Right now (AFAIK!) you can either keep it waiting with an open connection and progress updates (which means it can't do anything else in the meantime), or you can return a job ID, but then it will just return a half-finished answer, which is often misleading as it doesn't have all the context yet. Don't know if this makes any sense, but I can imagine this being a real pain for tasks that take 10+ minutes.
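The job-ID workaround described above can be sketched as a pair of tools: one that kicks off the work and returns immediately, and one the model can call later to poll. All names are illustrative, the "work" is simulated with a timer, and note that nothing here actually pauses the LLM - that is exactly the gap the commenter is pointing at:

```python
import time

JOBS = {}  # job_id -> {"status", "result", "done_at"}

def start_extraction(doc: str) -> dict:
    """Tool 1: enqueue the slow work and return a handle right away."""
    job_id = f"job-{len(JOBS) + 1}"
    JOBS[job_id] = {"status": "processing", "result": None,
                    "done_at": time.time() + 0.01}  # simulated duration
    return {"job_id": job_id, "retry_after_s": 30}

def check_job(job_id: str) -> dict:
    """Tool 2: the model polls this until status flips to 'done'."""
    job = JOBS[job_id]
    if job["result"] is None and time.time() >= job["done_at"]:
        job.update(status="done", result=f"extracted text for {job_id}")
    return {"status": job["status"], "result": job["result"]}

ticket = start_extraction("report.pdf")
time.sleep(0.02)  # stand-in for the model doing other things
final = check_job(ticket["job_id"])
```

The misleading-half-answer problem arises when the model answers from `{"status": "processing"}` instead of polling again; there is currently no protocol-level way to force it to wait.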

  • by gotts on 6/19/25, 3:45 AM

    Very glad to see the MCP specification's rapid improvement. With each new release I notice something that I was missing in my MCP integrations.
  • by dgellow on 6/20/25, 10:16 AM

    Oh nice! I know what to do over the weekend: I will be updating my MCP OAuth proxy :D
  • by Aeolun on 6/19/25, 4:22 AM

    Funny that changes to the spec require a single approval before being merged xD
  • by whazor on 6/19/25, 9:59 AM

    It would make sense to have a 'resource update/delete' approval workflow, where an MCP server can present an OAuth link just to approve that particular step.
  • by jjfoooo4 on 6/19/25, 2:37 AM

    It's very hard for me to understand what MCP solves aside from providing a quick and dirty way to prototype something on my laptop.

    If I'm building a local program, I am going to want tighter control over the toolsets my LLM calls have access to.

    E.g. an MCP server for Google Calendar. MCP is not saving me significant time - I can access the same APIs the MCP server can. I probably need to carefully instruct the LLM on when and how to use the Google Calendar calls, and I don't want to delegate that to a third party.

    I also do not want to spin up a bunch of arbitrary processes in whatever runtime environment the MCP server is written in. If I'm writing in Python, why do I want my users to have to set up a TypeScript runtime? God help me if there's a security issue in the MCP wrapper for language_foo.

    On the server, things get even harder to justify. We already have a great tool for letting one machine call a process hosted on another machine without knowing its implementation details: the RPC. MCP just adds a bunch of opinionated middleware (and security holes).

  • by fennecfoxy on 6/19/25, 9:39 AM

    Did they fix evil MCP servers/prompt injection/data exfiltration yet?
  • by freed0mdox on 6/19/25, 1:41 AM

    What MCP is missing is a reasonable way to do async callbacks, where you can have the MCP server query the model with a custom prompt plus the results of some operation.
  • by practal on 6/19/25, 12:08 PM

    Does any popular MCP host support sampling by now?
  • by surfingdino on 6/19/25, 6:56 AM

    > structured tool output

    Yeah, let's pretend it works. So far structured output from an LLM is an exercise in programmers' ability to code defensively against responses that may or may not be valid JSON, may not conform to the schema, or may just be null. There's a new cottage industry of modules that automate dealing with this crap.
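The defensive dance being described looks roughly like this: strip any markdown fences the model wrapped around the output, attempt the JSON decode, validate the required keys, and fall back to `None` rather than trusting the result. A hypothetical sketch (function name and validation policy are illustrative):

```python
import json

def parse_structured(raw: str, required: set):
    """Best-effort parse of 'structured' LLM output; None on any failure."""
    text = raw.strip()
    if text.startswith("```"):
        # Models often wrap JSON in a code fence despite instructions.
        text = text.strip("`")
        # Drop an optional language tag like "json" on the first line.
        text = text.split("\n", 1)[-1] if "\n" in text else text
    try:
        obj = json.loads(text)
    except (json.JSONDecodeError, TypeError):
        return None
    if not isinstance(obj, dict) or not required <= obj.keys():
        return None  # valid JSON but missing schema-required keys
    return obj

ok = parse_structured('```json\n{"name": "a", "id": 1}\n```', {"name", "id"})
bad = parse_structured('Sure! Here is the JSON you asked for:', {"name"})
```

This only scratches the surface; the cottage-industry libraries additionally handle truncated output, trailing commas, and schema-level validation with retries.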

  • by troupo on 6/19/25, 6:43 AM

    They keep inventing new terms for things. This time it's "Elicitation" for user input.
  • by TOMDM on 6/19/25, 1:28 AM

    Maybe the Key Changes page would be a better link if we're concerned with a specific version?

    https://modelcontextprotocol.io/specification/2025-06-18/cha...