by owebmaster on 6/18/25, 11:59 PM with 117 comments
by neya on 6/19/25, 3:21 AM
by dend on 6/19/25, 1:22 AM
by elliotbnvl on 6/19/25, 1:36 AM
by eadmund on 6/19/25, 2:51 AM
by disintegrator on 6/19/25, 11:21 AM
by swalsh on 6/19/25, 4:24 AM
by martinald on 6/19/25, 12:28 PM
1) Probably the coolest thing to happen with LLMs. While you could do all this with tool calling and APIs, being able to send less technical friends an MCP URL for Claude and watch them get going with it in a few clicks is amazing.
2) I'm using the csharp SDK, which only has auth in a branch - so very bleeding edge. I had a lot of problems implementing that - 95% of my time was spent on auth (which is required for Claude MCP integrations if you are not building locally). I'm sure this will get easier with docs, but it's pretty involved.
3) Related to that, Claude AFAIK doesn't expose much in terms of developer logs for what they are sending via their web (not desktop) app and what is going wrong. It would be super helpful to have a developer mode that showed you the request/response of errors. I had real problems with refresh on auth, and it turned out I was logging the wrong endpoint on my side. Operator error for sure, but I would have fixed that in a couple of minutes had they had better MCP logging somewhere in the web UI. It all worked fine with stdio in desktop and MCP Inspector.
4) My main question/issue is handling longer-running tasks. The dataset I'm exposing is effectively a load of PDF documents, as I can't get Claude to handle the PDF files itself (I am all ears if there is a way!). What I'm currently doing is sending them through Gemini to get the text, then sending that to the user via MCP. This works fine for short/simple documents, but for longer documents (which can take some time to process) I return a message saying it is processing and to retry later.
While I'm aware there is a progress API, it still requires keeping the connection open to the server (which times out after a while, with Cloudflare at least - could be wrong here though). It would be much better to be able to tell the LLM to check back in x seconds when you predict the task will be done, so the LLM can do other stuff in the background (which it will do) but then sort of 'pause execution' until the timer is hit.
Right now (AFAIK!) you can either keep it waiting with an open connection and progress updates (which means it can't do anything else in the meantime), or you can return a job ID, but then it will just return a half-finished answer, which is often misleading as it doesn't have all the context yet. Don't know if this makes sense, but I can imagine this being a real pain for tasks that take 10mins+.
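The "return a job ID and let the client poll" pattern described above can be sketched in plain Python. This is a hypothetical in-memory job store (the names `submit_job`/`poll_job` and the `retry_after_seconds` hint are my own illustration, not part of the MCP spec); the idea is that the first tool call returns immediately with an ID, and a second polling tool reports status plus a suggested retry delay instead of a half-finished answer:

```python
import threading
import uuid

# Hypothetical in-memory job store: submit_job kicks off background work
# and returns a job ID immediately; poll_job reports status so the model
# (or any client) can check back instead of holding a connection open.
_jobs = {}

def submit_job(task, *args):
    job_id = str(uuid.uuid4())
    _jobs[job_id] = {"status": "processing", "result": None}

    def run():
        result = task(*args)
        _jobs[job_id] = {"status": "done", "result": result}

    threading.Thread(target=run, daemon=True).start()
    return job_id

def poll_job(job_id):
    job = _jobs.get(job_id)
    if job is None:
        return {"status": "unknown"}
    if job["status"] == "processing":
        # Suggest a retry delay rather than returning partial text the
        # model might mistake for the full answer.
        return {"status": "processing", "retry_after_seconds": 30}
    return job
```

Each of the two functions would be exposed as its own MCP tool; the open question in the comment above is that there is currently no protocol-level way to make the model actually wait the suggested interval before polling again.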
by gotts on 6/19/25, 3:45 AM
by dgellow on 6/20/25, 10:16 AM
by Aeolun on 6/19/25, 4:22 AM
by whazor on 6/19/25, 9:59 AM
by jjfoooo4 on 6/19/25, 2:37 AM
If I'm building a local program, I am going to want tighter control over the toolsets my LLM calls have access to.
E.g. an MCP server for Google Calendar. MCP is not saving me significant time - I can access the same APIs the MCP can. I probably need to carefully instruct the LLM on when and how to use the Google Calendar calls, and I don't want to delegate that to a third party.
I also do not want to spin up a bunch of arbitrary processes in whatever runtime environment the MCP is written in. If I'm writing in Python, why do I want my users to have to set up a typescript runtime? God help me if there's a security issue in the MCP wrapper for language_foo.
On the server, things get even more difficult to justify. We have a great tool for having one machine call a process hosted on another machine without knowing its implementation details: the RPC. MCP just adds a bunch of opinionated middleware (and security holes).
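For context on the RPC comparison: under the hood MCP is itself JSON-RPC 2.0, and a tool invocation is just a `tools/call` request. A minimal sketch of the wire message (the tool name and arguments here are illustrative, not from any real calendar server):

```python
import json

# An MCP tool invocation is a JSON-RPC 2.0 request with method "tools/call".
# The tool name and arguments below are made up for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "list_calendar_events",
        "arguments": {"calendar_id": "primary", "max_results": 10},
    },
}
wire = json.dumps(request)
```

The "opinionated middleware" is everything layered on top of this envelope: capability negotiation, auth, transports, and session handling.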
by fennecfoxy on 6/19/25, 9:39 AM
by freed0mdox on 6/19/25, 1:41 AM
by practal on 6/19/25, 12:08 PM
by surfingdino on 6/19/25, 6:56 AM
Yeah, let's pretend it works. So far, structured output from an LLM is an exercise in programmers' ability to code defensively against responses that may or may not be valid JSON, may not conform to the schema, or may just be null. There's a new cottage industry of modules that automate dealing with this crap.
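The defensive coding described above tends to look something like this minimal sketch (the helper name `parse_model_json` and its return convention are hypothetical, not from any particular library): tolerate a null response, strip the markdown code fence models often wrap JSON in, catch parse errors, and check required keys.

```python
import json

def parse_model_json(raw, required_keys):
    """Defensively parse an LLM response that should be a JSON object.

    Returns (data, error): data is None when the response is missing,
    not valid JSON, not an object, or lacks a required key.
    """
    if raw is None:
        return None, "empty response"
    text = raw.strip()
    # Models often wrap JSON in a markdown code fence; strip it off.
    if text.startswith("```"):
        text = text.strip("`")
        if text.startswith("json"):
            text = text[4:]
    try:
        data = json.loads(text)
    except json.JSONDecodeError as exc:
        return None, f"invalid JSON: {exc}"
    if not isinstance(data, dict):
        return None, "not a JSON object"
    missing = [k for k in required_keys if k not in data]
    if missing:
        return None, f"missing keys: {missing}"
    return data, None
```

The cottage-industry modules mostly automate exactly this plus retries: re-prompting the model with the validation error until something conformant comes back.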
by troupo on 6/19/25, 6:43 AM
by TOMDM on 6/19/25, 1:28 AM
https://modelcontextprotocol.io/specification/2025-06-18/cha...