Hi HN, Jack here! I'm one of the creators of MonkeyPatch, an easy tool that helps you build LLM-powered functions and apps that get cheaper and faster the more you use them.
For example, if you need to classify PDFs, extract product feedback from tweets, or auto-generate synthetic data, you can spin up an LLM-powered Python function in <5 minutes to power your application. Unlike existing LLM clients, these functions generate well-typed outputs with guardrails to mitigate unexpected behavior.
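To make that concrete, here's a minimal sketch of the idea (the decorator, the model stub, and every name below are invented for illustration, not MonkeyPatch's actual API): the decorator routes the function's docstring and arguments through a model, then validates the response against the declared return type before handing it back.

```python
import typing
from functools import wraps

def llm_function(call_model):
    """Hypothetical decorator: routes a stub function through a model
    and enforces the return type annotation as a guardrail."""
    def decorate(fn):
        hints = typing.get_type_hints(fn)
        return_type = hints.pop("return")
        @wraps(fn)
        def wrapper(*args, **kwargs):
            raw = call_model(fn.__doc__, args, kwargs)  # LLM call stand-in
            if not isinstance(raw, return_type):  # guardrail: reject ill-typed output
                raise TypeError(
                    f"model returned {type(raw).__name__}, "
                    f"expected {return_type.__name__}"
                )
            return raw
        return wrapper
    return decorate

# Stubbed "model" so the sketch runs without an API key
def fake_model(doc, args, kwargs):
    return "positive"

@llm_function(fake_model)
def classify_feedback(tweet: str) -> str:
    """Classify the sentiment of a product-feedback tweet."""

print(classify_feedback("Love the new release!"))  # -> positive
```

Note that a function body consisting of only a docstring is valid Python, which is what lets the stub above compile.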
After about 200-300 calls, these functions begin to get cheaper and faster. We've seen an 8-10x reduction in cost and latency in some use cases! This happens via progressive knowledge distillation: MonkeyPatch incrementally fine-tunes smaller, cheaper models in the background, tests them against the constraints defined by the developer, and retains the smallest model that meets the accuracy requirements, which typically means significantly lower cost and latency.
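The selection step can be illustrated with a toy sketch (the model names, costs, and accuracy numbers below are all invented): fine-tuned candidates are ranked by cost, and the cheapest one that clears the developer's accuracy bar is retained.

```python
# Toy illustration of "retain the smallest model that meets accuracy
# requirements". Candidates and numbers are invented for the sketch.
candidates = [
    {"name": "distilled-s", "cost_per_call": 0.0002, "accuracy": 0.81},
    {"name": "distilled-m", "cost_per_call": 0.0010, "accuracy": 0.93},
    {"name": "teacher-llm", "cost_per_call": 0.0120, "accuracy": 0.97},
]

def select_model(candidates, required_accuracy):
    # Walk from cheapest to most expensive; keep the first that passes.
    for model in sorted(candidates, key=lambda m: m["cost_per_call"]):
        if model["accuracy"] >= required_accuracy:
            return model
    raise RuntimeError("no candidate meets the accuracy requirement")

chosen = select_model(candidates, required_accuracy=0.90)
print(chosen["name"])  # -> distilled-m
```

With a 0.90 accuracy bar, the mid-sized distilled model wins over the teacher despite the teacher being more accurate, which is where the cost and latency savings come from.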
As an LLM researcher, I kept getting asked by startups and friends to build specific LLM features they could embed into their applications. I realized that most developers have to either 1) use existing low-level LLM clients (GPT-4/Claude), which can be unreliable, untyped, and pricey, or 2) pore over LangChain documentation for days to build something.
We built MonkeyPatch to make it easy for developers to inject LLM-powered functions into their code and create tests to ensure they behave as intended. Our goal is to help developers easily build apps and functions without worrying about reliability, cost, and latency, while following best software engineering practices.
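A plain-Python sketch of what such behavioural tests might look like (the function, its stub body, and the expectations are all invented for illustration; in MonkeyPatch the asserts would exercise an LLM-backed function rather than a hand-written one):

```python
def is_refund_request(email: str) -> bool:
    """Stand-in for an LLM-powered classifier; the real thing
    would delegate to a model rather than keyword-match."""
    return "refund" in email.lower()

def align_is_refund_request():
    # Developer-written expectations, expressed as plain asserts.
    assert is_refund_request("I want a refund for my order") is True
    assert is_refund_request("Loving the product so far!") is False

align_is_refund_request()
print("behavioural tests passed")
```

The appeal of the assert style is that the expectations double as ordinary unit tests, so they fit into an existing test suite without a new DSL.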
We're currently Python-only but actively working on a TypeScript version. The repo has all the instructions you need to get up and running in a few minutes.
The world of LLMs is changing by the day and so we're not 100% sure how MonkeyPatch will evolve. For now, I'm just excited to share what we've been working on with the HN community. Would love to know what you guys think!
Open-source repo: https://github.com/monkeypatch/monkeypatch.py
Sample use-cases: https://github.com/monkeypatch/monkeypatch.py/tree/master/ex...
Benchmarks: https://github.com/monkeypatch/monkeypatch.py#scaling-and-fi...
by p10jkle on 11/15/23, 4:25 PM
Nice! If I were to write a test for invariant aspects of the function (e.g., that it produces valid JSON), will the system guarantee that those invariants are fulfilled? I suppose naively you could do this by calling over and over and 'telling off' the model if it didn't get it right.
by Scipio_Afri on 11/16/23, 2:03 AM
Can I use open source LLMs with this? Would be great if everything was available self hosted with open source models.
by ipsum2 on 11/16/23, 5:26 AM
This is like calling a python package "ListComprehension", that loops through a list and calls OpenAI's API on each item. Confusing and unproductive.
by mitthrowaway2 on 11/16/23, 7:12 AM
There seems to be a lot of (justified) concern about the name. Maybe call it LLMonkeyPatch?
by m_vyas123 on 11/15/23, 6:44 PM
Hey Jack! Thanks for sharing this. The incremental fine-tuning of smaller and cheaper models for cost reduction is definitely a really interesting differentiator.
I had a few questions regarding the reliability of the LLM-powered functions MonkeyPatch facilitates and the testing process. How does MonkeyPatch ensure the reliability of LLM-powered functions it helps developers create, and do the tests employed provide sufficient confidence in maintaining consistent output? If tests fall short of 100% guarantee, how does MonkeyPatch address concerns similar to historical challenges faced with testing traditional LLMs? Thanks.
by OJFord on 11/16/23, 12:38 AM
Why 'Monkeypatch', when it's for Python, where that has an established and as far as I can tell (?) completely irrelevant meaning?
by CyberDildonics on 11/16/23, 2:20 AM
MonkeyPatch is a specific programming term that people have been using for decades. What would possess someone to name a programming tool "MonkeyPatch" when the tool doesn't even have anything to do with patching?
by sweetgiorni on 11/16/23, 11:54 AM
Slightly tangential: is it unfair/unreasonable to judge a project by its name? It's hard not to interpret this project's name as the result of poor judgement. Is that sufficient cause to write off the project entirely? That may seem a tad dramatic but I feel that it's a fairly strong signal for how little effort I need to put into evaluating it.
by babyshake on 11/16/23, 7:37 AM
Not including "pass" in a function definition in Python means the code won't compile, and if we're using VSCode, PyCharm, etc., our IDEs will complain whenever the code is viewed. Is this an intentional design decision?
by whoiskatrin on 11/15/23, 6:30 PM
Would love to try a typescript implementation. Any plans to do that?
by angryemu on 11/15/23, 3:32 PM
Tests to align your model seem neat. How reliable is it? Won't models still hallucinate from time to time? How do you think about performance monitoring/management?
by ian_dot_so on 11/15/23, 4:56 PM
This is really interesting! What would be a good example of when I would want to use monkeypatch vs langchain or OpenAI functions?
by lamroger on 11/16/23, 12:25 PM
The guardrails are cool!
I think more details of where the data goes and when it goes from few-shot to fine-tune will be helpful.
by vutch on 11/18/23, 8:43 AM
Gave it a shot; quite impressed.
I'm implementing the Bedrock interface (OpenAI access is limited from my location). Looks promising.
I'll check out fine-tuning with Bedrock, though I'm not sure whether that's possible.
Appreciate your work!
by pietz on 11/15/23, 11:30 PM
Could you explain the differences to Marvin AI? I see a large overlap.
by eychu94 on 11/16/23, 5:47 AM
Awesome stuff! What other potential integrations are on the roadmap?
by jondwillis on 11/16/23, 4:20 AM
Where in the codebase are you performing the distillation process?
by fudged71 on 11/16/23, 5:10 AM
This is incredibly cool, I’m excited to try it out
by jackmcclelland on 11/15/23, 4:39 PM
this is super cool! what's the use case you're most excited about?
by jacoboplu on 11/15/23, 4:46 PM
Super cool Jack