by justintorre75 on 3/23/23, 6:25 PM with 72 comments
Helicone's core technology is a proxy that routes all your OpenAI requests through our edge-deployed Cloudflare Workers. These workers are highly reliable and add no discernible latency in production environments. As a proxy, we offer more than just observability: we provide caching and prompt formatting, and we'll soon add user rate limiting and model-provider backoff to make sure your app stays up when OpenAI is down.
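Here's a rough sketch of what a proxied request with caching looks like; the `Helicone-Cache-Enabled` header shown is illustrative, so check the docs for the exact name:

```typescript
// Sketch: the same OpenAI completion call, but routed through the proxy.
// Caching is opted into per request via a header (name assumed from the docs).
const res = await fetch("https://oai.hconeai.com/v1/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    "Helicone-Cache-Enabled": "true",
  },
  body: JSON.stringify({
    model: "text-davinci-003",
    prompt: "Hello",
    max_tokens: 16,
  }),
});
console.log(await res.json());
```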
Our web application then provides insights into key metrics, such as which users are disproportionately driving costs and token usage broken down by prompt. You can filter this data based on custom logic and export it to other destinations.
Getting started with Helicone is quick and easy, regardless of the OpenAI SDK you use. Our proxy-based solution does not require a third-party package: simply change your request's base URL from https://api.openai.com/v1 to https://oai.hconeai.com/v1. Helicone integrates with LangChain, LlamaIndex, and all other OpenAI-native libraries. (https://docs.helicone.ai/quickstart/integrate-in-one-line-of...)
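For example, with the v3-era OpenAI Node SDK, the swap is a single configuration line (a sketch; adapt to whichever SDK you use):

```typescript
import { Configuration, OpenAIApi } from "openai";

// The only change from a stock setup is basePath.
const configuration = new Configuration({
  apiKey: process.env.OPENAI_API_KEY,
  basePath: "https://oai.hconeai.com/v1", // was https://api.openai.com/v1
});
const openai = new OpenAIApi(configuration);

const completion = await openai.createCompletion({
  model: "text-davinci-003",
  prompt: "Say hello",
});
console.log(completion.data.choices[0].text);
```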
We have exciting new features coming up, one of which is an API to log user feedback. For instance, if you're developing a tool like GitHub Copilot, you can log whether a user accepted or rejected a suggestion. Helicone will then aggregate result quality into metrics and suggest where fine-tuning could save you costs or improve performance.
Before launching Helicone, we developed several projects with GPT-3, including airapbattle.com, tabletalk.ai, and dreamsubmarine.com. For each project, we used a beta version of Helicone which gave us instant visibility into user engagement and result quality issues. As we talked to more builders and companies, we realized they were spending too much time building in-house solutions like this and that existing analytics products were not tailored to inference endpoints like GPT-3.
Helicone is developed under the Commons Clause v1.0 with Apache 2.0 license so that you can use Helicone within your own infrastructure. If you do not want to self-host, we provide a hosted solution with 1k requests free per month to try the product. Beyond that we offer a paid subscription; you can view our pricing at https://www.helicone.ai/pricing.
We're thrilled to introduce Helicone to the Hacker News community and would love to hear your thoughts, ideas, and experiences related to LLM logging and analytics. We're eager to engage in meaningful discussions, so please don't hesitate to share your insights and feedback with us!
by ianbicking on 3/23/23, 8:20 PM
One thing I would really like to store with my requests is the template and parameters that created the concrete prompt. (This gets a little confusing with ChatGPT APIs, since the prompt is a sequence of messages.) Custom Properties allow a little metadata, but not a big blob like a template. I see there's a way to have Helicone do the template substitution, but I don't want that; I have very particular templating desires. But I _do_ want to be able to understand how the prompt was constructed. There's some risk on the client side that I'd send along data that didn't actually inform the prompt construction, ballooning storage or causing privacy issues, so the feature isn't without danger.
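For reference, my understanding is that Custom Properties ride along as request headers, something like this sketch (header prefix assumed from the docs), which works for a template *name* but not the template itself:

```typescript
// Sketch: tagging a request with small metadata via Custom Properties.
// The "Helicone-Property-*" header prefix is assumed from the docs;
// these fit short values like a template name, not a full template blob.
const res = await fetch("https://oai.hconeai.com/v1/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    "Helicone-Property-Template": "onboarding-email-v3", // hypothetical name
  },
  body: JSON.stringify({ model: "text-davinci-003", prompt: "..." }),
});
```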
Backoffs and other rate limiting sound great. It would be great to set a maximum cost for a user or app and have the proxy block the user once it was reached, as a kind of firewall against overuse.
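Something shaped like this, if it lived at the proxy layer (entirely hypothetical, not Helicone's API; the storage helper is a stand-in):

```typescript
// Hypothetical sketch of a per-user spend cap enforced at the proxy.
// getSpend stands in for whatever store the proxy uses (e.g. KV).
const MAX_MONTHLY_COST_USD = 25;

async function handle(
  req: Request,
  getSpend: (user: string) => Promise<number>
): Promise<Response> {
  const user = req.headers.get("Helicone-User-Id") ?? "anonymous"; // header name assumed
  if ((await getSpend(user)) >= MAX_MONTHLY_COST_USD) {
    // Block before the request ever reaches OpenAI: a firewall for overuse.
    return new Response("Monthly budget exceeded", { status: 429 });
  }
  // Otherwise forward upstream; cost accounting from the response is omitted.
  return fetch(new Request("https://api.openai.com/v1/completions", req));
}
```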
Note your homepage doesn't have a <title> tag.
by smithclay on 3/24/23, 1:18 AM
* When your chains get long/complex enough in LangChain, it's really hard to tell from the debug output what final prompt is actually being sent, how much it costs, or when an agent is running away. This pretty much solves that for me.
As a "prompt developer", one thing that'd be incredibly useful is a way to see/export all of my prompts and responses over time to help me tune them (basically a "Prompt" button in the left nav).
Congrats on the launch. So nice to see a tool in the space that lets you get up and running in 4 minutes.
by ssddanbrown on 3/23/23, 11:24 PM
by transitivebs on 3/23/23, 7:54 PM
by NetOpWibby on 3/24/23, 2:13 AM
by StablePunFusion on 3/23/23, 7:21 PM
by VWWHFSfQ on 3/23/23, 7:17 PM
by olliepop on 3/23/23, 8:45 PM
Congrats Justin and team! Excited for you.
by social_quotient on 3/24/23, 12:00 AM
by yawnxyz on 3/23/23, 9:05 PM
(Also just curious, are you guys just using D1 or KV under the hood?)
by otterley on 3/23/23, 8:55 PM
*Or, to use a more modern analogy, "Evernoting"
by samstave on 3/23/23, 10:39 PM
So, you'll be trying to manage a ton of SLAs, contracts, payment requirements, and limits on service access that may be beyond your budget, across all the various services, API calls, etc.
This is going to be an interesting cluster...
So we need a company that's a single service for accessing all the available AI connections and the multiple billing channels.
However, then you have that as a single point of failure.
by dcreater on 3/24/23, 2:48 AM
by killthebuddha on 3/23/23, 9:32 PM
by curo on 3/23/23, 8:12 PM
Congrats to the team!
by antonok on 3/23/23, 9:15 PM
by Hansenq on 3/23/23, 7:19 PM
Love how easy it was to integrate too: just one line to swap out the OpenAI API for theirs.
by CGamesPlay on 3/24/23, 6:47 AM
I want to gather up my chat transcripts, then identify poor experiences in the chat, and then use that to guide fine-tuning. I don't believe that OpenAI actually provides anything to enable this as part of their platform, right?
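Once the transcripts are exported, the mechanical part is small. A sketch, with the record shape assumed (adapt it to however your logs actually export):

```typescript
// Hypothetical sketch: filtering exported transcripts down to the good
// experiences and writing OpenAI's (2023-era) fine-tune JSONL format.
import { writeFileSync } from "node:fs";

interface LogRecord {
  prompt: string;
  completion: string;
  userAccepted: boolean; // from your own feedback tracking
}

const records: LogRecord[] = [
  { prompt: "Say hello", completion: " Hello!", userAccepted: true },
  { prompt: "Summarize X", completion: " ...", userAccepted: false },
];

// Keep only the good experiences for fine-tuning.
const lines = records
  .filter((r) => r.userAccepted)
  .map((r) => JSON.stringify({ prompt: r.prompt, completion: r.completion }));

writeFileSync("finetune.jsonl", lines.join("\n"));
```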
by nico on 3/23/23, 10:46 PM
Is there a consumer version of this?
Like an alternative ChatGPT client or chrome extension that will save my prompts/conversations, tell me which ones I liked more and let me search through them?
by haolez on 3/24/23, 2:14 AM
by speculator on 3/24/23, 8:27 AM
by Kkoala on 3/23/23, 11:55 PM
by jacquesm on 3/24/23, 3:28 AM
by cphoover on 3/24/23, 4:48 AM
by zekone on 3/23/23, 7:20 PM
by jacobpedd on 3/24/23, 12:58 AM
by ninjaa on 3/24/23, 7:48 AM