by roh26it on 1/8/24, 1:35 PM with 13 comments
We believe a solid, performant, and reliable gateway lays the foundation for the next level of LLM apps. It reduces excessive reliance on any one company and puts the focus back on building, instead of spending time fixing the nitty-gritty of different providers and making them work together.
Features:
Blazing fast (9.9x faster) with a tiny footprint (~45kb installed)
Load balance across multiple models, providers, and keys
Fallbacks make sure your app stays resilient
Automatic retries with exponential backoff come by default
Plug-in middleware as needed
Battle tested over 100B tokens and millions of requests
For the folks serious about gateways and separation of concerns, and for the TS developers out there: I'd love to hear your thoughts, and we're hungry for feedback! Reach out to us at hello@portkey.ai or explore the project: https://github.com/portkey-ai/gateway
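To make the fallback/retry features above concrete, here is a minimal TypeScript sketch of what a gateway config for those behaviors could look like. The interfaces and field names are hypothetical illustrations, not the project's actual config schema; see the GitHub repo for the real API.

```typescript
// Hypothetical shapes -- illustrative only, not Portkey's actual config schema.
interface RetryPolicy {
  attempts: number;          // max retry attempts per target
  backoff: "exponential";    // exponential backoff between attempts
}

interface Target {
  provider: string;          // e.g. "openai", "anthropic", "azure-openai"
  apiKey: string;
  weight?: number;           // used when load balancing across targets
}

interface GatewayConfig {
  strategy: "fallback" | "loadbalance";
  retry: RetryPolicy;
  targets: Target[];
}

// Try the primary provider first; if it errors, fall through to the next one.
const config: GatewayConfig = {
  strategy: "fallback",
  retry: { attempts: 3, backoff: "exponential" },
  targets: [
    { provider: "openai", apiKey: process.env.OPENAI_KEY ?? "" },
    { provider: "anthropic", apiKey: process.env.ANTHROPIC_KEY ?? "" },
  ],
};
```

The point of a declarative config like this is that resilience (retries, fallbacks, load balancing) lives in the gateway layer rather than being re-implemented in every app.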
by shaial on 1/8/24, 8:43 PM
One plugin or feature that I would like to see in an AI gateway: a *cache* per unique request. If I send the same request (system prompt, messages, temperature, etc.), I should have the option to pull it from a cache (if it was already populated) and skip the LLM generation. This is much faster and cheaper, especially during development and testing.
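A minimal sketch of the idea in TypeScript: key the cache on a hash of the full request payload, so an identical request skips generation entirely. The request/response shapes and the `callLLM` callback are assumptions for illustration, not any gateway's actual API.

```typescript
import { createHash } from "node:crypto";

// Hypothetical request/response shapes for illustration.
interface ChatRequest {
  model: string;
  messages: { role: string; content: string }[];
  temperature?: number;
}
interface ChatResponse { content: string }

// In-memory cache keyed by a hash of the full request payload.
const cache = new Map<string, ChatResponse>();

function cacheKey(req: ChatRequest): string {
  // Serialize the whole request so any change (system prompt, temperature,
  // message order) produces a different key and misses the cache.
  return createHash("sha256").update(JSON.stringify(req)).digest("hex");
}

async function cachedCompletion(
  req: ChatRequest,
  callLLM: (r: ChatRequest) => Promise<ChatResponse>,
): Promise<ChatResponse> {
  const key = cacheKey(req);
  const hit = cache.get(key);
  if (hit) return hit;             // identical request seen before: skip generation

  const res = await callLLM(req);  // otherwise hit the provider once
  cache.set(key, res);
  return res;
}
```

In a real deployment the `Map` would be swapped for a shared store (e.g. Redis) with a TTL, so cached responses expire and the cache survives restarts.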
by ayush-garg on 1/8/24, 1:58 PM
Feels surreal that this gateway is already processing upwards of 3B tokens a day in such a short time.
by aravindputrevu on 1/8/24, 2:29 PM
Excited to take it for a spin and contribute!