from Hacker News

LocalAI: Local models on CPU with OpenAI compatible API

by anton5mith2 on 4/27/23, 12:19 PM with 5 comments

  • by anton5mith2 on 4/27/23, 12:19 PM

    LocalAI is an OpenAI-compatible API that lets you run AI models locally on your own CPU! Data never leaves your machine! No need for expensive cloud services or GPUs — LocalAI uses llama.cpp and ggml to power your AI projects!

    LocalAI supports multiple model backends (such as Alpaca, Cerebras, GPT4ALL-J and StableLM) and works seamlessly with the OpenAI API. Join the LocalAI community today and unleash your creativity!

    GitHub: https://github.com/go-skynet/LocalAI

    We are also on discord! Feel free to join our growing community!

    https://discord.gg/uJAeKSAGDy
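
    Because LocalAI exposes OpenAI-style endpoints, any client that can POST JSON can talk to it. A minimal sketch using only the Python standard library, assuming a LocalAI instance on its default port 8080 and a model file named "ggml-gpt4all-j" in the models directory (both the endpoint URL and the model name here are illustrative assumptions):

```python
import json
import urllib.request

# Assumption: LocalAI is running locally on its default port 8080.
BASE_URL = "http://localhost:8080"

def build_chat_request(model, prompt, temperature=0.7):
    """Build an OpenAI-style chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

def chat(model, prompt):
    """POST the payload to the OpenAI-compatible chat endpoint
    and return the assistant's reply text."""
    payload = json.dumps(build_chat_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        BASE_URL + "/v1/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # "ggml-gpt4all-j" is a hypothetical model name; use whatever
    # model you have configured in your LocalAI models directory.
    print(chat("ggml-gpt4all-j", "Hello! What can you do?"))
```

    The same request shape works against OpenAI's hosted API, which is the point: existing OpenAI client code only needs its base URL changed to target the local instance.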

  • by eciton on 4/28/23, 11:39 AM

    The privacy angle is really important — but just as important is avoiding all of the vulnerabilities that OpenAI seems to have.

    Great to see the speed at which this is progressing, and the collab with k8sgpt / prometheus / spectro cloud / etc. Community effort!

  • by noiz777 on 4/28/23, 11:50 AM

    Here's a little example we put together on how to deploy on Edge Kubernetes using Kairos

    https://kairos.io/docs/examples/localai/

  • by SaaMalik on 4/27/23, 1:33 PM

    Interesting. What are the CPU/memory/storage requirements for running LocalAI?