by gavinuhma on 6/27/23, 1:04 PM with 29 comments
With Cape, you can easily de-identify sensitive data before sending it to OpenAI. You can also create embeddings from sensitive text and documents and run vector searches to improve your prompt context, all while keeping the data confidential.
Developers are using Cape with data like financial statements, legal contracts, and internal/proprietary knowledge that would otherwise be too sensitive to process with the ChatGPT API.
You can try CapeChat, our playground for the API, at https://chat.capeprivacy.com/
The Cape API is self-serve and has a free tier. Its main features (sketched in code below) are:
De-identification — Redacts sensitive data like PII, PCI, and PHI from your text and documents.
Re-identification — Restores de-identified data to its original form.
Upload documents — Converts sensitive documents to embeddings (supports PDF, Excel, Word, CSV, TXT, PowerPoint, and Markdown).
Vector Search — Performs a vector search on your embeddings to augment your prompts with context.
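As a rough sketch in Python, the de-identification round trip looks something like this. The endpoint paths, field names, and placeholder format here are illustrative assumptions, not the exact contract; see the API reference linked at the end of this post for the real thing.

    import os
    import requests

    API_BASE = "https://api.capeprivacy.com/v1"  # assumed base URL
    HEADERS = {"Authorization": f"Bearer {os.environ['CAPE_API_KEY']}"}

    text = "Jane Doe's card 4111 1111 1111 1111 was charged $42 on March 3."

    # Redact PII/PCI before the text goes anywhere else.
    # ("/deidentify" and the "text" field are hypothetical names.)
    redacted = requests.post(f"{API_BASE}/deidentify", headers=HEADERS,
                             json={"text": text}).json()["text"]
    # e.g. "[NAME_1]'s card [CREDIT_CARD_1] was charged $42 on March 3."

    # Restore the original values in any text that uses the placeholders.
    restored = requests.post(f"{API_BASE}/reidentify", headers=HEADERS,
                             json={"text": redacted}).json()["text"]
    assert restored == text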
To do all this, we combine a number of privacy and security techniques.
First, we process data within a secure enclave: an isolated VM with in-memory encryption. The data remains confidential; no one, not even our team at Cape or the underlying cloud provider, can see it.
Second, within the secure enclave, Cape de-identifies your data by removing PII, PCI, and PHI before it is sent to OpenAI. As GPT-4 generates and streams back response tokens, we re-identify the data so it becomes readable again.
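Concretely, the flow looks roughly like the Python below (using the openai-python API as it existed in mid-2023). Again, the Cape endpoint paths and field names are illustrative assumptions; the key point is that GPT-4 only ever sees placeholders.

    import os
    import openai
    import requests

    API_BASE = "https://api.capeprivacy.com/v1"  # assumed base URL
    HEADERS = {"Authorization": f"Bearer {os.environ['CAPE_API_KEY']}"}
    openai.api_key = os.environ["OPENAI_API_KEY"]

    prompt = "Summarize last month's charges for Jane Doe, card 4111 1111 1111 1111."

    # 1. Redact the prompt before anything is sent to OpenAI.
    redacted = requests.post(f"{API_BASE}/deidentify", headers=HEADERS,
                             json={"text": prompt}).json()["text"]

    # 2. GPT-4 answers in terms of the placeholders, never the raw PII.
    completion = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[{"role": "user", "content": redacted}],
    )
    answer = completion.choices[0].message.content

    # 3. Swap the placeholders in the model's answer back to the originals.
    #    (Matching placeholders to originals presumably relies on state the
    #    service keeps; that detail is elided here.)
    restored = requests.post(f"{API_BASE}/reidentify", headers=HEADERS,
                             json={"text": answer}).json()["text"]
    print(restored)

For streaming responses, the same re-identification step would run over chunks as they arrive, buffering until each placeholder is complete.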
In addition to de-identification, Cape has API endpoints for embeddings, vector search, and document uploads, all of which operate entirely within the secure enclave (no external calls and no sub-processors).
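Here is a sketch of retrieval-augmented prompting with those endpoints, with the same caveat that the paths ("/upload", "/search"), parameters, and response shape are assumptions for illustration:

    import os
    import requests

    API_BASE = "https://api.capeprivacy.com/v1"  # assumed base URL
    HEADERS = {"Authorization": f"Bearer {os.environ['CAPE_API_KEY']}"}

    # 1. Upload a sensitive document; it is converted to embeddings
    #    inside the enclave.
    with open("q2_financials.pdf", "rb") as f:
        requests.post(f"{API_BASE}/upload", headers=HEADERS,
                      files={"file": f}).raise_for_status()

    # 2. Vector-search the embeddings for passages relevant to a question.
    question = "What drove the change in operating margin this quarter?"
    hits = requests.post(f"{API_BASE}/search", headers=HEADERS,
                         json={"query": question, "top_k": 3}).json()["results"]

    # 3. Prepend the retrieved passages to the prompt as context, then run
    #    that prompt through the de-identify -> GPT-4 -> re-identify loop above.
    context = "\n\n".join(hit["text"] for hit in hits)
    prompt = f"Context:\n{context}\n\nQuestion: {question}"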
Why did we build this?
Developers asked us for help! We've been working at the intersection of privacy and AI since 2017, and with the explosion of interest in LLMs, we've been getting a lot of questions from developers.
Privacy and security remain among the biggest barriers to adopting AI like LLMs, particularly for sensitive data.
We’ve spoken with many companies that have been experimenting with ChatGPT or the GPT-4 API. They are extremely excited about the potential; however, they find that taking an LLM-powered feature from PoC to production is a major lift, and it’s uncharted territory for many teams. Developers have questions like:
- How do we ensure the privacy of our customers’ data if we’re sending it to OpenAI?
- How can we securely feed large bodies of internal, proprietary data into GPT-4?
- How can we mitigate hallucinations and bias so that we have higher trust in AI-generated text?
The features of the Cape API are designed to help solve these problems for developers, and we have a number of early customers using the API in production already.
To get started, check out our docs: https://docs.capeprivacy.com/
View the API reference: https://api.capeprivacy.com/v1/redoc
Join the discussion on our Discord: https://discord.gg/nQW7YxUYjh
And of course try the CapeChat playground at https://chat.capeprivacy.com/
by moffkalast on 6/27/23, 2:07 PM
Take financial statements, for example: if it can't read credit card numbers and names, then it can't tell you on which days a given credit card was used and by whom. Maybe that's not the typical use case, but I would imagine it being very annoying, given the already high typical LLM failure rate.
by luke-stanley on 6/27/23, 2:51 PM
I do think stripping personal info and adding it back only when needed is, in principle, a good idea for some situations. But I have big doubts about injecting another party into the mix.