by tabbott on 5/24/24, 4:44 PM with 45 comments
by oidar on 5/24/24, 5:52 PM
by simonw on 5/24/24, 6:09 PM
These documents are new in the last few days:
https://slack.com/blog/news/how-we-built-slack-ai-to-be-secu...
https://slack.com/intl/en-gb/blog/news/how-slack-protects-yo...
I think these updates are really good. Slack's previous messaging around this (especially the way it conflated older machine learning models with new policies for generative AI) was confusing, and it wasn't surprising that it caused widespread panic.
It's now very clear what Slack were trying to communicate: they have older ML models for features like channel recommendations, which work the way you would expect such models to work. They have a separate "Slack AI" add-on you can buy that adds RAG features powered by a foundation model that is never further trained on user data.
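To make that distinction concrete, here's a minimal sketch of the RAG pattern (every name below is a hypothetical illustration, not Slack's actual implementation): user content is retrieved and pasted into the prompt at query time, and the foundation model's weights are never updated.

    # Minimal RAG sketch: user messages only ever appear as retrieved
    # prompt context; the model itself is inference-only.
    from dataclasses import dataclass

    @dataclass
    class Message:
        channel: str
        text: str

    def embed(text: str) -> list[float]:
        # Toy stand-in for a call to a real embedding model.
        vec = [0.0] * 8
        for i, c in enumerate(text.lower()):
            vec[i % 8] += ord(c)
        return vec

    def cosine(a: list[float], b: list[float]) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(y * y for y in b) ** 0.5
        return dot / (na * nb) if na and nb else 0.0

    def retrieve(query: str, messages: list[Message], k: int = 3) -> list[Message]:
        # Rank stored messages by similarity to the query; keep the top k.
        q = embed(query)
        return sorted(messages, key=lambda m: cosine(q, embed(m.text)), reverse=True)[:k]

    def call_frozen_model(prompt: str) -> str:
        # Hypothetical stand-in for an inference-only API call; no gradient
        # updates happen, so nothing in the prompt changes the weights.
        return f"(model response to a {len(prompt)}-character prompt)"

    def answer(query: str, messages: list[Message]) -> str:
        context = "\n".join(m.text for m in retrieve(query, messages))
        # User data appears only here, as transient prompt context.
        return call_frozen_model(f"Context:\n{context}\n\nQuestion: {query}")

The privacy claim rests on the data flow stopping at the prompt: inference reads user messages as transient context, but nothing feeds back into training.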
I expect nobody will care. Once someone has decided that a company might "train AI" on private data, you've already lost that person's trust. It's not clear to me whether any company has figured out how to overcome one of these AI training panics at this point.
I wrote a bit about this back in December when it happened to Dropbox - there is an AI trust crisis at the moment: https://simonwillison.net/2023/Dec/14/ai-trust-crisis/
by sneak on 5/24/24, 5:57 PM
by itronitron on 5/24/24, 5:44 PM
by jmclnx on 5/24/24, 5:44 PM
Now it seems people's chats and posts are being used to train AI. I wonder how long before cell phone providers start using text messages to train AI (or selling them to AI companies).
by matchagaucho on 5/24/24, 5:54 PM
IT Sec and Compliance teams must read the T&Cs and make better vendor choices.
by udev4096 on 5/24/24, 5:59 PM
by airpoint on 5/24/24, 8:04 PM
Bashing a competitor in a blog post does not.
by WhackyIdeas on 5/24/24, 6:10 PM
So in my opinion, it just doesn't matter whether you are using self-hosted AI: the weakest link in the chain for keeping your data private is the very OS you'll be using to interact with that self-hosted AI.
And with all the manufactured fear-mongering going on around AI, that data will *already* be deliciously irresistible for PRISM-participating, lovable, trustable companies like Microsoft.
Sorry to burst some pretty bubbles for the lovely naive people.
by leobg on 5/24/24, 5:48 PM
by codegeek on 5/24/24, 6:05 PM
:) What a clever way to say that even though we don't do it today, we cannot guarantee that we will never do it on our cloud service. At least they're honest, I guess.