by trybackprop on 6/11/24, 8:48 PM with 60 comments
by t-writescode on 6/11/24, 9:00 PM
For companies built on these AI services, there's a very real fear that the prices we're paying for API calls don't cover the cost of serving the requests themselves, under the assumption that costs will magically go down or that we'll get locked in.
I hope companies like Mistral and OpenAI realize that if they're selling API access below cost, they could be the origin of a __lot__ of companies closing, and choose to make sure their operating costs are sustainable. (I understand that creating new features and new models takes additional capital, and that's a perfectly fine use of VC money. But selling $1.00 for $0.90 is dangerous and bad.)
by dvt on 6/11/24, 9:02 PM
I'm working on a local product (coming soon™) that uses Mistral-7B-Instruct-v0.2 under the hood. Parsing screen reader text (and taking actions, etc.) via local LLMs is, in my humble opinion, the future of computing. Even though getting it right is hard and there are lots of edge cases, it's pretty awesome when it works. Mistral has been (by far) the most reliable model to date. I'll likely be fine-tuning it over the next few months, but here it is working on Windows and macOS in both web browser[1] and file manager[2] contexts.
[1] https://vimeo.com/931907811
[2] https://dvt.name/wp-content/uploads/2024/04/image-11.png
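The approach described above (feeding screen reader text to a local instruct model and turning its reply into a UI action) might look roughly like this sketch. The prompt layout, the JSON action schema, and the stubbed-out model call are all illustrative assumptions, not dvt's actual implementation; only the `[INST] ... [/INST]` wrapper is the documented chat format for Mistral-7B-Instruct-v0.2.

```python
import json

def build_prompt(screen_text: str, goal: str) -> str:
    # Mistral-Instruct models expect instructions wrapped in [INST] tags.
    # The action schema requested here is purely illustrative.
    return (
        "[INST] You control a desktop UI. Given the screen reader text "
        "and the user's goal, reply with one JSON object: "
        '{"action": "...", "target": "..."}.\n\n'
        f"Screen: {screen_text}\nGoal: {goal} [/INST]"
    )

def parse_action(model_reply: str) -> dict:
    # Models often wrap JSON in prose; grab the outermost {...} span.
    start = model_reply.find("{")
    end = model_reply.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON action found in model reply")
    return json.loads(model_reply[start : end + 1])

# A real run_local_model(prompt) would call the local Mistral instance
# (llama.cpp, Ollama, etc.); a canned reply stands in for it here.
reply = 'Sure: {"action": "click", "target": "Save button"}'
action = parse_action(reply)
```

Dispatching `action` to the OS (clicking, opening files) would then be ordinary platform automation; the LLM only has to produce the structured intent.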
by underyx on 6/11/24, 8:56 PM
by champagnepapi on 6/11/24, 9:02 PM
by CesareBorgia on 6/11/24, 9:35 PM
Bizarre
by dash2 on 6/11/24, 9:31 PM