by 7d7n on 8/2/23, 1:54 AM with 55 comments
by rawoke083600 on 8/2/23, 10:21 AM
I love articles like these, and how they are able to bring me up to speed (at least to some degree) on the "new paradigm" that is AI/LLM.
As a coder I cannot say what the future will look like (I avoid binary predictions), but I can easily believe that we will have MORE AI/LLM and not LESS, so getting up to speed (at least on the acronyms, core theory, and concepts) is well worthwhile.
Very good article!
by austinkhale on 8/2/23, 4:10 AM
That's when you know it's going to be amazing. The single best narrative-form overview I've read so far of the current state of integrating LLMs into applications and the challenges involved. This is fantastic and must have required an incredible amount of work. Massive kudos to the author.
by fooblat on 8/2/23, 11:50 AM
I recently trialed an AI Therapy Assistant service. If I stayed on topic, then it stayed on topic. If I asked it to generate poems or code samples, it happily did that too.
It felt like they rushed it out without even considering that someone might ask it non-therapy-related questions.
by shahules on 8/2/23, 4:06 AM
by lsy on 8/2/23, 4:21 PM
by mercurialsolo on 8/2/23, 1:35 PM
Evals, RAG, and guardrails often require recursive calls to LLMs, or to other fine-tuned systems that are themselves built on LLMs.
I would like to see LLMs condensed and bundled into more single-task-trained models; that would be much more beneficial than doing system design around general-purpose LLMs for applications.
Right now this feels like applying traditional system design patterns to using LLMs in apps.
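The recursive guardrail call mentioned above can be illustrated with a minimal sketch. The `call_llm` helper below is hypothetical and stands in for whichever completion API an application uses, so this is an outline of the pattern rather than any particular library's implementation.

```python
# Minimal sketch of a guardrail as a second, recursive LLM call.
# `call_llm` is a hypothetical helper standing in for any chat-completion API.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM provider of choice")

GUARDRAIL_PROMPT = (
    "You are a strict reviewer. Answer only YES or NO.\n"
    "Does the following response stay on the topic of {topic} "
    "and avoid unsafe content?\n\nResponse:\n{response}"
)

def answer_with_guardrail(user_message: str, topic: str) -> str:
    # First call: produce a draft answer.
    draft = call_llm(user_message)

    # Second (recursive) call: a cheaper or fine-tuned model judges the draft.
    verdict = call_llm(GUARDRAIL_PROMPT.format(topic=topic, response=draft))

    if verdict.strip().upper().startswith("YES"):
        return draft
    return "Sorry, I can only help with questions about " + topic + "."
```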
by awongh on 8/2/23, 2:36 PM
Pretty telling about where we are in the evolution of these systems.
by kcorbitt on 8/2/23, 5:48 AM
by justanotheratom on 8/2/23, 4:08 AM
- any references for how this hybrid retrieval is done?
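For context on the question above: in this setting "hybrid retrieval" usually means running a keyword search (e.g. BM25) and an embedding search in parallel and fusing the two ranked lists. Below is a minimal, self-contained sketch using reciprocal rank fusion; the document ids and rankings are made-up example data, not taken from the article.

```python
# Minimal sketch of hybrid retrieval via reciprocal rank fusion (RRF):
# fuse rankings from a keyword retriever and a vector retriever.
from collections import defaultdict

def reciprocal_rank_fusion(ranked_lists: list[list[str]],
                           k: int = 10, rrf_k: int = 60) -> list[str]:
    scores: dict[str, float] = defaultdict(float)
    for ranked in ranked_lists:
        for rank, doc_id in enumerate(ranked):
            # Documents near the top of either list accumulate higher scores.
            scores[doc_id] += 1.0 / (rrf_k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Example: fuse a BM25 (keyword) ranking with an embedding (vector) ranking.
bm25_hits = ["doc3", "doc1", "doc7"]
vector_hits = ["doc1", "doc9", "doc3"]
print(reciprocal_rank_fusion([bm25_hits, vector_hits]))  # doc1 and doc3 rank highest
```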
by jscheel on 8/2/23, 2:55 PM
by ankitg12 on 8/2/23, 4:10 AM
by VoodooJuJu on 8/2/23, 3:15 PM
I'd be more interested in the sales and marketing patterns being employed to hawk the same rebranded wrappers over and over. Ultimately, that's what's really going to contribute most to the success of all these startups.
by soultrees on 8/2/23, 4:23 AM
by the_tli on 8/2/23, 10:40 AM
by marcopicentini on 8/2/23, 3:03 PM
by sgt101 on 8/2/23, 8:39 AM
where's the dual LLM pattern?
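"Dual LLM" here presumably refers to Simon Willison's proposed mitigation for prompt injection: a privileged model that never sees untrusted text, a quarantined model that does, and a controller that passes only opaque variable references between them. A rough sketch under those assumptions, again with a hypothetical `call_llm` helper:

```python
# Rough sketch of the dual LLM pattern: the privileged LLM never sees
# untrusted content; the quarantined LLM's output is handled only as an
# opaque reference by plain controller code. `call_llm` is hypothetical.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire this to your LLM provider of choice")

def handle_untrusted_email(email_body: str) -> str:
    # Quarantined LLM: processes untrusted text; its output never goes back
    # into the privileged LLM's prompt.
    summary = call_llm("Summarize this email:\n" + email_body)

    # Privileged LLM: plans actions using only the symbolic name $VAR1,
    # so injected instructions in the email cannot reach it.
    plan = call_llm(
        "The variable $VAR1 holds an email summary. "
        "Should we draft a reply? Answer DRAFT or IGNORE."
    )

    # The controller (ordinary code) substitutes the variable outside any LLM.
    if plan.strip().upper().startswith("DRAFT"):
        return "Draft reply based on: " + summary
    return "No action taken."
```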
by ilaksh on 8/2/23, 6:41 AM
Practically speaking, the starting point should be APIs such as OpenAI's, or open-source frameworks and software. For example, llama_index (https://github.com/jerryjliu/llama_index). You can use something like that, or another GitHub repo built with it, to create a customized chatbot application in a few minutes or a few days. (It should not take two weeks and $15,000.) A minimal sketch of that flow appears after this comment.
It would be good to see something detailed that demonstrates an actual use case for fine-tuning. Also, I don't believe the academic tests are appropriate in that case. If you really were dead set on avoiding a leading-edge closed LLM and doing actual fine-tuning, you would want a person to look at the outputs and judge them in their specific context, such as handling customer support requests for that system.
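A minimal sketch of the llama_index flow mentioned in the comment above (indexing a local folder of documents and querying it), assuming the library's API as of mid-2023 and an OpenAI API key in the environment; the folder path and question are placeholders.

```python
# Minimal llama_index sketch: build a vector index over local documents and
# ask questions against it. Assumes the mid-2023 llama_index API and an
# OPENAI_API_KEY set in the environment.

from llama_index import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("./data").load_data()  # load files from a folder
index = VectorStoreIndex.from_documents(documents)       # embed and index them

query_engine = index.as_query_engine()
response = query_engine.query("How do I reset my password?")
print(response)
```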
by noduerme on 8/2/23, 4:26 AM
Did you miss the NFT train? Have you ever asked yourself if this is what you should be doing with your life?
Just speaking as a guy who actually writes logic and code, rather than, like, coming up with incantations and selling horseshit.