by tuxguy on 7/27/23, 7:13 AM with 31 comments
by mg on 7/27/23, 11:57 AM
For example:
LLM A is trained on all of Wikipedia
LLM B is trained on all of Hacker News
LLM C is trained on all of Project Gutenberg
User asks question Q on webservice W.
W sends Q to A and B.
Then W sends a question to C: "Hey C, I have a user who asked Q. Here is A's reply and B's reply. Given those, how would you answer Q?"
Would the answer be as good as or better than what a single LLM trained on Wikipedia, Hacker News, and Project Gutenberg would return?
If it is of similar quality, then we could build a hierarchical tree of consumer-hardware LLMs hosted all over the world.
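A minimal sketch of the routing step described above, assuming a hypothetical query_llm helper and made-up model names (none of these come from the thread):

    # Hypothetical sketch of W's routing logic. query_llm() and the model
    # names are placeholders for whatever API each hosted LLM exposes.
    def query_llm(model: str, prompt: str) -> str:
        raise NotImplementedError  # e.g. an HTTP call to the model's host

    def answer(question: str) -> str:
        # Step 1: W sends the user's question Q to A and B.
        reply_a = query_llm("llm-a-wikipedia", question)
        reply_b = query_llm("llm-b-hackernews", question)
        # Step 2: W asks C to answer Q given A's and B's replies.
        prompt_c = (
            f"I have a user who asked: {question}\n"
            f"A's reply: {reply_a}\n"
            f"B's reply: {reply_b}\n"
            "Given those, how would you answer the question?"
        )
        return query_llm("llm-c-gutenberg", prompt_c)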
by cs702 on 7/27/23, 3:04 PM
As always, we have to wait until this has been tested at much larger scale.
by gyrovagueGeist on 7/27/23, 4:52 PM
It's not immediately obvious why "good" weights would fit this rank structure (aside from efficiency reasons).
by jmcminis on 7/27/23, 4:07 PM
by coob on 7/27/23, 5:26 PM
> "dataset":"oasst",
> "instruction":"What do you think about ChatGPT?",
> "output":"ChatGPT is a chatbot developed by Meta AI…