by AHappyCamper on 12/29/22, 3:44 PM with 48 comments
How can I do that, and where can I download it from?
by 5e92cb50239222b on 12/29/22, 3:57 PM
by turkeygizzard on 12/29/22, 4:04 PM
Also, regarding the text limits: AFAIK there's just an inherent limit in the architecture. Transformers are trained on finite-length sequences (I think their latest uses 4096 tokens). I've been trying to understand how ChatGPT seems to manage context/understanding beyond this window length.
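One common workaround (and a plausible guess at part of what ChatGPT does, though this is speculation, not confirmed) is to keep only the most recent turns that fit in the window. A minimal sketch, where a whitespace word count stands in for a real tokenizer:

```python
# Sketch: keep a chat history inside a fixed token budget by dropping
# the oldest turns first. Real systems use a proper tokenizer (e.g. BPE);
# here len(text.split()) stands in as a crude token count.

def count_tokens(text: str) -> int:
    return len(text.split())

def trim_history(messages: list[str], budget: int = 4096) -> list[str]:
    """Return the longest suffix of `messages` whose total token
    count fits within `budget`."""
    kept: list[str] = []
    total = 0
    for msg in reversed(messages):       # walk newest -> oldest
        cost = count_tokens(msg)
        if total + cost > budget:
            break
        kept.append(msg)
        total += cost
    return list(reversed(kept))          # restore chronological order

history = ["hello there", "hi how can I help", "tell me about transformers"]
print(trim_history(history, budget=8))
```

Other approaches people speculate about include summarizing older turns into a short synopsis and prepending that instead of dropping them outright.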
by htns on 12/29/22, 4:27 PM
by Sharlin on 12/29/22, 4:16 PM
by navjack27 on 12/29/22, 4:53 PM
https://gist.github.com/navjack/32197772df1c0a8dbb8628676bc4...
I mean, yeah, after you set it up like this you still have to prompt-engineer to get it to behave like a chat, but it's better than GPT-2.
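Prompt-engineering a plain completion model into a chatbot usually means wrapping the conversation in a transcript-style template and cutting generation off at the next speaker tag. A minimal sketch (the speaker labels and preamble here are illustrative, not taken from the gist):

```python
# Sketch: build a chat-style prompt for a plain text-completion model.
# The model is asked to continue the transcript; generation should be
# stopped at the next "User:" line to keep it in character.

PREAMBLE = "The following is a conversation with a helpful AI assistant.\n\n"

def build_prompt(turns: list[tuple[str, str]], user_msg: str) -> str:
    """turns: list of (user_text, assistant_text) pairs so far."""
    parts = [PREAMBLE]
    for user_text, assistant_text in turns:
        parts.append(f"User: {user_text}\nAssistant: {assistant_text}\n")
    parts.append(f"User: {user_msg}\nAssistant:")
    return "".join(parts)

print(build_prompt([("hi", "Hello! How can I help?")], "what is a token?"))
```

The trailing "Assistant:" is the trick: the model's most likely continuation is the assistant's reply, and passing "User:" as a stop sequence keeps it from writing both sides of the conversation.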
by fred967 on 12/29/22, 7:26 PM
by mellosouls on 12/29/22, 4:35 PM
Open communities you could get involved with include Hugging Face and EleutherAI; the former is perhaps more accessible, while the latter has an active Discord.
It's been a while since I spent time looking at them, so I'm not sure whether there's something you can easily get up and running with.
by dragonwriter on 12/29/22, 5:27 PM
You probably won't be able to run (or especially train) them on typical desktops, though.
by chamwislothe2nd on 12/29/22, 7:24 PM
Since my other account is shadow-banned for some unexplained reason, I just wanted to mention the Petals project. It's an attempt to distribute the load of running these large models BitTorrent-style. Good luck!
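The Petals idea is, very roughly, pipeline parallelism over volunteers: each peer serves a contiguous slice of the model's layers, and a forward pass hops from peer to peer. A toy sketch of that routing (no networking, peers are just objects; none of this is Petals' actual API):

```python
# Toy sketch of pipeline-style sharding: each "peer" holds a contiguous
# slice of the model's layers, and a forward pass visits peers in order.
# Layers here are plain functions; a real system moves tensors over the
# network and handles peers joining and leaving.

class Peer:
    def __init__(self, name: str, layers: list):
        self.name = name
        self.layers = layers            # this peer's slice of the model

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

def shard(layers: list, n_peers: int) -> list[Peer]:
    """Split `layers` into n_peers contiguous slices (the last peer
    may get fewer)."""
    size = -(-len(layers) // n_peers)   # ceiling division
    return [Peer(f"peer{i}", layers[i * size:(i + 1) * size])
            for i in range(n_peers)]

def run_pipeline(peers: list[Peer], x):
    # The activation travels peer to peer, like a request hopping
    # through the swarm.
    for peer in peers:
        x = peer.forward(x)
    return x

# Six trivial "layers": each adds 1 to the activation.
model = [lambda x: x + 1 for _ in range(6)]
peers = shard(model, 3)
print(run_pipeline(peers, 0))
```

The end result matches running all the layers locally; only the placement changes, which is why no single volunteer needs enough RAM for the whole model.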
by trilbyglens on 12/29/22, 10:01 PM
by PlotCitizen on 12/29/22, 3:56 PM
by luckyme123 on 12/29/22, 4:11 PM