by dchuk on 6/15/24, 4:15 AM with 7 comments
I've also been closely following gen AI and LLM developments for the last few years.
So I had a thought: what if there were a Rails coding assistant, trained on all of the popular books / courses / tutorials / gem documentation / well-written open source projects?
I'm imagining something that isn't even an IDE interface: it starts with some questions about what feature I want to build, then it generates the code for me to review, and I can chat with it further to refine the code it's creating for the feature. Then I can test the feature, and if it all works, I basically accept the pull request and move on to the next feature.
(Again, ignore the fact that building something like this also requires a time commitment. Also, for discussion purposes, set aside copyright concerns for a second.)
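To make the loop concrete, here's a toy sketch of the interaction I'm imagining. Everything here is a placeholder stub, not a real assistant API:

    # Toy sketch of the imagined workflow; generate() stands in for the
    # actual model call, and "accepting" would really open a pull request.
    def generate(spec, feedback=None):
        return f"# generated code for: {spec} (addressing: {feedback})"

    def build_feature(spec):
        code = generate(spec)
        while True:
            print(code)
            feedback = input("Feedback (or 'accept'): ")
            if feedback == "accept":
                return code  # in the real tool: open a PR for review
            code = generate(spec, feedback)

    build_feature("add tagging to posts")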
My question: what would be the right architecture for this? Is RAG the best way to load the tool up with the knowledge, or do I fine-tune a model on all of the content? I can't nail down exactly when to use each method.
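For reference, my rough mental model of the RAG side: embed the corpus once, then at question time retrieve the nearest chunks and prepend them to the prompt. A minimal Python sketch, where the chunks, the chunking strategy, and the embedding model are all placeholder assumptions:

    # pip install sentence-transformers numpy
    import numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")

    # Pretend these are pre-indexed chunks of gem docs / tutorials.
    chunks = [
        "Use has_many :through for many-to-many associations with a join model.",
        "Strong parameters: permit only the attributes you expect.",
        "Turbo Streams broadcast model changes to subscribed views.",
    ]
    chunk_vecs = model.encode(chunks, normalize_embeddings=True)

    def retrieve(query, k=2):
        # Cosine similarity is a dot product on normalized vectors.
        q = model.encode([query], normalize_embeddings=True)[0]
        scores = chunk_vecs @ q
        return [chunks[i] for i in np.argsort(scores)[::-1][:k]]

    context = "\n".join(retrieve("How do I model a many-to-many relationship?"))
    prompt = f"Answer using this context:\n{context}\n\nQuestion: ..."

The appeal, as I understand it, is that the knowledge lives outside the model, so the corpus can be updated without retraining.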
I found this project: https://github.com/e-p-armstrong/augmentoolkit which sounds like what I would want to use, but then I see RAG called out constantly, so I just don't know.
Bonus question: let's say I have a $10k budget. Is buying a maxed-out Mac Studio a good investment for training and self-hosting?
by langcss on 6/15/24, 6:53 AM
Chatting with GPT-4 and pasting code into your editor, or pasting commands into a terminal, could also help a lot.
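If you want to script that copy-paste loop, here's a minimal sketch using the OpenAI Python client (the model name and prompts are assumptions, and it needs OPENAI_API_KEY set):

    # pip install openai
    from openai import OpenAI

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "You are a Rails expert. Return only code."},
            {"role": "user", "content": "Write a Rails migration adding a tags table."},
        ],
    )
    print(resp.choices[0].message.content)  # paste this into your editor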
by menzoic on 6/15/24, 4:53 AM
Fine-tuning is good for changing behavior. For example, you could create a dataset that trains the model to directly output fully formed projects. Getting the same result with RAG alone would require a bunch of prompt engineering.
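The dataset for that kind of behavioral fine-tune is just a file of example conversations. A sketch in OpenAI's chat fine-tuning JSONL format, with an invented Rails example pair (you'd mine real prompt/project pairs from your corpus):

    import json

    # Each line of the JSONL file is one training conversation.
    examples = [
        {
            "messages": [
                {"role": "system", "content": "You output complete Rails features."},
                {"role": "user", "content": "Add tagging to posts."},
                {"role": "assistant", "content": "class CreateTags < ActiveRecord::Migration[7.1]\n  ..."},
            ]
        },
    ]

    with open("train.jsonl", "w") as f:
        for ex in examples:
            f.write(json.dumps(ex) + "\n")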