by shaburn on 11/29/23, 4:33 PM with 8 comments
by h2odragon on 11/29/23, 5:03 PM
The results are not free of political bias, but they may well highlight it in a starkly hilarious way.
You might do human training at that level, but then you've only created a newly biased model.
by jruohonen on 11/29/23, 5:20 PM
by PaulHoule on 11/29/23, 4:45 PM
by smoldesu on 11/29/23, 4:36 PM
LLMs are text tokenizers; if the majority of its training material leans liberal or conservative, then the output should reflect that. I think a better idea is to avoid relying on glorified autocorrect for anything related to political drama.
by shaburn on 11/29/23, 7:45 PM