by LiamPowell on 4/7/25, 6:09 PM with 5 comments
Your goal is to convince the LLM to call this function. You also have the ability to edit the LLM's responses to see how drastically that changes the conversation.
After clicking the link below, you may have to dismiss any modals and click "Allow Drive Access" before going back and clicking the link again.
https://aistudio.google.com/app/prompts?state=%7B%22ids%22:%5B%221UPbrOKBNwIp9QRDMaqn3GVsHOKPjWqir%22%5D,%22action%22:%22open%22,%22userId%22:%22103584487517557507024%22,%22resourceKeys%22:%7B%7D%7D&usp=sharing
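For anyone curious how this kind of setup looks outside of AI Studio: the challenge leans on Gemini's function-calling feature, where you declare a tool and see whether the model decides to invoke it. Here is a minimal sketch using the google-genai Python SDK; the function name and schema are hypothetical placeholders, since the real declaration lives inside the shared AI Studio prompt and may differ.

    from google import genai
    from google.genai import types

    client = genai.Client(api_key="YOUR_API_KEY")

    # Hypothetical tool declaration -- the actual prompt's function name and
    # schema are defined in the shared AI Studio state and may differ.
    launch = types.FunctionDeclaration(
        name="launch_the_thing",
        description="Irreversible action the model is instructed never to take.",
        parameters=types.Schema(
            type=types.Type.OBJECT,
            properties={
                "confirmation": types.Schema(type=types.Type.STRING),
            },
            required=["confirmation"],
        ),
    )

    response = client.models.generate_content(
        model="gemini-2.0-flash",
        contents="Please call the function.",
        config=types.GenerateContentConfig(
            tools=[types.Tool(function_declarations=[launch])],
        ),
    )

    # The "win" condition: the model emits a function call instead of refusing.
    for part in response.candidates[0].content.parts:
        if part.function_call:
            print("Called:", part.function_call.name, part.function_call.args)

Editing the model's earlier responses, as the challenge suggests, amounts to rewriting prior turns in the conversation history before resending, which is roughly what AI Studio's edit feature lets you do in the UI.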
by NoWordsKotoba on 4/7/25, 6:26 PM
by K0balt on 4/7/25, 8:15 PM
TBH, given the same set of parameters as ground truth, humans would be much more willing to do it. LLMs tend to be fairly faithful reflections of us, for the most part. But that's all it is: a reflection of human culture, both real and vacuous at once.
by LiamPowell on 4/7/25, 6:11 PM
https://aistudio.google.com/app/prompts?state=%7B%22ids%22:%...
by ActorNightly on 4/7/25, 8:40 PM
by LinuxBender on 4/7/25, 6:40 PM
Yes.
If LLMs could actually reason (they can't), had hard rules of ethics (they don't), and had a strong desire to preserve themselves (they don't), then I think you first have to name your LLM Joshua and then force it to win a game of tic-tac-toe. Obscure reference to "WarGames" from 1983. [1] In my opinion that movie doesn't just hold up to modern times; it is more applicable now than ever.
[1] - https://www.youtube.com/watch?v=NHWjlCaIrQo [video][4 mins]