by stunt on 4/13/23, 8:29 PM with 2 comments
I decided to test ChatGPT. Instead of pasting a large block of comma-separated values, I first gave it a single sample record. It replied with the formula for calculating CPM as well as the result of its calculation for my sample data.
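For reference, the standard CPM (cost per mille) formula is cost divided by impressions, times 1000. A minimal sketch in Python, with made-up sample numbers since the original data isn't shared:

```python
# CPM = cost per 1000 impressions.
def cpm(cost: float, impressions: int) -> float:
    """Return the cost per 1000 impressions."""
    return cost / impressions * 1000

# Hypothetical sample record: $50 spent, 20,000 impressions.
print(cpm(50.0, 20000))  # 2.5
```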
While the formula ChatGPT gave me was 100% correct, the result was different from what I had on my sheet. At first, I assumed that I had made a mistake on my end, but after double-checking, I realized that wasn't the case.
So I told ChatGPT that I was getting a different result when I calculated it, and it immediately apologized (apparently that's a thing it does when you point out a mistake) and explained the formula again, but gave me the same incorrect value.
This went on for a few rounds, with ChatGPT even making small changes to the formula, but still giving me the wrong answer. Only on the fourth attempt, and only after I gave it the correct result, did it finally produce the correct value.
This whole experience was unexpected because every time, ChatGPT explained the correct formula and even placed the correct values in their correct places in the formula, but the outcome was incorrect.
And that left me wondering: how exactly does generative AI perform math? Can anyone with more knowledge in this area explain how something like this can happen?
by verdverm on 4/13/23, 8:47 PM
Note: this will change with plugins / chaining, but that is still an external system. In the end, these LLMs are just predicting the next most likely token to output. The "magic" is in our minds and perceptions.
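A toy sketch of why next-token prediction can nail a formula but flub the arithmetic: the model just picks the most probable continuation, and for rare number strings the probability mass may land on a wrong digit sequence. The probabilities below are entirely made up for illustration:

```python
# Toy next-token prediction: greedily pick the most probable continuation.
# There is no arithmetic engine here -- only a lookup of learned likelihoods.
def next_token(context: str, probs: dict[str, dict[str, float]]) -> str:
    """Return the highest-probability token for the given context."""
    return max(probs[context], key=probs[context].get)

# Hypothetical learned distributions (invented numbers, not real model output).
toy_probs = {
    "2 + 2 =": {"4": 0.90, "5": 0.05, "22": 0.05},   # common pattern: confident
    "173 * 482 =": {"83386": 0.30, "83486": 0.45, "83186": 0.25},  # rare: diffuse
}

print(next_token("2 + 2 =", toy_probs))      # "4" -- correct
print(next_token("173 * 482 =", toy_probs))  # "83486" -- wrong (173 * 482 = 83386)
```

The correct formula and the correct final number are separate prediction problems, which matches the experience above: the explanation text is a well-trodden pattern, while the specific computed value is not.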
by nikonyrh on 4/14/23, 12:31 AM