by lewq on 12/4/24, 2:42 PM with 30 comments
by satisfice on 12/5/24, 4:50 PM
How do you test things? Easy, he implies: Tell an LLM to test them and then assume everything will be okay! Also, STOP ASKING QUESTIONS!
There is zero critical thinking in this video beyond a speedo-level of coverage given by the first test idea that drifts into this guy's head. He's not testing, he's not engineering, he's just developing excuses to release a product.
by heavyarms on 12/4/24, 9:33 PM
The issue is not the technology. When it comes to natural language (LLM responses that are sentences, prose, etc.) there is no actual standard by which you can even judge the output. There is no gold standard for natural language. Otherwise language would be boring. There is also no simple method for determining truth... philosophers have been discussing this for thousands of years and after all that effort we now know that... ¯\_(ツ)_/¯... and also, Earth is Flat and Birds Are Not Real.
Take, for example, the first sentence of my comment: "Whenever I see one of these posts, I click just to see if the proposed solution to testing the output of an LLM is to use the output of an LLM... and in almost all cases it is." This is absolutely true, in my own head, as my selective memory is choosing to remember that one time I clicked on a similar post on HN. But beyond the simple question of whether it is true or not, even an army of human fact checkers and literature majors could probably not come up with a definitive and logical analysis regarding the quality and veracity of my prose. Is it even a grammatically correct sentence structure... with the run-on ellipsis and what not... ??? Is it meant to be funny? Or snarky? Who knows ¯\_(ツ)_/¯ WTF is that random pile of punctuation marks in the middle of that sentence... does the LLM even have a token for that?
by bdangubic on 12/5/24, 12:59 AM
by jmathai on 12/5/24, 3:56 AM
The thinking was that we could run the tests to verify the requirements were met (assuming the LLM wrote the tests correctly in the first place, which in many cases it did, fyi).
The problem was that it was too fickle. Sometimes the failing tests caught real application bugs, but too often the LLM simply couldn't get the tests to pass even though the application was working fine.
The result was a terrible user experience: the user only sees the latency of getting the application written correctly, or a failure message if the system gives up.
That being said, I think a lot of the issues folks like us find with LLMs are because we haven't figured out how and what to ask.
Ultimately, we found an alternative approach which gets at least 95% of the application working 100% of the time. And this is actually a MUCH better user experience than waiting forever only to sometimes get "Sorry, we couldn't create your application."
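The generate-and-verify loop described in this comment can be sketched roughly as follows. This is a hypothetical reconstruction, not the commenter's actual system: `generate_app`, `generate_tests`, and `run_tests` are made-up names standing in for the LLM calls and test runner, stubbed out here so the control flow is runnable.

```python
# Hypothetical sketch of the "LLM writes tests, then we retry until they
# pass" flow described above. The three helpers are stubs; a real system
# would call an LLM and a real test runner.

MAX_ATTEMPTS = 3  # assumed retry budget, not from the original comment

def generate_app(spec, attempt):
    # Stand-in for an LLM call that writes the application code.
    return {"spec": spec, "attempt": attempt}

def generate_tests(spec):
    # Stand-in for an LLM call that derives tests from the requirements.
    # (The comment notes these tests may themselves be wrong or flaky.)
    return [lambda app: app["spec"] == spec]

def run_tests(app, tests):
    # Stand-in for executing the generated test suite.
    return all(test(app) for test in tests)

def build_with_verification(spec):
    """Retry generation until the generated tests pass, or give up."""
    tests = generate_tests(spec)
    for attempt in range(MAX_ATTEMPTS):
        app = generate_app(spec, attempt)
        if run_tests(app, tests):
            return app
    # The failure mode the comment complains about: the user sits
    # through every retry only to receive an apology.
    raise RuntimeError("Sorry, we couldn't create your application.")
```

The user-experience problem falls out of the structure: latency scales with the number of retries, and a flaky generated test can force the loop all the way to the final `RuntimeError` even when the application itself is fine.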
by benatkin on 12/4/24, 9:29 PM
Here’s the website: https://tryhelix.ai/
by throwawaymaths on 12/4/24, 10:00 PM
by justanotheratom on 12/4/24, 7:24 PM
by jasfi on 12/5/24, 6:20 AM
The wait-list is at https://aiconstrux.com