by statico on 3/26/25, 12:40 PM with 3 comments
by alyt on 3/26/25, 1:45 PM
Love the transparency lol.
Cool idea though! I imagine the "erase your hard drive" risk could be mitigated pretty well by adding an extra LLM step to double-check the generated script and explain each part of it (maybe add an `# explanatory comment` at the end of each line), and then an extra human step to confirm before executing the script.
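That confirm-before-running step could be a thin wrapper. A minimal sketch (the file name, prompt text, and stand-in "generated" script here are all illustrative, not part of llmscript):

```shell
#!/bin/sh
# Sketch of a confirm-before-executing step for a generated script.
# "generated.sh" is a stand-in for LLM output; everything here is illustrative.
echo 'echo "hello from the generated script"' > generated.sh

confirm_and_run() {
  echo '--- Generated script (review before running) ---'
  cat -n generated.sh
  printf 'Run it? [y/N] '
  read -r answer || answer=n    # treat EOF as "no"
  case "$answer" in
    y|Y) sh generated.sh ;;
    *)   echo 'Aborted.' ;;
  esac
}

printf 'n\n' | confirm_and_run  # declining leaves the script unexecuted
```

The extra LLM review pass would slot in before the prompt, annotating each line before the human sees it.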
by statico on 3/26/25, 12:45 PM
#!/usr/bin/env llmscript
Count all files in the current directory and its subdirectories
Group them by file extension
Print a summary showing the count for each extension
Sort the results by count in descending order
So I made it a reality in an evening and it kinda works: https://github.com/statico/llmscript
It generates a script and a test suite, and then it attempts to fix the script until it passes the tests.
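The outer loop is roughly this sketch, where the llm_* functions are hypothetical stubs standing in for the model calls (the real tool's internals are in the repo):

```shell
#!/bin/sh
# Sketch of a generate/test/fix loop. The llm_* functions below are
# stubs standing in for real LLM calls, so this runs end to end.
llm_generate()       { echo 'echo hello'; }
llm_generate_tests() { echo 'sh script.sh | grep -q hello'; }
llm_fix()            { cat script.sh; }  # a real fix would rewrite the script

llm_generate > script.sh
llm_generate_tests > tests.sh

passed=no
for attempt in 1 2 3; do
  if sh tests.sh; then   # run the generated test suite against the script
    passed=yes
    break
  fi
  llm_fix > script.sh.new && mv script.sh.new script.sh  # ask the model to repair it
done
echo "passed=$passed after $attempt attempt(s)"
```

A retry cap like the `1 2 3` above keeps the loop from burning tokens forever on a script the model can't fix.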
It’s written in Go, but I hardly know Go, and used Cursor to generate most of it in a few hours. It works with Ollama and Claude, and I added support for OpenAI but haven’t tested it. You can also run it in Docker if you want to sandbox it.
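For the example spec at the top of the comment, the generated script might look something like this (a sketch only; llmscript's actual output varies by model):

```shell
#!/bin/sh
# Sketch of a script an LLM might generate for the spec above:
# count files per extension, sorted by count descending.
find . -type f |
  sed 's|.*/||' |                                            # strip directory paths
  awk -F. 'NF > 1 { print $NF; next } { print "(none)" }' |  # extension, or "(none)"
  sort | uniq -c | sort -rn
```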