by Sai_Praneeth on 5/25/25, 1:57 PM with 1 comments
by Sai_Praneeth on 5/25/25, 2:02 PM
The LLM (GPT-4.1) suggests small code changes to improve the compression ratio. Each mutation is applied and tested on a real input file (big.txt); if the round-trip decompression fails, the mutant is discarded. Everything is logged in a local SQLite DB.
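Roughly, the evaluate-and-log step looks like this (a simplified sketch, not the exact code; the names evaluate/record and the table layout are illustrative):

    import sqlite3

    DB = sqlite3.connect("evolution.db")
    DB.execute("CREATE TABLE IF NOT EXISTS variants "
               "(id INTEGER PRIMARY KEY, source TEXT, ratio REAL, generation INTEGER)")

    with open("big.txt", "rb") as f:
        CORPUS = f.read()

    def evaluate(compress, decompress, data=None):
        # Round-trip check: if decompress(compress(x)) != x, the mutant is junk.
        data = CORPUS if data is None else data
        try:
            packed = compress(data)
            if decompress(packed) != data:
                return None          # lossy or broken -> discard
            return len(data) / len(packed)
        except Exception:
            return None              # crashes count as failures too

    def record(source, ratio, generation):
        DB.execute("INSERT INTO variants (source, ratio, generation) VALUES (?, ?, ?)",
                   (source, ratio, generation))
        DB.commit()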
Selection is dumb but effective: keep the top 3 elites plus 2 random survivors each generation, and each survivor spawns 4 children. Repeat for N generations or until progress stagnates. Around generation 30 I hit a compression ratio of 1.85×. Still decent, considering the starting baseline.
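The whole selection loop fits in a few lines; something like this (again a simplified sketch; mutate and score stand in for the GPT-4.1 call and the round-trip test above):

    import random

    ELITES, RANDOM_SURVIVORS, CHILDREN = 3, 2, 4

    def next_generation(scored, mutate):
        # scored: list of (source, ratio); mutate(src) asks the LLM for a tweaked variant.
        ranked = sorted(scored, key=lambda pair: pair[1], reverse=True)
        rest = ranked[ELITES:]
        survivors = ranked[:ELITES] + random.sample(rest, min(RANDOM_SURVIVORS, len(rest)))
        children = [mutate(src) for src, _ in survivors for _ in range(CHILDREN)]
        return [src for src, _ in survivors] + children

    def evolve(seed, mutate, score, max_generations=30, patience=5):
        population, best, stale = [seed], 0.0, 0
        for gen in range(max_generations):
            scored = [(src, r) for src in population if (r := score(src)) is not None]
            top = max((r for _, r in scored), default=0.0)
            stale = 0 if top > best else stale + 1
            best = max(best, top)
            if stale >= patience:        # stagnation: no improvement for `patience` generations
                break
            population = next_generation(scored, mutate)
        return best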
It's not a framework, it's not Pareto, and there's no multi-objective fluff. Just a tiny search loop hacking away at code. Curious if others have tried something similar with code-evolving setups.