by paulriddle on 2/22/19, 9:54 AM with 0 comments
It would create a new paradigm, where you design modules that are nice to read in textual form, and also ones that are very easy to understand in musical form. Music is better for passive consumption; it requires less concentration.
I can't shake the feeling that this is possible with modern knowledge of ML and programming language design. Add some voice recognition, so that you could pace around your room, give commands to the computer, and navigate your code from a bird's-eye view.
Obviously a naive one-to-one mapping of identifiers to sounds will not work. It should be more sophisticated. I know that, generally speaking, computers are worse at generating content than at interpreting it. But still, there must be a way.
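To see why a naive mapping breaks down, here is a minimal sketch of the simplest possible scheme: hash each identifier in a piece of source code to a note on a pentatonic scale. Everything here (the scale choice, the hashing, the `note_for` and `sonify` helpers) is hypothetical, just to illustrate the idea; with only a handful of notes available, many distinct identifiers inevitably collapse onto the same sound, which is exactly the problem the comment points at.

```python
import hashlib
import io
import tokenize

# Hypothetical choice: a pentatonic scale over three octaves, 15 sounds total.
SCALE = ["C", "D", "E", "G", "A"]

def note_for(name: str) -> str:
    """Deterministically map an identifier to a note like 'E4'."""
    h = int(hashlib.md5(name.encode()).hexdigest(), 16)
    octave = 3 + (h // len(SCALE)) % 3
    return f"{SCALE[h % len(SCALE)]}{octave}"

def sonify(source: str) -> list[tuple[str, str]]:
    """Turn each name token in Python source into (identifier, note)."""
    tokens = tokenize.generate_tokens(io.StringIO(source).readline)
    return [(t.string, note_for(t.string))
            for t in tokens if t.type == tokenize.NAME]

melody = sonify("def add(a, b):\n    return a + b\n")
# The same identifier always yields the same note, so repeated
# occurrences of 'a' and 'b' produce a recognizable refrain.
```

The trouble is the pigeonhole principle: 15 sounds cannot distinguish more than 15 identifiers, so any real codebase would be full of ambiguous notes. A workable system would presumably need structure-aware mapping (scope, type, call depth) rather than per-token hashing.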