by laszlojamf on 7/16/22, 7:40 AM with 1 comments
I'm currently working my way through Crafting Interpreters, and I'm using Github Copilot.
About nine times out of ten it suggests the _exact_ implementation from the book.
Is this because so many people have published their progress on GitHub? Is the code Nystrom is writing so unique that the AI picks the only valid option?
I know there's been a lot of talk about licensing etc. with Copilot, but I didn't imagine it could be this bad. It feels like I could almost tab my way through the whole book.
Any data scientist want to chime in?
by tracerbulletx on 7/16/22, 6:25 PM
You're probably right on with the first suggestion. I'm sure there are quite a few people following the book and committing their progress on GitHub.