by jsomers on 12/19/19, 12:26 AM with 8 comments
by Gullesnuffs on 12/19/19, 8:51 PM
But I also thought it was surprising how well it worked most of the time! It wasn't quite as good as a human spymaster, but with the right choice of scoring function we could actually get a model based on word embeddings to play a pretty reasonable game of Codenames.
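The comment mentions choosing a scoring function for embedding-based clues. A minimal sketch of one plausible scoring function (toy hand-made vectors stand in for real embeddings; the names, vectors, and the particular "worst target minus best avoid" score are all illustrative assumptions, not the authors' actual implementation):

```python
import numpy as np

# Hypothetical toy embeddings, for illustration only; a real spymaster
# would load pretrained vectors (GloVe, word2vec, ConceptNet, etc.).
embeddings = {
    "ocean": np.array([0.9, 0.1, 0.0]),
    "beach": np.array([0.8, 0.3, 0.1]),
    "wave":  np.array([0.7, 0.2, 0.2]),
    "piano": np.array([0.0, 0.9, 0.4]),
    "sea":   np.array([0.95, 0.15, 0.05]),
}

def cosine(u, v):
    # Standard cosine similarity between two vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def score_clue(clue, targets, avoid):
    # One plausible score among many: the clue's *worst* similarity to
    # any target, penalized by its *best* similarity to any word the
    # guessers must avoid.
    worst_target = min(cosine(embeddings[clue], embeddings[t]) for t in targets)
    best_avoid = max(cosine(embeddings[clue], embeddings[a]) for a in avoid)
    return worst_target - best_avoid

print(score_clue("sea", ["ocean", "beach", "wave"], ["piano"]))
```

Using the worst-case similarity to the targets (rather than the sum or average) is one way to keep the clue relevant to every target at once, which connects to the aggregation issue raised further down the thread.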
by chmullig on 12/19/19, 7:29 PM
I also heartily recommend this implementation using conceptnet embeddings.
https://github.com/commonsense/codenames
Grab the latest mini.h5 and go. https://github.com/commonsense/conceptnet-numberbatch/blob/m...
by bravura on 12/19/19, 7:42 PM
There's an over-indexing problem: words that happen to be very close to one or two of the targets will rank highly even when they're far away from the third. Minimizing the maximum distance from any target helps mitigate this, but doesn't entirely solve it.
This is because they combine the distances with something more like an OR gate (is the clue very close to *any* of the targets?) when they should be using an AND (is it very close to *all* of them?).
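The OR-vs-AND point can be sketched with toy distances (smaller = closer; the specific numbers are made up to show the over-indexing failure mode):

```python
# Clue A is very close to two targets but far from the third (over-indexed).
# Clue B is moderately close to all three.
dists_a = [0.05, 0.10, 0.80]
dists_b = [0.30, 0.35, 0.40]

# OR-style aggregation (sum / minimum) rewards being very close to *some* targets,
# so the over-indexed clue A wins:
print(sum(dists_a) < sum(dists_b))

# AND-style aggregation (maximum distance) demands closeness to *every* target,
# so the balanced clue B wins instead:
print(max(dists_b) < max(dists_a))
```

Minimizing the maximum distance is the AND-style fix the parent comment describes; softer AND-like combinations (e.g. a product of similarities) are also possible.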
by kbenson on 12/19/19, 7:04 PM