by jsilvers on 10/7/22, 4:59 PM with 38 comments
by fzliu on 10/7/22, 7:01 PM
Vectors are just getting started.
by gk1 on 10/7/22, 6:16 PM
The demand for vector embedding models (like those released by OpenAI, Cohere, HuggingFace, etc) and vector databases (like https://pinecone.io -- disclosure: I work there) has only grown since then. The market has decided that vectors are not, in fact, over.
by nelsondev on 10/7/22, 5:46 PM
There are benchmarks here: http://ann-benchmarks.com/ . LSH underperforms state-of-the-art ANN algorithms like HNSW on the recall/throughput tradeoff.
LSH was, I believe, state of the art 10-ish years ago, but has since been surpassed, although the caching aspect is really nice.
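For anyone who hasn't seen it in code, here is a minimal random-hyperplane LSH sketch for cosine similarity in plain numpy. The bit width, bucket layout, and random data are made up for illustration; real libraries use multiple hash tables and smarter probing.

    # Minimal random-hyperplane LSH sketch (cosine similarity), numpy only.
    # Parameters and data are illustrative, not from ann-benchmarks.
    import numpy as np

    rng = np.random.default_rng(0)
    dim, n_bits = 128, 16                     # embedding size, hash length
    planes = rng.normal(size=(n_bits, dim))   # random hyperplanes define the hash

    def lsh_code(v):
        """Sign of the projection onto each hyperplane -> n_bits-bit integer."""
        bits = (planes @ v) > 0
        return int("".join("1" if b else "0" for b in bits), 2)

    # Index: bucket vectors by hash code; a query only scans its matching bucket.
    data = rng.normal(size=(10_000, dim))
    buckets = {}
    for i, v in enumerate(data):
        buckets.setdefault(lsh_code(v), []).append(i)

    query = rng.normal(size=dim)
    candidates = buckets.get(lsh_code(query), [])
    # Exact re-ranking of the (hopefully small) candidate set by cosine similarity.
    if candidates:
        sims = data[candidates] @ query / (
            np.linalg.norm(data[candidates], axis=1) * np.linalg.norm(query))
        best = candidates[int(np.argmax(sims))]

The recall/throughput tradeoff the benchmarks measure shows up directly here: scanning a single small bucket is fast but can miss true neighbors, which is where graph indexes like HNSW tend to pull ahead at the same query cost.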
by robotresearcher on 10/7/22, 7:05 PM
Vectors didn't go anywhere. The article is discussing which function to use to interpret a vector.
Is there a special meaning of 'vector' here that I am missing? Is it so synonymous in the ML context with 'multidimensional floating point state space descriptor' that any other use is not a vector any more?
by whatever1 on 10/7/22, 9:22 PM
Hopefully someone who knows math will enter the field one day and build the theoretical basis for all this mess and allow us to make real progress.
by PLenz on 10/7/22, 6:20 PM
by mrkeen on 10/7/22, 8:57 PM
Wouldn't the first part of the analogy actually be:
A 1-second flight that will probably land at your exact destination, but could potentially land you anywhere on Earth?
by olliej on 10/7/22, 5:55 PM
I could see the hash approach, at a functional level, resulting in different features essentially getting a different number of bits, which would be approximately equivalent to having a NN with variable-precision floats, all in a very hand-wavy way.
E.g. we could say a NN/NH needs N bits of information to work accurately, in which case you’re trading the format and operations on those N bits.
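In that same hand-wavy spirit, a toy version of "each feature gets its own bit budget" could look like per-dimension scalar quantization. The budgets, value range, and data below are arbitrary assumptions for illustration, not anything from the article.

    # Toy illustration: quantize each feature with its own precision, so
    # "important" dimensions keep more information. Budgets are arbitrary.
    import numpy as np

    def quantize(v, bits_per_dim, lo=-3.0, hi=3.0):
        """Uniform scalar quantization; dimension i gets bits_per_dim[i] bits."""
        codes = []
        for x, b in zip(v, bits_per_dim):
            levels = 2 ** b
            step = (hi - lo) / levels
            codes.append(int(np.clip((x - lo) // step, 0, levels - 1)))
        return codes

    def dequantize(codes, bits_per_dim, lo=-3.0, hi=3.0):
        """Map each code back to the midpoint of its quantization cell."""
        return np.array([lo + (c + 0.5) * (hi - lo) / (2 ** b)
                         for c, b in zip(codes, bits_per_dim)])

    v = np.random.default_rng(1).normal(size=8)
    budget = [8, 8, 4, 4, 2, 2, 1, 1]          # more bits for "important" features
    approx = dequantize(quantize(v, budget), budget)
    print(np.abs(v - approx))                   # error grows as the budget shrinks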
by euphetar on 10/8/22, 10:18 AM
by cratermoon on 10/7/22, 8:13 PM
by whycombinetor on 10/7/22, 10:13 PM
by eterevsky on 10/8/22, 8:22 AM
The natural question is: how are you going to train it?
by tomrod on 10/8/22, 2:34 AM
by whimsicalism on 10/8/22, 2:13 AM
Are they re-inventing autoencoders?
by sramam on 10/7/22, 8:26 PM
Am I incorrect in thinking we are headed to future AIs that jump to conclusions? Or is it just my "human neural hash" being triggered in error?!
by ummonk on 10/7/22, 11:25 PM
by aaaaaaaaaaab on 10/7/22, 8:09 PM