by dpw on 3/30/15, 1:43 PM with 81 comments
by ChuckMcM on 3/30/15, 6:26 PM
edit: "incorrect" is perhaps too strong; it is incomplete.
While it is true that click tracking can be used as a relevance signal, the people who were really pissed off when the data stream got dumped were advertisers who wanted to buy AdWords. That was a very simple system: pay someone for clickstream data, extract trending queries, front those with AdWords buys to get your page to the top of Google's results, and profit.
Having built a search engine and run it for 5 years, we got to see, in a loose way, what people felt was relevant and what wasn't from click-stream data. Basically you have a query and 10 blue links; you can split the results into quartiles and figure out whether the thing they clicked on was in the top half, bottom half, top quarter, second quarter, etc., and do A/B testing to see how that played out. But what we found was that the best indication of what a page was about was the text that linked to it. If you have an in-link to a page that reads "<href='page'>great radio site"[1], then "great radio site" would be a query that should return that page, which might be titled something like "bob's electromagnetic spectrum imaginarium" or something equally unlikely to come up in a query string.
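To make the quartile split concrete, here is a minimal sketch of bucketing click positions from a click-stream log. The record layout and numbers are invented purely for illustration, not taken from any real system.

```python
import math
from collections import Counter

# Hypothetical click-stream records: (query, clicked_rank), where rank is the
# 1-based position among the 10 blue links. Field names and values are made up.
clicks = [
    ("radio sites", 1),
    ("radio sites", 3),
    ("quantum physics", 7),
    ("radio sites", 2),
    ("quantum physics", 9),
]

def quartile_of(rank, page_size=10):
    """Map a 1-based result position to a quartile bucket (1..4)."""
    return math.ceil(rank * 4 / page_size)

# Count how often users clicked into each quartile of the results page.
buckets = Counter(quartile_of(rank) for _query, rank in clicks)
total = sum(buckets.values())

for q in range(1, 5):
    print(f"quartile {q}: {buckets.get(q, 0) / total:.0%} of clicks")

# The coarser top-half / bottom-half split mentioned above.
top_half = (buckets.get(1, 0) + buckets.get(2, 0)) / total
print(f"top half: {top_half:.0%}, bottom half: {1 - top_half:.0%}")
```

An A/B test would then compare these click distributions across two ranking variants serving the same queries.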
So the bottom line is that there are lots of ways to try to determine relevance, click stream data is a part of that but by no means the biggest factor.
[1] neutered html for obvious reasons.
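And to illustrate the anchor-text signal described above: a toy sketch that indexes the text of incoming links against the page they point to, so a query like "great radio site" can match a page whose own title never mentions radio. All names and data here are made up.

```python
from collections import defaultdict

# Toy corpus: (source_page, anchor_text, target_url). In a real crawler the
# anchor text would come from <a> tags found while parsing pages; everything
# here is invented for illustration.
links = [
    ("homepage.example", "great radio site", "bobs-imaginarium.example"),
    ("blog.example", "radio spectrum charts", "bobs-imaginarium.example"),
    ("news.example", "great physics lectures", "lectures.example"),
]

# Inverted index: term -> set of target pages whose *incoming* anchor text
# contained that term. The target page's own title never needs to match.
anchor_index = defaultdict(set)
for _source, anchor_text, target in links:
    for term in anchor_text.lower().split():
        anchor_index[term].add(target)

def search(query):
    """Return pages whose in-link text contains every query term."""
    term_sets = [anchor_index.get(t, set()) for t in query.lower().split()]
    return set.intersection(*term_sets) if term_sets else set()

print(search("great radio site"))
# -> {'bobs-imaginarium.example'}, even though that page might be titled
#    "bob's electromagnetic spectrum imaginarium"
```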
by jfuhrman on 3/30/15, 4:05 PM
This is interesting because of the browser choice screen the EU enforced on Windows. IE, whose default is Bing, lost share to other browsers like Chrome, Firefox and Opera, which all had Google as the default. So an attempt to fix the browser market totally distorted the web search market. I wonder why MS didn't ask the EU to require that the alternate browsers on the browser choice screen have Bing as the default search.
I wonder if the EU will mandate that Google share search relevancy data with rival search engines like DDG, just as they mandated that MS document SMB shares and Office formats and release them to developers.
by solve on 3/30/15, 5:53 PM
Google's biggest PR success is convincing everyone that the quality of web rankings depends almost purely on algorithms. It does not. What allows Google to hold their monopoly is the hundreds of millions of dollars (or more) they continuously pay to amass more manually created training data:
http://www.theregister.co.uk/2012/11/27/google_raters_manual
http://www.forbes.com/sites/timworstall/2012/11/27/is-google...
A new search engine could appear today with algorithms 10x better than Google, but without access to this scale of training data, their rankings wouldn't even be close to Google's quality.
Google maintains their position by paying cash for this monopoly on training data made by tens of thousands of $9/hour workers, not through superior algorithms!
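As a toy illustration of what that kind of training data buys a ranker: human-rated (query, document, relevance) labels fitted with a simple pointwise linear scorer. The features and grades below are invented; the point is only that the labels, not the model, are the scarce ingredient.

```python
import numpy as np

# Hypothetical rater labels: each row is a feature vector for a (query, document)
# pair, e.g. [query terms in title, query terms in anchor text, link-based score].
X = np.array([
    [1.0, 2.0, 0.7],
    [0.0, 3.0, 0.2],
    [2.0, 0.0, 0.9],
    [0.0, 0.0, 0.1],
    [1.0, 1.0, 0.5],
])
y = np.array([2.0, 3.0, 2.0, 0.0, 1.0])  # relevance grades assigned by human raters

# Pointwise "learning to rank": fit a linear scorer to the rated examples.
# Real systems use far richer models, but they all need labeled data like y.
w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)

def score(features):
    """Score an unseen (query, document) feature vector with the fitted weights."""
    return float(np.dot(np.append(features, 1.0), w))

# Rank two candidate documents for some new query by their learned score.
candidates = {"doc_a": [1.5, 1.0, 0.6], "doc_b": [0.2, 0.1, 0.3]}
ranking = sorted(candidates, key=lambda d: score(candidates[d]), reverse=True)
print(ranking)
```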
by bobajeff on 3/30/15, 3:29 PM
Computers introduce a means of locking people in that doesn't exist in other markets. Software products often have ecosystems that tie directly into the product/service and that are not required to be shared with competitors, unlike road systems for cars.
Regulators ought to look into ways to enforce measures that require companies to completely open their ecosystems to competitors. Or look into ways to standardize these ecosystems and require every service/application/website to comply with them (similar to how media companies are forced to include closed captioning).
by sanxiyn on 3/30/15, 2:45 PM
by jjoe on 3/30/15, 2:58 PM
by ocdtrekkie on 3/30/15, 2:46 PM
by pcl on 3/30/15, 2:46 PM
by ntakasaki on 3/30/15, 3:54 PM
MS didn't do that from IE; they did it for users who installed the Bing bar, which is a huge difference.
by Metapilot on 3/30/15, 6:35 PM
The author states that "For some 90% of searches, a modern search engine analyzes and learns from past queries, rather than searching the Web itself, to deliver the most relevant results." This may be true for some types of searches, but overall I think the statement is misleading.
Rather, it's better to think of it like this: one important part of the algorithmic process involves constantly crawling the web and updating the index with new information. (Important / frequently-updated web sites may get crawled all day every day, while less important ones may get crawled only weekly or monthly.) Meanwhile, another part of the algorithmic process constantly analyzes new info discovered in the crawl and combines it with, as the author mentioned, click-through data learned from past queries.
The answers to many queries don't change, while the answers to many other queries deserve freshness. For example, I'm quite certain Einstein's date of birth hasn't changed in quite a while, but his theory of relativity is in constant discussion and there is always new information and new queries pertaining to it. As a result, there is not much need for a search engine to go digging for the latest info on an "einstein's birthday" query, but it's to everyone's advantage that Google is able to identify which pages on the web deserve priority crawling and that Google has retrieved and incorporated the fresh info those pages contain into its index when it comes to a topical type of query like "diffraction of light with quantum physics".
In the end, the results to every query depend on info gathered from the web and user data helps refine the results. Info that is more static can be prioritized with more input from click-through data, while new information found on the web must rely more on Google's artificial intelligence to push it up in front of searchers.
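A rough sketch of that priority-crawling idea: pages that change often and matter more get shorter recrawl intervals, so fresh topics get re-indexed quickly while static facts are revisited rarely. The URLs, stats, and interval formula below are invented placeholders, not how Google actually schedules crawls.

```python
import heapq
import time

# Hypothetical per-page stats: how important the page is and how often its
# content has been observed to change. Values are invented placeholders.
pages = {
    "news-site.example/front": {"importance": 0.9, "changes_per_day": 24.0},
    "physics-blog.example/relativity": {"importance": 0.6, "changes_per_day": 1.0},
    "static-bio.example/einstein": {"importance": 0.5, "changes_per_day": 0.01},
}

def recrawl_interval_hours(stats):
    """Shorter intervals for pages that change often and matter more."""
    base = 24.0 / max(stats["changes_per_day"], 1e-3)  # roughly match the change rate
    return max(0.5, min(base / (0.1 + stats["importance"]), 24 * 30))

# Priority queue of (next_crawl_time, url): the scheduler always pulls the page
# that is due soonest, so fast-changing pages are refreshed far more often.
now = time.time()
queue = [(now + recrawl_interval_hours(s) * 3600, url) for url, s in pages.items()]
heapq.heapify(queue)

while queue:
    due, url = heapq.heappop(queue)
    print(f"{url}: recrawl in {(due - now) / 3600:.1f}h")
```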
Another reason that "90%" statement sticks out to me is the fairly often-used factoid tossed around by industry experts that between 6% and 20% of the queries asked every day have never been asked before. Google can't rely heavily on past query data for these types of searches.
by wmf on 3/30/15, 2:19 PM
by ekr on 3/30/15, 3:50 PM
by minthd on 3/30/15, 3:11 PM
Has anybody noticed this happening?
by tokai on 3/30/15, 6:22 PM
by PaulHoule on 3/30/15, 3:08 PM
by countrybama24 on 3/31/15, 3:51 AM
by thallukrish on 3/30/15, 4:14 PM
by thrownaway2424 on 3/30/15, 4:04 PM
by Semiapies on 3/30/15, 3:31 PM
by asuffield on 3/30/15, 3:31 PM
This article makes a number of bold claims about the contents of data and code which its author hasn't seen, and is written by a company that is receiving a large amount of money from Yahoo. I would encourage people not to forget these details.