We have all heard Google say it has over 200 ranking signals in its search algorithm. Now Frédéric Dubut from Bing says Bing has hundreds, if not thousands, of “features” in its search ranking models.
He said this on Twitter: “The model has hundreds, if not thousands of features.” He added that a “large part of the web ranking team’s work is to engineer new features with more predictive power and retire those that have virtually no impact on rankings.”
The model has hundreds, if not thousands of features. A large part of the web ranking team’s work is to engineer new features with more predictive power and retire those that have virtually no impact on rankings.
— Frédéric Dubut (@CoperniX) March 13, 2019
To understand more about what features are, you can read his SEJ contribution, but it seems they are a form of signal translated into a format a machine learning model can read.
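To make that idea concrete, here is a minimal sketch of turning raw page signals into a numeric feature vector. The signal names here are invented for illustration; Bing’s actual features are not public.

```python
# Hypothetical illustration: converting raw page "signals" into a numeric
# feature vector that a machine learning model can consume. The signal
# names are made up for this example; Bing's real features are not public.

def to_feature_vector(page):
    """Map raw signals to floats in a fixed order."""
    return [
        float(page["title_contains_query"]),  # boolean -> 0.0 or 1.0
        page["inbound_links"] / 1000.0,       # scaled link count
        page["load_time_ms"] / 1000.0,        # load time in seconds
        len(page["body_text"]) / 10000.0,     # rough content-length proxy
    ]

page = {
    "title_contains_query": True,
    "inbound_links": 420,
    "load_time_ms": 850,
    "body_text": "x" * 5000,
}
print(to_feature_vector(page))  # [1.0, 0.42, 0.85, 0.5]
```

The point is only that every signal, whatever its raw form, ends up as a number the model can weigh.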
At a high level, machine learning is good at identifying patterns in data and generalizing based on a (relatively) small set of examples.
For web ranking, it means building a model that will look at some ideal SERPs and learn which features are the most predictive of relevance.
This makes machine learning a scalable way to create a web ranking algorithm. You don’t need to hire experts in every single possible topic to carefully engineer your algorithm.
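The process Dubut describes can be sketched as a toy pointwise ranking model: fit weights to labeled “ideal” examples, then use those weights to rank new pages. This is an assumption-laden simplification, not Bing’s actual system.

```python
# A minimal sketch of the idea: learn from "ideal" labeled examples which
# features predict relevance, then use the learned weights to rank pages.
# This is a toy pointwise linear model, not Bing's actual algorithm.

def train(examples, labels, lr=0.1, epochs=500):
    """Fit weights so that dot(w, x) approximates the relevance label."""
    n = len(examples[0])
    w = [0.0] * n
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            pred = sum(wi * xi for wi, xi in zip(w, x))
            err = pred - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

def rank(pages, w):
    """Sort pages (feature vectors) by predicted relevance, best first."""
    return sorted(pages, key=lambda x: -sum(wi * xi for wi, xi in zip(w, x)))

# Toy training data: [query_in_title, link_score] -> relevance label (0..1)
X = [[1.0, 0.9], [1.0, 0.1], [0.0, 0.8], [0.0, 0.1]]
y = [1.0, 0.6, 0.4, 0.0]
w = train(X, y)

# Features with more predictive power end up with larger weights; a feature
# whose weight stays near zero could be "retired" with little ranking impact.
print(rank([[0.0, 0.2], [1.0, 0.7]], w))
```

Engineering a new feature means adding a column to those vectors and seeing whether the model assigns it meaningful weight, which matches Dubut’s description of the team’s work.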
In any event, it is interesting to hear how Bing does this compared to Google. But in both cases, the details remain very much a secret and a mystery to people like you and me.
Forum discussion at Twitter.