The biggest thing to take away from RankBrain discussions should be that RankBrain works out, at the individual keyword level, which signals produce the best organic results. In other words, RankBrain is segmenting the importance of ranking factors based on what it finds on a per-keyword basis. RankBrain might find that the best organic results for the keyword "SEO" rely heavily on title tags, so it will prioritize the title tag above PageRank or other factors. It may then find that on-page content relevance has yielded better organic results for the keyword "auto insurance" - so it will prioritize that ranking factor above the others.
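To picture what that per-keyword weighting looks like in practice, here's a tiny conceptual sketch in Python. The signal names and weight values are completely made up for illustration; nobody outside Google knows the real factors or numbers:
```python
# Conceptual sketch only: per-keyword weighting of ranking signals.
# Signal names and weights are hypothetical, not Google's actual factors.

# The RankBrain idea: each query gets its own mix of signal weights.
WEIGHTS_BY_QUERY = {
    "seo":            {"title_tag": 0.6, "pagerank": 0.2, "content_relevance": 0.2},
    "auto insurance": {"title_tag": 0.2, "pagerank": 0.2, "content_relevance": 0.6},
}
DEFAULT_WEIGHTS = {"title_tag": 1 / 3, "pagerank": 1 / 3, "content_relevance": 1 / 3}

def score_page(query: str, signals: dict) -> float:
    """Combine a page's signal scores using the weight mix assigned to this query."""
    weights = WEIGHTS_BY_QUERY.get(query, DEFAULT_WEIGHTS)
    return sum(weight * signals.get(name, 0.0) for name, weight in weights.items())

# The same page scores differently depending on the query's weight mix.
page = {"title_tag": 0.9, "pagerank": 0.4, "content_relevance": 0.5}
print(score_page("seo", page))             # title tag dominates here
print(score_page("auto insurance", page))  # content relevance dominates here
```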
Google finally realized that different segments/niches/industries should show different styles of websites with different data structures.
If you are looking for 'food recipes' you are more inclined to find instruction-based websites, versus looking for 'yachts for sale'. A yachts-for-sale website will have a different structure and different data, and should therefore have different priorities, because one type of site might naturally have more content, images, or videos than the other.
Take videos for example: videos have hardly any content except the description and user-generated comments, so gauging what's best might come down more to title tags and anchor text versus in-content ranking factors.
So far, RankBrain is living up to its AI hype. Google search engineers, who spend their days crafting the algorithms that underpin the search software, were asked to eyeball some pages and guess which they thought Google’s search engine technology would rank on top. While the humans guessed correctly 70 percent of the time, RankBrain had an 80 percent success rate.
Typical Google users agree. In experiments, the company found that turning off this feature "would be as damaging to users as forgetting to serve half the pages on Wikipedia," Corrado said.
Source: http://news.surgogroup.com/google-turning-lucrative-web-search-ai-machines/
What's important to take away is that RankBrain is always refining itself and being monitored.
All learning that RankBrain does is offline, Google told us. It’s given batches of historical searches and learns to make predictions from these.
Those predictions are tested, and if proven good, then the latest version of RankBrain goes live. Then the learn-offline-and-test cycle is repeated.
Source: http://searchengineland.com/faq-all-about-the-new-google-rankbrain-algorithm-234440
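The learn-offline-and-test cycle they describe can be pictured as a simple promote-if-better loop. This is just a conceptual sketch; the training step, the evaluation metric, and the numbers are all placeholders:
```python
# Conceptual sketch of the "learn offline, test, then go live" cycle.
# The training step, evaluation metric, and numbers are placeholders, not Google's internals.

def train_candidate(historical_batch):
    """Stand-in training step: in reality, a deep learning job over huge query logs."""
    return {"title_tag": 0.5, "pagerank": 0.2, "content_relevance": 0.3}

def evaluate(model, holdout_queries):
    """Stand-in evaluation: fraction of holdout queries where the model picks the 'right' result."""
    return 0.80  # invented number, echoing the reported 80% guess rate

live_model, live_score = {"title_tag": 0.4, "pagerank": 0.4, "content_relevance": 0.2}, 0.70

for batch in ["historical batch 1", "historical batch 2"]:  # offline batches, never live traffic
    candidate = train_candidate(batch)
    score = evaluate(candidate, holdout_queries="holdout batch")
    if score > live_score:  # only ship the candidate if the offline test says it's better
        live_model, live_score = candidate, score

print(live_model, live_score)
```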
One thing that has always baffled me when talking to SEOs is that they seem to think they are smarter than the PhDs who work at Google. They also assume that Google, of all companies, does not visit the same forums and blogs, and read the same content, to keep up with the latest techniques being used to 'manipulate' SEO. Never forget:
It's literally the reason why so many SEO forums have died off and why conversations are moving to pure Skype and invite-only groups to keep things "secretz". What you need to realize is that SEO is going to become more "technical" and harder. In the extremely near future, what might work in one niche DEFINITELY will not work in another niche; even within the same niche, one set of keywords might respond to increased authority backlinks while that tactic may not work on another set of keywords.
It's not just about content or links; it's about understanding what the best results are for searchers on a per-keyword/query basis.
This is probably the most comprehensive article on this so far, and on what the future looks like: Artificial intelligence is changing SEO faster than you think
Google's RankBrain is in the camp of the Connectionists. Connectionists believe that all our knowledge is encoded in the connections between neurons in our brain. And RankBrain’s particular strategy is what experts in the field call a back propagation technique, rebranded as "deep learning."
Connectionists claim this strategy is capable of learning anything from raw data, and therefore is also capable of ultimately automating all knowledge discovery. Google apparently believes this, too. On January 26th, 2014, Google announced it had agreed to acquire DeepMind Technologies, which was, essentially, a back propagation shop.
So when we talk about RankBrain, we now can tell people it is comprised of one particular technique (back propagation or "deep learning") on ANI (Artificial Narrow Intelligence).
[..]
Today’s regression analysis is seriously flawed
This is the biggest current fallacy of our industry. There have been many prognosticators every time Google’s rankings shift in a big way. Usually, without fail, a few data scientists and CTOs from well-known companies in our industry will claim they "have a reason!" for the latest Google Dance. The typical analysis consists of perusing through months of ranking data leading up to the event, then seeing how the rankings shifted across all websites of different types.
With today’s approach to regression analysis, these data scientists point to a specific type of website that has been affected (positively or negatively) and conclude with high certainty that Google’s latest algorithmic shift was attributed to a specific type of algorithm (content or backlink, et al.) that these websites shared.
However, that isn’t how Google works anymore. Google’s RankBrain, a machine learning or deep learning approach, works very differently.
Within Google, there are a number of core algorithms that exist. It is RankBrain’s job to learn what mixture of these core algorithms is best applied to each type of search results. For instance, in certain search results, RankBrain might learn that the most important signal is the META Title.
Adding more significance to the META Title matching algorithm might lead to a better searcher experience. But in another search result, this very same signal might have a horrible correlation with a good searcher experience. So in that other vertical, another algorithm, maybe PageRank, might be promoted more.
This means that, in each search result, Google has a completely different mix of algorithms. You can now see why doing regression analysis over every site, without having the context of the search result that it is in, is supremely flawed.
For these reasons, today’s regression analysis must be done by each specific search result. Stouffer recently wrote about a search modeling approach where the Google algorithmic shifts can be measured. First, you can take a snapshot of what the search engine model was calibrated to in the past for a specific keyword search. Then, re-calibrate it after a shift in rankings has been detected, revealing the delta between the two search engine model settings. Using this approach, during certain ranking shifts, you can see which particular algorithm is being promoted or demoted in its weighting.
Having this knowledge, we can then focus on improving that particular part of SEO for sites for those unique search results. But that same approach will not (and cannot) hold for other search results. This is because RankBrain is operating on the search result (or keyword) level. It is literally customizing the algorithms for each search result.
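To make the snapshot-and-diff idea concrete, here's a rough sketch of measuring the delta between two calibrations for a single keyword. The signal names and weights are invented purely to show the arithmetic:
```python
# Illustrative only: measure which signal weighting moved for one keyword's results
# by diffing a "before" and "after" calibration. All numbers are made up.

before = {"title_tag": 0.50, "pagerank": 0.30, "content_relevance": 0.20}  # snapshot pre-shift
after  = {"title_tag": 0.35, "pagerank": 0.30, "content_relevance": 0.35}  # re-calibrated post-shift

deltas = {signal: round(after[signal] - before[signal], 2) for signal in before}
promoted = max(deltas, key=deltas.get)
demoted = min(deltas, key=deltas.get)

print(deltas)  # {'title_tag': -0.15, 'pagerank': 0.0, 'content_relevance': 0.15}
print(f"promoted: {promoted}, demoted: {demoted}")
```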
[..]
What Google also realized is that they could teach their new deep learning system, RankBrain, what "good" sites look like, and what "bad" sites look like. Similar to how they weight algorithms differently for each search result, they also realized that each vertical had different examples of "good" and "bad" sites. This is undoubtedly because different verticals have different CRMs, different templates and different structures of data altogether.
When RankBrain operates, it is essentially learning what the correct "settings" are for each environment. As you might have guessed by now, these settings are completely dependent on the vertical on which it is operating. So, for instance, in the health industry, Google knows that a site like WebMD.com is a reputable site that they would like to have near the top of their searchable index. Anything that looks like the structure of WebMD’s site will be associated with the "good" camp. Similarly, any site that looks like the structure of a known spammy site in the health vertical will be associated with the "bad" camp.
As RankBrain works to group "good" and "bad" sites together, using its deep learning capabilities, what happens if you have a site that has many different industries all rolled up into one?
First, we have to discuss a bit more detail on how exactly this deep learning works. Before grouping together sites into a "good" and "bad" bucket, RankBrain must first determine what each site’s classification is. Sites like Nike.com and WebMD.com are pretty easy. While there are many different sub-categories on each site, the general category is very straightforward. These types of sites are easily classifiable.
But what about sites that have many different categories? A good example of these types of sites are the How-To sites. Sites that typically have many broad categories of information. In these instances, the deep learning process breaks down. Which training data does Google use on these sites? The answer is: It can be seemingly random. It may choose one category or another. For well-known sites, like Wikipedia, Google can opt-out of this classification process altogether, to ensure that the deep learning process doesn’t undercut their existing search experience (aka "too big to fail").
The field of SEO will continue to become extremely technical.
But for lesser-known entities, what will happen? The answer is, "Who knows?" Presumably, this machine learning process has an automated way of classifying each site before attempting to compare it to other sites. Let’s say a How-To site looks just like WebMD’s site. Great, right?
Well, if the classification process thinks this site is about shoes, then it is going to be comparing the site to Nike’s site structure, not WebMD’s. It just might turn out that their site structure looks a lot like a spammy shoe site, as opposed to a reputable WebMD site, in which case the overly generalized site could easily be flagged as SPAM. If the How-To site had separate domains, then it would be easy to make each genre look like the best of that industry. Stay niche.
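Here's a toy version of that classify-then-compare step: the site gets bucketed into a vertical based on its own features, and is then judged only against the known "good" and "bad" examples of that vertical. Every vertical, feature, and exemplar profile below is hypothetical:
```python
# Toy illustration of classify-then-compare. Verticals, features and
# "good"/"bad" exemplar profiles are all hypothetical.
import math

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two feature profiles."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

EXEMPLARS = {
    "health": {"good": {"long_articles": 0.9, "author_bios": 0.8, "product_pages": 0.1},
               "bad":  {"long_articles": 0.2, "author_bios": 0.0, "product_pages": 0.3}},
    "shoes":  {"good": {"long_articles": 0.2, "author_bios": 0.1, "product_pages": 0.9},
               "bad":  {"long_articles": 0.1, "author_bios": 0.0, "product_pages": 0.9}},
}

def classify_vertical(site: dict) -> str:
    """Bucket the site into whichever vertical's 'good' exemplar it most resembles."""
    return max(EXEMPLARS, key=lambda v: cosine(site, EXEMPLARS[v]["good"]))

def judge(site: dict) -> str:
    """Compare the site only against the exemplars of its assigned vertical."""
    vertical = classify_vertical(site)
    good = cosine(site, EXEMPLARS[vertical]["good"])
    bad = cosine(site, EXEMPLARS[vertical]["bad"])
    return f"{vertical}: {'good-looking' if good >= bad else 'spam-looking'}"

# A broad how-to site that happens to resemble a shoe retailer gets judged
# against the shoe exemplars, not the health ones.
howto_site = {"long_articles": 0.3, "author_bios": 0.0, "product_pages": 0.8}
print(judge(howto_site))
```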
These backlinks smell fishy
Let’s take a look at how this affects backlinks. Based on the classification procedure above, it is more important than ever to stick within your "linking neighborhood," as RankBrain will know if something is different from similar backlink profiles in your vertical.
Let’s take the same example as above. Say a company has a site about shoes. We know that RankBrain’s deep learning process will attempt to compare each aspect of this site with the best and worst sites of the shoe industry. So, naturally, the backlink profile of this site will be compared to the backlink profiles of these best and worst sites.
Let’s also say that a typical reputable shoe site has backlinks from the following neighborhoods:
- Sports
- Health
- Fashion
Now say the new shoe site instead has a bunch of backlinks from auto sites. Well, RankBrain is going to see this and notice that this backlink profile looks a lot different than the typical reputable shoe site's. Worse yet, it finds that a bunch of spammy shoe sites also have backlink profiles from auto sites. Uh oh.
And just like that, without even knowing what is the "correct" backlink profile, RankBrain has sniffed out what is "good" and what is "bad" for its search engine results. The new shoe site is flagged, and their organic traffic takes a nosedive.
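As an illustration of that "linking neighborhood" check, here's a sketch that flags backlink categories whose share looks nothing like the typical profile for the vertical. The categories, shares, and tolerance are all invented:
```python
# Toy "linking neighborhood" check. Category shares and tolerance are invented for illustration.

TYPICAL_SHOE_PROFILE = {"sports": 0.40, "health": 0.30, "fashion": 0.30}  # from reputable shoe sites

def profile(backlink_categories: list) -> dict:
    """Turn a list of backlink source categories into a share-per-category profile."""
    total = len(backlink_categories)
    return {cat: backlink_categories.count(cat) / total for cat in set(backlink_categories)}

def odd_neighborhoods(site_profile: dict, typical: dict, tolerance: float = 0.20) -> list:
    """Flag categories whose share differs from the vertical's typical profile by more than the tolerance."""
    categories = set(site_profile) | set(typical)
    return [cat for cat in categories
            if abs(site_profile.get(cat, 0.0) - typical.get(cat, 0.0)) > tolerance]

new_shoe_site = profile(["auto"] * 60 + ["sports"] * 20 + ["fashion"] * 20)
print(odd_neighborhoods(new_shoe_site, TYPICAL_SHOE_PROFILE))  # flags 'auto' (excess) and 'health' (missing)
```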
Some things are certain, though:
- Each competitive keyword environment will need to be examined on its own;
- Most sites will need to stay niche to avoid misclassification; and
- Each site should mimic the structure and composition of their respective top sites in that niche.
To stay within the good graces of RankBrain, your site's structure, its backlink profile, and every other SEO signal associated with the top websites in your vertical should also be reflected on your own website. Meaning if 90% of the top 10 or top 20 websites have SSL enabled by default, then your website better have SSL enabled by default. If the top sites have video channels associated with their brand or videos embedded within their content, then your website better have the same elements.
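One way to act on "mirror the top sites" is a simple parity audit: for each on-site element, check how common it is among the current top results and flag anything prevalent that your site lacks. The feature list and the 90% threshold below are just the examples from above, not an official checklist:
```python
# Hypothetical parity audit against the current top-ranking sites for a keyword.
# The features and the 90% threshold mirror the examples above; nothing here is an official list.

# Pretend crawl results for the top 10 sites (True = feature present).
top_sites = [
    {"ssl": True, "video_embed": True,  "logo": True},
    {"ssl": True, "video_embed": True,  "logo": True},
    {"ssl": True, "video_embed": False, "logo": True},
] * 3 + [{"ssl": True, "video_embed": True, "logo": True}]

my_site = {"ssl": False, "video_embed": False, "logo": True}

def parity_gaps(site: dict, competitors: list, threshold: float = 0.9) -> list:
    """Return features present on >= threshold of competitors but missing on this site."""
    gaps = []
    for feature in site:
        prevalence = sum(comp.get(feature, False) for comp in competitors) / len(competitors)
        if prevalence >= threshold and not site[feature]:
            gaps.append(f"{feature}: {prevalence:.0%} of top sites have it, you don't")
    return gaps

for gap in parity_gaps(my_site, top_sites):
    print(gap)  # e.g. "ssl: 100% of top sites have it, you don't"
```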
If you're an outlier and want to rank your WordPress Twenty Twelve-themed site with no images, no logo, and half-assed social media icons - you're not going to be around for long, not within these SERPs.
As @Broland said - "survival of the fittest!"
As I continue my research and findings on RankBrain, I'll keep updating this thread and posting the latest.