Back in the golden era of SEO, before anyone considered penalties or imagined that anyone actually worked at Google, this was 'quality content':
<h1>Best Injury Lawyer Wisconsin</h1>
If you're looking for best injury lawyer wisconsin, then you have come to the right place because accident lawyer wisconsin is ready to handle your case efficiently and quickly. Whether you've been hit by a bus wisconsin, or need workplace accident lawyer wisconsin we're ready to help on 555-1234-567.
....
Then you'd better have pages for:
* every variant on that query eg. 'accident attorney wisconsin' etc
* every town, county, and residential area in the state
and so on...
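To make it concrete, the doorway-page generators of that era were more or less the following, sketched here in Python with invented variants and towns:

```python
# The era's "content strategy", more or less (illustrative sketch only).
variants = ["injury lawyer", "injury attorney", "accident lawyer", "accident attorney"]
places = ["Milwaukee", "Madison", "Green Bay", "Kenosha"]  # ...plus every town and county

for variant in variants:
    for place in places:
        title = f"Best {variant.title()} {place}"
        slug = f"/{variant.replace(' ', '-')}-{place.lower().replace(' ', '-')}"
        # stamp out a near-identical doorway page for every combination
        print(slug, "->", title)
```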
---
Some of you are lucky and don't even remember those times. But back then it worked and it worked well.
Now we're in an era where the same shenanigans are going on. Sure, we don't have anyone pushing people to bold, italicise, and underline 57 keywords per 500-word filler article written for 1c/word.
But what we do have is 'AI tools' to tell us what the TF-IDF of phrases on competing pages is, or which 'semantically related keywords' need to be stuffed in to make a 'rounded' article on the topic, etc.
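For the unfamiliar: TF-IDF is just term frequency weighted by inverse document frequency, and these tools mostly run something like the sketch below against scraped competitor pages (scikit-learn here, with made-up competitor text):

```python
# Minimal sketch of the TF-IDF analysis the tools run; the competitor
# pages are invented here -- a real tool scrapes the top-ranking results.
from sklearn.feature_extraction.text import TfidfVectorizer

competitor_pages = [
    "injury lawyer wisconsin free consultation accident claim settlement",
    "wisconsin personal injury attorney car accident compensation claim",
    "best accident lawyer milwaukee workplace injury settlement advice",
]

# ngram_range=(1, 2) scores two-word phrases as well as single words
vectorizer = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
tfidf = vectorizer.fit_transform(competitor_pages)

# Average weight of each term across the competing pages -- i.e. the
# "semantically related keywords" your article gets told to include
weights = tfidf.mean(axis=0).A1
terms = vectorizer.get_feature_names_out()
for term, w in sorted(zip(terms, weights), key=lambda p: -p[1])[:10]:
    print(f"{term:30s} {w:.3f}")
```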
So thousands of SEOs and internet marketers are duly creating documents which meet these criteria. Sure, they hire better writers than we used to hire back in the day, and sure, it's probably (vaguely) more interesting to the readers. Heck, sometimes it's even written by a 'subject matter expert', e.g. doctors writing for drug treatment centers.
It's just the next evolution in the game - stuffing a different type of keyword (semantically related phrases and so on).
---
"Fair enough but what the hell is the point of this?"
Ok I've rambled long enough - here's the point.
How does Google, or any current machine learning tool, tell the difference between these levels of content if they all contain exactly the right words and are written by a competent writer?
* A review by a writer who's never seen the product, let alone used it, but has written about all the correct features of such a product and added some apparently real opinions about its shortcomings and benefits.
* A review by a writer who isn't much of an expert in the topic but is an expert at writing about the topic, and has actually tried the product out, so has both the right words and some slightly more accurate (though not always insightful or deep) recommendations and thoughts.
* A review by a genuine expert whose opinion touches on the tiny details the others miss and really adds value for the end reader, but doesn't involve significantly different words to describe.
The truth is they can't. Imagine the computing power required just to fact-check a basic article, let alone to decide whether those facts were pertinent or useful - it's beyond ludicrous.
So right now you can probably get away with the second one pretty damned well...
---
Let's look at an example in an area where I'm an expert - Travel and reward miles:
https://marginallycoherent.com/one-...he-value-of-miles-to-sell-credit-de6de45d995e
I wrote the above last week. The tl;dr is that points-and-miles blogs use ridiculous examples to make credit card miles seem more valuable - for example, by picking obscure one-way redemptions that price out high in cash, then saying 'so I got xp/mile and my site only values them at yp/mile, so I got 8x value'.
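To see how the arithmetic of that trick works (all numbers invented for illustration):

```python
# Invented numbers illustrating the inflated-valuation trick.
cash_price = 5000.00   # cash fare of a cherry-picked one-way that prices out high
miles_used = 50_000    # miles actually redeemed for it
site_value = 0.0125    # the blog's own valuation, dollars per mile

achieved = cash_price / miles_used   # 0.10 dollars per mile "achieved"
print(f"got {achieved * 100:.0f}c/mile vs a {site_value * 100:.2f}c/mile "
      f"valuation -> '{achieved / site_value:.0f}x value!'")
```

The catch, of course, is that nobody would ever have paid that cash fare in the first place, so the 'value achieved' is fiction.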
Imagine an AI that has to fact-check all of these posts and somehow sort them. First, nobody has the same needs or takes the same redemptions, so it has to model the average behaviour of miles-and-points users and establish a value based on that, just to fact-check the comparison yp/mile number.
Then it has to go to Google Flights and fact-check the prices on the flights.
Then it has to understand the content well enough to analyse the point being made, and sort the various articles discussing a particular type of point or redemption by accuracy.
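Even written as stubs, the pipeline those three steps imply is absurd - a purely hypothetical sketch (every function here is invented; none of this exists anywhere):

```python
# Hypothetical sketch of the fact-checking pipeline described above.
# Every function is an invented stub -- the point is what each one hides.

def baseline_value_per_mile(program: str) -> float:
    """Model the average redemption behaviour of a program's users and
    derive a defensible cents-per-mile value to check claims against."""
    raise NotImplementedError("needs redemption data nobody publishes")

def verify_cash_fare(route: str, date: str, claimed_price: float) -> bool:
    """Re-price the quoted flight (e.g. via Google Flights) to check the
    article's cash-fare claim."""
    raise NotImplementedError("fares change daily; checked against when?")

def rank_by_accuracy(articles: list[str]) -> list[str]:
    """Understand each article's argument well enough to sort everything
    written about the same redemption by how accurate it is."""
    raise NotImplementedError("this is the actual hard AI problem")
```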
And even then it would still have them in the wrong order...
I think we're a million miles from anything any search engine currently does, and even that wouldn't be the end of it. The typical use of points is a terrible way to use them, because the typical person is an idiot. As Churchill pointed out, the average voter isn't too smart...
So how do we establish the yp/mile comparison? The base value of the points (the immediate cash equivalent, e.g. cashing in Membership Rewards for cash on the AMEX portal)? The ideal value? The typical value achieved by a savvy customer? Let's say somewhere in the middle - so work that out, keep it updated, and constantly reassess pages based on their accuracy vs. this calculation...?
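And even 'somewhere in the middle' is a modelling choice all by itself - something like this (weights and cent values invented):

```python
# Invented numbers: three candidate values for one transferable point/mile.
base_cash = 0.6   # cents/point cashing out directly (e.g. via the portal)
typical   = 1.2   # cents/point the average user actually achieves
savvy     = 1.8   # cents/point a savvy user gets on good redemptions

# The blend itself is a judgment call, and every weight here would need
# constant re-estimation as programs devalue their points.
weights = (0.2, 0.5, 0.3)
baseline = sum(w * v for w, v in zip(weights, (base_cash, typical, savvy)))
print(f"baseline: {baseline:.2f}c/point")  # 1.26c/point
```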
How do we assess the real 'deal' being achieved, or the 'value of the tip' being shared? Look at the likelihood anyone actually wants to book that trip? Maybe real travellers always book a return, not a one-way, when they go from LAX to ICN?
In that case we need to understand that return trips have to be priced both as all-points and as points one way plus cash the other, and so on...
Then what about booking timescales? Articles that compare last-minute pricing to 'realistic' pricing, where people plan their trip in advance, etc...
It's an absolute minefield assessing who has a good tip/valuable advice even for such a simple question.
It's not going to happen soon - if anything, Google is moving more towards authority as trust and away from the words on the page. You can tell whenever you try to find anything obscure these days.
https://twitter.com/tehseowner/status/1121661950708989952
https://twitter.com/dr_pete/status/1112767864442900487
So 'finding things' is becoming something Google is bad at, because short-tail authority articles on sites with tons of links are crowding out page one completely, even for some pretty long-tail searches.
Authority of your site is still the only reliable thing they have to go on because right now everything else is too damned hard.
There are contributing factors to that authority, and endless algorithmic points tables - like having real doctors with real links to their site (which conveniently also links to your article on their 'my contributions' page) - but right now the only way Google can sort 'maybe this person is reviewing a meal delivery service that delivers dog food curry' from 'meh, at least it's a real review of services that deliver human food' is by putting PCMag first.
Unless AI makes a huge leap forward, we're stuck in this space for 10 years where authority is king.
Go build some authority.