Wednesday, October 17, 2007
OK, maybe I'm not about to call for its "death" as I did with the keyword meta tag, but Duane Forrester's fine piece about Big SEO and automation just triggered a couple of morbid thoughts about our old friend "title tag".
Let's say you have a million pages. So you say you need SEO, eh? That sounds like it's going to be a mighty big job. Forrester correctly points out there isn't very much you can do manually. Although I would counter that you can work on between 500 and 2,000 pages to cover some pretty impressive ground, search-frequency-wise, if you're so inclined.
So what is on-page SEO, exactly? Is it adding appropriate titles, headings, and meta keyword and description tags to all pages, thereby increasing their rank potential?
Let's work through the logic here. You're going to make sure certain "core" keywords appear multiple times in the document, "amplifying" their weight. But doesn't that just take us back to keyword density?
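For the record, that "weight" reduces to a simple ratio. A minimal sketch of the density calculation (illustrative only; the function name and example text are my own):

```python
import re
from collections import Counter

def keyword_density(text: str, keyword: str) -> float:
    """Naive keyword density: occurrences of the keyword
    divided by the total number of words on the page."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    if not words:
        return 0.0
    return Counter(words)[keyword.lower()] / len(words)

# 2 occurrences of "widgets" out of 6 words
density = keyword_density("Blue widgets: buy blue widgets online", "widgets")
```

Stuffing the same word into more page elements just pushes that ratio up, which is exactly the metric the engines learned to discount years ago.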
If the automation process involves "a way to automate the insertion of meta tag based on the actual content of a given page," as Forrester writes, then let's be clear on what's really happening: you're taking what's already on the page, and copying and pasting it into another page element.
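In code, such an automation step is barely more than string handling. A hypothetical sketch (not Forrester's actual process; function name and truncation length are my assumptions):

```python
import re

def auto_meta_description(body_html: str, max_len: int = 155) -> str:
    """Strip markup and reuse the opening of the page body as the
    meta description -- literally copying existing content into
    another page element, which is all this automation amounts to."""
    text = re.sub(r"<[^>]+>", " ", body_html)   # drop HTML tags
    text = " ".join(text.split())               # collapse whitespace
    if len(text) <= max_len:
        return text
    return text[:max_len].rsplit(" ", 1)[0] + "..."

desc = auto_meta_description("<p>Acme sells blue widgets for every occasion.</p>")
```

Nothing new is said about the page; the same words simply appear twice.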
If you do something similar for titles, the logical principle is no different.
Let's be honest. These various page elements and approaches to ranking content were mostly invented for a manual world. Logically speaking, if all you're doing to try to rank better (on a million pages at once) is to replicate some existing words within other elements of the page, you're adding only slight value, and zero additional meaning. It might be a good idea, but it's hardly life-changing for the user.
There is still some minimal value left. Well-labeled pages are easier to find and respond to, in that page titles appear in SERPs and in the browser.
You'll need to automate correctly to put keywords and meaning-related cues in the URL structure of the site, as well... but largely because this seems to matter to search engines.
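At scale, that usually means generating URL "slugs" from page titles. A minimal sketch of the idea (function name and example are my own):

```python
import re

def slugify(title: str) -> str:
    """Turn a page title into a keyword-bearing URL segment:
    lowercase, with runs of non-alphanumerics collapsed to hyphens."""
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

url = "/widgets/" + slugify("Blue Widgets: 2007 Buyer's Guide")
# /widgets/blue-widgets-2007-buyer-s-guide
```

Again, the keywords were already in the title; the URL just repeats them where engines happen to look.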
But if that's all we've got, it's not clear that such pages should be ranking higher than their equals with less zealous automated efforts at keyword densification/replication on any given search query. In the case of scraper sites that are super good at this kind of automation, of course their well-constructed pages should not rank at all.
It's little wonder that based on such characterizations of SEO, many businesses view it as a purely technical function. It is not.
It's certainly a sad thing that a good CMS deployment (for example) can improve your overall level of search referrals, as compared with a bad one. Sad or not, it's a practical reality that companies need to study, at least until search engines get even smarter.
Still, there are plenty of other elements of information architecture that tend to get lost in such discussions. Should we use breadcrumb navigation or not? What's the right number of links in the nav bar to aid navigation? What approach should we take to site search? Should we add interactive capability to the site?
Overdo your efforts to please search engines alone, and you might not allocate the time and budget you need to please users. And happy users are the ones that spread the word so well, giving you the off-page love that is a prerequisite to high reputation and thus standing in the search engines.
Labels: cms, seo, title tags
Andrew's book, Winning Results With Google AdWords, (McGraw-Hill, 2nd ed.), is still helping tens of thousands of advertisers cut through the noise and set a solid course for campaign ROI.
And for a glowing review of the pioneering 1st ed. of the book, check out this review, by none other than Google's Matt Cutts.