Thursday, February 15, 2007
Kevin Newcomb's take on the latest incremental update on ads quality. Like Kevin, I interpret the changes to mean an increase in transparency (obvious, because that's what Google tells us), and a more relaxed minimum bid status for new keywords of the "unknown" type, where Google has little data to go on. Here, they might be less likely to punish you for trends seen among other advertisers trying similar keywords for similar offers, letting you build your own track record, good or bad. (It's not a black-and-white change, more a tweak in emphasis.)
At the same time, they also allude to the new algorithm being tougher on bad ads and nicer to good ones: basically, further refinements based on machine learning and so on. If you're on the receiving end of the additional toughening up, it'll hurt even more. The majority of advertisers, though, will likely find the new regime slightly more liberal.
The increased transparency will lead to more questions. Once I'm absolutely sure which keywords or groups of keywords are low quality, how should I respond? Google explicitly advises that you not raise your bid, but rather optimize your campaign. (So much for the "cash grab" theory.) And they point in particular to the relationships among your keywords, ads, and offer. This is what I was getting at in the last post.
Creating more granular campaigns will potentially boost quality for those who do have relevant content and offers on their sites, but who have been lazy, for whatever reason, in how they build their campaign structure.
What still confuses me is how Google can know what score to apply if you're running a complex test that includes multiple ads and multiple destination URLs, where you're actively trying to find the best places to send users on the site, the best wording to use in ads, etc., for any given keyword. Or does the mere act of doing more systematic testing of this nature give you some brownie points? I think I'll have to ask them about that. By and large, I think the answer is this: the system is designed to be punitive to campaigns with some aspect that falls far outside the normal, relevant, user-friendly range. Most campaigns are going to run unimpeded; in other words, ranking and status are still largely based on the old standbys of historical CTR and your max bid. That's what I call AdWords 2.0. The current iteration, 2.7, is probably not too far from 2.0 for the vast majority of campaigns, keywords, ads, and sites.
Labels: google adwords, quality score