MarketingSherpa has a study showing that newsletter open rates have dropped 10% in the past year. I almost didn't get the message, because I hardly ever read those emails anymore... which sort of explains why my company didn't make it into this year's SEM Buyer's Guide (Paid Search Version), in case you're wondering why Page Zero Media doesn't show up in Sherpa's guide to paid search management firms. I didn't get the emails, and no one bothered to call to remind me to apply for inclusion. I was told I could get into next year's guide. Meanwhile, I still receive bulk emails from their affiliate manager asking me to promote the guide we're not in, so that my readers can buy consulting services from a competitor. Sounds like a great deal, doesn't it?
I'm starting to think that the root of all evil may be impersonal emails when a phone call would be preferable. (Mea culpa on this front, too, no question there!)
One thought: mailers who send seven or eight different newsletters to their customers are a huge part of the problem. With Sherpa, which publishes many newsletters, subscribing to so many is entirely voluntary, but at signup time your eyes can be bigger than your stomach, and as a recipient you end up biting off more than you can chew. One solution: a publisher who recognizes this problem could concentrate on fewer newsletters.
I noticed something similar with MediaPost's stable of at least a half-dozen daily newsletters. Only in that case, by registering once, I got in for all six of these things. Six emails every couple of days from a single content outfit? Not in this day and age! I've unsubscribed from half, and am rarely reading the rest. Stuff like this just makes it harder for everyone else to get their email read.
In any case, the Sherpa study seems to be onto something: it's a case study of a marketer who stopped sending out emails "on schedule" to his customer base, and now only sends them out when he has something to say.
Let's go back to the Godin-coined principles of permission-based messaging: the communications must be anticipated, personal, and relevant. Frankly, even semi-anticipated and relevant will work. Of all the emails I received in the past three days, one permission-based message I actually read was a wine review column by Natalie MacLean (Nat Decants). It's not only semi-anticipated and relevant, it's timely: it tells you when new "Vintages" releases will be available in the government-owned liquor stores (this Saturday, in this case), lists some great deals, and provides tasting notes on releases that probably won't be around long. I actually paid attention to that one. It's also personal in the sense that it's related to a hobby -- it's fun.
In the business world, there are just so many emails that we "should" or "must" (rather than want to) read, we just leave them sitting in the pile.
Most emailers have lost all sense of what permission-based marketing is supposed to do. Excite and delight.
On another note: too many blogs to read? I've got the answer! For the next week at least, posting will be nonexistent as I take time out to focus on long-term strategy and duck-watching.
Thursday, August 18, 2005
What do you do when things are good? Roll out a secondary offering, of course. :)
Wow, GOOG shares are taking a real pounding as the market hears about this impending dilution. The company is now valued at "only" $77.5 billion. ;)
As a multiple of earnings, that's not out of line. As a multiple of revenues, though, this valuation is rich any way you'd care to measure it.
I guess the big question is: by raising so much new cash, is Google signaling an intention to embark on a truly grand-scale initiative in one of the areas they've been studying? Will they make an acquisition that really makes people sit up and take notice? Or is this just prudent hedging, ensuring long-term stability, without actually knowing in advance what situations might crop up that might require it to choose between cash or stock in a given merger or acquisition scenario?
The latter is perfectly plausible, but aren't we overdue for a really big, bold announcement that will shake up the technology world?
Wednesday, August 17, 2005
Harrison Magun takes notice of offline conversions, umlauts, and booth babes, in a light review of SES that seems perfect for a perfect day in August.
As far as Judge Brinkema was and is concerned, using trademarked keywords in an ad-serving system to trigger relevant, non-deceptive ads is perfectly legal. Google won that case. But as MediaPost reports, the Geico spin machine has tried to paint things differently, emphasizing the minor aspect of the case: trademarked terms that actually appear in the ad copy. Google accepts that trademarked terms don't generally belong in competitors' ads, and has systems in place to facilitate review of such violations.
At some point along the line, someone at a conference or luncheon is going to tell you that Geico won that case. It isn't so.
Tuesday, August 16, 2005
That wasn't so bad, was it?
Fear of the unknown now gives way to fear of the... known!
Sampling a few of the inactive keywords in one account: the first one I noticed said "increase quality or bid $0.30 to activate." That wasn't so bad, since I was already bidding $0.25.
The worst one I found, though, gave me a minimum bid of $5.00 to activate, on a keyword I was bidding ten cents on! I guess not! Now that must have been one heckuva bad quality score to merit a minimum bid of $5.00.
Now if only we could figure out exactly what went into determining the quality score.
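Whatever goes into the quality score, the activation rule itself looks simple from the advertiser's side. Here's a minimal sketch in Python: the quality-score formula was never disclosed, so min_bid is treated as just the number Google reports back, and the two cases are the ones from this account.

```python
# Sketch of the new AdWords "inactive keyword" rule as advertisers
# experience it. How Google derives min_bid from the quality score is
# opaque; this only models the threshold check on the advertiser's side.

def keyword_status(max_cpc: float, min_bid: float) -> str:
    """Return the status message an advertiser would see for a keyword."""
    if max_cpc >= min_bid:
        return "active"
    return f"inactive -- increase quality or bid ${min_bid:.2f} to activate"

# The two examples from this account:
print(keyword_status(0.25, 0.30))  # the mild case: a nickel short
print(keyword_status(0.10, 5.00))  # the $5.00 shocker
```

The $5.00 case shows why the opacity stings: the rule is trivial, but the input to it (the minimum bid itself) can swing by a factor of fifty with no visible explanation.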
In any case, the vast majority of keywords in this particular account -- probably around 98% -- are unaffected and have kept their active status.
By giving advertisers more flexibility and choice, Google has ironically created another "learning opportunity" that will be spurned by many advertisers. Because the minimum bid system is new, I predict it's going to generate some yelling and screaming. I'd rather see Google tinkering in an attempt to improve the ad system, and enduring the growing pains, than sticking its head in the sand and sitting on a so-so status quo.
Danny's fed up with the search engine index size wars, and proposes that the biggies duke it out on a more important front: relevancy (or relevance as we like to call it over here).
He proposes that they all agree on some sort of standard test and have an independent institute or consortium run it.
I don't entirely agree on the idea of a common definition of relevance. Search personalization, for example, can take on a variety of forms. In theory, every user would have a personalized set of results sitting in front of them.
Language itself shifts over time. Definitions of what is true often depend on what scientific camp you're in.
But yes, in an enlightened world, scientists do need to accept at least basic overlapping truths.
So it should be possible to start with baby steps. SE's probably won't agree to anything, internally or amongst themselves, that highlights things like SE index spam.
But it would be pretty easy to (a) take a broad basket of keywords, (b) agree on common benchmarks for what counts as a "spammy page" (even a scoring system), and (c) have qualified reviewers assess the top 20 listings, to determine how contaminated the major SE's are with spam across that diverse basket of keywords.
RustyBrick over on Search Engine Watch Forums tried something like this, but IMHO it was too open-ended. I propose a slightly different approach: don't ask raters to determine which engine is most relevant, but rather merely count how many pages in the top 20 on the sample keyword queries exceed a certain "spamminess score." We're talking about scraped pages, redirects, machine-generated gibberish pages... the real nasty stuff, which appears on a great many queries where it shouldn't.

It wouldn't necessarily penalize sites for using spammy techniques, though. If someone's participating in a link farm, or cloaking, or keyword stuffing the title tag, but the page the user sees is relevant to their query and likely to lead to a desired result (gaining real insight from original content, making a purchase from the type of vendor they were probably looking for, joining a forum, etc.), then the page shouldn't be counted as spammy. Actually, SE's have been thinking along those lines, too. How often have you seen someone using outdated optimization techniques like keyword stuffing in titles and tags, and yet the page would have ranked OK anyway, and the SE's do rank it well without penalizing it? SE's rightly look past a lot of the stuff we might consider "spammy," as long as the page is relevant.
This type of exercise wouldn't require us to split hairs in defining relevance. It would give us a base to work from that virtually any sentient, rational being would agree on. If snippets of gibberish content are stolen from multiple sources to create a junk page whose only purpose is to generate a bit of AdSense revenue, that's obvious spam. Users aren't seeking pages of stolen gibberish content... ever. Nor do they want a redirect to a casino site when they type "fantasy football statistics 2004."
In other words... in the parlance of applied social sciences, we need to "operationalize" relevance so coders can actually do their jobs consistently.
Danny, Rusty, if you like this idea, count me into the working group. We could hash out a scoring system on what counts as a "spammy page," choose a broad (but confidential, to avoid gaming by the SE's) keyword basket, and round up coders to assess the major engines. This would give us a real "SE spamminess index" as opposed to a highly subjective "relevancy score." It would get published in the Wall Street Journal before you know it, alongside some of those other famous SEM indexes they've been kicking around lately.
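To make the "operationalizing" concrete, here's a minimal sketch of how such a spamminess index could be computed. The signal names, weights, and threshold are purely illustrative assumptions -- nothing any working group has agreed on -- but they capture the key design choice argued above: the real nasty stuff alone crosses the threshold, while outdated-but-harmless techniques on an otherwise relevant page don't.

```python
# Illustrative "SE spamminess index": raters tag each of the top-20
# results for every keyword in a confidential basket with observed
# signals; we count the pages whose score crosses a spam threshold.
# All signal names, weights, and the threshold are hypothetical.

SPAM_SIGNALS = {
    "scraped_content": 5,   # snippets stolen from other sites
    "gibberish": 5,         # machine-generated nonsense
    "sneaky_redirect": 5,   # e.g. redirect to a casino site
    "keyword_stuffing": 1,  # outdated, but not damning if the page is relevant
    "link_farm": 1,
}
SPAM_THRESHOLD = 5          # only the nasty stuff crosses on its own

def page_score(signals: list[str]) -> int:
    """Sum the rubric weights for the signals a rater observed on a page."""
    return sum(SPAM_SIGNALS.get(s, 0) for s in signals)

def spamminess_index(ratings: dict[str, list[list[str]]]) -> float:
    """Fraction of all rated top-20 results that meet the spam threshold.

    ratings maps each keyword in the basket to a list of per-page
    signal lists (one entry per result rated).
    """
    pages = [page for top20 in ratings.values() for page in top20]
    spammy = sum(1 for page in pages if page_score(page) >= SPAM_THRESHOLD)
    return spammy / len(pages)

ratings = {
    "fantasy football statistics 2004": [
        ["sneaky_redirect"],    # casino redirect: counts as spam
        ["keyword_stuffing"],   # stuffed title but relevant page: doesn't count
        [],                     # clean result
    ],
}
print(f"{spamminess_index(ratings):.0%}")  # -> 33%
```

The point of the weighted rubric is exactly the one made earlier: a link farm or a stuffed title tag alone scores a 1 and stays under the threshold, so only pages that are junk from the user's point of view get counted.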
Miva got off cheap. With a one-time payment of $8 million and an agreement to license certain aspects of Yahoo technology, they've settled their patent dispute with Yahoo over paid search technology.
Miva, in about fourth or fifth place as a provider of keyword-based advertising, still has shakier financials than they would like. But it looks like they've turned the corner, after considerable energy wasted on restructuring and legal wrangling.
Monday, August 15, 2005
Amusing post by Richard Zwicky on the physical representation of the Google Sandbox. (Did it really happen?)
User attention is finite. Online publishers have often yielded to the temptation of trying to grab a bit more than their fair share of it; understandable, given the profit motive. The result, though, can be either a tragedy of the commons situation ("banner blindness") or just a loss of interest in a portal's cluttered offerings (Excite, AltaVista).
Opening up My Yahoo today I was treated to a type of pop-up that did, to its credit, have a "close" box so I could shut it off. However, it was moving slowly across the screen, right to left, making it quite possible to miss the close button and click on the ad.
Analysts are fond of saying that Yahoo understands advertising better than Google, and particularly understands big brand advertisers. Maybe, but they've often been a bit fuzzier when it comes to understanding their real bread and butter, users. Yes, Yahoo has many users, and no, they're not going to abandon the portal anytime soon. But that user confidence can erode quickly once the tipping point is reached; death by a thousand cuts.
Google, also seeking revenue and jealous that others may be grabbing more than their fair share of finite user attention, is now testing three ads at the top of its SERPs, as well as ad units with longer body text. It's quite possible that such tests will eventually show no decline in user satisfaction, at least on the surface. But the process of eroding confidence is subtle, so they ought to think twice about proceeding even on the basis of a positive test.
It pays to err on the side of caution, as Google's short history proves.
Sunday, August 14, 2005
"Who held back the electric car? We did!"
This song is being drowned out by rampant forwarding of emails as motorists facing record oil prices demand higher-efficiency vehicles.
That's right -- demand. That concept that big companies have so often treated with contempt.
This guy spends $3,000 modifying his Prius so it gets 250 mpg instead of 50. And the Toyota spokesperson tries to shrug it off as similar to the most extreme "hotrodders of yesteryear" who were crazy enough to tinker with their cars to produce more horsepower, more bling-bling, you know, that whole thing. She then goes on to say that maybe, in the future, the hotrodders of tomorrow will be trying to get more fuel economy out of their cars. You think? And maybe it'll be half the auto-buying public, and not just a few zealots, who will want to reduce their dependence on costly, polluting gasoline power?
Well, guess what, Toyota. It's the most emailed news story on Yahoo today. People want this. You know how to manufacture it. So do you really think you can sidestep rampant consumer demand by hiding behind misinformation and PR spin? Your premises -- that Toyota can't do it, and that people don't want it -- are demonstrably false. You'd have a better chance of reviving Steve Guttenberg's career than pulling the wool over our eyes on this one.
Related: Tyler Hamilton, a technology reporter for the Toronto Star, has devoted his entire blog to alternative energy.