
Clarifying Research on comScore Blog Study: How to Measure Blog

Posted by: Rick Bruner of ExecutiveSummary.com on 08/08/05

Although I’ve seen several blogs link in the last few hours to comScore’s "Behaviors of the Blogosphere" study that I posted about earlier (though admittedly not the feeding frenzy I’d expected), I’ve also seen a few questions about the methodology. So I thought I’d take a bit of time to address some of those.

A convenient way to do that is for me to answer questions that Darren Barefoot emailed me today. I haven’t asked Darren’s permission to answer these questions in this forum, but I figure as a fellow blogger he’ll be cool with it:

* Are there more details about your methodology? I’m no statistician, but page 3 of your report doesn’t describe how data was gathered from "1.5 million US participants", nor how those people were selected. There’s an asterisk in the first paragraph of page 3 which suggests more details, but I can’t figure out what it’s referencing.

Let me start with the most important thing: in my opinion, the best information market research can give us is an answer to the question "Is it bigger than a breadbox?" This research study satisfactorily answers that question for the blogosphere: Yes.

There is no flawless methodology in market research. It’s an inexact science. Samples get biased, corners are cut, trade-offs are made, yadda-yadda-yadda. It’s always directional, at best. Research wonks like me obsess over the details, and if it’s details you want, it is details you will get. This will be one of my "long posts." It’s late and I’m bored, so I’ll dwell on the details. (Man, rereading it, I went completely OCD on your ass!)

In fact, I’ll begin by sharing a new favorite quote, from the second page of How to Lie With Statistics, a classic work (1954) by Darrell Huff (with wonderful illustrations by Irving Geis):

I have a great subject [statistics] to write upon, but feel keenly my literary incapacity to make it easily intelligible without sacrificing accuracy and thoroughness.
– Sir Francis Galton

You’re right, Darren, it looks like there should be a footnote on that page that’s missing. I’ll call it to comScore’s attention and see if we can get clarification and update the PDF. I’ll also invite them to elaborate in the comments here. And, BTW, they do offer a Methodology page on their site, though as Cameron Marlow complains, it could be more detailed.

I can tell you that comScore’s panel is one of the largest in the world for media research. By comparison, TV viewing habits in America are largely determined by a panel of a few thousand maintained by Nielsen Media Research.

One funny thing to me is that within the bubble I live in (Internet advertising and media research), no one argues much anymore over the methodology of comScore and their chief rival Nielsen//NetRatings, partly because we’ve heard the explanations before, but also because they’re such household names in our sector that we don’t think to worry about it much. All the biggest web sites, online ad agencies and advertisers are quite familiar with comScore and their numbers. But apparently in the blogosphere they’re not so familiar.

How the panel members were selected… I’d have to defer to comScore for a thorough explanation there, but I’m sure there was an element of "self-selection" along the lines of recruiting panelists through banner ads and other "customer acquisition" tactics. So one potential bias could be that they get "joiners" in their panel. They also recruited some people with free utilities, such as a virus detector. Everyone gets a clear explanation, though, that their online surfing will be monitored for aggregate research purposes, which they have to opt into.

But they address the bias in various ways. First and foremost, their panel is really, really huge by conventional research standards. Most opinion polls whose results you read in the newspaper or elsewhere are based on samples of 1,000 (or fewer) respondents on the low end to 20,000 on the high end. comScore’s 1.5 million research subjects simply shatters most research constructs.

Cameron rashly writes, "Given that they do not justify their sample, nor provide margins of error, the initial sampling frame should be considered bunk." He couldn’t be more wrong. I was the ultimate project manager for this research. Two years ago, I made the well-considered decision to steer this research in comScore’s direction precisely because I believe they have the mother of all research panels. Theirs is really the only one I would trust to project reliably to audiences as small as blog readers.

To the extent all that wasn’t made clearer in the methodology section, that is partly comScore’s modesty and partly time constraints in getting this out the door.

You can make statistically sound projections based on relatively small subsets of a population. But with a panel this ginormous, projections are quite sound. So that’s one thing that corrects for sample bias: humongous sample size. The Advertising Research Foundation gave comScore the seal of approval based on that alone.

Also, they weight results from the panel against a regular (quarterly? semi-annual?) random-digit-dial (RDD) phone survey. I don’t know the size of that sample, but it’s sufficiently big to be statistically reliable, and RDD is typically regarded as one of the best random sampling methodologies for populations, because virtually everyone (in the U.S., anyway) has a phone and numbers are generated randomly, which reaches "unlisted" households (curiously, though, it doesn’t reach cell phones, so it does tend to under-sample Gen Y).
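For the statistically curious, here is a minimal sketch of how weighting a panel against a reference survey works in principle: compute a weight for each demographic group as the reference share divided by the panel share, then count each panelist with their group’s weight when projecting. The groups and numbers below are entirely invented for illustration; they are not comScore’s actual categories, figures or method.

```python
# Minimal sketch of post-stratification weighting against a reference (RDD) survey.
# All categories and numbers are hypothetical, purely for illustration.

panel_share = {"18-34": 0.45, "35-54": 0.40, "55+": 0.15}  # share of panelists per group
rdd_share   = {"18-34": 0.35, "35-54": 0.40, "55+": 0.25}  # share per group in RDD survey

# Weight for each group = target (RDD) share / observed (panel) share.
weights = {group: rdd_share[group] / panel_share[group] for group in panel_share}

# A panelist's behavior then counts with their group's weight when projecting,
# e.g. estimating how many visitors a site drew.
panel_visitors = {"18-34": 90_000, "35-54": 70_000, "55+": 20_000}
weighted_total = sum(panel_visitors[g] * weights[g] for g in panel_visitors)

print(weights)
print(f"Weighted visitor count: {weighted_total:,.0f}")
```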

(See, this stuff gets really geeky. But you asked.)

Your question also asked how the data were gathered. ("Data" is the plural of "datum"; use the plural verb form, people!) Again, comScore can correct me, but they use some combination of a "proxy network" (a farm of servers set up to cache all the web content panelists surf) and/or software on panelists’ machines. They have some mechanism, in any event, for seeing everywhere panelists go and everything they do (including purchases, SKUs, money spent, etc.). Then they suck all that data up into the mothership, a multi-terabyte (I imagine) datamart thing. Results are recent and highly detailed.
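To make that aggregation step a bit more concrete, here is a toy sketch of what rolling panelist clickstream records up into per-site metrics implies: counting distinct panelists (uniques) and distinct sessions (visits) per site. The record format and values are invented for illustration; this is not comScore’s actual pipeline.

```python
# Toy aggregation of panelist clickstream records into per-site uniques and visits.
# Records and field layout are invented for illustration only.
from collections import defaultdict

clickstream = [
    # (panelist_id, site, session_id)
    (1, "boingboing.net", "a"),
    (1, "boingboing.net", "b"),
    (2, "boingboing.net", "c"),
    (2, "slashdot.org",   "d"),
    (3, "slashdot.org",   "e"),
]

unique_panelists = defaultdict(set)  # site -> set of panelist ids
sessions = defaultdict(set)          # site -> set of (panelist, session) pairs

for panelist, site, session in clickstream:
    unique_panelists[site].add(panelist)
    sessions[site].add((panelist, session))

for site in sorted(unique_panelists):
    print(site, "uniques:", len(unique_panelists[site]), "visits:", len(sessions[site]))
```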

* Why is there no discussion of margin of error?

Uh…an oversight, I guess. The whole reason for going with comScore is that their accuracy, based on sample size, is superior to anything else in the industry. With 1.5 million panelists’ behavioral data, they can project with extreme accuracy on thousands of sites. Margin of error, within a certain "confidence level," is a measure of reliability in terms of variance, were the same survey to be administered numerous times. So, for example, a sample size of 2,000 respondents, more or less randomly selected, will represent a given population, say 290 million U.S. residents, within a "margin of error" of 2.19%, meaning that if 20% of survey respondents said "I like gum," the true figure could be more like 18-22% in 95 out of 100 times the survey was conducted (i.e., a 95% "confidence level").

So, to have a panel of comScore’s size (1.5 million) represent a U.S. online population of 204 million, at a confidence level of 95%, your margin of error would be 0.008% (meaning "dead on"), according to this margin of error calculator. [comScore folks or anyone else out there, please correct me if I’m misrepresenting or mistaken in anything here. I’m not an actual statistician, I just play one on the Interweb.]
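For anyone who wants to check the arithmetic themselves, the standard simple-random-sample formula at a 95% confidence level is 1.96 times sqrt(p(1-p)/n), with p = 0.5 as the worst case; that reproduces the roughly 2.19% figure for a 2,000-person sample quoted above. (The exact decimal you get for 1.5 million depends on the calculator’s assumptions, but however you slice it the margin is vanishingly small.) A quick sketch:

```python
# Standard margin-of-error formula for a simple random sample at 95% confidence:
#   MoE = z * sqrt(p * (1 - p) / n), with z = 1.96 and the worst case p = 0.5.
# (Ignores finite-population correction, which barely matters at these sizes.)
import math

def margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

print(f"n = 2,000:     {margin_of_error(2_000):.2%}")      # ~2.19%, as cited above
print(f"n = 20,000:    {margin_of_error(20_000):.2%}")     # ~0.69%
print(f"n = 1,500,000: {margin_of_error(1_500_000):.3%}")  # vanishingly small
```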

* The first graph on page 6 discusses unique visitors to particular domains. These don’t jibe with the sites’ own reports. For example, Boing Boing claims 4.6 million unique visitors (http://www.boingboing.net/stats/) in Q1 of 2005. Yet, the comScore study only reports 849,000. The same goes for Slashdot, which reportedly sees 300,000 – 500,000 visitors on a daily basis. Surely in three months they receive far more than 911,000 unique ones? Which numbers do you claim to be more accurate–comScore’s or the sites’ own?

Assumption 1: I don’t see where you get the 4.6 million unique visitors figure for BoingBoing. When I look at one of the first sections of that page you link to, I see a monthly range of 1.5 to 1.8 million "unique visitors" (UV). So, in the months of our examination, Q1 2005, BoingBoing’s monthly UV stats range from 1.45 to 1.66 million. So let’s assume that for the three months in question you’re talking about an unduplicated audience of 2-3 million, by their own site stats.

Factor 1: How does BoingBoing’s stat package collect uniques? How does it work at all? I can’t be bothered to find out, as stat packages vary (widely) in methodology and accuracy, but one key question is whether they count "unique visitors" by IP address, by cookies or by some other means. Probably IP addresses, which is the most common method. At least this package distinguishes "visits" from "visitors," as many don’t; bloggers often get confused into thinking "visits" (surfing sessions) are the same as "visitors" (unique people), when one visitor can make multiple visits during a month.

In any event, if it is using IP addresses to distinguish uniques, as I bet it is, those can be highly variable. Many ISPs assign IP addresses randomly every time a user logs on, so if you are on dial-up or you shut your computer off during the month, you might show up to BoingBoing as several different IP addresses across your repeated visits. Not to mention the same person surfing from work and home being counted twice. So the likely result is an overcount due to IP-address counting.
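Here is a contrived little sketch of that effect: one person makes five visits in a month, but the ISP hands out a fresh address on several of those days (and two visits come from work), so an IP-based count sees four "unique visitors" where a persistent identifier sees one. The addresses are made-up examples.

```python
# Contrived example: one person, five visits, but counted by IP address they
# look like four different "unique visitors." A persistent identifier (a cookie,
# or a panel's tracking software) counts them once. Addresses are made up.
visits = [
    # (persistent_id, ip_address)
    ("alice", "66.108.1.10"),   # home, day 1
    ("alice", "66.108.7.22"),   # home, day 5 (new dynamic IP)
    ("alice", "192.0.2.55"),    # from work
    ("alice", "66.108.3.90"),   # home, day 20 (new dynamic IP again)
    ("alice", "192.0.2.55"),    # from work again
]

uniques_by_ip = len({ip for _, ip in visits})
uniques_by_id = len({person for person, _ in visits})

print("Uniques counted by IP address:   ", uniques_by_ip)  # 4
print("Uniques counted by persistent ID:", uniques_by_id)  # 1
```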

comScore doesn’t have this problem when it comes to unique identities, because it knows (at least down to the household level) that people are unique visitors, thanks to its persistent software relationship with the computer.

Factor 2: International traffic. comScore’s panel used for this study comprises only U.S. residents. For advertising purposes, that’s what most advertisers care about. Also, because of its very construct, it would be nearly impossible to get 100% international panel coverage (e.g., Iraq, Nigeria, Belize, etc.).

So their numbers exclude traffic from international visitors. (The Methodology section of the report says the sample is U.S. only, but it doesn’t dwell on the point.) Many U.S. sites may get between 10% and 50% of their traffic from international visitors. That may also explain a lot of the variance.

There is more I could say here, but I think that’s sufficient, as those are probably the main factors behind the differences. That, and simply that log-file analysis systems can also be quite flaky. I once had a client, back when I was freelancing, who had two stat-tracking packages installed on her site, and there was a 10x difference between them: one said something like 10,000 visitors a month, and the other said 100,000. Go figure.

* The definition of ‘unique visitor’ in the study reads "The number of individual people visiting a site in a given time period." Meanwhile, the text addressing the most popular blogs says "Examples include DrudgeReport, which drew 2.3 million visitors who visited an average of 19.5 times, and Fark, which drew 1.1 million users an average of 9.0 times in Q1 2005."

What’s the ‘given time period’? Clearly you don’t mean a unique visitor in Q1, 2005, because you discuss each visitor coming to a site x times.

Yes, we do mean that in the first three months of 2005, DrudgeReport drew 2.3 million unique U.S. visitors who visited an average of 19.5 times (a total of 44.3 million visits during that period). That means its audience is both large and hugely loyal. Fark had 1.1 million visitors who visited 10.1 million times (an average of 9.0) in the first quarter.
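(Back-of-envelope, for anyone who wants to see how those figures hang together: total visits is just unique visitors times average visit frequency. The published averages are rounded, so the products only roughly match the reported totals.)

```python
# Rough sanity check: total visits ~= unique visitors x average visit frequency.
# The published averages are rounded, so these only approximate the reported totals.
drudge_visits = 2_300_000 * 19.5   # ~44.9 million (report: 44.3 million total visits)
fark_visits   = 1_100_000 * 9.0    # ~9.9 million  (report: 10.1 million total visits)

print(f"DrudgeReport: ~{drudge_visits / 1e6:.1f} million visits")
print(f"Fark:         ~{fark_visits / 1e6:.1f} million visits")
```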

Beyond that, Blogdex’s Cameron Marlow, a would-be friend of mine and a Ph.D. student at MIT, raises quite a fuss about the study’s methodology over at his blog Overstated (that’s an understatement), where, I have to be honest, he gets it pretty much entirely wrong. Most of his concerns should be answered by this post; others I’ve argued in his comments field.

13 comments for Clarifying Research on comScore Blog Study: How to Measure Blog

  1. Thanks for that thorough reply. Feel free to quote my email. I should just clarify that I might sound argumentative, when really I was just confused by a few points.

    A couple of follow-up questions, which weren’t answered by the comScore methodology page (which I found after emailing you, and should be linked to from the report) or this post:

    1) Do you think the method of data capture seriously influences the most-popular blog numbers?

    For example, I was somewhat surprised by Slashdot’s ranking. However, how many Slashdot readers are going to accept comScore’s data capture activities (a proxy network or software on panelists’ machines)? Surely, we’re Heisenberging those results, aren’t we?

    2) The comScore methodology page says “at the heart of the comScore Global Network is a sample of consumers enlisted via Random Digit Dial (RDD) recruitment”. This suggests to me that a significant portion of comScore’s panelists were obtained via the phone. Is this correct?

    If this is the case, what sort of acceptance rate did comScore experience during the calling campaign? I ask because, for polls, it now typically takes 5 calls to get one successful respondent. I’d imagine the acceptance rate for this sort of offering would be worse, and would probably bias the panelist data set against early adopters. What do you think?

    Comment by Darren — August 8, 2005 @ 8:14 pm


  2. What I think is you’re really splitting hairs. So perhaps the margin of error isn’t 0.008%, maybe it’s 0.009% or 0.01% or even 0.05%. Whatever. Point is the enormous panel size represents a big correction for most bias in sampling.

    On the details of their RDD, which I say is one of the best methodologies in an imperfect science, comScore can answer for themselves.

    Comment by Rick Bruner — August 8, 2005 @ 8:26 pm


  3. Hey Rick, I’m familiar with this methodology since I worked at Jupiter when they merged with Media Metrix. I know all the problems with projecting audience and demos with small samples. (Some of the custom reports they ran were downright embarrassing to show to a client.) Now you say the ginormous sample Comscore plays around with more than makes up for any sampling biases and can provide accurate data for the smallest of sites, even blogs. Fine. I’ll buy that, but first please explain, for example, Gawker.com’s ranking over Engadget. How can the data be THAT wrong if the huge sample is supposed to be so reliable? I do like the report generally, as it should convince the dumb ad buyers to pony up more money because blogs have desirable demos, etc. It’s just that these specific blog breakouts look like the same junk Media Metrix used to peddle for small categories and sites.

    Comment by krucoff — August 9, 2005 @ 2:27 pm


  4. On what basis do we know it’s wrong?

    Comment by Rick Bruner — August 9, 2005 @ 2:46 pm


  5. Well, for starters, you could ask Nick or Lock. I’m sure they’d give you an honest answer.

    Comment by krucoff — August 9, 2005 @ 2:49 pm


  6. Nick said on his own blog he believes Engadget is bigger than Gizmodo. But on the other hand, what is the basis of even those comparisons? Nick’s SiteMeter stats versus Jason’s homegrown log package? As a former Jupiter analyst, you don’t see the inherent flaws of comparing two unaffiliated stats packages, much less giving anywhere near the same credence to SiteMeter and Alexa as comScore?

    This is the first-ever study that uses one standard platform as a basis of comparison for blog traffic. Bloggers have never experienced that before, so no wonder they’re rattled. comScore assures me that the blogs in the top 25 rankings have sufficient sample sizes from their panelists to make those projections bullet-proof. (This wasn’t the Media Metrix panel of old with 10,000 panel members. This was based on the 1.5-million-person panel, which we deliberately chose because of its robust size.)

    Gawker versus Engadget? I could easily attribute it to just the international skew: maybe a substantial part of Engadget’s traffic is international, which wouldn’t show up in comScore’s panel, whereas Gawker’s audience may be much more U.S.-centric. That’s just speculation. Or maybe it’s just that Gawker’s audience is bigger. On what basis do we know otherwise?

    Comment by Rick Bruner — August 9, 2005 @ 4:12 pm


  7. You’re the research director at Doubleclick and you honestly believe Gawker gets more traffic than Engadget? That’s crazy. But whatever, let’s forget that one for now. The Comscore report also says Gawker got more traffic than Fleshbot and Gizmodo in 1Q05. Look at Gawker Media’s numbers from the SAME crappy stats package on those sites. As long as we’re comparing rotten apples to rotten apples, Gawker does NOT beat Fleshbot or Gizmodo.

    So international traffic accounts for all this skew? That’s some serious skewing that’s causing some major Calacanis stewing.

    Comment by krucoff — August 9, 2005 @ 5:13 pm


  8. U.S. visitors, not total visitors. And SiteMeter does not measure visitors, it measures visits (sessions). And Engadget, as far as I can tell, doesn’t publish its site logs. And Fleshbot and Gawker are within spitting distance of each other for average daily traffic on SiteMeter, today in the middle of Q3, whereas the report is Q1 data.

    So, it’s really hard to compare any of these sets of numbers unless we keep in mind what they each represent. Arguing over panel numbers versus server numbers is the oldest flamewar in online media.

    So I’m going to retire this argument, save to say that I would opt for comScore’s methodological rigor for accurate media research over SiteMeter’s or Alexa’s. Yes, I would.

    Comment by Rick Bruner — August 9, 2005 @ 5:29 pm


  9. No sweat, please retire this one, as I wouldn’t want you to waste your time on me; I really don’t have anything vested in any of this except genuine intellectual curiosity.

    But, I *was* talking about SiteMeter’s visits measurement among the Gawker blogs, since the Comscore report has a breakdown of visits that skews Gawker.com’s numbers even higher. That chart makes even less sense than the UV one. (And yes, while panel vs. server is the oldest research vs. publisher hoo-ha, there ain’t no one that has a real read on the elusive UV number.)

    I do agree this methodology is much superior to SiteMeter’s or Alexa’s when trying to make broad comparisons, but I’m still not convinced Comscore can accurately rank these blogs individually by calculating actual numbers for visitors or visits. Perhaps you guys should have included more publishers in the development of this little project? The results prove that much.

    Comment by krucoff — August 9, 2005 @ 6:27 pm


  10. About that comScore report…

    Jason Calacanis, of Weblogs, Inc. (Engadget, Autoblog, and more) raised concerns with the comScore report released yesterday and in a…

    Trackback by paradox1x — August 10, 2005 @ 3:53 am


  11. How Many People Read Blogs? Who Knows!

    How big is the audience who read blogs? Nobody can agree. But we now know it’s bigger than a breadbox. Comscore’s study of bloggers sparked controversy and angst this week, and hostilities flew between two of the best-known blog networks. Comscore’s st…

    Trackback by B.L. Ochman's weblog - Internet strategy, marketing, public relations, politics with news and commentary — August 12, 2005 @ 11:26 am


  12. Behaviors of the Blogosphere II

    Rick Bruner offers some explanation of the methodology behind comScore’s controversial blog advertising survey.

    Clarifying Research on comScore Blog Study: How to Measure Blog

    Trackback by Outside The Beltway — August 13, 2005 @ 5:28 am


  13. How Many People Read Blogs? Who Knows!

    How big is the audience who read blogs? Nobody can agree. But we now know it’s bigger than a breadbox. Comscore’s study of bloggers sparked controversy and angst this week, and hostilities flew between two of the best-known blog networks. Comscore’s st…

    Trackback by B.L. Ochman's weblog - Internet strategy, marketing, public relations, politics with news and commentary — August 14, 2005 @ 10:58 am

