By now, most of you would know that NDTV has filed a lawsuit (194 pages, no less) against the various companies directly or indirectly responsible for television ratings measurement in India. Reading 194 pages is not a piece of cake, but for anyone interested in the issue, it’s a fascinating and highly recommended read. If you are pressed for time or already know the background, consider reading only points 78-239 for an understanding of the core issue.
The lawsuit, a result of several emails and meetings between the two sides over the last eight years, hinges on two issues:
1. NDTV claims that the sample size (peoplemeter audience base) on which data is being reported is too small, leading to statistically unreliable results, as well as unexplained data variations.
2. NDTV also claims to have evidence that certain “consultants” take bribes to “fix” meters for channels, amounting to manipulation of ratings. They also note that points 1 and 2 are related: a bigger sample size would reduce the weightage of individual peoplemeter homes, making the task of “fixing” that much more difficult.
I’m not privy to any information on point 2 beyond what you can read in the lawsuit, and hence will not comment on it. However, as a media researcher and a keen follower of television audience measurement over the years, I certainly have a view on point 1.
Why are ratings even important? Because they form the basis of television buying. They are the core constituent of the buying currency called CPRP (Cost Per Rating Point). As a simplistic example, if one program is watched by 1% of the audience while another is watched by 2%, the second program can command twice the price for an ad spot vis-a-vis the first, all other things being equal.
Ratings data, based on about 8,000 peoplemeters across India, is projected onto the larger universe, covering most urban Indian towns with a population of 1 lakh or more. Many have questioned how 8,000 meters can decide what India watches, and hence where the advertising buck should be put.
Anyone with exposure to statistics will tell you that 8,000 is a very big sample size. If the population behaves as it is expected to, even a sample of 30+ is conventionally treated as large enough to draw statistically robust conclusions at about a 95% confidence level. So, I’ve never been a big fan of the “too few peoplemeters” argument. After all, the industries (both broadcasting and advertising) have to be willing to pay for more meters if they want them.
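To see why 8,000 looks comfortable at the headline level, here is the textbook margin-of-error formula for a proportion, applied to a broad measure. The numbers are mine, purely illustrative, and assume simple random sampling (panel designs are more complex in practice):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of a 95% confidence interval for a proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A broad measure (say, 50% of homes watching any TV) on the full panel:
moe = margin_of_error(0.5, 8000)
print(f"50.0% +/- {moe:.2%}")   # roughly +/- 1.1 percentage points
```

At the full panel size, even the worst-case proportion (50%) is pinned down to about a percentage point either way, which is why the raw “only 8,000 meters” complaint never impressed me much.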
However, the 8,000 number does not tell the whole story. The ratings software allows subscribers to look at data cuts at the segment level. For example, you can analyze ratings among 15-24-year-old males belonging to socio-economic class A in UP towns with a population of 10 lakh or more. The moment you apply such filters, the sample size will often be down to two digits.
The problem only starts there. The people in this two-digit sample are not watching TV all the time. They work, they go out, they have power cuts. Typically, even in peak prime time, only about 50% of the panel may be tuned in to their TVs. In effect, the sample on which data is actually being collected at that moment is only half the nominal sample. For niche channels, like Star World, NDTV, Discovery and AXN, it gets cut even further. After a while, you have a situation where just one additional viewer watching your channel can push your rating in that market up by 15-20%, or vice versa.
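The one-extra-viewer swing is simple arithmetic. A toy sketch, where the segment size and viewer counts are hypothetical but of the order discussed above:

```python
def rating(viewers, sample_homes):
    """Rating = share of the segment sample tuned to the channel."""
    return viewers / sample_homes

segment_sample = 60                  # homes left after demographic filters
before = rating(6, segment_sample)   # 6 homes watching a niche channel
after = rating(7, segment_sample)    # one more home tunes in

jump = (after - before) / before
print(f"{before:.1%} -> {after:.1%}  ({jump:.0%} relative jump)")
```

One household flipping the channel moves the reported rating from 10.0% to 11.7%, a 17% relative jump, squarely in the 15-20% band mentioned above. Buying decisions then ride on that household.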
In simple language, error margins escalate rapidly when you move to finer data cuts, or to niche consumption. The companies running the ratings system are private bodies, with no legal obligation to disclose their statistical error margins; the RTI Act doesn’t apply to them. But the reality is that data with up to 30% error is being used to take advertising decisions on a routine basis.
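The same textbook formula from before shows how the relative error balloons as the rating gets smaller and the cut gets finer. Again, the specific ratings and sample sizes here are my own illustrative assumptions:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of a 95% confidence interval for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# A niche channel's 2% rating on the full 8,000-home panel:
rel_full = margin_of_error(0.02, 8000) / 0.02      # ~15% relative error
# A 10% rating inside a 400-home segment cut:
rel_cut = margin_of_error(0.10, 400) / 0.10        # ~29% relative error

print(f"full panel, 2% rating:  +/- {rel_full:.0%} of the rating")
print(f"400-home cut, 10% rating: +/- {rel_cut:.0%} of the rating")
```

So even before any segment filters, a niche rating carries a double-digit relative error, and a modest cut takes it to roughly the 30% figure mentioned above.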
Every Wednesday, when ratings are released, I know of broadcasters who obsess over a program’s rating dropping from 2.1 to 1.9, or celebrate channel GRPs rising from 180 to 192. Try telling them that this drop or rise has no statistical significance. You must be joking!
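That “no statistical significance” claim can be checked with a standard two-proportion z-test. Treating the 2.1 and 1.9 ratings as 2.1% and 1.9% of the panel, and generously assuming the full 8,000 homes count for both weeks:

```python
import math

def two_prop_z(p1, p2, n1, n2):
    """z-statistic for the difference between two sample proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

z = two_prop_z(0.021, 0.019, 8000, 8000)
verdict = "significant" if abs(z) > 1.96 else "not significant"
print(f"z = {z:.2f} -> {verdict} at the 95% level")
```

The z-statistic comes out around 0.9, well short of the 1.96 cutoff: even on the full panel, a 2.1-to-1.9 move is indistinguishable from noise. On a filtered segment sample, it isn’t even close.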
Which is where the crux of the NDTV argument lies. Advertisers, channel heads and content developers are not researchers and statisticians. So the onus of reporting “accurate” data, with acceptable error margins, lies with the research company, not its clients.
Ideally, only data with a 5% or lower error margin should be reported. That would mean a broadcaster may not be able to analyze its performance in a specific segment in a particular market, because the sample size would be small and the error margin would comfortably cross 5%. It would also mean that advertisers could not take buying decisions at the level of fine segments, especially for niche channels.
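A 5% cap on relative error is a demanding bar. Inverting the same margin-of-error formula gives the panel size needed to hit it, and for small ratings the number is startling (the 2% rating is, again, my illustrative assumption):

```python
import math

def homes_needed(p, rel_error, z=1.96):
    """Smallest sample size for which the 95% CI half-width
    is at most rel_error * p, for a proportion p."""
    return math.ceil((z / rel_error) ** 2 * (1 - p) / p)

# Homes needed to report a 2% rating with at most 5% relative error:
print(homes_needed(0.02, 0.05))
```

For a 2% niche-channel rating, the formula demands a panel of over 75,000 homes, nearly ten times the current 8,000, before any segment filters are applied. That is the scale of the gap between what is reported and what a 5% standard would require.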
From what comes across in the lawsuit, it seems the ratings company may have felt that withholding fine data cuts and reporting only “accurate” data (defined as less than 5% error) would hamper its business, as broadcasters and advertisers would want to pay less for “less data”. That’s where I think there has been an error in judgment. If only good-quality data were reported, broadcasters and advertisers would feel the need to know more than just that. Which would mean more peoplemeters, to reduce error margins for fine data cuts. Which would mean broadcasters and advertisers paying more to get the ratings they need. If they chose to, great. If they didn’t, great too, because then they could live with less, but accurate, data.
This is not a one-sided debate and I’m not here to take sides. Data quality issues definitely exist, and all the stakeholders – the ratings company, the broadcasters and the advertisers – have lived with the imperfection, despite several debates and discussions on moving to better measurement systems.
And now, NDTV has set the cat amongst the pigeons. Much thanks to them for that. One could almost have predicted that if any broadcaster were to do this, it would be the company run by the statistics-savvy Dr. Prannoy Roy. Now the onus is on the other broadcasters and advertisers to unite and step in for a clean-up operation, where the outcome is based first on the quality of data, and then on its volume.
It’s easier said than done, but the time could have never been more right!
PS: I don’t think the government is a stakeholder in this. If they step in, it’s only going to create more mess. Let the private entities work this out in their best interest.