"Merely hiking the number of peoplemeters will not give 'accurate' data"
What is required is an agreement & transparency around the lowest TVR that we would want to measure, and our tolerance of relative error, say SMG India's Mallikarjun Das CR and Aarti Bharadwaj
The study of television viewership behaviour has been a topic of hot debate in recent times. One of the focal points of this debate is the number of peoplemeters required to measure viewership patterns for urban India accurately. Many numbers are being quoted, ranging from 20,000 to 40,000. While there is no clear agreement on the ideal number of peoplemeters required, the prevailing opinion seems to be that the current setup of approximately 10,000 meters is insufficient. One of the reasons often quoted for this is that 10,000 is a very small number compared to the total urban population of 330 million that we are trying to study. ‘How can a country as populous as India be represented by only 10,000 peoplemeters? No wonder the data is not stable’ is the most common rant in the media and marketing community at large.
However, from a statistical point of view, this critique is incomplete and baseless. It also holds a grave danger for the future. Merely commissioning an increase in the number of meters does not guarantee an automatic improvement in ‘accuracy’ and ‘stability’. The terms accuracy and stability themselves deserve a deep-dive, which we shall do later. These terms have been bandied about loosely and have clouded the community’s understanding of the actual technical issues of measurement.
Going by sampling theory, the main factors that decide the size of the sample are:
• How prevalent is the event we are studying? In other words, the incidence, and
• The level of accuracy we want in the study
In 99.9 per cent of cases, the size of the population that a sample represents has no role to play in determining the sample size. This is crucial, and the argument about the teeming millions of India being represented by a few thousand peoplemeters is wrong. (As a nitpick, when one is dealing with finite populations, there is a correction factor involved, but this adjustment would play a minimal role in our case and have little material impact.)
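For readers who want to see this in numbers, here is a rough sketch in Python. It uses the standard sample-size formula for a proportion and then applies the finite population correction mentioned above; the 1.0 TVR and 10 per cent relative error inputs are illustrative assumptions, and simple random sampling is assumed (real peoplemeter panels use stratified designs).

```python
def sample_size(p, rel_err, z=1.96):
    """Sample size needed to estimate a proportion p with a given relative
    error at ~95% confidence, assuming simple random sampling (a sketch,
    not a description of the actual panel design)."""
    return z**2 * (1 - p) / (rel_err**2 * p)

def with_fpc(n, N):
    """Apply the finite population correction for a population of size N."""
    return n / (1 + (n - 1) / N)

n = sample_size(p=0.01, rel_err=0.10)   # a 1.0 TVR at 10% relative error
n_fpc = with_fpc(n, N=330_000_000)      # urban India's population
print(round(n), round(n_fpc))           # the two differ by only a handful
```

Against a population of 330 million, the correction shaves only a few respondents off a sample of tens of thousands, which is why the size of the population is essentially irrelevant.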
In the context of measurement of television viewing behaviour, the “event” whose incidence we are interested in measuring is the likelihood that the audience is watching a given programme. The television viewership ratings derived as a result of the study are a time-weighted proportion of the audience watching the programme.
To design a sample to measure TVRs accurately, we would need to decide two crucial, interlinked things. These are:
1. What is the minimum TVR that the study would be able to measure accurately? For instance, would we measure 0.1 TVRs in each market accurately, or keep the base minimum cut-off at 0.5 TVR?
2. What is the level of relative error we will tolerate in the TVRs? To understand relative error, let us consider the example of a programme having 1.5 TVR with 10 per cent relative error at 95 per cent statistical confidence. This means that we are 95 per cent certain that the true TVR delivered by that programme lies between 1.35 and 1.65. The acceptable level of relative error varies across fields. It would be as low as 1 per cent in medical research on drug effectiveness. On the other hand, 30 per cent relative error would be acceptable in policy research. While there is no golden rule for this, most large syndicated studies in the field of marketing research do not accept a relative error of more than 30-35 per cent. What is acceptable is a judgement call.
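The arithmetic behind these two definitions can be sketched as follows. The snippet turns the 1.5 TVR example into its confidence interval, then shows how the relative error delivered by one panel varies with the TVR being measured; the 10,000-home panel size is illustrative, and simple random sampling is assumed throughout, which ignores real panel design effects.

```python
import math

def relative_error(p, n, z=1.96):
    """Relative error (margin of error divided by the estimate) for a
    proportion p measured on a simple random sample of n homes, at ~95%
    confidence."""
    return z * math.sqrt((1 - p) / (p * n))

# A 1.5 TVR with 10% relative error: the true TVR lies in this interval
# with 95% confidence.
tvr, rel = 1.5, 0.10
print(f"{tvr * (1 - rel):.2f} to {tvr * (1 + rel):.2f}")  # 1.35 to 1.65

# The same panel yields very different relative errors for large and
# small TVRs (10,000 homes is an assumed, illustrative panel size):
for t in (5.0, 1.5, 0.1):
    print(t, f"{100 * relative_error(t / 100, 10_000):.0f}%")
```

Note how the relative error balloons as the TVR shrinks, even though the panel stays the same size.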
Putting these two issues together, the obvious truth is that for a given level of relative error, the sample size needed to measure a 5 TVR phenomenon will be substantially smaller than that needed for a 0.1 TVR one. To give the reader an analogy, suppose one wants to measure the incidence of viral fever and sarcoidosis in Mumbai with a relative error of 10 per cent. The sample needed to measure viral fever at this level of accuracy would be substantially smaller – since the phenomenon is commonly prevalent, one would not need a very large sample to accurately measure it. Sarcoidosis, on the other hand, is a rare disease. To measure it at a 10 per cent relative error, the sample could be many times that needed for viral fever.
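The analogy above can be put in numbers with the same sample-size formula. The prevalences below are illustrative assumptions, not epidemiological data, and simple random sampling is assumed.

```python
import math

def sample_size(p, rel_err, z=1.96):
    """Sample needed to estimate prevalence p with a given relative error
    at ~95% confidence (simple random sampling assumed)."""
    return math.ceil(z**2 * (1 - p) / (rel_err**2 * p))

# Illustrative prevalences, not real epidemiological figures:
common = sample_size(p=0.10, rel_err=0.10)   # a common illness, ~10% prevalence
rare = sample_size(p=0.0001, rel_err=0.10)   # a rare disease, ~0.01% prevalence
print(common, rare)  # the rare event needs a sample over 1,000 times larger
```

A few thousand respondents suffice for the common illness; the rare one demands a sample in the millions at the same relative error.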
This is crucial to understand. To come back to TV measurement, doubling, tripling or quadrupling sample sizes does not solve the problem. What is required is an agreement and transparency around the lowest TVR that we would want to measure, and our tolerance of relative error. In many target audience combinations and markets, a 0.1 TVR programme could have relative errors in excess of 100 per cent. Doubling sample sizes might, say, bring down relative error from 150 per cent to 120 per cent, but does that mean a better future? Can one still use the metric to take decisions, or is it better to toss a coin?
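The reason doubling helps so little is that relative error shrinks only with the square root of the sample size. The sketch below makes this concrete for a 0.1 TVR programme; the panel sizes are illustrative (a target-audience cut in a single market can be far smaller than the national panel), and simple random sampling is again assumed.

```python
import math

def relative_error(p, n, z=1.96):
    """Relative error of a proportion estimate at ~95% confidence,
    assuming simple random sampling."""
    return z * math.sqrt((1 - p) / (p * n))

# For a 0.1 TVR programme (p = 0.001), watch how slowly doubling helps.
for n in (1_000, 2_000, 4_000):
    print(n, f"{100 * relative_error(0.001, n):.0f}%")
# Each doubling only shrinks the relative error by a factor of sqrt(2),
# so even quadrupling the sample leaves the error near 100 per cent.
```

To halve the relative error one must quadruple the sample; to cut it to a tenth, multiply the sample by a hundred.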
The above questions would need to be answered not at an All India level, but at a media market level. Essentially, we would need a minimum cut-off TVR for HSM, TN, AP, etc. We would also need to define the maximum acceptable relative error in each market to arrive at the sample for each market. Finally, we need to decide how finely we want to cut the target audience.
At this point, it is also important to clarify that relative error is not due to flaws in the system conducting the measurement. Every measurement inherently has an error built into it. In the words of MIT Professor Walter Lewin, “Any measurement that you make without the knowledge of its uncertainty is completely meaningless”.
So what does this hold for the future of TV audience measurement? We should remove the chimera from our heads that a mere increase in the number of peoplemeters will lead us to that promised land of ‘accurate’ data. No such world exists, and with this expectation we are setting ourselves up for disappointment. What we should agree to is: a) the smallest, granular TVR we want to measure, and b) our maximum acceptable level of relative error, beyond which we would rather use gut. And this needs to be a transparent, open agreement.
Aarti Bharadwaj is Vice President, SMG Analytics Centre of Excellence and Mallikarjun Das CR is CEO, Starcom MediaVest Group India.