I may have retired from news reporting, but working as a reporter for many years, and then as a journalism teacher, has its own rewards. I have a formidable extended circle of professional friends and former colleagues who trust me with vital information. I rarely reach out to them, as most are bogged down by their sheer workload.
But recently, I thought I’d make an exception.
It was because two survey reports were trending on social media — the first published by The Washington Post, and the second by the BBC.
Both had very curious results.
The Washington Post, in its survey report, reached the apparent conclusion that hate crimes in India had spiked since Narendra Modi came to power in 2014, and that it was apparently Hindus who were the perpetrators. The BBC report concluded that ‘nationalism’ was driving the spread of fake news in India.
Once the reports were published, several media houses (national and international) picked up on them. Some of those follow-up analyses concluded that the country's current political environment (read: India under the rule of the Bharatiya Janata Party) is indirectly responsible for the spike in fake news and the rise in violence against minorities. The Washington Post put this in the headline itself (screenshot 1), while the BBC placed it in the concluding point of the executive summary of its survey report (screenshot 2).
Boosted by the social media push and the support they garnered from other media houses, the two reports quickly became a tool for an anti-BJP social media campaign by Opposition parties, with Congress influencers joining the chorus.
Soon after, two journalists from right-wing websites, Nupur Sharma (Editor, OpIndia) and Swati Goel Sharma (Senior Editor, Swarajya), examined the two pieces.
In their analyses (Swati's report here and Nupur's here), both pointed out the gaping holes they could see in the narratives.
Both pointed out how the data (sources, raw data, references, etc.) had apparently been selectively chosen and cherry-picked to reach a particular conclusion. As a reader, my key takeaway from both Swati's and Nupur's articles was the same: it is simple to tweak any survey to suit a narrative against the BJP. The most glaring hole in both surveys seemed to be the paucity and/or the opacity of the raw data.
After the publication of these rebuttals, members of both publications admitted in their social media chatter that data had been selectively captured and that neither survey could be considered “exhaustive” in nature. Some officials associated with the surveys also stated on social media that their surveys were merely “indicative”.
I then remembered another survey, conducted a few months ago by Reuters, which called India the most dangerous place for women because of “rising sexual violence”.
It was debunked everywhere.
But soon after, people were shocked to learn that this was in any case just a “perception poll”, not a statistical poll, and that the number of respondents who took part in the survey on India was just 43. Yes, after speaking to just 43 people about India, the report concluded that our country is the “most dangerous place for women”.
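To put that number in perspective, here is a minimal back-of-the-envelope sketch of my own; the figures below are purely illustrative and not taken from the Reuters poll. Even under the generous assumption that those 43 respondents were a simple random sample, which a perception poll of hand-picked experts is not, the margin of error on any proportion they report would be roughly plus or minus 15 percentage points at 95 per cent confidence.

```python
import math

# Back-of-the-envelope illustration (my own, hypothetical numbers):
# worst-case margin of error for a proportion estimated from a
# simple random sample of n respondents, at 95% confidence.
def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of the 95% confidence interval for a proportion p."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (43, 400, 1000):
    print(f"n = {n:5d}  ->  margin of error ~ +/- {margin_of_error(n):.1%}")

# Typical output:
# n =    43  ->  margin of error ~ +/- 14.9%
# n =   400  ->  margin of error ~ +/- 4.9%
# n =  1000  ->  margin of error ~ +/- 3.1%
```

In other words, with a sample that small, even a perfectly honest estimate is too noisy to support a sweeping ranking of countries.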
Now, intrigued by the perceptions that seemed to go into making perception polls, I decided to reach out to one of my acquaintances to find out exactly how such raw data could be collected. The person is a journalist who describes herself as a cog in the wheel, working on a similar survey conducted by a reputed media house.
I asked her if she could tell me about the raw data and the nature of the newsroom conversations while they were working on such ‘India’ surveys.
“You have always wondered about the minuscule sample size of these surveys. But have you ever wondered who those people might be?” she asked me in turn.
My acquaintance chuckled as she continued: “We all contributed to this database. The people interviewed were our own friends and family. Obviously, we had to go for people whose numbers were already there in our diaries.”
I was startled.
Because once you consider that the raw sample itself could be arranged, a lot else falls into place, or out of it.
Be it ethnographic research or qualitative research, either can be manipulated by carefully choosing who responds to your questions.
For example, consider a hypothetical situation: you have seven friends who really despise Narendra Modi because they feel he is a Hindutva icon. Now, there is a survey happening in your office that will assess how the BJP government has performed under Modi. Without disclosing that you are well aware of their political disposition, you go ahead and include all their names as respondents to the research your company is conducting.
You may present whatever kind of questionnaire you like; you know your friends will give answers that won't show PM Modi or the BJP in a very good light.
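To see how decisive that hand-picking is, here is a minimal simulation sketch; every number in it is my own illustrative assumption, not data from any of the surveys discussed here. It compares a panel of seven like-minded friends against seven respondents drawn at random from a population that is split roughly evenly on the question.

```python
import random

random.seed(7)

# Purely illustrative assumption: the wider population is split roughly
# 50/50 on whether the government has performed well (True = "performed well").
population = [random.random() < 0.5 for _ in range(100_000)]

# Hand-picked panel: seven friends already known to disapprove.
friends_panel = [False] * 7

# Honest panel: seven respondents drawn at random from the population.
random_panel = random.sample(population, 7)

def approval(panel):
    """Share of a panel answering 'performed well'."""
    return sum(panel) / len(panel)

print(f"True population approval : {approval(population):.0%}")
print(f"Cherry-picked panel of 7 : {approval(friends_panel):.0%}")
print(f"Random panel of 7        : {approval(random_panel):.0%}")
# The cherry-picked panel reports 0% approval no matter what the
# population actually thinks; only the selection changed, not the questions.
```

The questionnaire never changes; only the way respondents were chosen does, and that alone is enough to flip the headline finding.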
To be perfectly candid, this is often how many small-sample surveys are fixed. It doesn't take an Einstein to tell you that this is the most commonly used technique among college-level students who wish to manage the outcomes of their research.
Any experienced college teacher will tell you that to reach a particular conclusion in their research projects, the first thing a student does is ensure that he/she has the “right kind” of respondents.
Tell me honestly, if you include 10 of your good acquaintances to take part in a survey, won’t you know exactly what they are going to say?
You, on your part, only have to ensure that the list of respondents doesn't get out into the public domain.
Once that list is safely locked away, you go around tom-tomming your conclusion.
Thus, by cherry-picking your respondents, you can actually arrive at a conclusion of your choice.
This is the reason most companies have very strict standards for qualitative analysis of small samples. Usually, the team doing the survey doesn't get to choose the respondents. Each and every respondent is profiled by the company to ensure that their responses actually reflect qualitative insight, not herd mentality.
More importantly, the respondents should never be known to the interviewer or to anybody else in the company. In some cases, there are checks and balances to rule out respondents who appear to share the same life philosophies.
If these checks and balances are not applied, then the surveys don’t have any relevance.
I urge you now to think of recent survey reports again — and check the parameters I have mentioned above.
There is no way to ensure that respondents weren't biased or cherry-picked to produce a predetermined set of responses. There is no way to rule out any bias unless the data on respondents is made public. And that is not going to happen.
But there is a reason why a particular survey outcome becomes all-important so frequently. Want to know why?
Do you think any left-liberal foreign publishing house will be interested in a survey if the outcome says that India has developed and progressed under Prime Minister Narendra Modi?
There — you have the answer!