Is Putin’s Popular Support Real?
Should we believe polls showing that more than 80% of Russians support Putin? Or are people simply afraid to tell sociologists the truth?
The debate about Putin’s real popularity has long been going on, and it has intensified since the war started. Putin’s rating jumped: do people really support him?
Sociologists conducting polls constantly have to justify themselves. Denis Volkov, director of the Levada Center, listed and then refuted the critics’ main arguments for why Russian polls cannot be trusted: after the war began, people (especially Putin’s opponents) began refusing to participate in polls and to answer questions about the war far more often, and when they did answer, they usually did not say what they “really” thought.
According to Volkov, the “unreachability” of respondents has in fact changed little since the war began (in 2022 it was slightly higher than in 2019–2020, but lower than in 2021), and the number of interviews broken off at questions about Ukraine is quite low (from two to seven per poll), much as in polls on other topics, which with a sample of more than a thousand people is insignificant. As for the honesty of the answers, the rise in support for Putin after the start of the war is matched by changes in answers to other questions, about economic behavior and sentiment, Volkov assures.
Alexei Levinson, head of the socio-cultural research department at the Levada Center, responded to another reproach — that sociologists simply cannot show results with lower ratings: “As long as the authorities are satisfied with the high ratings they receive honestly, they will tolerate us. When it stops being so, I don’t think we’ll last long.”
While the debate is going on, public opinion polls continue to record that Putin enjoys the broadest support of the population. For example, according to the Levada Center, this year Putin's activities were “generally approved” by 85–87% (the poll is conducted every month).
Scientists have come up with a way, if not to verify this, then at least to get closer to an answer: is it true that poll results are heavily distorted by the fear of giving the “wrong” answer? Their study is titled “Is Putin’s popularity (still) real?”
The same list, plus Putin
They used a method often employed to study people’s reactions to sensitive topics: the “list experiment.” Respondents, selected to match Russia’s population structure, are randomly divided into two groups. The first, “control” group is given a list of three political figures. The second, study group is given the same list plus Putin. People in each group are asked how many politicians from the list they support: not which ones, just a number, from zero to three or zero to four depending on the group.
Since no names are required, this should eliminate the “political risks” of answering. If a person in the study group names the number 1, how do you know whether it means Putin or someone else? And the answer “2” can mean either “Putin and one other” or “some two of the other three.” Only the answer “0” reveals that the respondent does not support Putin.
Putin’s support can be estimated from the difference between the two groups’ answers. The authors ran such an experiment in January 2015. The list included Gennady Zyuganov, Sergei Mironov and Vladimir Zhirinovsky, still alive at the time. The average answer in the control group was 1.11, and in the study group (the same list plus Putin) it was 1.92. That is, Putin’s inclusion added 0.81 on average, so his level of support can be estimated at 81%. This is about 5 percentage points (pp) less than the opinion polls gave Putin at the time.
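Formally, the estimator is just a difference in group means. Below is a minimal sketch in Python of that calculation, with simulated answer counts whose distributions are chosen only to reproduce the 2015 averages quoted above; the data and the function name are illustrative, not the study’s.

```python
import numpy as np

# Difference-in-means estimator for a list experiment: the treatment list
# is the control list plus one sensitive figure, so the gap between the
# groups' average counts estimates support for that figure.
def list_experiment_estimate(control_counts, treatment_counts):
    return np.mean(treatment_counts) - np.mean(control_counts)

# Illustrative, simulated answers (NOT the study's raw data): each number
# is how many listed politicians one respondent says they support. The
# probabilities are chosen to give means close to 1.11 and 1.92.
rng = np.random.default_rng(0)
control = rng.choice([0, 1, 2, 3], size=1000,
                     p=[0.25, 0.45, 0.24, 0.06])          # mean ~ 1.11
treatment = rng.choice([0, 1, 2, 3, 4], size=1000,
                       p=[0.05, 0.25, 0.48, 0.17, 0.05])  # mean ~ 1.92

print(f"Estimated support: {list_experiment_estimate(control, treatment):.0%}")
# -> roughly 81%
```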
But maybe the problem lies with today’s Russian “politicians”? The authors of the study worried that the unpopularity of politicians in Russia might have skewed the experiment, but they found little evidence that this factor significantly influenced the result. Nevertheless, in 2015 they set up a similar experiment, replacing the quasi-opposition leaders on the list with leaders of the country from the 20th century: Joseph Stalin, Leonid Brezhnev and Boris Yeltsin. The result was roughly the same: Putin’s rating came out at 79%. Less than in the opinion polls, but still a lot.
The shorter the list, the more accurate
That would be too easy. About 10 years ago sociologists discovered an interesting effect: adding another figure to a list lowers the result. It is not about who exactly is added, but about the fact that the list gets longer. There is a peculiar deflation of the result: the longer the list, the smaller the estimate.
To assess this “deflation,” another experiment is conducted, which the authors of the study compare to a placebo. It involves neither Putin nor anything else related to Russia. In the study group, Fidel Castro was added to Alexander Lukashenko, Angela Merkel and Nelson Mandela: expressing an attitude toward him would seem to carry no risk. In 2015, 60% said in response to a direct question that they supported his activities, while the list experiment came in 9 pp lower. This effect has to be taken into account.
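In effect, the placebo yields a correction term: for a figure nobody should fear supporting, the gap between the direct question and the list estimate is pure “deflation.” A minimal sketch of that arithmetic with the 2015 Castro numbers (the function name is ours, for illustration only):

```python
# "Deflation": how much a list experiment undershoots the direct question
# for a figure whose support carries no political risk (the placebo).
def deflation_pp(direct_pct, list_estimate_pct):
    return direct_pct - list_estimate_pct

# 2015 placebo: 60% supported Castro when asked directly, while the list
# experiment came in 9 pp lower, i.e. at 51%.
print(deflation_pp(60, 51))  # -> 9
```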
Now for the results that followed.
Before the war
The researchers returned to the topic at the end of 2020. By that time the Crimea effect had long since faded, the COVID effect had not yet passed, and Putin’s rating stood at about 65%. But the situation in the country had changed dramatically: Alexei Navalny had been poisoned shortly before; in January 2021 he would return to Russia and be arrested, and protest rallies would sweep across the country and be violently dispersed. Repression was growing, and with it the risks of supporting the opposition, even if only in an opinion poll. The question of whether poll results could be trusted became acute again.
Sociologists conducted four waves of experiments: in November 2020 and in February, March and June 2021. So many were needed because the results differed greatly from the post-Crimea ones. The gap between Putin’s support in the list experiments and the answers to the direct question rose sharply, ranging from 9 to 23 pp across the waves.
The authors of the study conclude that the “deflation” had grown too. Had it remained at 5–9 pp, as in 2015, that would have implied an even bigger drop in Putin’s approval than the polls showed, perhaps even below 50%. But the experiments showed that the “deflation” for Castro and Brezhnev was about 22 pp. In addition to them, the scientists used in a “placebo” experiment the runner-up of the 2018 presidential election, Pavel Grudinin, whom for some reason they also considered a neutral figure. In that experiment the “deflation” was much smaller, 12 pp. This can be explained by people still being afraid to tell sociologists honestly that they supported Grudinin (so poll results are distorted by fear after all?), but at the same time it shows that list experiments work. Indeed, if people are afraid to admit supporting Grudinin even in polls, that understates the measured “deflation,” meaning the true figure is higher than 12 pp.
Why the “deflation” grew so much, the authors of the study do not explain. But adjusted for it, Putin’s support in the list experiments (40–54% depending on the wave and the list, whether with leaders from the past or with contemporary politicians) again comes close to the answers to the direct question: 63% in November–March and 69% in June. Adding the roughly 22 pp of “deflation” to 40–54% gives 62–76%, a range that brackets the direct-question figures.
That such an adjustment is needed to assess real support is indirectly confirmed by the measurements of Navalny’s popularity. Two lists were used for those experiments: contemporary politicians, as for Putin, and, twice, public figures (Nikita Mikhalkov, Ksenia Sobchak and Grudinin). The results of two of these three experiments almost coincided with the answers to a direct question about supporting Navalny. Without the “deflation” correction, one would have to conclude that in February–March 2021 the question about support for Navalny’s activities was not politically sensitive and people were not afraid to answer it honestly. That is hard to believe.
After the war started
The last experiment took place in June 2022 (there was no need to go to Russia for this — the experiments themselves have always been conducted by the Levada Center). Over the course of a year, Putin's support estimated in this way jumped by about the same amount as the polls showed.
To assess the “deflation” more accurately (what if it is caused by the very fact that the lists consist of people?), the scientists ran another “placebo” experiment. The control group received a list of three innocuous statements: “I can name the UN Secretary-General”; “I watch TV / YouTube / online cinemas (Kinopoisk, Ivi, Okko, etc.) at least once a week”; and “I have an acquaintance who has been to Cuba.” In the study group, “I support the activities of Fidel Castro” was added to them. There were similar statements for Putin (“I usually read more than one newspaper/magazine a week”; “I can name the chairman of the Constitutional Court”; “I am satisfied with my income”, plus the question about support for the group under study).
The difference between the answers to the direct question about Castro’s support and the results of the “placebo” experiments amounted to 14 pp for the lists with politicians and 31 pp for the lists with statements. In Putin’s case the differences were 21 and 29 points. For both figures the gap is larger for the lists of statements, so the presence of people on the lists is hardly the cause.
In June 2022, 84% of the experiment’s participants said, when asked directly, that they supported Putin. His result in the experiment with lists of the country’s past leaders was 63%. Adjusted for “deflation,” that is almost the same as in the poll.
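A back-of-the-envelope check of that claim, using only the figures quoted above; pairing the 63% past-leaders estimate with the roughly 21 pp “deflation” measured for Putin on lists of politicians is our reading of the article’s numbers, not the paper’s exact procedure:

```python
# Reconciling the June 2022 numbers quoted above: the list experiment with
# past leaders gave 63%, and the placebo-measured "deflation" for Putin on
# lists of politicians was about 21 pp. Adding the correction back
# recovers the direct-question figure.
list_estimate_pct = 63
deflation_pct = 21
print(list_estimate_pct + deflation_pct)  # -> 84, as in the direct question
```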
What the authors say
The authors conclude that the support for Putin measured by opinion polls appears to be close to reality. They stipulate that they cannot state this unequivocally, but they find their “deflation” assumption simpler and more plausible than the alternatives under which Putin is significantly less popular than the polls show. To defend those alternatives, one would at the very least have to explain why support for Castro or Brezhnev is also “understated,” and accept that Navalny was not a sensitive figure for respondents.
Another argument: even if the absolute numbers themselves are wrong, they change roughly in step with the polls, and both show a sharp rise in support for Putin after the war began.
What other experts say
Margarita Zavadskaya, senior researcher at the Finnish Institute of International Affairs:
Polls using list experiments are quite popular now, because they allow you to get data without forcing people to answer a sensitive question. This technique is often used in studies of victims of violence or other crimes.
A team of researchers led by Timothy Frye at Columbia University has long been working on this question in Russia. The findings can be interpreted in two ways: either 10–20 pp is being added to Putin, or the difference is a technical artifact of the design, an artificial deflation of the estimates. The placebo tests support the conclusion that real support is roughly what the Levada Center and others report. It would be strange for the figure of Navalny not to be sensitive for respondents while Brezhnev, Grudinin or Castro are. Nor do we see mass falsification: we conduct our own polls (by phone and online), and our results differ only slightly from the authors’ estimates. But the results of such experiments should be used very carefully, especially in authoritarian regimes; all researchers emphasize this.
The good news is that even if Russians are not lying and really do support Putin, then, according to researchers from the same group, it is not a matter of sincere love for him but of so-called endogenous support. What is that? Autocracies with popular leaders tend to survive longer, so scholars actively study the factors behind the popularity of authoritarian leaders. It may be that in non-democratic regimes the very information about how popular a leader is affects his level of support.
To test this, the researchers used framing experiments (these rest on the fact that perceptions of the same statement can differ depending on how it is phrased) embedded in four recent polls in Russia. Negative information about the Russian president’s popularity was found to reduce his support, while positive information had no effect. Combinations of framing and list experiments thus suggest that Russians lean toward what they perceive as the majority opinion, and that perception is successfully constructed by propaganda. As soon as the conviction that the majority supports Putin is shaken, we can expect sentiment to shift according to the principle of “information cascades.”
Grigory Yudin, professor at the Moscow Higher School of Social and Economic Sciences (Shaninka):
The main problem with this text is that it makes two claims, of which only one is substantiated. The main claim is that one should be careful about using list experiments to assess the popularity of political figures, because such experiments can artificially understate it. And this does not depend on the country; the authors stress that it is not specific to the Russian case.
This claim is defended in the article, but the defense is objectionable. The authors take lower levels of support for any politician in a list experiment to be artificial, while higher levels under direct questioning are taken to be the real indicators. What this assumption rests on is unclear; why the authors believe the direct question yields the more genuine values is never explained.
Let’s look at the matter from the other side: if a person, confronted with a list, forgets to note support for a given politician, can we conclude that he or she actually supports him? Suppose you support politician X. Asked directly, you will say so. But given a list of four names, would you really forget that you sympathize with X? What kind of support is that, then? In other words, we see that the two question designs produce different results, but we have no grounds to conclude, as the authors do, that one of them is somehow “cleaner.” Generally speaking, any question in sociological research constructs its own subject, and it is a naive illusion that the easiest way to measure something is simply to ask about it head-on. Frontal questions are often the most artificial.
The second claim relates directly to Russia. It reproduces a thesis that Frye and his colleagues have been advancing for a long time: mass polls estimate Putin’s high support in Russia quite correctly. Moreover, in an earlier paper the same team even managed to show that polls underestimate this support, that is, that Russians actually like Putin even more than the pollsters tell us.
The study uses a questionable methodology for collecting information, offers vague and suspect validity estimates, and yet concludes that there is “genuine support for Putin,” whatever that means. Russia now has a plebiscitary regime in which a question about the tsar is perceived not neutrally but as a demand for a ritualized demonstration of loyalty. Polls should not be used as plebiscites: the question “how many actually support the tsar” is meaningless (for more on this, see Grigory Yudin’s article).
A sociologist from the Public Sociology Laboratory (PSLab):
Such experiments are a legitimate, well-established method; everything here is done and described clearly, and I have no doubts about the quality of the research. Post-Soviet Affairs, the journal in which this article was published, is prestigious, and there is very little chance that work of poor scientific quality would appear there. Besides, one of the authors, Timothy Frye, is a major expert on Putinism.
This article is part of a discussion. One can argue with the authors: “Yes, people do not falsify their preferences and sincerely choose Putin. But what logic guides them? Are they satisfied with Putin, or are they rather afraid that things could be even worse?” But that does not mean the text or the method is of poor quality. The question is how to interpret the data.