A replicated study on nuclear proliferation shows the critical necessity of reviewing accepted scientific results.

In replicating a 2009 study on the role of asymmetric nuclear weapons possession, Mark Bell and Nicholas Miller found that a computational error had led to the deterrent effect of nuclear weapons being overestimated by a factor of several million. Only through constant re-evaluation of scholarly findings can researchers reach conclusions robust enough to merit the attention of policymakers.

If there is any policy area where academics have consistently offered critical insights to policymakers, it is on issues relating to nuclear weapons. This is perhaps surprising given the barriers to academic participation in policy debates, especially when we consider the importance of secrecy for nuclear weapons issues and the profound geopolitical implications they raise. Nonetheless, whether in developing nuclear warfighting strategies during the Cold War, or in recognizing and acting to deal with the dangers of poorly protected nuclear materials in the aftermath of the collapse of the Soviet Union, policymakers have consistently sought—and used—the insights of academics.

If this is to continue, it is important that scholars studying nuclear proliferation—who increasingly employ quantitative methods that policymakers often lack the training to assess critically—adhere to high standards of scientific transparency and replicability. This lesson was emphasized for us during the process of writing our article, “Questioning the Effect of Nuclear Weapons on Conflict,” which was recently accepted for publication in the Journal of Conflict Resolution.

Image credit: Wikimedia Commons

In many ways, our experience was a model for how replication in the social sciences should work. While many people have horror stories of authors failing to share data from published studies or refusing to respond to questions about their methods, we were fortunate to be replicating a study previously published in the Journal of Conflict Resolution, which requires all authors to deposit their data and replication code with the journal; these materials are uploaded to the journal website upon publication. We were able to reproduce the results presented in the original paper using the author’s replication data and code. Finally, we were able to publish our replication paper in the same journal that published the original article we were critiquing.

Nonetheless, our findings highlight the perils of policymakers relying on academic work in the absence of replication. For example, we found that a computational error in the paper we were replicating led the author to overestimate the deterrent effect of nuclear weapons by a factor of several million. More substantively, we found that nuclear weapons have more ambiguous and conditional effects than has previously been recognized. For example, the notion that two states possessing nuclear weapons should be deterred from fighting wars, while deductively compelling, misses the fact that there are many other reasons for nuclear-armed states not to fight each other—namely, the high costs of conventional war between great powers and the fact that most nuclear-armed states are not geographically proximate to one another. Indeed, it turns out that with the appropriate methodology, pairs of states with nuclear weapons are not significantly less likely to fight wars than non-nuclear states. Moreover, we find little evidence of the so-called “stability-instability paradox”: pairs of nuclear states do not appear to be more likely to fight each other at low levels once we take into account the history of conflict before nuclearization in a dyad.

We do, however, find some evidence that nuclear-armed states are more likely to engage in low-level conflict against non-nuclear opponents. We find that this is most plausibly explained by the idea that nuclear-armed states expand their interests in international politics and in doing so initiate conflict with states they have not previously fought. The evidence is not consistent with the idea that states regularly use their nuclear weapons as a shield behind which to aggress against more powerful opponents that they would previously have been deterred from fighting.

Taken as a whole, the results of our study suggest that nuclear proliferation is neither as dangerous nor as stabilizing as many international relations theorists have suggested. There is little evidence of a strong and universal deterrent effect of nuclear weapons in nuclear dyads (although it may still be that nuclear weapons deter certain types of war, e.g. wars of conquest) and little evidence that these dyads are more conflictual at lower levels (as the stability-instability paradox would predict). The finding that nuclear states expand their interests, however, does suggest a note of caution. If applied to a potential Iranian nuclear capability, for instance, it implies that we should worry less about increased conflict against Israel or the United States (other nuclear-armed states) or against previous foes (Iraq or Saudi Arabia, for example) than about Iran broadening its regional focus and engaging in disputes against new adversaries in the Gulf or the broader Middle East.

These findings fit nicely with a recent vein of policy-relevant scholarship emphasizing the conditional and often limited effects of nuclear weapons. To those worried about a nuclear Iran blackmailing its neighbors, Todd Sechser and Matthew Fuhrmann find that nuclear weapons offer little in the way of coercive benefits to the states that acquire them. To take another example, Vipin Narang convincingly argues that the deterrent benefits offered by nuclear weapons depend to a significant degree on the way in which a state arranges its nuclear forces. To deter conventional conflict, a state must explicitly arrange its nuclear forces to do so, as Pakistan does today.

Replication is critical if scholars seek to influence policy debates. Only through this kind of constant re-evaluation of our scholarly findings can we hope to reach conclusions robust enough to merit the attention of policymakers. While political scientists have in many ways led the way in this regard—through the publication of replications, the establishment of online data repositories, and the requirement by a growing number of journals that scholars post all the materials necessary to replicate their findings—much more remains to be done. Implementing the highest scientific standards is essential if the increasing number of scholars focusing on nuclear issues are to offer policymakers reliable insights that can inform policymaking.

The authors have also written a guest post, ‘How to persuade journals to accept your replicated paper’, at the Political Science Replication blog.

Note: This article gives the views of the authors, and not the position of the Impact of Social Science blog, nor of the London School of Economics. Please review our Comments Policy if you have any concerns about posting a comment below.

About the Authors

Mark Bell and Nicholas Miller are PhD candidates in Political Science at the Massachusetts Institute of Technology.