
S3 Ep 8 - Haixin Dang on 'Disagreement in Science'

Josh Eisenthal: Welcome to The HPS Podcast. For this episode, we're doing things a little differently. As you can probably tell, I'm not one of your usual hosts. I'm Joshua Eisenthal, and I'm based at the California Institute of Technology, where I work as an editor on the Einstein Papers Project. You may remember me from season one, when Samara and I talked about some of our highlights from earlier podcast episodes.


In any case, it's my pleasure to be taking over from Samara and the normal HPS podcast team for this episode today. I'm going to be talking to Haixin Dang. At the time we're recording, Haixin is an assistant professor of philosophy at the University of Nebraska, but she'll be moving to Case Western Reserve University in the fall.


Her main area of research concerns the overlap between philosophy of science and social epistemology. She studies how scientists generate knowledge when they work together, sometimes in collaborations which are so large that no single individual could possibly understand all aspects of what's going on.


The particular topic that we will be talking about today is scientific disagreement. On the face of it, it might seem like scientists should always aspire to achieve consensus and that disagreement is therefore a mark of failure. However, Haixin wants to impress on us that disagreement is a vital part of the scientific enterprise and not something to be afraid of.

Furthermore, getting clear on the role of disagreement in science may be very important for science communication and tackling science scepticism. If we can be clearer and more honest about how scientists actually make progress, perhaps members of the public will be more understanding when the experts disagree.


Josh Eisenthal: I'm here with Haixin Dang. Haixin, welcome to the HPS podcast.

Haixin Dang: Thank you for having me.


[00:01:37]


Josh Eisenthal: To get us started, could you tell us a little bit about your path into HPS?


Haixin Dang: Like most people, when I went to college, I didn't know HPS was even a thing that you can study or a discipline. As an undergrad, I went to Cornell University where kind of on a whim, I took a history of science course taught by Peter Dear, who is an amazing historian of science.


I was hooked right after taking his class. It was on the scientific revolution, and he went through all the major HPS questions that drive our field. I arrived in philosophy through history of science. I then kind of wandered into the more traditional areas of philosophy, like epistemology. But I started in HPS through thinking about history.


[00:02:25]


Josh Eisenthal: Was there some particular historical episode that caught your interest?


Haixin Dang: The thing that really struck me when I was learning about history of science is the fact that during the scientific revolution, there was this change in worldviews. And in order to really understand that worldview, you had to think about the world in this completely different way.


But I think reading Kuhn really opened up my eyes. In particular, the methodology that exists in Kuhn, that you're not just thinking about philosophical questions, but you're using this sort of historical methodology. When you're thinking through these cases you have to read a lot of primary sources, you have to learn about the context of what's going on at the time, and I just found that really challenging and also very fruitful to go through that methodology of thinking through the historical cases and then trying to extract philosophical questions or test your theory against the historical evidence. I found that spoke to me.


[00:03:19]


Josh Eisenthal: I remember also reading Kuhn's Structure of Scientific Revolutions, and Kuhn's way of doing it through this very rich historical study also really spoke to me. So, I guess we both had that implanted in us early on.


Our topic that I'd love to talk to you about today is disagreement in science. Could you say a little bit about what disagreement in science is?


Haixin Dang: Surprisingly, or maybe not surprisingly, there's a lot of disagreement about what disagreement is in science.


But we can't really talk about disagreement without talking about scientific consensus, right? Most people think that the aim of science is towards some kind of consensus, that what science tells us are the facts or beliefs of the majority of scientists, or maybe all scientists. But there's disagreement over what constitutes consensus. Is it 100 percent agreement among every single scientist? Is it 97 percent? Ninety-seven is the number that's often given out for climate change: that there's a 97 percent consensus on human-caused climate change. Or is it just a supermajority? Maybe just a simple majority, maybe just 51 percent of scientists need to agree on something for there to be a scientific consensus or agreement. But I think disagreement is clearly commonplace throughout science, whether it is over which methods are the most reliable or over how strong a discovery claim should be.


If you look into the scientific literature, you might find there are often conflicting studies being published. So, you'll see one study that says drinking red wine is good for your heart, and then another study that says drinking red wine is bad for your heart. Those are disagreements within the community of science.


There's also disagreement over interpretations of data. So, you see scientists disagreeing over evidence and how to interpret the evidence, where there are multiple interpretations of the same evidence. There is disagreement, I think, at all steps in science. And the question that's philosophically interesting is: what's the role of disagreement in the development of science, and is disagreement normatively good or bad for science?


[00:05:31]


Josh Eisenthal: Because it sounds bad, right? So, a naive view might be, ideally, we wouldn't disagree with each other. Ideally, when we get things right, or when we establish something pretty firmly, and we think we've found the answer, the disagreement fades, or is at least very small, and then we're very happy because now we agree with each other.


Haixin Dang: It's a very intuitive view, I think, that the goal of science is to eliminate disagreement, and that hopefully, by the time we all agree, we would have found something worthy of scientific knowledge. Like we achieved something good epistemically. But if we look at the history of science, I think the picture gets a little bit complicated, because there are many, many cases, very famous cases, where a very strong scientific consensus was overturned over time.


You can think about examples like, for a very long time, physicists believed there to be an ether. There was consensus across the physics community, but then later, we found out that there was no ether.


Wegener famously proposed the theory of continental drift, the idea that the continents could move. And for a very long time in the early 20th century, geologists flat out denied it. It wasn't until much later that the consensus against continental drift was overturned.


So, I want to think about consensus and dissent in science as almost cyclical, or maybe like a sine wave: there are going to be periods of consensus in science and periods of disagreement. I think it's part of the actual workings of science that it has to go through these phases. We are aiming for consensus, but we're not sure if we're always right.


[00:07:18]


Josh Eisenthal: So is the idea that it's very important to have a kind of mechanism of disagreement, in order to acknowledge that consensus will sometimes arrive at things that we later come to realise were wrong, or were only true in some limited, constrained way, and that disagreement is the particular mechanism for making that progress and upsetting previously held theories? Is that the idea?


Haixin Dang: Yes, exactly. That disagreement is integral to science because it moves science forward. Disagreement helps you become more self-reflective of your own beliefs and biases and less likely to fall into confirmation bias. You want there to be people in your scientific community who ask, ‘Why do we believe this? Maybe there's a different theory or a different way of interpreting the data?’


You want there to be someone who's trying to keep you honest by being either critical of you or disagreeing with you. If you're challenged to defend your position against a dissenting view, it can make your own view stronger. It helps you define your view better.

So those are exactly the things you pointed out: these mechanisms by which disagreement helps move scientific inquiry forward.


[00:08:33]


Josh Eisenthal: You suggested there's at least a proposal that agreement as opposed to disagreement could be thought of in terms of consensus. And I'm wondering if that's what you think we should think of in terms of how to define scientific agreement, or if there are alternatives?


Haixin Dang: Unfortunately, I don't think we actually have very good definitions of these things, or at least no way of actually measuring to what degree scientists agree or disagree.

There's some work in bibliometrics to infer from the published literature how many papers support a certain view, and how many papers disagree with a certain view. So those are rough estimates, but I actually think the degree to which there is disagreement or agreement in your community is best understood by being a participant of that community. You get a sense of where the community is on a certain question by going to conferences, by talking to your peers, by reading around in the literature. But unfortunately, we don't have a way of measuring any of these things.


For example, even the 97 percent consensus on human-caused climate change was measured by looking at published papers. So that was a measure of what the literature says. There wasn't, for example, a vote. I don't want people to think that agreement or disagreement in science can be determined by voting. This is all just to say that these terms, agreement, disagreement, dissent, consensus, we use them a lot, but in fact, when you try to see what they look like in practice, they become very difficult to define.


[00:10:05]


Josh Eisenthal: It's strange to think of calling a vote, gathering some group of scientists in a room and taking a vote, as if that's somehow a way of establishing the state of the field, in terms of what we agree on and what our level of confidence in a particular claim is. And similarly, doing a kind of meta-analysis of published papers, that's sort of a way of measuring it, but it feels a little bit crude.


Haixin Dang: Exactly. It seems like even though consensus in science is clearly really important, we don't actually have a good way of getting a handle on what it even looks like.

One thing I also want to point out is what's interesting about debates over disagreement and consensus in the sciences: why does it even matter? There are people who are sceptical about thinking of science as proceeding by periods of agreement and disagreement, because what really matters in science is the evidence. Scientists are held to the evidence; they're not held to consensus, they're not held to what their peers think. On this view, we don't need to agree on what the data says, because the atoms exist whether or not physicists agree or disagree over the existence of atoms. The atoms are just out there.


[00:11:15]


Josh Eisenthal: But I guess, there's a lot hidden there in terms of how objective the evidence is or if there is such a thing as raw, uninterpreted evidence.


Haixin Dang: Exactly. And I think that's where we can kind of dig down here a little bit.

So, I'm just going to talk through some physics examples, because I think physics is one of these fields where you can make very precise measurements. You can get very good quality data. You can get a lot of data. But even in physics, there can be disagreement over whether or not the measurements were done correctly, whether or not the experiments are high-powered enough, or whether all confounders were accounted for.


For example, take LIGO, which detected gravitational waves. There's been a really long history of disagreement over whether or not LIGO actually observed gravitational waves. You know, these are super high-precision experiments, and it took many, many years for the experimenters to convince the community that the experiment was actually done correctly.


More recently, there was a disagreement over the W boson. The W boson, you know, has been measured several times by different collaborations at the LHC, and most measurements of the mass of the W boson have fallen where the Standard Model had predicted its mass to be. But in 2022, Fermilab actually went back and looked at their data to see if they could construct a measurement of the W boson, and reported a value that was much higher than all of the other values that had been measured before. There is currently a lot of debate among particle physicists over what to do with this outlier measurement.

In another physics example, there's also a long-standing disagreement over the Hubble constant in cosmology, and that doesn't seem like a debate that's going to go away anytime soon.


So even in cases where you have really high-quality data, you don't get convergence, even on basic physical facts. So, it's not just the social scientists who can't agree on things. Even at the very core of the physical sciences, where we're making observations and collecting direct data on a phenomenon, there's still disagreement over how to interpret that data.


[00:13:37]


Josh Eisenthal: This disagreement goes all the way down, it turns out.


Most of the disagreement we've been talking about so far has been between different groups, or between different scientists making competing claims. But I know one thing you have worked on is scientific collaborations. So, I'm wondering what we would say about disagreement within a collaboration?


Haixin Dang: The norm of scientific papers is consensus, right? So, when you read a scientific paper, even if it's authored by 12 scientists or thousands of physicists, what you're reading is a consensus statement.


In some journals, especially medical journals, editors will say explicitly in their policies that the results of the studies should be endorsed by every member of the author list. That's not the standard in physics. In physics, you have something more like unanimity by 'no objection', or something like that.


[00:14:29]


Josh Eisenthal: You don't have to explicitly endorse what's being claimed; you just have to refrain from objecting. You have to mark that you don't object to it. So, there isn't really a mechanism within the collaboration for saying, 'well, this many of us involved in the collaboration endorsed this, and that was sufficient.' There's nothing like that going on.


Haixin Dang: Yes, and I don't know of any collaboration that explicitly has rules saying we will take a vote on the final results, or anything like that.


You know, you can't really write a paper with a thousand people. You can't actually have a thousand people contributing sentences. Very large teams could be involved in collecting the data, cleaning the data, interpreting the data, running the statistical tests on the data. But it's a much smaller subset of people who are actually writing sentences, contributing words. They're supposed to be writing for the collaboration, writing in 'one voice' for the collaboration. So, once the draft is done, it gets passed around in the collaboration. In large collaborations, there will be meetings and presentations where the data is presented for the first time. What is published at the end actually represents the product of this very long process. It involves many different people, and it might not be a consensus statement where everybody votes on it, but it will be a representation of the work that has been done.


[00:15:58]


Josh Eisenthal: I guess there's been ample opportunity for debates and this kind of disagreement to arise and then be resolved in the process of arriving at the final document. But it's a kind of organic process that isn't necessarily strictly controlled in any explicit way.


Haixin Dang: That's right. Obviously, collaborators want to come to consensus. In fact, I think the pressure to come to consensus is much higher within a team than it is within the community. Because first of all, the products are these discovery claims. And as members of the collaboration, you're aiming to produce results. And there's actually a lot of pressure within the collaboration to either suppress dissent or to resolve dissent. But every once in a while, you get these ruptures where a disagreement within a collaboration doesn't get resolved within the organic process, but actually becomes public.


For example, the quote unquote 'observation' of the superluminal neutrino, where a group of physicists claimed that they had observed a neutrino traveling faster than the speed of light. That's a blockbuster claim, right? There were members of the collaboration who didn't think they should have published that, who thought they should have done more tests and looked for more sources of error before publishing, who disagreed with the result and didn't sign their names to the paper.


So, every once in a while, you get these small glimpses into these ruptures, but the ruptures are seen as socially detrimental to the cohesion of the collaboration. So, collaborators typically will do everything they can to avoid these kinds of ruptures.


That's very different from the type of community-level disagreement where scientific careers are actually made by disagreement with other people, right? By advancing a new interpretation, by criticizing other people's views, by offering a competing theory.

We're talking about these cases in a kind of idealized way, but you can think of it like this: at the community level, different scientists or groups of scientists are in competition with each other. Science proceeds by these competing groups, and that results in a lot of disagreement, but that's good for science, right? That's how science moves forward. But within teams, you have this very strong pressure to conform and to avoid these very harmful social ruptures. If you disagree with your collaborators too much, then you are not collaborating anymore, right? And then the group actually dissolves, and there are lots of collaborations that do fail.


[00:18:31]


Josh Eisenthal: That's very helpful, actually, to see the difference between the within-collaboration level and the community-wide level, and the way that disagreement is suppressed, to a certain extent, in one and welcomed, as very much part of the process, in the other.

Is there anything in your own research that is still unresolved that you're particularly interested in at the moment that you want to share?


Haixin Dang: I'm really interested in how to manage disagreement within collaborations. I have a paper coming out soon on minority reporting in science. I argue that when there is meaningful dissent within a collaboration, instead of publishing a paper that's meant to be a consensus statement of the collaboration, the collaboration should actually publish a majority report and a minority report. So, we publish both sides.


You can think of it almost like the US Supreme Court, where rulings have majority opinions and minority opinions. That's a way of communicating more about what's going on in the collaboration. You can see more of the reasoning going on in the collaboration. And it's also, I think, a way of maintaining cohesion.


You might think it's a very simple recommendation. It is actually quite controversial. There's lots of fear that allowing minority reporting would lead to negative downstream effects where special interests can come in. For example, in climate change, if climate scientists are allowed to publish minority reports, then oil companies can buy people out or publish these minority reports themselves. And that might be bad for science.


[00:20:12]


Josh Eisenthal: You could just say that the people who are disagreeing aren't disagreeing on the fundamentals. So, they're not giving support to climate change denialists. In fact, you can see exactly what the disagreement consists of. I can totally see how that could enhance faith in the scientific community and what they're working on.


Haixin Dang: Yes. I think allowing a minority report to be written actually delineates the area of disagreement, so that you eliminate these crackpot or fringe worries, right? You are able to outline, as a group of scientists with a range of differing opinions, what the actual relevant space of disagreement is. And I think that's much more valuable.


I think that might be a good strategy for physics collaborations: for there to be a mechanism for dissenters to register their dissent and actually write something without being kicked out of the collaboration.


Because right now, in most of these collaborations, the only way to write a dissenting opinion or a dissenting paper against the majority paper is to get kicked out of the collaboration and write it on your own. So, I really like this idea. I hope that by talking about it, and having this paper out there now, maybe some journals or some collaborations will be willing to take this up as a way of communicating more accurately what's going on within the collaboration, rather than hiding disagreement or letting the dissenters leave the collaboration.


[00:21:42]


Josh Eisenthal: No, it seems great. And the Supreme Court model seems apt and interesting. And just this idea of increasing transparency and therefore increasing trust in the process, I'd be very excited to see if this actually got picked up by collaborations.


Haixin Dang: I don't want to sound naïve, right? Because I do think this is a controversial proposal that has upsides and downsides. The worry about a proliferation of minority reporting, and about the capture of minority reports by special interests, is a real one. So, I don't want to sound naïve about it.


But I also think that it's worth trying. I'm not saying every single paper should be published using this model, but I think there might be a context in which the nature of the work actually calls for more nuanced reporting. I think that's when scientists shouldn't be trapped in one model of publication. They should have an option.


[00:22:37]


Josh Eisenthal: And there's that last point: that disagreement isn't necessarily a bad thing at all. It's, in fact, in a sense, what science is when it's in motion. I wonder how this gets perceived from outside the scientific community, how the public should try to navigate this, especially when we're the non-experts listening to the experts and the experts disagree with each other. How do we manage that?


Haixin Dang: I think the question of disagreement in science is particularly thorny when we're thinking about the public understanding of science. Scientists often will tell you that the authority of science in the public eye is maintained by the fact that they speak with one voice. The fact that scientists can speak with one voice, that they can come to consensus, is the reason why policymakers and the public should take on scientific claims.


Also, kids, as they grow up and they're learning about science in their science classes, they are often told, here are the results of scientific consensus. These are success cases of science going well.


So, we don't see the competition and how disagreement and dissent drive scientific progress, right? That is part of how the sausage is made, but we see the results, right? And the results look sanitized or cleaned up, where the disagreements have been reconciled or have been buried or have moved on to somewhere else.


So, the public perception of science is very much tied to perceptions of scientific consensus. Sometimes I talk to my students and my students think this, right? That when there is disagreement, there is no knowledge. That we can't learn anything when there is disagreement. If scientists disagree about whether red wine is good for your heart or not, you might think that scientists don't know anything about heart health.


So, the perception of disagreement can be very bad for science. It could be bad because it could delay public action. It could also erode trust in science, so that's another dimension which is bad, right? Any perception of disagreement among scientists can be used as a reason not to trust science. And not trusting science can lead to bad public decision making, but also bad personal choices, like choosing not to get vaccinated. It could also be bad because people are given a false understanding of how science works. If you think that science has to proceed by consensus, then you're missing out on the true social epistemic mechanisms that make science successful, right? Science is successful not because scientists come to consensus. Science is successful because scientists disagree and those disagreements are dealt with over time through this complicated, messy process.


I think the more scientists, public-facing scientists, try to hide disagreement, the more they play into the science sceptics' worries about science. I think precisely and accurately communicating disagreement to the public (maybe I'm being optimistic here, but the optimist in me wants to say this) could actually improve public trust in science. If the public can see how scientists are thinking through all the alternatives and criticizing each other, they can see that scientists are not engaging in groupthink, that they're in fact being very careful.


If we're waiting for the data to be absolutely crystal clear, it might be too late. I think we could do better than that.


[00:26:12]


Josh Eisenthal: And there's this question about science communication, where if you present a policy as if it's set in stone, completely 100 percent backed up, then when you change that policy, people unsurprisingly feel like the rug has been pulled out from under them. They don't know what to think, and they don't know how to trust this source.


 Whereas if you were to present to the public a more honest view, where it's like, ‘We don't yet really know. This is our best advice right now. Please follow the best advice that we have now, and then we'll update you as we go. We’ll tell you how sure we are and what the evidence is as we go’ then it wouldn't result in the public feeling completely wrong footed when they're told something with seemingly absolute certainty and then later told something different again with seemingly absolute certainty.


Haixin Dang: The current strategy a lot of people have taken is to not communicate uncertainty or complexity at all because it's too difficult or too likely to mislead or something. But I think that's the wrong strategy. I think the better strategy is to try to communicate uncertainty and complexity to the best of your abilities.


[00:27:17]


Josh Eisenthal: Was there anything else that you wanted to mention that we didn't get to?


Haixin Dang: The other thing I wanted to mention is another open problem in thinking about disagreement in science: what some philosophers of science have called normatively appropriate dissent. So not all disagreement in science is appropriate.


I think it's a really important point. So, it's not just any dissent that is valuable in science, right? It has to be dissent that actually latches on to the data in some way or latches onto the method, latches onto some important epistemic features of the problem.


As an academic, I'm sure every once in a while, you get these emails from someone who says, ‘Oh, I can prove to you that God is actually the strong force’ or something.


Josh Eisenthal: We get some funny emails concerned with Einstein at the Einstein Papers Project sometimes. That's certainly true.


Haixin Dang: But you don't take those emails that disagree with your view about Einstein seriously, right?


Josh Eisenthal: No.


Haixin Dang: No, right. But there are certain disagreements about Einstein that you do take seriously. So, what is the line between the disagreements that should be taken seriously versus the disagreements that you can dismiss?


It's an open question and a question that I think some philosophers might think that we would never really have a satisfactory answer for. Because if you look in the history of science, you have lots of these crazy fringe beliefs that actually turn out to be true. Like when atoms were first proposed. People thought, that can't be right. Or when the heliocentric universe was first proposed. People were like, that's crazy, the earth obviously doesn't move. So, it's very difficult to draw the line between dissent that you should take seriously versus dissent that you should dismiss.


But that also makes working in this area really interesting: thinking about what kinds of scientific disagreement are worth engaging in, which scientific disagreements are actually productive, and which are actually counterproductive to science, disagreement that is poisonous to the conduct of science. Trying to delineate that is very difficult.


[00:29:27]

Josh Eisenthal: Right. We saw a lot of that kind of question with COVID, in terms of which dissenting voices we want to allow into the discourse versus the ones we really don't, because they're going to muddy the water in a very unhelpful way.


Haixin Dang: There are going to be bad actors, right? So there's the 'merchants of doubt' kind of model, where in order to sow public distrust in science, you just need to sow a little bit of doubt. Just push back a little bit on the data and you can have really detrimental effects. It's very difficult to eliminate bad actors, or even to tell who's a bad actor and who's not.


Josh Eisenthal: Thank you, Haixin. This has been a real pleasure to talk to you about this.

Haixin Dang: Thanks for having me.


Samara Greenwood: Thank you for listening to season three of The HPS Podcast. If you're interested in the detail of today's conversation, you can access the transcript on our website at www.hpsunimelb.org. Stay connected with us on social media, including BlueSky, for updates, extras, and further discussion. We would like to thank the School of Historical and Philosophical Studies at the University of Melbourne for their ongoing support. And thank you for joining us in the wonderful world of HPS. We look forward to having you back again next time.


