The Not Unreasonable Podcast
Subscribe in iTunes, Stitcher, or by RSS feed. Sign up for my newsletter here and also see us on YouTube!
Show notes at notunreasonable.com
Brian Nosek on the Gap Between Values and Actions
Brian Nosek has been at the center of the two most important recent social revolutions in academia. The first is implicit bias, where Brian co-founded Project Implicit http://projectimplicit.net/ based on a pretty incredible idea: that we don't do what we say we value. The concept of implicit bias has really taken off, and the practice of implicit bias detection and training has gone "way out in front of the research," as we discuss.
While he was busy kicking off a fundamental change in our society (felt very strongly in academia), he decided to upend (and massively upgrade) the culture of research itself by discovering that huge swaths of empirical research fail to replicate. I'm no academic, but I would find this terrifying if I were. As Brian says in the interview: "in some fields, people still don't like getting an email from me," because that means he's about to try to replicate their work!
How was Brian able to pull all this off? There's even a technology innovation hidden in all this that makes his work possible. He's a true innovator, and it's an honor to have him on the show!
show notes: https://notunreasonable.com/?p=7611
youtube: https://youtu.be/NkKuF--5V60
Twitter: @davecwright
Surprise, It's Insurance mailing list
Linkedin
Social Science of Insurance Essays
My guest today is Brian Nosek, professor in the Department of Psychology at the University of Virginia and co-founder of Project Implicit, a multi-university collaboration for research and education investigating implicit cognition, the thoughts and feelings that occur outside of awareness or control, and co-founder of the Center for Open Science, which operates the Open Science Framework, enabling open and reproducible research practices worldwide. Brian, welcome to the show.
Brian Nosek:Thanks for having me, David.
David Wright:The first question: have there been any non-replicated results in psychology that have actually impacted the world outside of the academic discipline of psychology? So, you know, the example that I think of is, if we found out radio waves didn't work the way we thought, lots of stuff would break soon, and that would impact the world. It would be a really big deal. Has there been an equivalent kind of, you know, real reverberation outside of the academic community that you found?
Brian Nosek:There have been a number, although not as dramatic as something that we think is being used in practice productively and suddenly we realize it doesn't work; it doesn't quite happen that way. But there are certainly areas of psychology that have gained popular attention, and even efforts to translate into practice, that have now gotten a lot more additional scrutiny of, is this really something? And there's a simple and then there's a hard answer to that question. The hard version is the right version, but the simple one I'll say first, which is, there are a few that got particular attention on challenges to replicability. One is the idea of ego depletion, that willpower is like a muscle: in the same way that we can get ourselves physically tired and need time to recuperate our muscles in order to be able to do more physical things, our willpower may act like a muscle, where if we engage a lot of self-control, then we lose the ability to control ourselves for a little while until it recuperates. Very appealing idea, lots and lots of research around that idea, lots and lots of potential applications. And some of the key findings several years ago failed to replicate, and it became a very hot area of debate. A second one, just to put a couple on the table, is the idea of social priming: that we are awash in things happening in the world that may highlight particular ideas in our heads, and just those subtle things that activate a concept may then have lots and lots of consequences for how we end up behaving. That's a very vague way of stating it. Some of the grounded demonstrations of that that have come under scrutiny are examples like making people think of words associated with the elderly causes them to walk slower afterwards, compared to if they hadn't had those concepts primed in their heads. Or, metaphorically, sitting next to a box causes people to think of more creative uses for a novel object as compared to sitting inside the box, because they're working outside the box. Okay, so, yeah, there's some laughter there, because that's like, whoa, that's crazy. But that is a published study from the heyday of the priming literature showing that such things could happen. So I'll pause there, because that's part of the simple story: here are two big concepts that seem, from replication efforts, to be difficult to observe again.
David Wright:The thing that I'm kind of interested in, you know, as you're talking there, is that some ideas seem ridiculous, right? And some ideas seem intuitively appealing. I'm curious about how the conversation inside, whatever you call it, the lunch room at lunch amongst academics progresses here. Because I just did a podcast with Tyler Cowen recently, and one of the things that I do as a non-academic in studying these ideas, which I do on my own time because I'm fascinated by them, is I read the literature. And I read quite a lot of it on Google Scholar, and I'm interested in your thoughts about preprints and the rest of that, because that's mostly what I read. But the thing that I don't have access to is a network of academics that I can talk to and knock ideas around with. And what Tyler said to me was disappointing to me, because he said, well, that's actually how learning happens. You can read all the papers you want, but really, we train ourselves by interacting physically with people, right? And so we enrich that communication channel between first-rate thinkers, and that's actually how we figure this stuff out. We don't read papers and sit around and think about it for a while and come up with a new paper. And so I'm wondering what your take is on the impact that this replication project has had on those conversations. Like, did academics kind of not really buy the ones that weren't replicating, and so not really care, and this was sort of this weird thing going on in a sideshow nobody cares about? Or how impactful has this been to how academics actually think and learn, as Tyler describes it, right?
Brian Nosek:Yeah, what I think the replication movement did on this issue is turn those backroom conversations into a shared conversation. So for example, I mentioned the elderly slow-walking study; this is from a classic paper, 1994, 1996, right as I was entering grad school. This paper came out demonstrating this effect, and everybody's like, that's amazing, that you could prime these subtle ideas and have these consequential behaviors, like how fast someone walks, affected as a consequence. And so that opened up all kinds of new possibilities of how we might be being primed with lots of different things and what consequences that might have. But what would happen in the conduct of research was, you know, our lab was relatively close in terms of personnel and personality and social connections to the other labs that had done or were doing related work. And one of my colleagues in my lab, another grad student, tried to replicate that finding, like in 1997, and was not able to obtain the result. So of course she says, oh, well, I must have screwed something up, right? Because what do I know? So she goes to the next conference, you know, talking with people at the bar, and somebody else from another lab says, oh yeah, I couldn't get it either. And then, you know, a couple years later, someone else says, hey, I couldn't get that either; we tried like three different ways. And all of that's happening, but in a very small group, right? So that paper is having a dramatic effect, it's spawning all kinds of research, and there is a small cluster, a close social network, where people are saying, wait a second, this is hard. Not necessarily concluding, so therefore it's false, or whatever, but rather, this is more complicated than it seems. But all of that is sort of behind the scenes, very subtle. Once replication became a social thing in the field, with these efforts to replicate prominent findings and then the media scrutiny of that, those conversations became very public. Lots of people say, oh yeah, that problem, I know it. And it's like, oh my gosh, it's everywhere. Yeah, what do we do? So I think there is a very strong element of Tyler's point that is accurate, which is, much of the advancement in understanding and direction is via informal social networking, whether it's in-person, face-to-face conversation or otherwise, but really just more outside of the paper itself. Papers are benchmarks, little tools that we use to communicate, but so much more communication happens in the community, as a community.
David Wright:You wrote an interesting paper recently, which was titled "we're in this together," where you talked about how, in, I think it was social priming, there were two kind of schools of thought, we could say, and they just didn't talk to each other. So the way I imagine it, idealistically, wrongly, I guess, is we have this body of knowledge out there, the corpus of all academic published work, as it appears to me as a Google Scholar aficionado, and I can look at papers, they cite each other, and I can follow the chains of citation and learn more about the state of the art of a particular idea than I otherwise could, and eventually the academic process is going to resolve itself into a fewer number of more accurate ideas over, who knows, tens of years, generations. But it seems like sometimes these groups can splinter off and keep going down a path on their own. And so it's kind of like, you know, there's this thing in technology, Conway's Law: you ship the org chart, right? It's like somehow we've represented the social network within the academic literature's interlinkages. And that makes me pessimistic that papers do anything different than just represent social connections between people. Can you talk me out of that?
Brian Nosek:I don't know if I can talk you out of it, but I am nevertheless more optimistic. We are making progress, and your comments are great because they prompt getting back to, as I mentioned, there's a simple and a hard story about replication, and this example kind of prompts that hard story. Which is: any single replication, and we'll focus on social priming since that's what we're talking about, does not demonstrate that a particular idea is true or false, just like any original study doesn't, because the ideas are always more abstract and general than any one particular experimental paradigm. So priming as an illustration: I can argue with evidence that priming is a highly reliable, replicable phenomenon, because it is. When I say bread, you're much more likely, and much faster, at identifying butter as a word than a word that's unrelated to bread. The semantic relation between bread and butter makes it easier, facilitates the processing of the subsequent word, or, when it's ambiguous, making that interpretation. That's a super reliable phenomenon. And there are even steps away from that direct semantic connection, to evaluative connections, that are highly reliable: I give you a prime of something that you like, a sunny day, and if it primes positive aspects, you'll be faster at processing other things that are positive compared to things that are negative. There's some leakage there. So those happen on the scale of milliseconds, and are very small, in some fashion, kinds of influences between the primes and their outcomes. That's one tradition in the priming literature that is robust, has matured, and in the community of scientists, you might say, is at the micro level. That tradition in the priming literature is, let's go deeper and deeper and deeper and figure out and unpack exactly the methodology that's making this work. And so there's tons and tons of replication, there's paradigm building, there's, let's tweak it this way and that way. My colleague Tony Greenwald was trying to push the limits on how much this can happen unconsciously, and so presenting those primes more and more briefly. And he even found that it's not even full words that are being processed, it's portions of words: you can take two words that, say, both have positive meaning, but take parts of those words that have negative meaning, put them together, and the prime still serves as a positive prime because of the earlier parts, all kinds of word stuff. But it's really getting down into the methodology. The other tradition, and I'm oversimplifying all of this, of course, but the other approach was go big, like, whoa, this primes that; what else can things prime, what is the next almost amazing thing that could be primed? And that's where the walking study comes in, and, you know, a prime of a single exposure to an American flag making people more likely, eight months later, to vote Republican than Democrat, the outside-of-the-box example we talked about earlier. So those did drift apart, and they became, in some ways, insular communities studying similar things. Citations didn't cross the boundary very much, and the communities of researchers sort of started to form their own subgroups; even at conferences, you wouldn't have these two groups necessarily presenting at the same pre-conference meetings or in the same symposia.
And so that is, I think, on the discouraging side of what you're describing: camps, like everywhere else, can kind of emerge, and you get sort of, this is what we do, and we are self-reinforcing, because we're all in this boat together of believing in these phenomena and trying to understand and unpack them. And sometimes it requires some intervention from the outside, in some nominal way, that really gets some of the fundamental assumptions to be questioned within that particular area.
David Wright:Well, I think what you want to maybe find is some way in which they conflict, right, or something that they disagree on, so you can figure out who's got the better, I don't know, tool for the job. You want to resolve that separation. I've recently been kind of looking into theories of motivation. There are several schools of this, but two of them that I've researched most deeply are self-determination theory, and then another one called self-efficacy theory, right, Albert Bandura, and then Richard Ryan and Ed Deci. Yep. And there's even a paper that compares different theories of motivation out there, which I read, but it just sort of has a chart, and it says, here are some things that are similar amongst them. And in your "we're in this together" paper, you have this connectedpapers.com website, which I was messing around with, wondering how these guys shape up in it. And man, it's like different galaxies. Like, yes, Brian, they're talking about the same bloody thing, right? You've got to be kidding me. And there's one reference, and I forget which one referenced which, twice, and one of them referenced the other one once. And it's like, you know, I'd have to read a big thick book that has everything in it, right? Hundreds of pages. How do you fix that? What do you do?
Brian Nosek:Right now? Yeah, this is a big problem. And, you know, the meta problem is that the reward system encourages researchers to have their theoretical position, their point of view, their theory, that accounts for some things in the world. And so saying, I'm working on somebody else's theory? Well, that's pretty weak, right, if you have to work on somebody else's theory. It's funny, that's not true in a lot of other disciplines, right? No one says, oh, I'm working on Einstein's theory. Right, yes. But in areas, perhaps, that are more speculative, that don't have a stronger basis of knowledge that we are collectively building from, perhaps then, you know, there's even more incentive for individual researchers to claim some space. I'm the motivation guy, and so my account is the account of motivation. And there is not much incentive for two different motivation groups to really work on resolving how their points of view are similar or different. Plenty of debates happen in motivation and other types of theories where you're like, man, they're saying the same thing, why don't they work closer together? Debates happen, but they happen as potshots, right, you have your,
David Wright:a diss tape, like you're a rapper
Brian Nosek:From as far away as possible, trying to destroy them from a distance without actually engaging them hand to hand, right. And that obviously doesn't resolve anything, because they just sort of drift along in parallel. So one of the things that is, to me, really interesting, and the best example that's happened recently, is from the Templeton World Charity Foundation. They decided, you know, there are these six or eight prominent theories of consciousness. They're a funder that cares a lot about consciousness research and wants to foster progress. And they sort of observed the exact same thing you did with the motivation researchers: none of these folks are talking to each other. What's up? Like, do any of these theories make different predictions? Do they have a different way of accounting for the phenomenon? Is there some way where we could line them up and say, oh, if this occurred, that would support this theory over that theory? Like, how are they making progress? So what they did was they organized a series of meetings between camps, as it were, okay? And they would have two groups come together, and they'd say, you're here in the Bahamas, right? They'd bring them to the Bahamas or wherever: you're here, you're going to sit in this room for three days, until you come up with critical experiments where your theories make different predictions. Interesting. And I think, I'm not sure if this is right, but they did this exercise like four different times, and once or twice they were able to get the teams to get to experiments that differentiated predictions, and not easily: lots of yelling, lots of fighting, lots of, no, the theory says this, no it doesn't, talking past each other. But being confronted with the situation of, we are here to design an experimental context where we have different predictions, really forced them to confront, oh, do we have different expectations? And the times where it didn't successfully conclude, the program officer that I chatted with about this said that, you know, sometimes people just left the table. They just didn't want to; this was too ego-challenging to confront. And so they just said, I'm not doing this. That's amazing.
David Wright:You know, one of the things that most impresses me, Brian, about the things that you've done is how much of just that kind of cat wrangling you've been able to do in your career. I mean, it's amazing, the two organizations you founded. I wonder if you could reflect a little bit on what you have learned about getting people together to kind of work on problems. I mean, the number of co-authors on these papers, for that matter the number of teams involved to get that co-authorship, like, how many different studies replicated, and getting people together, getting them motivated to work on the same thing. And, you know, what have you learned about that?
Brian Nosek:Yeah, it's been a really interesting experience being involved in these large-scale projects. One part of the answer is that what I learned early on, in trying to form some of these highly collaborative projects, was that there were a lot of other people in the field that had similar interests and concerns as I did. Right? So this backroom conversation we talked about, where, you know, people are saying, oh, I don't know about this finding; everybody is having that kind of conversation about whatever is their area of interest. So there were so many people that felt like the culture was just different than where they were. The research culture was feeling like, okay, everything's fine, and we're finding more and more amazing things, and they're sitting there saying, what is going on here? Like, am I the only one that sees that we have this problem, or that this finding probably doesn't replicate? And so just the act of saying, we're going to do this project, we're going to try to replicate some findings, you want to help? That was it. And dozens, ultimately hundreds, of people said, yeah, I totally want to, because I'm really motivated. And the helping, for those projects, was done entirely from a service motivation. There's nobody that joined these projects with 300 co-authors that thought, this is how I'm going to make my career. Yeah, right. No, they're saying, I am passionate about this, I care about this problem, finally we can make some progress on it. Because we needed it to be a large-scale project, and no individual person or lab or group could have possibly pulled it off, we had to find this way to do it horizontally. And so they see this, we might actually be able to make some progress, and so people jumped in and helped. So that part was great. The other side of it is in managing the conflict part. Wading into replication, particularly when it was not a common thing, caused a lot of angst. And reasonably so, right? We don't do organized replications in our field; you're emailing me, Brian, saying you want to replicate my work. The only interpretation I can reasonably have of that is that you don't trust me, and you're trying to take me down, and you're attacking me. Like, that is a reasonable interpretation when replication is not a common thing. Now that it's been normalized, in some fields, people still don't like getting an email from me, but they respond, you know, fine, here are the materials, and good luck to you, jerk. But it's a much more collegial kind of knee-jerk than the original responses. So part of it was just shifting the norms. But the thing that, for me, in entering that, and that we developed as a strategy for doing these projects, is really living, practicing the values that we are trying to bring to science to the maximum extent that we possibly could. Right? Instead of entering something and behaving according to the existing standards, we would approach people and say, here's our project design, it's all public, go check it out, you can see what we're doing, here's what we're asking for if you want to get involved. If you want to critique it in advance, we welcome that. Once we have the design, you don't have to do anything, but here is our design; if you want to look at it and give any critique, tell us how we might do it better, we'd love whatever feedback.
So being fully transparent right from the outset, having high accountability standards for ourselves, giving original researchers as much opportunity to weigh in and comment on the designs as we could. All of that, I think, paid dividends, because that default stance of replication-is-an-attack could not have been unseated, I think, without some community of practice that really worked on shifting the social part of that as much as the evidence part.
David Wright:That leads into my next question perfectly, actually. I don't know if you've read much Randall Collins, but he's got this book, The Sociology of Philosophies, where he talks, as a sociologist, about how schools of thought, movements, don't just kind of happen from nowhere. I mean, he documents a whole bunch of circles of people that got together, and they hung out, and they sort of developed this conceptual framework, this approach to the world. There are tons of them; I mean, every major school, and we're talking about thousands of years of history that he surveys here. It's a movement; this is a social phenomenon. So he's saying philosophies are a social thing, which is super interesting. And your point there is, I think, one he would agree with, which is it takes a whole bunch of people, it takes a village, to kind of move the needle philosophically for a community. They've got to be excited about it. And, you know, this isn't the first time you've kind of created a community like this, but, I mean, holy cow, has it worked. What I'm interested in is, like, physically, how did you get the first few people in the tent? And did you realize this is how we're going to do this? Or did you try kind of replicating on your own first and, you know, it didn't work out and you're like, I guess I need more people on board?
Brian Nosek:Yeah. So I had an easy way to start, which is I had a lab with grad students, okay, and I'm in charge; I get to say, this is what we're doing. I hope that I did that in a way that was positive, effective leadership, and inspired interest in these ideas rather than commanding. But nevertheless, that's what worked. I started Project Implicit in graduate school, measuring implicit biases for social categories, and you can go to our website, implicit.harvard.edu, and measure your own biases. That work was very useful in creating an opportunity to conduct replications very efficiently, as a side effect, because the website got very popular. So we had thousands of people coming in every week to do these tests, and we started a research site on the side. And what we ended up starting to do, just as standard practice in the lab, was, when we saw something interesting in our area of research in the published literature, we would first run a replication, just to say, oh, let's make sure we can get it in our web format, because these things were almost always shown in the lab; web research wasn't that common. And because participants were cheap, right, we didn't have to do the cost analysis that others did, which is, am I really going to spend my participants replicating a finding when I only have this much access to data? I should really try to do something new. We said, oh, we'll just run it again, make sure we can get it, and then we'll tweak it into a variant for the new question that we wanted to investigate. And this is the early 2000s, mid-2000s. And over and over and over again, we would fail to replicate. Now, at that time, we had the ambiguous interpretation of, oh, well, maybe this just doesn't work on the internet, right? Maybe it's because they did it in the lab and we're on the internet, I don't know. And while I didn't think that was very plausible, because I'd been able to find all kinds of other, very similar things on the internet, it nevertheless was a barrier to broader change. But we started to accumulate that experience. And it accumulated so much that it ended up that I had designed a couple of experiments where we did the exact same studies through our web infrastructure, and in the lab through the web infrastructure, and with lab participants through the internet. So they would, you know, come to a lab, but still do it through our web infrastructure. So we said, we're really going to figure this out. And we showed the effects occur in all the places for that study. But that design actually became the thing that we transported to be one of these large replication projects, where we said, oh, it's not just our web-versus-lab thing. It's all of these explanations that researchers have about why effects don't occur when you change from Iowa to Indiana, and then from Indiana to Ohio. Like, of course, they're totally different; Midwesterners in different parts of the Midwest are just different people, so of course you don't replicate the finding. And we're saying, oh, wait a second, can we really default to that kind of explanation? And so, this is a long way around to your question, which was: we sort of built up that initial base slowly. The first step was, we're just doing this as lab work. And then we started to talk to collaborators doing the same thing and say, oh yeah, we're interested in this. And then the real opening was saying, why don't we just make public what we're doing?
And then that's when people just came out of the woodwork. Dozens of people, literally within a few days. We posted, we're going to try this big replication project, anybody want to help us out? And within days, we were up to, like, 75 people. I'm like, oh, okay,
David Wright:that's so cool.
Brian Nosek:We can really do this.
David Wright:Yeah. That's so cool. What's amazing to me about that, and I love that story, is that the first thing you did was make replication cheaper. That's right. Yeah. And that's through core software, and you have a software background, right, from college, which kind of kicked all that off. And once it's cheaper, you can do more of it, right? Classic software economics. That's pretty cool. Nicely done. So I think that replication is a pretty fascinating thing; we've been using that word a lot. One of the things that surprised me in reviewing some of the work you've done is that the word replication contains multitudes a little bit. Maybe you could define replication for me and talk a bit about kind of your strategy for replicating studies, and, you know, what some strengths and weaknesses of different methods are?
Brian Nosek:Yeah. So the term replication is itself a source of debate, both in what is a replication and then in how do you interpret the outcomes of replication. For me, the simplest definition of replication is testing the same question using different data. So this would not include what is sometimes called replication but we would say is reproduction, which is using the same data over again, just to see if you apply the same analysis to the same data, do you get the same result? Right, that should occur 100% of the time: you have the same data, you're doing the same analysis, the same outcome should occur, unless metaphysics... okay, we're not going down that path. All right, so I'm going to assume 100% reproducibility should be achievable. One step away from that is we might make different decisions in how we would analyze that data. Same data, but there are all kinds of idiosyncrasies in deciding how to deal with outliers, whether there are transformation rules; there are always decisions to be made in that analysis. So that would be testing robustness: we're going to apply different decisions that are reasonable; how likely is that result to remain across those decisions? And then replication, going the furthest, is, let's get some new data and try to test the same question in a new context. Where there gets to be a lot of debate is, what does it mean to test the same question? Do I have to use the identical methodology? If I find a problem in the methodology, do I improve the methodology? Am I then testing the same question? And then, if I talk about it in terms of a question, which is what we advocate, like testing the same idea, then it's no longer about the rote experimental context; it's really about the conceptual question. So our argument about replication is that saying something is a replication is a theoretical commitment. It's saying, I think, given what we understand about the world, everything that I know, that this experiment should be anticipated by that prior experiment and what we saw in its results. So I should get the same kind of result in this new context, regardless of what has changed, because with any new data, something will have changed.
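For readers who think in code, here is a minimal sketch of the three levels Brian distinguishes above: reproduction (same data, same analysis), robustness (same data, different reasonable analytic decisions), and replication (same question, new data). The data, the one-sample test, and the outlier rule are hypothetical stand-ins chosen only to illustrate the distinction, not anything from an actual study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
original_data = rng.normal(loc=0.3, scale=1.0, size=100)  # stand-in for an original study's data

def analyze(x, trim_outliers=False):
    """Test 'is the mean different from zero?' with one optional analytic decision."""
    x = np.asarray(x, dtype=float)
    if trim_outliers:
        x = x[np.abs(x - x.mean()) < 2 * x.std()]  # one of many defensible outlier rules
    return stats.ttest_1samp(x, 0.0).pvalue

# Reproduction: same data, same analysis -- should give the identical number every time.
print("reproduction:", analyze(original_data))

# Robustness: same data, different reasonable analytic decisions.
print("robustness:  ", analyze(original_data, trim_outliers=True))

# Replication: same question, new data collected under the same design.
new_data = rng.normal(loc=0.3, scale=1.0, size=100)
print("replication: ", analyze(new_data))
```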
David Wright:And one of the things that you contend with, I think, in that last form, which is probably the most powerful, is that you should be able to change some things that don't matter and kind of come up with the same result if it's true enough, right? It doesn't matter if I'm looking at the sun from the moon or the earth, I'm going to get similar gravitational measurements, as long as I know the right way to translate that into the new domain. It's still just gravity, right? From a different perspective, with a different experiment. The thing that I always wrestle with when I think about this stuff is just how complicated people are. And we're talking psychology here, right? So you've got to narrow the thing down to something really super tiny in order to be able to, you know, actually attack the same question. You know, back to this point about the different schools of thought, it's almost like all of these things can come out true, because of how complicated people are; the situation is always ever so slightly different. You know, talk me through how and whether you can actually achieve that deeper source of insight in a replication process, or whatever you might call that.
Brian Nosek:Yeah, it's a really interesting problem to unpack, because, you know, what we're trying to do, obviously, in science is develop explanations that predict and account for the regularities that happen in the world. And replication, I think, because of the complexity you describe, which is that lots of variables are there and they might be influencing each other in unexpected ways, becomes a way to really get theories from being vague and general to much more precise, and to clarify the boundary conditions of when these kinds of phenomena will and won't be observed. So what always happens in research, including in my own papers, is that we will do some experiments, and we will offer an explanation that is invariably more general than what the experimental context was, right? We set up this situation where we give people this kind of thing, and they make this kind of judgment, and then we say we have a more general account about how the world works. But someone says, okay, well, if I twist this and twist that, I don't see what you saw. And I say, oh, well, but you didn't do exactly that, right? So now I have to qualify my theoretical account to take into account this place where it doesn't occur. But that's really where replication is productively confrontational to our ideas. We have a more general explanation, and what I should be able to commit to, if I'm really trying to advance my theory, is that when you present me with a design, ideally one where I don't know what the results are yet, so I can't rationalize my way into why those outcomes should have occurred the way they did, I should be able to commit to what you will find. Will that reproduce what we saw before, or will it not? Or is it in a zone where I can recognize my theory isn't mature enough yet to predict? And if it's in that last category, the ambiguous zone, I don't know if it's going to work in the situation you're setting up, then that's not a replication test, it's a generalizability test. Right? Replication is very clear expectations: this is what my theory anticipates, this is what should occur. Generalizability tests are, well, the world is complicated, and there are lots of other variables, and the theory isn't specified there, so let's try it out. I'm totally curious to see if it happens in your context. Oh, it doesn't? Okay, boundary condition. It doesn't qualify my theory yet, because I didn't have expectations, except that now it sort of rules out some area of potential explanation. But it also doesn't threaten my position on the theory, because it didn't directly challenge core evidence. Whereas a replication, if you had done it and I agreed in advance, yep, that's testing the same question, failing to replicate demands some updating. I may still figure out after the fact, oh, this is why it didn't work, or it's just chance; there are all kinds of reasons I might end up preserving my original expectation. But I need to be confronted by that in a different way, to take seriously that I had an expectation and it didn't occur the way that I thought it would.
David Wright:So I want to touch on implicit bias for a minute, and the way I want to move into that is maybe you can talk a bit about the amazing thing that is the website, Project Implicit. I went on the website and did some testing myself; it was fun and interesting, so I exposed myself a little bit there to your efforts. I think the best way to probably hit this is maybe you could describe a little bit about what it does. But what I'm interested in is kind of how you used this ability to turn on a dime, because it's software, you can just sort of spin up a new test, to follow your learning journey and test the boundaries of the idea. Because that process is what enabled you to hopefully learn more about the underlying phenomenon than you otherwise would have been able to.
Brian Nosek:Yeah, no, that's great. So there are a variety of different methodologies we have used, but the primary one, the Implicit Association Test, is the featured test on the website, and you can use it to measure association strengths for a variety of different concepts. So the basic idea, for those that haven't experienced it, is: take a deck of cards, and instead of four suits, have four different categories. You can have male faces and female faces, and then words representing science and words representing humanities. The participant then takes that deck of cards, shuffles it up, and has to sort it into two piles, and they have to do it twice. The first time, they sort all of the male faces and science words into one pile, and all the female faces and humanities words into another pile, and they sort as fast as they can, and I'm sitting there with a stopwatch figuring out how fast they can sort that deck of cards. Then they shuffle it up again, and now they have to sort it again, but with the categories changed: now they have to put the female faces and the science words in one pile, and the male faces and humanities words in the other. And you can anticipate what we observe in comparing how long it takes to sort in these two conditions. It's easier for people, regardless of their conscious beliefs; I'd say 80% of respondents find it easier to sort male faces and science words together compared to female faces and science words. And the interpretation is a straightforward one: if those concepts are more associated in our memories, it should be easier to do the same action, put the male faces and science words into the same pile, go left, than if those things are not associated in memory; it's harder to put female faces and science things together. So the base concept there, of how the methodology tries to assess associations, has a strong evidence base in the literature of other kinds of methodologies that measure strength of association. But it also has a unique application, and then all kinds of different things that could be done with it: oh, let's trade out the gender faces and put in faces of different races, let's change out the science and humanities words and put in terms meaning good or bad, or myself versus other people, or other things; you can start to really vary it. But one of the things that we were able to do, because the website engaged lots of interest, was really zero in on lots of interesting debates about how the methodology was actually working, and how we could refine the methodology to really try to isolate the substantive content of interest. Because the test evokes surprise in respondents, myself included: with each of the tests on the website, I would always test it on myself first. Oh, I have this bias. Now, this one works, okay, let's see how other people react to it. And it can evoke reactance, because it can be different from what our conscious beliefs or values are. I don't want to associate men and women differently with science; I think everybody that is interested in science should go into science. I don't want to associate black and white people differently with positivity; I value everybody. I don't want that, so why would it be coming out of my head? So that response has been very useful in engaging the research community and the public with the methodology. Well, if it's not about race or about gender, then what about the methodology might have produced it? And then let's test that, right?
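For the technically inclined, here is a minimal sketch of the latency comparison just described: compare mean sorting times between the two pairings and standardize the difference. This is a simplified stand-in, not the published IAT scoring algorithm (which adds error penalties and latency filters), and all the numbers are hypothetical.

```python
import numpy as np

def iat_effect(congruent_ms, incongruent_ms):
    """Standardized difference in mean sorting latency between the two blocks.
    A larger positive value means the first ('congruent') pairing was easier (faster)."""
    congruent_ms = np.asarray(congruent_ms, dtype=float)
    incongruent_ms = np.asarray(incongruent_ms, dtype=float)
    pooled_sd = np.concatenate([congruent_ms, incongruent_ms]).std(ddof=1)
    return (incongruent_ms.mean() - congruent_ms.mean()) / pooled_sd

# Hypothetical per-trial latencies (milliseconds) for one respondent.
congruent = [650, 700, 640, 720, 680]      # e.g., male faces + science words sorted together
incongruent = [780, 820, 760, 850, 800]    # e.g., female faces + science words sorted together
print(f"IAT-style effect: {iat_effect(congruent, incongruent):.2f}")
```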
So maybe it's the order. Oh, you just told me to sort all of these male faces and science words together first; maybe if I did the female faces and science words first, maybe I would have been faster on that, because you practice doing one thing and it's hard to unpractice it, and that doesn't have anything to do with gender, it has to do with the method. So we would run dozens and dozens of experiments, testing all kinds of different variations of the methodology, some of which do have an impact; it does matter which one you do first. But then we can refine the methodology to do as much as we can to eliminate that effect. So the progressive nature of science, and our experience with implicit measurement is a perfect illustration, is both expansive and contractive at the same time. So, you know, the initial demonstrations: oh my gosh, everybody has bias, and oh my gosh, this is going to have all these implications in the world. Well, no, that's not quite right, because the methodology actually says not here, not here, and maybe it's more limited in this case. And then, simultaneously, researchers are saying, oh wow, here's a new way in which we see that these are related to biases expressed in the world. Like, there's a recent publication that takes some of the Project Implicit data and looks at the average level of bias in geographical units in the US, and finds that the average level of implicit race bias is associated with the number of slaves that were in that geographical unit in the 1860s. Like, that's crazy. And maybe that would not be robust to other variations; you also can't make any claim for its replicability yet. But the researchers that did it said, you know what, this might be helping us understand a little bit about the origins of these biases, that they are fundamentally embedded in the culture, that how we get these implicit biases is communicated through the social culture that's immediately around us. And so some areas of the country that have a deeper history with these challenges may have those kinds of biases more embedded in their minds than other places. I've wandered a little bit away from the question,
David Wright:No, no, that's perfect. Because what's fascinating about that, and there's a paper I wanted to touch on which was pretty stunning for me, which I think I'm interpreting properly given your other comment, is this: there's implicit bias, which they've measured there. They didn't measure explicit bias in those areas, but you could probably measure that too, and you might find some things that are consistent with it. And then you have this causal factor, which is a third thing, this history of slavery. And you put a paper out a little while ago that talked about how changing implicit bias is kind of hard to do, changing explicit bias is even harder, and, by the way, it doesn't seem to matter whether you change the implicit or the explicit, right? And so it's actually something else underneath all that. And so we kind of get distracted by this concept of implicit bias. But, you know, keeping your eye on the ball for what behaviors and actions are actually happening in the world is something you might want to keep in mind.
Brian Nosek:Yeah, no, that's exactly right. And this is one of the areas in which the application of implicit bias research went way out in front of the research evidence, right? Lots and lots of people have gone through implicit bias trainings of one sort or another at their school or corporate environment or wherever they work. And I can't say all, but I can say much of that is inert with respect to actually changing behavior. And the reason I can say that with some confidence is that we've studied behavior change associated with these sorts of educations, and we haven't yet found reliable ways in which these sorts of training interventions have impact on notable behaviors. And I give lots of lectures on this to educate about implicit bias, or at least I used to, and one of the most impactful ways I can explain to the audience that it really isn't just a simple training that will change the bias within an organization or their context is to say: I've been studying this since 1996, and I still have implicit bias. It's like my full-time job to study it, all day, all the time, and I still have it in my head. You can't just take an hour session and say, oh good, we're all done with that. So that's one challenge: just the leap from, wow, this is really interesting, we want to study it, to, oh my God, this is a social problem that requires some intervention, and then pursuing that intervention without actually evaluating what impact it has. The second interesting part, and this is what you're referring to in that paper, is that it's not at all clear that we need to change the implicit bias. Once we demonstrate that it happens, the obvious thing that people would say is, oh my God, we have an implicit bias, we don't want it, so let's change it. And the evidence that's accumulated in the basic literature to date says that's pretty hard. And even if we were thinking it's possible, there's a pragmatic element, which is, which implicit biases are you going to change? Okay, we affected your race bias; now how about gender, how about sexual orientation, how about age, how about... you can go on and on and on. We have biases about everything; that's just part of how our minds operate and distill information about the world. And sometimes they're consistent with our values, and sometimes they aren't. So as a practical intervention, trying to change implicit biases may not be the right way to lead in terms of trying to meet the equity goals that we have for addressing bias in general. A more effective strategy may be to say, let's assume that people have bias, implicit or explicit, and help create systems that reduce the opportunity for those biases to be expressed, or, if they are expressed, have systems that help us to detect it and then address it. Right? Lots of different types of technologies in the world don't try to prevent the thing; they try to make sure they have good quality control along the way. So examples of this are: if we don't want race to influence someone's judgment in a particular context, can we remove information about race so that it doesn't affect judgment in that context? Now, of course, that ends up getting more complicated as you start to pull the strand of it, to say, okay, well, I don't want race to influence judgment in this interview.
So we could remove information about race from the interview, have avatars and other things that disguise the ethnic and racial identity of the person that I'm interviewing. But that doesn't address the case where I have a view that there have been some historical inequities between blacks and whites as candidates for that type of position, which I can't address by just blinding myself. So how do I address historical inequities? There are all kinds of interesting problems that get pulled on as threads: how do we create structures that enforce or promote the equities that we desire, and then how do we address the decision-making processes, which are always going to be imperfect, that we bring to it in that context?
David Wright:That's especially interesting. I mean, we're talking about forms of social engineering here, kind of, right? How do we create the society that we want? In insurance, we deal with this all the time; that's my business. You know, how do you get people to buy insurance, Brian? You pass a law making it illegal not to. That's the natural progression. I mean, maybe that's a little heavy-handed, and it's working, and people don't like it, but they do it anyway. And we keep electing governments that maintain insurance mandates of all kinds, in insurance contracts, from banks, and all this kind of stuff, right? So there are ways of getting at this stuff which are a little more direct. But the irony of this is, as you pointed out, with the practice of, say, taking the ideas of implicit bias and running with them, you actually think it's not quite so influential. Being somebody who was very early in researching this and very prominent in its promulgation, you actually think it's a little overrated, it seems, implicit bias.
Brian Nosek:Yeah, I think it's more modest than that, for sure; it's in between the advocates and the critics, I guess.
David Wright:Right? Yeah, right.
Brian Nosek:There was a fellow at Yale Law School, when we were first doing this work, that said, you know, what I want to do is have police officers go out and take it every morning, and then have their score on their badge when they go out that day for work, so everyone can see what their bias was that day. He was kind of joking, but kind of not. And so, you know, there were exuberant reactions to thinking about how we can apply this basic concept to managing our social realities. And it's not that I think that none of them apply. It's that I think any extension or application of how implicit bias might be having an impact in some context, and how we might address that impact in that context, has to be accompanied by evaluation, by assessing whether this device actually is having an impact, right? Can we create some assessment to understand all the factors that are influencing this particular behavior? And then, whatever intervention we design, can we get evidence that it's actually impacting the judgment and getting us closer to what our goals are? And what's been stunning to me in my experience in this area of work is that there are very few, and really zero is not an exaggeration, organizations that have the wherewithal to say, we're actually going to assess and evaluate whether our changes in organizational practice address bias, implicit or not, just the biases that we might have in performance evaluation and succession planning and hiring, whatever it is. And of course, we could go outside of the corporate context too. But there's very little investment in actually assessing the problem that they're wanting to address. It feels like just lots and lots of window dressing. And, that sounds too negative; it is, and I am negative about it, but I think the intentions are good. It's just based on hope, and you can't do it all just based on hope, I hope this works. Like, Starbucks did that huge intervention on implicit bias. The consultants that were designing that were calling me, and I gave them whatever advice I could give and said, you know, the best thing that Starbucks could do here is evaluate it. Do whatever intervention you eventually want to try; the best thing you could do is evaluate it, because you have 3,000 stores, you're going to do it in every single store, you have a chance for randomization. You could demonstrate a causal impact that almost no one else could, by randomizing whether a store gets the intervention or not. Just doing that, whether you found a dramatic effect or no effect, would have a dramatic impact in advancing us. But of course, they don't want to see no effect, right? They thought that would be a threat, an existential threat perhaps, organizationally. But I think that's the wrong way to think about it.
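To make the evaluation design Brian is sketching concrete, here is a minimal, hypothetical simulation of that kind of store-level randomized rollout: stores are randomly assigned to receive the training or not, and the difference in outcomes between the two groups estimates the causal effect. The store count, the outcome metric, and the effect size are illustrative assumptions, not figures from the interview.

```python
import numpy as np

rng = np.random.default_rng(0)

n_stores = 3000                                        # roughly the scale mentioned above
treated = rng.permutation(n_stores) < n_stores // 2    # randomly assign half the stores to the training

# Hypothetical store-level outcome (e.g., rate of customer complaints), with an
# assumed small effect of the intervention, for illustration only.
assumed_effect = -0.02
outcome = rng.normal(loc=0.50, scale=0.10, size=n_stores) + assumed_effect * treated

estimate = outcome[treated].mean() - outcome[~treated].mean()
print(f"estimated effect of the training: {estimate:+.3f}")
```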
David Wright:Well, I mean, you know, that reintroduces what is of course a theme in your career, which is the difference between what we want to do and what we do, right? And you're having quite a lot more success changing behavior on the reproducibility and replicability of studies. Maybe we can close on the topic of what things are working for the replicability project, and kind of how you're advancing those. I think it's really exciting.
Brian Nosek:Yeah, it has been very exciting that this has not been just a, here's a problem, and boy, isn't it a terrible problem. It really has turned into a productive movement by the community, and that community keeps spreading and growing. Right, we focused on psychology, my home discipline, but really it's happening everywhere across the sciences; we just, within the past year, published a replication project of cancer biology studies that showed very similar things as the psychology work has. And so there's lots of that happening across disciplinary fields. But the really great part is that interventions are occurring across the different parts of the research ecosystem that create that structure and system of rewards. My favorite is registered reports, which is changing the publishing model. Right? If we start with the insight that the reward system for a scientist is to get your work published, and get published in the most prestigious outlet that you possibly can, and then publish as frequently as you can, then recognize that's a focal point, right? Publication is the currency of advancement. So what does it take to get published? Well, it takes exciting, sexy findings to get published in those prestigious outlets. And if you sort of work through the process, that's the core problem that produces all of these biases: ignoring negative results, exaggerating findings, re-analyzing the data until it looks publishable, etc., etc. So registered reports goes after the key challenge in that, which is: let's change the way that publication decisions are made at journals from what you do now, which is do all your research, write your report, send it to the journal, and hope the reviewers don't find enough wrong with it to reject it, which they almost always do, but eventually you can get through. The registered report model says, okay, no, that's not how we're going to do it. Instead, what you're going to do is clarify your research question, do some preliminary work to make sure your methodology is viable, maybe, and then write out the methodology of what you're going to do, and you send that to the journal. And the reviewers, instead of evaluating whether you produced exciting outcomes, evaluate: are you asking an important question, and is your methodology an effective test of that question? Brilliant. And then, if it passes those criteria, they give in-principle acceptance: the journal commits to publishing what you do, as long as you follow through with what you said you were going to do, regardless of the outcomes. So just that simple change fundamentally changes the reward system for me, the author. Instead of having to produce exciting results to get my reward, I have to ask important questions and design really effective tests of those questions. That's exactly what I'm supposed to be doing. Yeah. So we've aligned the reward system with what we want scientists, I hope, the collective we, what we want scientists to do, and then the results are the results. As for the evidence: we now have a few hundred journals that have adopted registered reports as a publishing option, and the early evidence suggests that it basically entirely removes publication bias. Instead of the roughly 90% positive result rate at traditional journals, the early findings show about 30% positive results; they're publishing lots and lots more negative results.
And independent evaluations of the methodology and papers find that they're more rigorous, higher-quality papers compared to similar papers written by the same authors or published in the same journals. So we may be getting both more credible evidence and more rigorous research through this mechanism. It's one of a variety of different things, but to me, one of the most important interventions for really changing the system in a fundamental way.
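To put rough numbers on the publication-bias point above, here is a minimal, illustrative simulation. The effect rate, power, and publication probabilities are assumptions chosen only so the output lands near the roughly 90% versus roughly 30% positive-result rates described; they are not figures from the studies discussed.

import random

# Purely illustrative: simulate how result-dependent publication inflates the
# positive-result rate, versus a registered-reports model in which acceptance
# happens before results, so every outcome is published.
random.seed(0)

N_STUDIES = 100_000
TRUE_EFFECT_RATE = 0.30   # assumed fraction of tested hypotheses that are true
POWER = 0.80              # chance a real effect yields a "positive" result
FALSE_POSITIVE = 0.05     # chance a null effect yields a "positive" result
PUB_IF_POSITIVE = 0.95    # traditional journals: positive results usually published
PUB_IF_NEGATIVE = 0.05    # ...negative results rarely are

outcomes = []
for _ in range(N_STUDIES):
    is_true = random.random() < TRUE_EFFECT_RATE
    positive = random.random() < (POWER if is_true else FALSE_POSITIVE)
    outcomes.append(positive)

# Traditional model: whether a study appears in print depends on its result.
traditional = [o for o in outcomes
               if random.random() < (PUB_IF_POSITIVE if o else PUB_IF_NEGATIVE)]
# Registered reports: all accepted studies appear, regardless of outcome.
registered = outcomes

print(f"positive rate, traditional journals: {sum(traditional)/len(traditional):.0%}")
print(f"positive rate, registered reports:   {sum(registered)/len(registered):.0%}")

With these assumed parameters the traditional model reports close to 90% positive results while the full record is under 30% positive, which is exactly the gap the registered-reports model is designed to close.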
David Wright:That's pretty cool. That's more than pretty cool. It reminds me of the venture capital model, you know: we're going to fund an idea and see if it works out, with the additional layer of actually informing everybody about what happened so, you know, people don't repeat it.
Brian Nosek:That's the key, right? There's a lot of misunderstanding in the replication debates, and in this part of it, which is fear of error, or mistake, or failure. And that's crazy in any field that's pushing the boundaries, right? Science and venture capital are both pushing the boundaries of what's possible. So if you're doing it right, there are going to be false starts all the time; lots of things are going to fail, because we don't know, we have to try it. So the reward system in science saying "only report your successes" necessarily creates an exaggerated, misguided sense of what the reality is, and just ends up creating all kinds of friction in the process. It's unnecessary. If we can show the error and the failure, and not have it be "oh, Brian, you were wrong about that idea, so there's some cost to my career," but rather "wow, you really designed an interesting test of that idea; yeah, it didn't work out, but now we know that isn't a viable path," we'll be so much healthier as a discipline if we can adopt that as a mindset.
David Wright:So what you're talking about here is, well, you mentioned it's an option, so it's not like journals are going whole hog into this yet, but you can correct me if I'm wrong on that. We're talking about changing the culture at journals pretty dramatically, from one of sensational findings to sensational process. And you can maybe talk about the SCORE framework that you've developed as well. But what are your chances of actually moving the cultural needle at journals? I mean, that's a pretty big, ambitious project.
Brian Nosek:Yeah, none of these things happens quickly, nor should they. Even with this model, while I express it in glowing terms, just like we talked about earlier, I'm going to perceive it as a more globally applicable and effective solution than the ultimate evidence will reveal. Luckily, this change movement in science is accompanied by a metascience movement that is simultaneously evaluating the impact of all of these changes. That's really important for making sure that these interventions are implemented appropriately, are maximized in effectiveness, and that we identify the boundary conditions where they are limited. So, for example, it may be that the registered reports model creates a lot of conservatism in what questions people ask, right? If you have to agree in advance to publish my crazy idea, you're going to evaluate it and think, "well, that's crazy, there's no way that's going to happen, why would I commit to publishing that craziness?" So maybe I won't take risks there. We can imagine lots of plausible scenarios where, okay, it's appropriate for this kind of work and not that kind of work, et cetera. So the growth path is nonlinear, right? Every year we have more journals signing up for registered reports than had offered it the prior year, but in a way that allows us to manage the adoption and evaluation process in concert. It started in the social and behavioral sciences, is active in neuroscience, and now is starting to move into other areas of research. With more journals adopting, we can see: how does this work for bench-related science? What comes up when computational approaches try to adopt this model? Can we even apply it to qualitative research? That's an interesting possibility. So investigating how we optimize the product so that it's fit for purpose for different types of research is a key part of effective, scalable, sustainable adoption. And then the other part you call out, SCORE, is coming along at the same time as other interventions that collectively are contributing to improving research practice. Sharing data, materials, and code is becoming more normative; there are lots of repositories, our Open Science Framework being one of many that offer tools for that. Preregistration, calling your shots in advance and committing to them before the study is run, has been gaining popularity across different disciplines, particularly the social and behavioral sciences. And then a project like SCORE, which we're just wrapping up as a DARPA-funded project, is looking at whether we can also improve things on the assessment side. We can improve rigor with all of these new practices, to try to make the pipeline as effective as possible, but it would also be really useful to be able to evaluate the outputs more effectively and efficiently. So the goal of SCORE is to create automatic indicators of confidence in research claims. Wouldn't it be useful if we could get an initial read on how much we can trust a particular piece of evidence, based on its connection to the existing literature and on other cues in the evidence itself? That's really the problem we've been pushing on for the last three years, and we have some preliminary evidence, enough to start to transition from pure research into research and development, that there may be some effective scoring
that is a complement to the human peer review processes we apply to research findings, helping direct our attention to the areas of the research we might want to be a bit more cautious about, or more confident in, to justify further investment of time, resources, or attention.
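As a rough illustration of what an automatic indicator of confidence could look like, here is a hypothetical sketch that folds a few of the cues mentioned above, preregistration, sample size, open data, and how the later literature treats the claim, into a single score. Every feature name and weight here is invented for illustration; this is not the SCORE model or its methodology.

from dataclasses import dataclass

# Hypothetical sketch of a claim-confidence indicator. All features and
# weights are invented for illustration; a real system like SCORE draws on
# far richer signals and learned models rather than hand-set weights.

@dataclass
class Claim:
    preregistered: bool           # analysis plan registered before data collection?
    sample_size: int              # observations behind the claim
    data_shared: bool             # data and code openly available?
    supporting_citations: int     # later results consistent with the claim
    contradicting_citations: int  # later results in tension with the claim

def confidence_score(c: Claim) -> float:
    """Rough 0..1 confidence indicator for a single research claim."""
    score = 0.5                                      # neutral starting point
    score += 0.15 if c.preregistered else -0.05
    score += 0.10 if c.data_shared else 0.0
    score += min(c.sample_size / 1000, 1.0) * 0.15   # larger samples, more confidence
    net = c.supporting_citations - c.contradicting_citations
    score += max(-1.0, min(net / 20, 1.0)) * 0.10    # how the literature has treated it
    return max(0.0, min(1.0, score))

example = Claim(preregistered=True, sample_size=400, data_shared=True,
                supporting_citations=6, contradicting_citations=1)
print(f"confidence: {confidence_score(example):.2f}")   # prints a score around 0.8

The point is only the shape of the idea: many weak cues combined into a directional signal that flags where a human reviewer should look more closely, not a verdict on whether a claim is true.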
David Wright:So one more question, actually, then we can close. You've actually posted a little bit about open publishing: preprints, postprints, there are free repositories, there's arXiv, and PsyArXiv I think it is. These are outside the journals. I only read free versions of things, because I don't have access to Elsevier, where things go, right? Is that good enough to learn from? I know it's outside of the kind of culture of academia and held to different standards, but does it work, to be able to read these things?
Brian Nosek:It does. At this point, and I haven't seen the latest figures, but something like 70% of the literature is now publicly accessible. That's a great improvement over the last 20 years. The community of open access advocates has been really, really effective at developing an understanding that more open accessibility of research outputs, the reports, is important for society. And it's easy to justify: the public paid for a lot of this, so shouldn't the public be able to access it? The business models for publishing are based on paper, and they just make no sense in a digital world anymore; as a consequence, we create these artificial, unnecessary barriers and added costs to accessing the literature. So preprints, and open access more generally, are becoming normative across different disciplines. Physics has been doing it since 1991 with arXiv, and the rest of the research communities are catching up. That's very productive for democratic interests, just making research more accessible, particularly for areas of the world that could be contributing substantially to science but can't access the actual literature, and for the applied people who are reading the science, like you, who want to translate it or apply it or do something with it to better the world. But it isn't enough to solve the dysfunctional reward system, and that's the other part of it. So it's a complement: it addresses other challenges for the use of research, but it doesn't itself make the research more trustworthy. And that's really, for us, the core focus area: let's not just make it available, let's make it so that you can use it, and use it with confidence.
David Wright:Okay, so we'll close there. Where can people help you, Brian? Tell us where to find the project websites. How can people sign up to take these surveys, or, if they're academics, pitch in?
Brian Nosek:Yeah, the Center for Open Science's general website is cos.io. That shows our range of services, registered reports being one, along with a number of other initiatives to try to shift the research culture. Our main infrastructure, the Open Science Framework, is at osf.io; that's a free, open source tool for researchers to enact these rigor and transparency behaviors in their own work. We have more than half a million users now. And then if you're interested in implicit bias, go to projectimplicit.net or implicit.harvard.edu, and you can take tests or engage with the researchers who are advancing that work, trying to figure out how it actually works, how it actually behaves in reality, and what we can or should do about it.
David Wright:Awesome. My guest today is Brian Nosek. Brian, thank you very much.
Brian Nosek:Thanks for having me.