The Not Unreasonable Podcast

Joe Edelman on Designing Meaningful Things

October 28, 2022 David Wright

I worry about whether we can improve the insurance system. I once wrote an essay arguing that all insurance is compelled: the only way to get someone to buy insurance is to force them to do it. The implication is that nobody will ever do anything good without being forced. We learn some lessons the hard way but then quickly forget. What's more, we hate this compulsion! We chafe at the rules, and many of us shirk them when we can. It's a mess.

Joe Edelman has a better vision. In his vision (in my words), we have values that give us a sense of meaning. Connecting with other people who share these values is (should be) our social objective. Groups anchored around values will develop norms for deepening the pursuit of these values, norms we'd gladly accept because the values are so meaningful to us. 

Norms are another way of saying "social rules we live by," and the minute you put rules down you are constraining some actions and, even if only by negation, compelling others. If we can create a system where rules are embraced rather than resented, we can create a vastly better society than the one we have today. Listen for our discussion of this!

Joe's homepage: https://nxhx.org/
YouTube: https://youtu.be/Sjennrn5LNA
Show notes: https://notunreasonable.com/2022/10/28/joe-edelman-on-designing-meaningful-things

Twitter: @davecwright
Surprise, It's Insurance mailing list
LinkedIn
Social Science of Insurance Essays

David Wright:

My guest today is Joe Edelman. Joe is a philosopher, game designer, programmer, and founder of the School for Social Design, where he has developed some remarkable social technology to collect data on users' values and to design systems that nurture them. Joe's work is, I think, remarkably ambitious and exciting, and he has made more progress than I would have thought possible before getting to know his material. Joe, welcome to the show.

Joe Edelman:

Thank you. It's good to be here.

David Wright:

So, first question. You criticize social systems designed in software (social media sites: Facebook, Instagram, maybe TikTok) for crowding out pre-existing values. Now, what I think is interesting, as I look at the broad history of social system design, is that that's kind of the point: somebody has an intent to indoctrinate a value system, and they design a social system to do that. So the crowding out of existing values is part of the plan. I'm wondering what you think about that. It perhaps contains a prediction for where these social systems might go. But you have a different tack: you want to actually nurture pre-existing values as opposed to changing them?

Joe Edelman:

Yeah, well, there are two places where I make distinctions that aren't in that frame. One is that the word "values," I think, is confusing. I break it up into two categories, which I think behave very differently. One I call ideological commitments. You might have an ideological commitment to fairness, to giving people from different backgrounds a fair shake when you're hiring them. You might have an ideological commitment to a kind of transparency or accountability. And then a different set of values would be sources of meaning: what are the things that are meaningful to you personally, as you work, as you interact? Maybe being creative with other people is meaningful to you, maybe being vulnerable or real. And so when I say respecting the values of the users, in the case of social media, I mean the second thing, the sources of meaning: creating the kinds of conversations and interactions that are meaningful to people. You can have an agenda on the ideological side; you can say, oh, I want more voices to be heard, for instance. But that's a little separate from what's meaningful to your users, and whether your platform makes room for that. And then another distinction is the distinction between crowding out and something like inspiration. You can inspire people. I was involved with Couchsurfing for many years, I was one of the people that worked on Couchsurfing, and we inspired a lot of people with values like hospitality and connecting across cultures. These were sources of meaning, and we spread them. But this is very different from crowding out. It's not a social pressure mechanism that gets people to be hospitable or to connect across cultures; it's more of a possibility that becomes a new source of meaning for the users. So I wouldn't call that crowding out.

David Wright:

So one of the things that I struggled with as I was studying for this interview, and I come across this problem a lot with philosophy (I've had, you know, some philosophers on this show), is that you get really tangled up in, I don't want to use a slightly pejorative term, but jargon, right? There's a real analytical separation between some of these ideas in the definitions of specific words and concepts, but man, do they get muddled easily in your mind. I went through some of your exercises with my in-laws this past weekend to try and elicit what their values were, and I think that was partly successful. We achieved, I think, the emotional outcome that is part of the intent, this sort of feeling of meaningfulness while I was conducting the interview with both of my in-laws. My wife was out of town, so I snuck it in there. And it was a very exciting and energetic conversation; we felt like we had arrived on things that really mattered to them. But the distinction between values and ideologies was quite blurred. To some extent, my practical actualization of these ideas was that an ideology is simply values that you want other people to adopt, a value agenda, a thing you want to spread around the world, but not necessarily something you feel yourself. It might even be very cynical: you might say, I want everybody to wear blue sweaters on Fridays, and that isn't really meaningful to me. It's very subjective in a certain sense. You can achieve the emotional outcome without necessarily adhering, I think, to your definitional categories. What do you think about that? I mean, you've done a lot of this, right?

Joe Edelman:

Yeah, sure. I mean, it takes our students a lot of analysis and unpacking before they begin to separate these things. They come in bundles. For instance, masculinity bundles up a lot of different ideological commitments and a lot of different sources of meaning. Being a provider can be an expectation that weighs on you, or it can be a strong source of meaning. You can say, hey, I'm doing this for my family, and it's super meaningful to me that I am this kind of pillar of support, right? So one way to think about the distinction is that it's just different motivations, maybe for the same thing. Let's say you're talking to your kid and you're trying to be really honest as a way of setting expectations in your relationship. That's what I would call an ideological commitment to honesty within this relationship with your child, something like "we are honest here." And that's just a very different motivation than, oh, I find it super meaningful to be honest right now. They might manifest in the same behavior, but you could probably tell me, after you did it, which kind of conversation you were having with your son. Was it the source-of-meaning-motivated one, or the setting-expectations-for-the-family-motivated one? You wouldn't have such a hard time saying which one it was.

David Wright:

I search for something I call, like, concrete truth, which I just reference this stuff against sometimes. And what I mean by that is behaviors, right? What are the things you do as a consequence of believing whatever it is you say you believe? The human mind can be a mess, right? You've got post hoc rationalization, where people say they kind of meant to do something when they did it, but they actually just totally fabricated that. We're a complicated machine, human consciousness. But behaviors are real: you did or did not do this thing; those are factual. And to me, affecting behavior has to be the goal. Your internal state is kind of unobservable. You can get at it with the interview techniques, and they really are interesting and powerful, honestly, I do mean that. But ultimately, whether it does or does not move behavior is kind of the test of whether it worked, or whether it's real. What do you think about that?

Joe Edelman:

Yeah, I think that's kind of true. But it can get you into a lot of trouble, because there are often many ways of describing behavior, many models which you can use to analyze it, and if you're not listening to people at all, then you don't really have a way to choose between them. For instance, with the prisoner's dilemma in game theory, you can always change the payoff matrices to make it look like people are being self-interested. So there are these two models there: do I believe that people are being self-interested, or do I believe that people actually care about other people's welfare? The same behavior can point towards either model. I think this is a problem especially with economists, who have this preference-profiles, or utility-maximization, framework. It's such a flexible framework that you can apply it to any behavior. And then you have this problem: are people just maximizing their utility? Are they just following dopamine? There are all these models. And so then it does, I think, make sense to listen to people, to try to differentiate between multiple models that each can predict behavior.

David Wright:

You know, there's an unreleased podcast, actually, which is in the bank and should be released soon, the one I did with Brian Nosek, who's a psychologist. He's really big right now into, you know, the validity of a lot of social science experiments, whether they're hacked or not. But earlier in his career he was in the founding group of implicit bias research. And implicit bias is super confusing, because on the one hand we profess to have a certain value, and we may or may not adhere to that value in our behaviors. It's easy to convince somebody that they have a different value, apparently, but it's not easy to change the behavior. So there's the behavioral outcome, and then this messy middle part, which is partly individualistic, partly socially constructed, partly whatever. You can talk somebody into almost anything, or into saying anything; who knows what they really mean or believe. But there's the behavior in the end. I suppose my question is: what are the consequences of poorly motivated behavior that doesn't change? If there are two different motivations and the same behavior, what's the difference in your model? What's the cost, or the penalty? What's the point?

Joe Edelman:

I think so. To critique again maybe the standard economic preference-profile thing: if you have too flexible a model, then it can describe behavior really well, it overfits, but it becomes less useful. So you have all these people using social media who say they regret it, or they're trying to get off cigarettes or whatever. Preference profiles just say, okay, well, they prefer cigarettes, they prefer TikTok. We don't listen to their words; there's nothing else we can do. There's no guidance there about taxes, no guidance about anything; this is just what the people want. And that's not very powerful, right? So then you have to look for other models that might explain some kind of inner structure, and there's been a succession of these in behavioral economics. There's System 1 and System 2, there's a whole bunch of multiple-minds stuff, the elephant and the rider. And then the model I use is that there are values, and then there's a whole bunch of different norms and expectations. It's really easy to give people a new norm in a room, and that actually does change their behavior. But it doesn't change them in a deep way, right? It just says, oh, in this room we do things this way. As soon as they're out of the room, they'll go back to their behavior. So there are many layers, and it's quite hard to see through the different layers to what people really want. And also people change: people get inspired by role models, they change what they want. So this is a messier model with many more layers, probably easier to confuse yourself with, but it has the chance of getting at some deeper truths.

David Wright:

I like messy, by the way, because, hey, that's just the way it is, right? There's no point running away from that; it's going to be messy. And if it ain't messy, it probably isn't useful, because there are just too many things that appear to be edge cases in your simple model but actually are pretty mainstream. Maybe you can talk a little bit now, and we probably should get into this, about how exactly you elicit values. What are the techniques you use and train? And maybe describe the school a little bit too: what the goals are and how you go about it.

Joe Edelman:

So I teach people to make social things that are meaningful, meaningful in certain specific ways. For instance, some of the people from Facebook Groups went through my program. They were really interested in certain kinds of groups, groups of, like, cancer survivors, or people who have drug addictions. Can people be vulnerable? In what kinds of ways can people support each other in Facebook groups? Can depressed people discuss their suicidal ideation and get help and support from each other, that kind of stuff, right? And what's the difference in the design of chat threads and comments, and the way the profiles look in the groups and so on, that makes people more or less likely to connect? So that's an example of the kind of application people bring to my course. But they do this not just with social apps but also with organizations and ownership structures; we've had some legal structures, community land trusts, things like that. Always with social roles, and some underlying sources of meaning that we want to foster, which come either from the users and participants or from the creators of the thing. The course goes in parts. The first month is mostly about measurement: figuring out how to measure the meaning that's happening or not happening. I teach an in-person interview method, or a Zoom-based interview method, which is a very high-res way of telling whether the meaning is happening, and then a survey method with which you can survey tens of thousands of users at the same time. That's the first month of the course. The second month is a bunch of design-related stuff: how would the product actually change to make those numbers better? So that's the program I teach.
And then the method. For the interview method, you can start from a lot of different sources of data to get at people's sources of meaning, but the easiest is just stories of meaningful things that happened. So if you have people in Facebook groups, again, who have had really meaningful encounters with strangers, you might just ask them to tell those stories. And then, by asking a series of follow-up questions after hearing the story of the meaningful experience, you can create one of what we call values cards, one of these little cards that describes the meaning and helps you build a survey to see whether that's just happening for that one user or happening for many people. And you can double-click on any part of that.

David Wright:

Yeah, there's lots. So I'll tell you one personal observation about this. I was worried, and I sent you a message just before I did this with my in-laws (you didn't know this): I asked you, what if somebody says, "what the heck is meaning?" That was something I was wondering about, because I might be a little over-studied on this stuff. I've gone down several different paths of defining meaning, and I've gotten confused about which one I'm talking about. But that didn't come up. You pointed out some materials, and then I asked my in-laws, so, can you think of a meaningful experience? And both of them ran with it. They were like, oh yeah, I hear you. They may have a lay interpretation of meaning, because they haven't looked into this stuff at all, but we got to the right content. It's kind of back to the language point: we might not be using the same words, but they understood what I was looking for conceptually. And that's more of, like, a Jordan Peterson meaning, a meta-meaning, where he's saying this is what motivates action. I was trying to motivate them to tell me a story of something important to them, and they kind of got what I was getting at; I could have used almost any broadly similar word there. So that's kind of interesting. And they came back with important experiences that had a huge emotional resonance for them. Anyway, it was quite powerful.
I want to come back to the concept of what you can do with design, because that's interesting. As I go through social media, you know, I have my own preferences: I don't use Facebook, I do use LinkedIn and Twitter, for whatever reason. But I wonder whether that "whatever reason" is actually down to the design of the software, and it's giving me what I want. I don't know how much you use these different things, but maybe you can give us some examples of patterns that do or do not encourage certain behaviors. You tend to just think, I'm just a person interacting with the world, but one interesting concept you put me onto is that the design of the flow of information can really materially constrain or encourage certain kinds of behavior and emotional outcomes.

Joe Edelman:

Yeah, sure. So I like Twitter. I think Twitter is a little difficult: some people have set it up in a way that really works, and other people haven't. Twitter is kind of bimodal in terms of how meaningful it is versus how meaningless and addictive it is; a lot of people have a very bad experience with Twitter. But I have a good experience, and my experience is around, maybe you have the same thing, I'm kind of a para-academic, right? I'm a researcher, but I'm not affiliated, I'm not part of a university. And that's kind of lonely. Twitter is like my department water cooler. I can tweet things I'm working on, people give me feedback, they get excited about it. I feel like Twitter is where my colleagues are, where I can chat with a bunch of other philosophers and economists, and I feel very lucky to have this set of colleagues hanging around in a virtual space. This is super meaningful to me, this kind of intellectual camaraderie. And I know other people who feel the same way about Twitter. So what is it that makes Twitter like that? I think the follow graph, and the way that Twitter helps you build your follow graph and kind of prune it, is really helpful. Lists are also helpful in that direction. The actual shape of the graph is relevant here. Facebook has friends and family, and almost everybody is Facebook friends, whereas Twitter operates somewhere in between friends and professional.

David Wright:

I was just going to... oh, sorry. I was thinking that, you know, they do occupy different sorts of social spaces. Is it just the case that there will always be one for family-and-kind-of-everybody, one for your intellectual interests, and one for professional interests? Does the design matter, or is there always going to be one network in each of those slots, because you can't overlap those senses of identity? That's almost an argument against the idea that any of this matters: we're just going to have one social network for each of these.

Joe Edelman:

Oh, well, Telegram is becoming more and more like a social network, and Telegram is very heavily used in Europe, in Berlin. And Telegram doesn't have this problem: you can create groups and channels, and you can group your groups and channels by tabs. So I have a tab for friends, I have a tab for work colleagues, and so on. As Telegram builds itself out, it has a very nice structure for this. Discord and Slack also make all of these different rooms. So no, I don't think the companies have to go along with just one of the social graphs.

David Wright:

Cool. So let me pick up the thread: you were just about to start talking about design features and how they can influence these things. Go ahead, uninterrupted.

Joe Edelman:

Yeah. So one thing that's very important is whether you have a sense of who you're posting to. On Facebook this is very bad, in my opinion, the way they've set up the recommender and News Feed. For many people, you have no idea who will see the things that you post. I have maybe 4,000 friends and followers on Facebook or whatever, and I feel like it's just a lottery: some random person that I met in Alabama, that I have nothing in common with, might see the thing that I posted. My aunt might see it. A professional colleague might see it, right? And it's only going to be like 100 out of those people or something that will actually engage with it. This is terrible. It creates a kind of stage fright: you don't know who you're talking to. So that's a good example of what not to do. And it's easy to get into this situation. Somebody like Elon Musk is in this situation on Twitter, right? He's talking to everybody, and that has a very chilling effect. I find in general that my best communication starts with an intimate group and then grows from there. And this is something I expect to see more of in the future: social networks that let you post first to a smaller circle, and then once they've seen it, revised it, commented on it, then it goes out. So you start safe, and you start with a better sense of your audience.

David Wright:

Yeah. WhatsApp, I use that for, you know, when I write something. Obviously I don't publish too much written content, but when I do, there's a WhatsApp group, a couple of them, plus emails to close friends. So is it right to say that the thing these people in these groups have in common is values? Is that the way you design, or imagine designing, these groups: centered around certain shared value preferences?

Joe Edelman:

Yeah. I mean, I think there are reasons to communicate with people besides shared values. But shared values are one of the most useful ways to gather people, for two reasons. One is that people with shared values will look for the same kind of excellence in the conversation they're having, and that makes it very clear what to post. People that appreciate great intellectual rigor, you might put them together in a group that can be intellectually rigorous together. The other is that they like to do the same kinds of interaction. People that really like being creative together will be creative in a group, or something. People that really like being honest, or open, or vulnerable will be honest and open and vulnerable in some kind of sharing-about-our-lives group. And so, yeah.

David Wright:

What are the most common values? I mean, you've probably done or observed I don't know how many of these interviews. The one that came out most prominently with my in-laws, anyway, was this concept of working hard: we're workers, we love to work, we feel good about it. If you took the outcome away and we still did the work, we would still love it and still feel good about ourselves, you know, sort of ticking the boxes for you. And I thought, that's good, I agree, I like that one too. But a lot of people are going to have that one, probably. So there's going to be massive overlap on some of these, I have to think.

Joe Edelman:

Sure. Yeah, that's right. And that actually lessens their import, because if everybody shares the values, it's not such a special connection that people have, right? I think there are ones even more common than that one. So that's maybe the contribution one; I run into two versions of that, maybe more. One would be something about contributing, like finding a place to contribute. Another would be about responding, like showing up, I guess, would maybe be the word people use: you know, I show up; when there's an issue, I'm there for it; when there's some work to be done, when there's a bunch of piled-up dishes, I show up for that, or whatever. I'm not sure if the one you had with your in-laws was one of those. Something even more common than that are the honesty-related ones, like honesty and integrity: I say the thing that needs to be said, or I follow my true feelings about things.

David Wright:

Do you... I mean, I had a guest named Robin Hanson on the podcast a couple of times, actually. One of them was talking about his book, The Elephant in the Brain. The Robin Hanson in the back of my head here is saying, do you really buy that, though? Are they actually honest? Isn't it the case that we have hidden motivations for so many things, that we lie all the time, even to ourselves? So honesty is kind of a thing that we want to believe about ourselves, but maybe it ain't the case, because we just can't live our lives and actually, you know, hold that value.

Joe Edelman:

Sure. Well, it's different whether somebody is claiming to have a code or whether they're claiming to have a source of meaning. Somebody may not be that honest that much of the time, but when they are honest, they find it super meaningful. They're like, oh, this was so freeing, and so real, to finally say these things. Maybe they've been lying for the last three years, so they're not an honest person, right? But when they say that honesty is one of their sources of meaning, they're telling the truth.

David Wright:

That's amazing. So to me that creates this category of value, and we're overloading that word big time here, of, you know, emotional positivity that isn't really deep. Because if they just said, oh, I'll be honest all the time, everywhere, their life would probably fall apart, right? I was always going to say the thing, you know; you just can't live that way. So you might say the dose makes the poison: maybe a little more honesty at this margin would be good for you, but if you overdo it, then your life is going to fall apart, and you're not going to feel meaningful, because you're going to disrupt all these relationships, you're going to feel maligned, and all this kind of stuff, right? So it's not an unambiguous, unlimited good, right, pursuing these values?

Joe Edelman:

Yeah. I think that's one of the places where ethics, morality, the philosophers have been really wrong. Whether you're a utilitarian or a Kantian or whatever, there's a tendency towards the idea that ethics is about moral codes, about always doing it that way. And that's just dumb.

David Wright:

You know, I'll put a few cards on the table here for myself. This is the thing that I'm really interested in learning in this conversation, and certainly after this for the rest of my life, probably, and maybe even with you some: how to motivate virtuous behavior. And I come at it from a kind of operationalization of that concept. I don't mean "be better to people," although I do mean that too; I would love to pursue that, but I don't know how. The thing I know how to pursue is how to get people to buy insurance. I can't help but laugh a little bit about that, but in the end, the most straightforward conception of insurance is that it's preparing for bad things that could happen to you, or to people around you. So there's a deeply virtuous aspect to insurance, and if you do that properly, then you live a better life, right? Delayed gratification is sort of a very generalized way of thinking about insurance: I'm giving something up today because later on something bad is going to happen to me. For example, last weekend my father-in-law said, hey, there's a nail in your car's tire. He knows more about these things than I do. He said you could drive for a little while on it, but you'd have to get it replaced. I said okay. That was Sunday. On Tuesday I had to take my kid around to all kinds of stuff, and as we were driving back home, the tire blew out, grinding on the street, and I thought: I left it. I knew I had to do it, didn't freaking do it, and now it's happened. So we managed to get into the driveway at my house, and it's pouring rain outside, very dangerous. I mean, that was stupid. And then I'm changing the tire in all that kind of crap, right?
So I think of that kind of behavior as analogous to not buying insurance, right? I could have imposed costs on myself now, but I just sort of deferred it: I'm gonna do it later. And to me, motivating virtuous behavior is something I care about, and I want to look for frameworks that can help us tie ourselves to the mast, right? So how can you motivate virtuous behavior? Because to me, it's like, if you can't get somebody to buy insurance with your social theory, it's probably not right. Right? Because that's something you have to talk yourself into; we need a mechanism to force us to do something that is good, right? Do you see a way for your framework of using values as a source of motivation to get somebody to buy insurance, or more generally to pursue this kind of virtuous behavior that is immediately perhaps costly?

Joe Edelman:

Yeah, I'm not sure that tying yourself to the mast is the best way to frame it; the word I use is redesign. Because the tying-yourself-to-the-mast kind of thing is individualistic. But as you wrote, I read one of your blog posts about underwriting and morality, and you talked about how much of people's ethics is social. And I think that's really true. And I'd say environmental, even beyond social. So, you know, people in certain social contexts and certain economic contexts are going to have an easier time buying insurance, right? People in a group where honesty is rewarded are gonna have a much easier time being honest than people in a group where, you know, people get really pissed at you when you say something real, right? And I think we all have the experience of wandering into both kinds of rooms. You know, you go to somebody else's family or something, and you're like, oh, holy shit, I can't say anything here. That's not how it works here, right? And it's similar with these other virtues. Maybe prudence would be the virtue for insurance or something. There are going to be rooms where prudence is the thing, where prudence is easy, where everyone's doing it. And this has to do with a whole bunch of structural factors. So one structural factor is what crowd you're with. Another structural factor is, can you budget? Like, do you have a regular income that you can allocate your insurance premium to, right? How far can you forecast your life? How normal is your life? Like, if you're budgeting an insurance premium, can you predict that in five years you'll still be able to pay that premium? And so, yes. So.
So that's what I would do. Instead of tying yourself to the mast, instead of pre-committing or something like that, I would say: how do you get somebody into a crowd where prudence is the thing? How do you get somebody to, you know, a forecast of their income for the next five years, so that they can say, by the time something like this might actually happen, they're still gonna be paying their insurance premium, they're still gonna be insured, right? How do you change the structural factors so that prudence is advisable and possible?

David Wright:

What I think history teaches us about buying insurance is that the way you do it is you pass a law to force people to do it, legally. Or some other compulsion. And so there's another essay I wrote, which is "all insurance purchases are compelled." I'm not happy about this, but it's what I observe: voluntary insurance kind of doesn't exist. And I mean, the individualistic version, as you were kind of saying there, is "I can tie myself to the mast." And I think you're right; actually, even Odysseus had somebody else tie him to the mast. And so you have this social compulsion, and that's a very general term; that word "social" can mean legislative compulsion, it can mean contractual compulsion. It's all compulsion. There's not really another way. And I call it relationally violent, right? It's somebody forcing you to do something against your will. And one of the amazing things about our society, and I wonder what you think about this if you've studied any of it, the anthropologist Joe Henrich, The Secret of Our Success, his point is that we are mostly compelled by mimicking high-status people and doing what they do. And we kind of do it not really mindlessly, but there's some rationalization that goes on there, maybe. We want to copy their success, and so we do the things they do without really totally understanding why they do them. And that exists on the individual and also the societal level. And so cultures copy cultures as well: when culture A has got a really sophisticated insurance system, culture B will be like, well, that sounds like a good idea. And so they do the same thing without really totally, you know, they kind of get the arguments, but they don't know why it's efficient. They don't know the magnitude of the contribution of that part of the system to the success of culture A. They just copy the stuff.
Because they want to, you know, because they want to mimic a high-status individual. These are not rational processes. This is not me saying "I want to do this"; there's not a causal chain that is in any way conscious. It's being forced on you by somebody else, or by simply, you know, subordinating yourself to another person's or entity's behavior. And one of the things that encourages me about your work is that there's a little more thought behind what we're doing. But I don't see a lot of evidence in the world for that kind of approach really being able to generate, you know, meaningful outcomes economically at scale. What do you think?

Joe Edelman:

Yeah, sure. Yeah. I mean, so I think of society as organized in terms of scales. There's this kind of macro scale: national policies, policies that apply to everybody in the state, that kind of thing. I work mostly on the meso and micro scale. So I work mostly on designing things within an organization, within a social network. Social networks are kind of amazing in that, you know, code is kind of like law: it structures interactions very, very tightly. So you change how commenting works, you change what kind of information shows up to strangers, and so on, and it changes for everybody immediately, right? And it's also super well instrumented, so you can see the effects; you can be much more rational and responsive in your design. At Couchsurfing, I mean, there were huge changes. When I arrived there were like 70% positive experiences; when I left, like 94% positive experiences. Huge reductions in theft and, you know, other kinds of crime on Couchsurfing, things like that. And this is because of changes we made, where we make the change, we watch the cohort. You know, sometimes A/B testing: we split the users, half of them have the change, and we see the incidents over the next few days, right, sometimes even hours. So this is very different from the kind of domains that you were talking about. Social networks are one place where this much more intentional, iterative approach works, even for billions of people. This is maybe the first time on earth that's ever happened. And I wonder how much government will follow. I have the feeling that what will happen, what I hope will happen, is that first social networks will be much better, like they'll be good for us. Right now they're bad for us, I think. On net, it's hard to say; there are a lot of costs, a lot of benefits.
So first we'll crack that and we'll make decent social networks. And then if we do that, then I think governments can follow. And we'll have instrumented, experimental governments that work, that are very well aligned with social good, and that are very intentional and responsive. But it might be a while before we get to that stage.
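The cohort workflow Joe describes at Couchsurfing (split users into treatment and control, ship the change to half, then watch incident rates over the following days or hours) can be sketched in a few lines. This is a hypothetical sketch, not Couchsurfing's actual system; the experiment name, hashing scheme, and incident data are all invented for illustration.

```python
import hashlib

def assign_cohort(user_id: int, experiment: str = "new-commenting-flow") -> str:
    """Deterministic 50/50 split: hashing the salted user id means a user
    always lands in the same cohort across sessions."""
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    return "treatment" if int(digest, 16) % 2 == 0 else "control"

def incident_rate(outcomes) -> float:
    """outcomes: list of (user_id, had_incident) pairs for one cohort."""
    if not outcomes:
        return 0.0
    return sum(1 for _, bad in outcomes if bad) / len(outcomes)

# Simulated observation window: route each user's outcome to their cohort,
# then compare rates between the two cohorts.
observed = [(1, True), (2, False), (3, False), (4, True), (5, False), (6, False)]
by_cohort = {"treatment": [], "control": []}
for user_id, bad in observed:
    by_cohort[assign_cohort(user_id)].append((user_id, bad))

for name, outcomes in by_cohort.items():
    print(name, round(incident_rate(outcomes), 3))
```

The deterministic hash is the design choice that matters: it keeps a returning user in the same cohort without storing any assignment state, which is what makes the "watch the cohort over the next few days" step coherent.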

David Wright:

So you used the words "social good" there. And that was another kind of theme from the beginning: this concept of, you know, what is social good? Most would probably adopt an ideological slant on that. They'll say social good is whatever your left-wing or right-wing agenda says; everybody's got their preferences, pretty strong preferences. And what they want to do is commandeer the system to promote their social good, right? Because I think the ideological urge in people is super strong, right? And I admire your restraint in that, if I'm seeing it right. Or maybe you do have preferences for defining very carefully what the social good means, other than simply whatever you want. I mean, it's kind of pluralist, verging on kind of relativist; you can take this pretty far. What do you think about that? It must be in your mind, at the very least, that you could teach somebody these tools who could then co-opt them for some serious ideological work, right?

Joe Edelman:

Yeah, yeah. No, I think that is a concern, and I worry about it sometimes. So, I think it's true that I'm less ideological than a lot of people on the left and right. I also think we're seeing exaggerated political polarization and an exaggeratedly heated ideological situation right now. Yes, because of what plays on social media, largely, but also because current systems are breaking down, and then people sort of fight, just like they did in Germany pre World War Two, right? There's the communists, the fascists; it's clear that the current thing isn't working, and so this brings out the, you know, enemies of the system on both sides. So we have that, plus we have social media and outrage and all the incentives to polarization in the media ecosystem. So I think that people are not actually as polarized as it seems, and not usually as polarized as they are today. So then, what does social good mean? I think it does mean, to me and to, like, sort of liberals of the last 100 years or something, that people should be able to do the things they like. And then the question is, okay, how exactly do you define that? I think economists usually still define it in terms of preferences, in terms of revealed preferences, in terms of behavior: people should be able to do the stuff they're doing. Which just doesn't work, because people actually regret a lot of the stuff that they're doing. They don't actually like the stuff they're doing. So then you have to have some other way of defining what people should be free to do. And "people should be able to do what's meaningful to them" is the way that I would do it. But this is not such a big departure from kind of ordinary liberalism. It's just switching from preferences to meaning. Yeah.

David Wright:

So I'm going to guess that, you know, there are limits to your pluralism, right? I mean, people can get meaning from doing some pretty awful stuff. And, I mean, I'm not saying "objective," I suppose it's hard to use that word, but most would agree that some things are real bad, you know, hurting people, torturing, that kind of stuff. People can get a sense of meaning from doing something awful to people or animals or something. So there has to be a toolkit here for evaluating values, right? I mean, the extreme is the at-scale deployment of the Chinese, you know, surveillance state. So you can see, if a communist dictatorship, using kind of older terminology, or a fascist, whatever it is, dictatorship can peer inside the minds of people and understand that this person's value system is one that's going to threaten it in the future, that's not a good piece of information for the person who's subject to that inquiry. So this is kind of one of the interesting things: I do mean it when I say this is a social technology we're developing here, and it could give ever more power to people who have an ideological agenda. I don't know, this isn't really an answerable question, but surely it's something that you think about once in a while.

Joe Edelman:

Yeah, so first of all, and I may be wrong in this, it's not something that I have a lot of empirical data on, but from my interviews with people: people's values are really good, mostly. Yeah. Like when you were interviewing your in-laws; I've done a lot of that, I've done hundreds of those kinds of interviews, maybe a thousand. And mostly I'm just like, yes, that's good stuff. That's good stuff. So this doesn't make me worried about people when they're left to their values; it makes me kind of feel like it will be great, and that there's a lot of social good that comes from freeing people from the things that keep them from doing what's meaningful to them. That's overall my kind of, maybe I'm just an optimist. And in terms of people misusing this stuff, I think it's a real danger. I think that's part of why we have to spread a kind of stance about what the world should look like along with the technology. Like, we have to spread our own kind of ideology, and it's kind of meaning libertarianism or something.

David Wright:

Yeah. Yeah, pluralism, we call it pluralism, right?

Joe Edelman:

Pluralism. Yeah. Yeah, just spread pluralism along with what we do. And I'm worried that we won't be able to do that. But that's the thing to try, I guess.

David Wright:

Well, I mean, I did a podcast with Joe Henrich, as I mentioned a minute ago, on cultural evolution. And here's the thing, and I worry about your reaction to this: I worry that there is an inherent destruction of meaning that is part of the economic growth process. So, back to this point about copying successful cultures. To me, success comes down to a kind of, you know, wealth measurement. So Tyler Cowen, on the podcast talking about his book Stubborn Attachments, says GDP-plus. So he says GDP, which is great, it's got all these great features of measuring well-being, plus some, you know, externalities. There are some things he's thrown in there; he's not too super clear about what those are, but you get the concept. He's sort of relaxing the strict definition of GDP as a measure of well-being. And I buy it, I mean, at least I buy it as a revealed preference of human societies, right? So the most successful societies tend to be the ones that are richest. And those are the ones admired by less successful societies, who, you know, haphazardly try to adopt the policies of more successful societies over time. And the way to get to successful economic growth is to coordinate huge groups of people in markets, which allows them to, you know, specialize and trade with each other and generate economic growth, right? Now, the thing that Henrich taught me was that in the olden days, 2000 years ago, something like that, we'd be living in things called kin groups, where we coordinate with our extended family and a few other people. And he tells a story, which is a little more controversial, about how the Catholic Church broke down those kin barriers through outlawing cousin marriage and stuff. So the point there was that we got a more broadly connected society. Economically speaking, we traded with people that weren't in our kin group. Yeah, we trusted them.
And so certain social mechanisms allowed us to trust these more distant people, but we have a lot less information about them. And so, my theory goes, we've lost meaning in that trade. We get greater scale, less meaning, and they are intimately linked: in order to have this scale, we have to be able to communicate and coordinate with people we will never meet. I mean, you can't even really imagine the concept of 10 million people; that's just so many people. And that's not even a very big place in terms of global scale today, right? You've got to coordinate with hundreds of millions, billions of people at once. You're never gonna know them; they're gonna be distinct people. And so you naturally wind up not really deriving as much meaning from them. The trade we've made is to reduce the local relationships and spread the coordination across a society that is inherently less meaningful. And so I worry that the trend is clear, and it means more growth, less meaning in our society. And I wonder what you think about that.

Joe Edelman:

So I think your analysis is like 100% correct, except for the word "trend." Okay? So, we did, I think, a series of trades that worked for a while, trading things like intimacy against things like economic growth and productivity. And those were good trades for a while, and on the margin we kept making, I think, fairly good decisions as a society. And you get a lot of meaning back from being wealthy. Like, in a wealthier society, right, you can organize into choirs, and more leisure, games, yes, right? You start having richer arts and richer sciences, and there are a lot of wins that come from that. And those wins are on the back of people, you know, working in factories. And you're doing a trade-off on the level of individual lives too, right? Like, an industrial life: you grind, but you get paid, and you get to use that in a way that's meaningful to you. So I actually think that wealth and meaning were kind of aligned, but we were making a kind of tough trade-off, with serious costs and serious benefits in terms of, like, wealth per meaning or whatever, the whole time. And what we've seen recently is that wealth and meaning became misaligned. And that's why we have, like, massive increases in depression, obesity, social media addiction; people are stopping marrying, they're stopping having kids, all these demographic shifts; the isolation in Bowling Alone, all this stuff, right? Political polarization. This is happening because the trade that we learned how to make as a society is no longer paying off. But that doesn't mean we should go backwards. That means we should stop making that particular trade, right? We should stay at the same level of affluence and work on getting the thing that we're no longer getting by other means, right?
Because additional wealth, or additional grind, on the level of an individual or a family, is no longer paying a meaning benefit the way it did maybe 60 years ago. 60 years ago it was like, work hard and you're gonna get all these benefits, right? Yes. And now it's kind of like, oh no, actually working an additional bit is kind of a bad deal. The wealthiest societies are no longer the most meaningful, nor are the societies that are working hardest. It's one of the reasons I live in Europe rather than the US. Europe has lower GDP per capita, but I think way more, like, meaning per capita. But the difference is small, right? It's just the last little marginal bit where the US, and I think also the UK, kind of screwed this up. And so it's not about, you know, going back to where we were 200 years ago or something. It's about unwinding the hustle just a little bit, and looking for community wins, meaning wins, other ways of organizing more meaningful things. Does that answer your question?

David Wright:

It is, yeah, it's a coherent answer. I'm not sure I buy it, though, as kind of the way things are gonna play out. Because the thing that is super powerful, that many people malign, is the transmission mechanism for behavior at scale. So, you know, the Henrich story is "more successful societies," and he's a little, I think, agnostic about the definition of success, but others will say, and I would say anyway, the wealthier societies are admired and copied. Now, we can see wealth and measure it. You know, I grew up in Canada and came down here; there were more luxury cars. My parents were remarking, I remember, that in the United States they're just cheaper than they are elsewhere, and so we buy more of them. And you're like, boy, that's cool, you can have a Mercedes, and I can get a Toyota or a Lexus to work or whatever, right? And, you know, that's a small thing, but if that gulf widens, and now you can get flying cars in the United States and you can't in Canada, right, now there's this pretty big gap. Because we chose this more meaningful path, but now we kind of feel like we're falling behind, because in the measurable, observable consequences of our decisions, we look like we're losing. We might have a better internal state, but envy will take hold, right? And suddenly we want to ditch those cultural decisions and move towards the ones that are generating the more obvious fiscal outcomes.

Joe Edelman:

Yeah, sure. So I think you're right, and we've got to be careful of that. And there are two ways I see to work against that possibility. So one is just making the meaning difference much more observable and numerical and documented. And I think you could do that with numbers: we don't have a number that's the meaning equivalent of GDP right now, but we should, and we will. If my work keeps going the way that it's going, I think we will in three years. You'd also want personal stories, and I think this goes to the other thing. So, people copy things that are packaged up. Like, I think Hollywood actually does a lot of that work; that's why so many other countries wanted to copy America. And it's not really just because we're wealthy, I mean, that's a big part of it, but it's because we have some kind of packaged lifestyle that people can see and aspire to. And this happens on all levels in society. Like, you just meet somebody, maybe they used to be a broker, and now they're a massage therapist and they're much happier or something. You meet one person like this, and it makes you think, oh, maybe I should quit my finance job or something, right? Yeah. Like, that person is so happy, right? And it works the same way when you see a movie and you're like, oh, that person, you know, looks really good on Wall Street in this movie, right? So whatever society it is that wins on meaning will have to also dominate media and export, like, packaged lifestyles in this way.

David Wright:

Well, you know what, one other piece to this is the concept of status, something Robin Hanson talks about a lot. And he studies it very deeply. I think he hates it. I don't think he would admit to that, maybe, but you really get the feeling he hates the concept of status and how it contaminates our otherwise rationalist impulses. And, you know, I asked this question of Joe Henrich, who documented as well this sort of change in the marker of status, from a dominance status hierarchy in, let's say, lizards or something (there's evidence actually that the dominance hierarchies don't even exist in chimpanzees) through to humans, where it's definitely a different kind of thing. It's not just dominance; it's much more about being able to coordinate with people. Prestige is different. And so the prestige hierarchy is the one that exists today, and it's amazing that this changed at some point in evolutionary history. I mean, Joe makes the point that it's inevitable, it's a biological mechanism. I don't know. But, you know, the thing that we're talking about here is changing the way that we measure status, because you don't mimic the wealthy person, you mimic the high-status person, right? And one of the interesting things that social media and software gives us is, like, you know, the traditional marker of prestige, and all the other descriptions that Henrich has for it, are physical: you sit in a room, and who walks in the room that everybody pays attention to, who's got the physical charisma, right? And online, charisma is different than it is in person. And you've probably noticed this: sometimes you'll talk to somebody online versus in person,
and you can have a very different impression of them from a prestige standpoint. You can be an impressive person on Zoom, and then in person it's like, oh, hi, I thought you were gonna be taller or something, right? And somebody very impressive in person can be flat on Zoom. It's like there's something else going on there. And so maybe there's an opportunity to scramble that prestige system, to create something else that we want to envy, and so mimic: a foundational structure for the way that we guide our own behavior. Any thoughts on that? I don't know, like, maybe we're talking about trying to do something new there.

Joe Edelman:

I do think so. And I think that even just the articulacy about values, or sources of meaning, that I'm talking about has an effect there. I wonder if you feel like it did with your in-laws. I don't know, maybe you can't say this on the air. But I find that often I admire people more, and I respect people more, after I discover their sources of meaning. And so, my general model is that as a culture, we get articulate and communicative about different aspects of ourselves at different times. So, you know, a couple hundred years ago, people were getting articulate about their goals for the first time, and they started talking about their goals: I want to open a library, I want to start a business, or whatever. And then, in the 20th century, people start getting articulate about their feelings; they start saying, "I feel sad today." And so I see what's coming as people getting articulate about their sources of meaning: I find it meaningful to do X, I find it meaningful to do Y. And I think this will change status. Like, already, it's impressive to see people. Let's say you just take leaders: you can interview leaders about what's meaningful about being a leader for them, you know, a Navy SEAL captain, a teacher at a high school, a product manager at a startup or whatever, and people will reveal these things that are meaningful about leadership for them. And you just start being like, wow, I would love to work with you, I would love to work for you; that leadership value that you have is inspiring to me, and it seems like it must create an amazing team. And the more visible this is, the more people will have those kinds of feelings, and it will change the structure of status.

David Wright:

How much of this is something that you can discover analytically? Because I feel like our reaction to these things is so intuitive, you know, back to the point I was making early on: we can just be talked into it by a charismatic person. Maybe Zoom charisma is different from in-person charisma, but it's still charisma; it's not like I'm reasoning myself into a value system. And then I'm seeing that, you know, if I lack the cognitive, linguistic tools to discover my own value system, I'm just gonna copy somebody else's value system. Is that better? Or, like, is your framework maybe only accessible to a small piece of the population, and then they're just gonna push it out?

Joe Edelman:

Yeah, so already, in my class, in the first month, you learn an interview technique where you can interview anybody, and I'm pretty confident that you're not projecting values into them. When you interview them, you're actually pulling out their sources of meaning. We have a whole bunch of checks in the interview technique to try to make sure that you're not, like, incepting them with a source of meaning that you wish they had, or making it up, right? There's a whole bunch of double-checking in the interview technique. So I think that's real. And then we want to make it much more scalable than that, much more accessible than that. So for instance, one person we're working with has a machine learning model where you just write a few sentences of a meaningful story, like, you know, "I was with my buddies, we went fishing," something like that. And it's like, oh, is this the source of meaning that you mean? And you click, right? So this is an experiment; it doesn't have as many checks and balances on it yet as the interview technique. But my hope is that we can build mechanisms like this that will reliably help people do the introspection, and we'll make it accessible to everybody.
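The model Joe mentions isn't public, but the shape of the interaction (write a short story, get back a candidate source of meaning to confirm or reject) can be sketched with a toy similarity match. Everything here is an assumption: the candidate sources, their descriptions, and the bag-of-words Jaccard score standing in for a real trained model.

```python
import re

# Hypothetical candidate sources of meaning with short descriptions;
# a real system would use a trained model, not keyword overlap.
SOURCES_OF_MEANING = {
    "shared adventure with friends": "out doing something with my buddies",
    "quiet attention to nature": "noticing small details of the woods and water",
    "mastery of a craft": "slowly getting better at a difficult skill",
}

def tokens(text: str) -> set:
    """Lowercase word tokens, punctuation stripped."""
    return set(re.findall(r"[a-z']+", text.lower()))

def suggest_source(story: str) -> str:
    """Return the candidate whose description best overlaps the story
    (Jaccard similarity); this is the suggestion the user confirms or rejects."""
    story_tokens = tokens(story)
    def score(description: str) -> float:
        desc_tokens = tokens(description)
        union = story_tokens | desc_tokens
        return len(story_tokens & desc_tokens) / len(union) if union else 0.0
    return max(SOURCES_OF_MEANING, key=lambda name: score(SOURCES_OF_MEANING[name]))

print(suggest_source("I was with my buddies, we went fishing"))
# "shared adventure with friends"
```

The confirm-or-reject click is where the "checks and balances" Joe mentions would live: the system only suggests, and the user decides whether the named source of meaning is actually theirs.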

David Wright:

How do you motivate them to do it? Like, this is work. There is actually quite a lot of cognitive work involved in uncovering these, for the subject, and for the interviewer for that matter, or the AI, I guess. So what are they going to get out of it? How do you talk them into it? I mean, how do you convince somebody who doesn't want to do it that they should do it, somebody who's saying, I have too many things to worry about, Joe, why am I sitting here talking to you, or going to type all these answers into an AI?

Joe Edelman:

I've never had that problem.

David Wright:

You're selecting for people who already come into your orbit, right? Your Twitter groups who are doing it with you now. But how do you get to the people who don't give a damn about Twitter?

Joe Edelman:

Yeah. So it's a useful thing to know. So there are two things. One is, this is rewarding stuff. It was probably like this with your in-laws, from the story you told: people actually love to talk about this stuff. They feel seen. Often people cry when you really name their source of meaning, especially if it's kind of alien to you, because they feel seen in a deep way by somebody who actually wouldn't have understood without this very highly structured conversation. So it's very touching, and people love it, usually. That's one way: just the intrinsic value of it. But it's also very useful in other ways. So, you know, in my course, people do this a lot to cohere with their team about the product they're building, right? Somebody goes through my course, or a couple of people on a team go through my course, and they have to agree with everybody else about what's meaningful that they want to make their product around. And so they're all kind of excited to make this kind of map of the different things that they find meaningful in their product, and where they agree and where they don't. But it's useful in other ways too. Like, one of the things that we want to do is build recommender systems, things like the YouTube recommender or a newsfeed, that know what's meaningful to you and get you that. And, you know, if you download Spotify or Apple Music, you have this onboarding experience where it says, okay, what genres do you listen to? Do you like Kanye West better, or Run DMC, or whatever? Sometimes these things take 10 minutes, and then it's like, okay, I'm ready to give you great music recommendations. And I think we can do very similar things.
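The onboarding flow Joe compares to Spotify's genre picker could look like this in a meaning-based recommender: the user picks sources of meaning instead of genres, and items are ranked by how many of those they serve. The catalog, tags, and ranking rule are all invented for illustration, not taken from any real system.

```python
# Hypothetical catalog: each item tagged with the sources of meaning it serves.
CATALOG = {
    "weekly fishing meetup": {"time in nature", "bonding with friends"},
    "speedrun leaderboard": {"mastery", "competition"},
    "community choir": {"making something together", "bonding with friends"},
}

def onboard(picks) -> set:
    """Onboarding step: the user selects the sources of meaning that
    resonate, analogous to picking genres in a music app."""
    return set(picks)

def recommend(profile: set, catalog=CATALOG) -> list:
    """Rank items by how many of the user's sources of meaning they
    serve; items serving none are dropped entirely."""
    scored = [(len(tags & profile), name) for name, tags in catalog.items()]
    return [name for score, name in sorted(scored, reverse=True) if score > 0]

profile = onboard(["bonding with friends", "time in nature"])
print(recommend(profile))  # ['weekly fishing meetup', 'community choir']
```

The key difference from a preference-based feed is the input signal: the profile is declared meaning, gathered once up front, rather than click behavior inferred silently, which is exactly the regret problem with revealed preferences discussed earlier.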

David Wright:

Yeah, I always skip that part, by the way. It's always boring; I'll just search for what I want. When you were talking about Twitter there, it made me think of how I use Twitter. I like Twitter best of the platforms, although I don't have much of a personal presence on there. Twitter doesn't like me; I've gotten over that. You start out, at least I always did, from a state of: the world around me is indifferent to me, right? So if that doesn't change, it's not like I've lost something. I just didn't become a blue check, but I was never really going to do that anyway; those people are different people from me. But I curate the algorithm. The way I think of it is: I can't click on that political bait, even though I feel the urge, because then I'm just going to get a whole bunch more of it, and I don't want that. If I could find a way to anonymously click on something and read it without Twitter knowing, I would, because what I really want is to curate the feed and have it only give me the stuff I'm looking for. That's the general way I try to teach the AI, the neural net or whatever it is, to give me stuff, as opposed to communicating in an explicit sense, like, oh, I like alternative rock. I don't know, there are some of those songs I like, but all that other stuff too? I think I just like good music. Am I really going to tell you every song and artist I like? My wife and I were recently listening to 90s rock, stuff like Nickelback.

Like, I'll repeat the one song four or five times, I don't care. But I don't want any more than that. Don't give me the Nickelback Greatest Hits, give me a break. I don't think an explicit identification process is going to capture that nuance, but I think an AI could. So I don't believe our language is actually going to convey the nuance in anything close to the amount of time I'm willing to invest in going through a UI.

Joe Edelman:

Yeah. I think we can also infer people's sources of meaning. Personally, I think the right thing to do is to check. Twitter doesn't say anything when it gives you a tweet; it doesn't say what assumptions about you it's making. The ideal thing would be: there are those little three dots right at the tweet, you could click them and ask, why did you show me this? And it says, I showed you this because I think you're interested in Nickelback and political threads. And you're like, actually no, bro, I got over my Nickelback phase. You X that thing out, and it updates the algorithm, right? Doing that would require a kind of articulacy from you. You'd need to know what it means by political threads, what it means by Nickelback. But having that articulacy, and sharing it with the algorithm, lets you double-check and stay on top of your own profile, and be conversant in what's happening instead of being a clueless pawn. And I think that's the right way to do it with sources of meaning too: we can guess them, we can infer them, we can try to make a very fluid AI-driven experience, but ideally people can also say, oh yeah, that is a source of meaning of mine, or no, it isn't, so they can make adjustments. And also just talk about it with each other.
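The feedback loop Joe describes here, a recommendation that carries its own explanation, which the user can reject to update their profile, could be sketched like this. This is a minimal illustration only: the class name, the tag-based profile, and the weighting scheme are all assumptions for the sketch, not any real platform's API.

```python
# Sketch of the "why did you show me this?" loop: every recommendation
# exposes the profile assumptions behind it, and the user can reject an
# assumption ("X that thing out") to update the algorithm.

class ExplainableFeed:
    def __init__(self):
        # interest tag -> weight inferred from implicit behavior
        self.profile = {}

    def observe(self, tag, signal=1.0):
        """Record an implicit signal (a click, a long read) for a tag."""
        self.profile[tag] = self.profile.get(tag, 0.0) + signal

    def why(self, item_tags):
        """Explain a recommendation: which profile tags drove it."""
        return [t for t in item_tags if self.profile.get(t, 0.0) > 0]

    def reject(self, tag):
        """User says 'actually no, bro': drop the assumption entirely."""
        self.profile.pop(tag, None)


feed = ExplainableFeed()
feed.observe("nickelback")
feed.observe("political-threads")

print(feed.why(["nickelback", "jazz"]))   # -> ['nickelback']
feed.reject("nickelback")                 # got over the Nickelback phase
print(feed.why(["nickelback", "jazz"]))   # -> []
```

The key design point in Joe's description is that the explanation and the correction share a vocabulary the user can understand, which is what makes the user "conversant" in their own profile rather than a clueless pawn.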

David Wright:

It's got to be there. Here's another concept, back to insurance. I've got two concepts to introduce and then we'll close. This one is about the adaptive, changing nature of my preferences. With insurance, there are some basic things you just have to have. You can't buy a policy just because this year you're worried about something, and then cancel it next year because you're not worried anymore. For the rest of the world, it doesn't matter whether it's this year or next year that the spotlight of your mind happened to land on this section of reality. You worry about this now, and off you go, and later you worry about something else. Our capricious nature really hampers our ability to stick with stuff, because we're just all over the place. Musically, I'm all over the place, and that's fine. But for things that matter in my life, I need to be more stable. I need to change that tire every time I get a freaking nail in it, and I will for a little while, but what happens in a few years? I might not. I need to be tied to the mast on certain topics, right? So I'm wondering whether values are a stable enough way of accessing that kind of permanent preference, or permanent behavioral compulsion. Or do they change?

Joe Edelman:

Yeah, they do change, although usually they change when we feel intense emotions, so they don't change that often for most people. They also change when you enter a new context. I was talking about leadership earlier, right? Maybe you get promoted or something, and suddenly you're grappling with what kind of leader you want to be, and you develop a new view on that. Maybe you read some books and think, oh, this guy's really inspiring, I'm going to be that kind of leader. That would be a time when your sources of meaning are changing, because you have a new context. So they're kind of in the middle, and it sort of remains to be seen; this is a place where my work is a little too early to know. There are probably both: capricious, short-term sources of meaning and lifelong ones. My girlfriend's taking clown workshops right now, and she just loves performing on stage. I'm not sure how long this will last, but I'm still really glad she's doing it, and I would want a recommender system to know about it and recommend her clown workshops, even though it might not be the case next year. It's still a good thing to identify. And then there are other things that will probably be sources of meaning for her entire life.

David Wright:

Well, my wife, I'll say it on here, recently got a tattoo. And, you know, there ain't much more permanent than that.

Joe Edelman:

Do you like it?

David Wright:

I do, yeah. I mean, I'm very supportive if it gives her a sense of meaning. And Joe, we were actually talking about this last night, my wife and I; she sort of vaguely cares about this hobby of mine. She got it, and we had this conversation, and it's a reference to our kids. We've got four kids. It's a pictorial image, a graphic kind of thing, but it's a reference to our children, and your progeny is another thing that's equally permanent. Once you've had a kid, that's your kid, and whatever happens, that kid is still a part of you. So that's equivalently permanent, and a deep source of meaning for most parents, probably. So there is that level of permanence, maybe even more permanent than insurance needs to be: this family stuff, these unbreakable links, the biological links you have to the folks you share genetic material with. To me, the intriguing thing about values is exactly this question, and I totally get that we'll have to see how it turns out: is it something that might match the level of persistence you'd want for something like insurance, or more generally for virtuous behavior, across the phases of your life?

Joe Edelman:

One thing I'm excited about here: if we do find out that somebody has a fairly long-lasting source of meaning, I think re-affiliation around it is a huge opportunity. It has been for me. If you change your friend group to be the people you want to be like, the people who support your best self, that's great. This is often portrayed as if there's one kind of value, one direction to go in: you should hang out with ambitious friends, you should hang out with people who have their shit together. Those are kind of the conservative talking points, but I don't think of it like that. I think each person needs to make their own profile of the kind of people who bring out their best self. For my girlfriend, it might be clowns. She should go and make some clown friends, because that's what's going to support her to blossom. And somebody else should maybe go and get some ambitious friends, because that's what's going to support them to blossom. I think we can be much more intentional and much more directed about that as a society. And I hope this will do some of the work you're talking about.

David Wright:

Okay, now, last question to end on. Tell me where you're at and what you're developing right now. You mentioned there, over the next few years, a metric that might be scalable. Aggregation of information is a tremendously powerful thing if you can get it; it's one of the reasons economic growth is so powerful, I think, because you can aggregate it all the way up and it means the same thing in dollars. Is it metrics? What's the current stage of development of the framework? And where can people learn more?

Joe Edelman:

Sure, yeah. So first of all, there's the school I run, though I'm kind of moving from the school towards building a bigger structure. The school is at sfsd.io, School for Social Design, and you can go there to learn more. There are also links from that website to this whole ecosystem: there's a textbook I wrote, a lot of material, exercises, stuff like that, all linked from that site. So that's probably the best starting point. What I'm trying to do now is build more of a movement, or even a kind of ideology, around this pluralism, this reshaping of society around meaning. The school has given us experience reshaping individual institutions one at a time, to make them more meaningful for their users, or for the people at a workplace, or some group like that. We've designed everything from very small-scale meaningful things that way, like weddings and individual rituals, up to social networks like Facebook. But that's always one at a time, and always amongst the designers of the thing, and that's kind of easy mode, in a way. If you want to change a whole society, you can't do it one at a time, and only with the people who have the power to redesign one thing at a time; that will never get you there. So it's time to think about how you turn this into more of a broad social vision, like: we should make everything more meaningful. And also how you build some of these larger structures, like the alternative to GDP I was mentioning that captures meaning, and economic systems. We want to make some marketplaces: just like organic is a sub-market of food, we'd like to make sub-markets for meaningful things, meaningful workplaces, meaningful apps, things like that. And it's actually pretty wide open right now.

I can feel, as I'm talking, that I'm being very vague compared to how I talk about values or redesigning Facebook or something like that. The reason I'm being vague is that I actually don't really know yet. But we're building a team, and the team includes machine learning researchers who will figure out how to make some of these recommenders that work for meaning-aligned people and automatically guess people's sources of meaning, and economic researchers who can help us build these metrics. So there's a research group, there will probably be some startups, and there's going to be more social-movement kind of stuff, more political or social-vision kind of stuff. Building the team for that, and the funding for that, is the next stage.

David Wright:

My guest today is Joe Edelman. Joe, thank you very much.

Joe Edelman:

My pleasure. Thank you, David.