Episodes
-
The gang from Episode 10 is back, with yet another Consistently Candid x Pigeon Hour crossover. As Sarah from Consistently Candid describes:

In this episode, Aaron Bergman and Max Alexander are back to battle it out for the philosophy crown, while I (attempt to) moderate. They discuss the Very Repugnant Conclusion, which, in the words of Claude, "posits that a world with a vast population living lives barely worth living could be considered ethically inferior to a world with an even larger population, where most people have extremely high quality lives, but a significant minority endure extreme suffering." Listen to the end to hear my uninformed opinion on who's right.

- Listen to Consistently Candid on Apple Podcasts, Spotify, or via RSS
- My blog post on suffering-focused utilitarianism
- Follow Max on Twitter and check out his blog
- Follow Sarah on Twitter
-
Please follow Arthur on Twitter and check out his blog!
Thank you for just summarizing my point in like 1% of the words
-Aaron, to Arthur, circa 34:45
Summary
(Written by Claude Opus aka Clong)
* Aaron and Arthur introduce themselves and discuss their motivations for starting the podcast. Arthur jokingly suggests they should "solve gender discourse".
* They discuss the benefits and drawbacks of having a public online persona and sharing opinions on Twitter. Arthur explains how his views on engaging online have evolved over time.
* Aaron reflects on whether it's good judgment to sometimes tweet things that end up being controversial. They discuss navigating professional considerations when expressing views online.
* Arthur questions Aaron's views on cause prioritization in effective altruism (EA). Aaron believes AI is one of the most important causes, while Arthur is more uncertain and pluralistic in his moral philosophy.
* They debate whether standard EA global poverty interventions are likely to be the most effective ways to help people from a near-termist perspective. Aaron is skeptical, while Arthur defends GiveWell's recommendations.
* Aaron makes the case that even from a near-termist view focused only on currently living humans, preparing for the impacts of AI could be highly impactful, for instance by advocating for a global UBI. Arthur pushes back, arguing that AI is more likely to increase worker productivity than displace labor.
* Arthur expresses skepticism of long-termism in EA, though not due to philosophical disagreement with the basic premises. Aaron suggests this is a well-trodden debate not worth rehashing.
* They discuss whether old philosophical texts have value or if progress means newer works are strictly better. Arthur mounts a spirited defense of engaging with the history of ideas and reading primary sources to truly grasp nuanced concepts. Aaron contends that intellectual history is valuable but reading primary texts is an inefficient way to learn for all but specialists.
* Arthur and Aaron discover a shared passion for rock climbing, swapping stories of how they got into the sport as teenagers. While Aaron focused on indoor gym climbing and competitions, Arthur was drawn to adventurous outdoor trad climbing. They reflect on the mental challenge of rationally managing fear while climbing.
* Discussing the role of innate talent vs training, Aaron shares how climbing made him viscerally realize the limits of hard work in overcoming genetic constraints. He and Arthur commiserate about the toxic incentives for competitive climbers to be extremely lean, while acknowledging the objective physics behind it.
* They bond over falling out of climbing as priorities shifted in college and lament the difficulty of getting back into it after long breaks. Arthur encourages Aaron to let go of comparisons to his past performance and enjoy the rapid progress of starting over.
Transcript
Very imperfect - apologies for the errors.
AARON
Hello, pigeon hour listeners. This is Aaron, as it always is with Arthur Wright of Washington, the broader Washington, DC metro area. Oh, also, we're recording in person, which is very exciting for the second time. I really hope I didn't screw up anything with the audio. Also, we're both being really awkward at the start for some reason, because I haven't gotten into conversation mode yet. So, Arthur, what do you want? Is there anything you want?
ARTHUR
Yeah. So Aaron and I have been circling around the idea of recording a podcast for a long time. So there have been periods of time in the past where I've sat down and been like, oh, what would I talk to Aaron about on a podcast? Those now elude me because that was so long ago, and we spontaneously decided to record today. But, yeah, for the. Maybe a small number of people listening to this who I do not personally already know. I am Arthur and currently am doing a master's degree in economics, though I still know nothing about economics, despite being two months from completion, at least how I feel. And I also do, like, housing policy research, but I think have, I don't know, random, eclectic interests in various EA related topics. And, yeah, I don't. I feel like my soft goal for this podcast was to, like, somehow get Aaron cancelled.
AARON
I'm in the process.
ARTHUR
We should solve gender discourse.
AARON
Oh, yeah. Is it worth, like, discussing? No, honestly, it's just very online. It's, like, not like there's, like, better, more interesting things.
ARTHUR
I agree. There are more. I was sort of joking. There are more interesting things. Although I do think, like, the general topic that you talked to max a little bit about a while ago, if I remember correctly, of, like, kind of. I don't know to what degree. Like, one's online Persona or, like, being sort of active in public, sharing your opinions is, like, you know, positive or negative for your general.
AARON
Yeah. What do you think?
ARTHUR
Yeah, I don't really.
AARON
Well, your. Your name is on Twitter, and you're like.
ARTHUR
Yeah. You're.
AARON
You're not, like, an alt.
ARTHUR
Yeah, yeah, yeah. Well, I. So, like, I first got on Twitter as an alt account in, like, 2020. I feel like it was during my, like, second to last semester of college. Like, the vaccine didn't exist yet. Things were still very, like, hunkered down in terms of COVID. And I feel like I was just, like, out of that isolation. I was like, oh, I'll see what people are talking about on the Internet. And I think a lot of the, like, sort of more kind of topical political culture war, whatever kind of stuff, like, always came back to Twitter, so I was like, okay, I should see what's going on on this Twitter platform. That seems to be where all of the chattering classes are hanging out. And then it just, like, made my life so much worse.
AARON
Wait, why?
ARTHUR
Well, I think part of it was that I just, like, I made this anonymous account because I was like, oh, I don't want to, like, I don't want to, like, have any reservations about, like, you know, who I follow or what I say. I just want to, like, see what's going on and not worry about any kind of, like, personal, like, ramifications. And I think that ended up being a terrible decision because then I just, like, let myself get dragged into, like, the most ultimately, like, banal and unimportant, like, sort of, like, culture war shit as just, like, an observer, like, a frustrated observer. And it was just a huge waste of time. I didn't follow anyone interesting or, like, have any interesting conversations. And then I, like, deleted my Twitter. And then it was in my second semester of my current grad program. We had Caleb Watney from the Institute for Progress come to speak to our fellowship because he was an alumnus of the same fellowship. And I was a huge fan of the whole progress studies orientation. And I liked what their think tank was doing as, I don't know, a very different approach to being a policy think tank, I think, than a lot of places. And one of the things that he said for, like, people who are thinking about careers in, like, policy, and I think sort of applies to, like, more EA sort of stuff as well, was, like, that developing a platform on Twitter, like, opened a lot of doors for him in terms of, like, getting to know people in the policy world. Like, they had already seen his stuff on Twitter, and I got a little bit, like, more open to the idea that there could be something constructive that could come from, like, engaging with one's opinions online. So I was like, okay, fuck it. I'll start a Twitter, and this time, like, I won't be a coward. I won't get dragged into all the worst topics. I'll just, like, put my real name on there and, like, say things that I think. And I don't actually do a lot of that, to be honest.
AARON
I've, like, thought about gotta ramp it.
ARTHUR
Off doing more of that. But, like, you know, I think when it's not eating too much time into my life in terms of, like, actual deadlines and obligations that I have to meet, it's like, now I've tried to cultivate a, like, more interesting community online where people are actually talking about things that I think matter.
AARON
Nice. Same. Yeah, I concur. Or, like, maybe this is, like, we shouldn't just talk about me, but I'm actually, like, legit curious. Like, do you think I'm an idiot or, like, cuz, like, hmm. I. So this is getting back to the, like, the current, like, salient controversy, which is, like, really just dumb. Not, I mean, controversy for me because, like, not, not like an actual, like, event in the world, but, like, I get so, like, I think it's, like, definitely a trade off where, like, yeah, there's, like, definitely things that, like, I would say if I, like, had an alt. Also, for some reason, I, like, really just don't like the, um, like, the idea of just, like, having different, I don't know, having, like, different, like, selves. Not in, like, a. And not in, like, any, like, sort of actual, like, philosophical way, but, like, uh, yeah, like, like, the idea of, like, having an online Persona or whatever, I mean, obviously it's gonna be different, but, like, in. Only in the same way that, like, um, you know, like, like, you're, like, in some sense, like, different people to the people. Like, you're, you know, really close friend and, like, a not so close friend, but, like, sort of a different of degree. Like, difference of, like, degree, not kind. And so, like, for some reason, like, I just, like, really don't like the idea of, like, I don't know, having, like, a professional self or whatever. Like, I just. Yeah. And you could, like, hmm. I don't know. Do you think I'm an idiot for, like, sometimes tweeting, like, things that, like, evidently, like, are controversial, even if they, like, they're not at all intent or, like, I didn't even, you know, plan, like, plan on them being.
ARTHUR
Yeah, I think it's, like, sort of similar to the, like, decoupling conversation we had the other night, which is, like, I totally am sympathetic to your sense of, like, oh, it's just nice to just, like, be a person and not have to, like, as consciously think about, like, dividing yourself into these different buckets of, like, what sort of, you know, Persona you want to, like, present to different audiences. So, like, I think there's something to that. And I, in some ways, I have a similar intuition when it comes to, like, I try to set a relatively strong principle for myself to not lie. And, like, it's not that I'm, like, a Kantian, but I just, like, I think, like, just as a practical matter, the problem with lying for me at least, is then, like, you have to keep these sorts of two books, sets of books in your head of, like, oh, what did I tell to whom? And, like, how do I now say new things that are, like, consistent with the information that I've already, like, you know, falsely or not, like, divulge to this person. Right. And I think, in a similar way, there's something appealing about just, like, being fully honest and open, like, on the Internet with your real name and that you don't have to, like, I don't know, jump through all of those hoops in your mind before, like, deciding whether or not to say something. But at the same time, to the, like, conversation we had the other night about decoupling and stuff, I think. I think there's, like, it is an unfortunate reality that, like, you will be judged and, like, perhaps unfairly on the things that you say on the Internet, like, in a professional sphere. And, like, I don't know, at some level, you can't just, like, wish your.
AARON
Way out of it. Yeah, no, no, that's, like, a. Okay, so I. This is actually, like, I, like, totally agree. I think, like, one thing is just. I, like, really, honestly, like, don't know how, like, empirically, like, what is the actual relationship between saying, like, say, you get, like, I don't know, like, ten, like, quote tweets, people who are, like, misunderstanding your point, like, and, like, I don't know, say, like, 30 comments or whatever replies or whatever. And, like, it is, like, not at all clear to me, like, what that corresponds to in the real world. And, like, I think I may have erred too much in the direction of, like, oh, that's, like, no evidence at all because, sorry, we should really talk about non twitter stuff. But, like, this is actually, like, on my mind. And this is something like, I didn't. Like, I thought about tweeting, like, but didn't, which is that, like, oh, yeah, I had, like, the building underground tweet, which, like, I think that's a good example. Like, anybody who's, like, reasonably charitable can, like, tell that. It's, like, it was, like, I don't know, it was, like, a reasonable question. Like, and we've mentioned this before, like, this is, like, I don't want to just, like, yeah, it's, like, sort of been beaten to death or whatever, but, like, I feel like maybe, like, I came away from that thinking that, like, okay, if people are mad at you on the Internet, that is, like, no evidence whatsoever about, like, how it, like, how a reasonable person will judge you and or, like, what will happen, like. Like, in real life and, like, yeah, maybe I, like, went too hard in that direction or something.
ARTHUR
Yeah, yeah, yeah. I mean, to, like, agree, maybe move on to non-Twitter, but, yeah, like, to close this loop. I think that, like, I agree that any. Any one instance of individuals being mad at you online, like, it's very easy to, like, overreact or extrapolate from that. That, like, oh, people in the real world are gonna, like, judge me negatively because of this. Right. I think in any isolated instance, that's true, but I just. I also get the sense that in the broad world of sort of, like, think tanks and nonprofits and things where, like, your position would, especially if you're, like, in a research position, like, to some degree, like, represent the opinions of an employer. Right. That there's a kind of, like, character judgment that goes into someone's overall persona. So, like, the fact that you have, like, one controversial tweet where people are saying, like, oh, you think, you know, like, poor people don't deserve natural light or something like that. Like, that. Any one instance, like, might not matter very much, but if you, like, strongly cultivate a persona online of, like, being a bit of a loose cannon and, like, oh, I'm gonna say, like, whatever controversial thing comes to mind, I can see any organization that has, like, communicating to a broader audience as, like, an important part of their mission, like, being hesitant to, like, take a chance on a young person who, like, is prone to, you know, getting into those kinds of controversies on, like, a regular basis.
AARON
Yeah, yeah. And actually, like, maybe this is, like, sort of meta, but, like, I think that is totally correct. Like, you should 100% update, like, if you're an employer listening to this. And, like, I don't know. Who knows? There's, like, a non-zero chance that, like, I don't know, maybe, like, not low. Lower than, like, 0.1% or something like that. That will be the case. And, like, no, it is totally true that, like, my. I have, like, subpar. Wait, you better. I'm gonna, like. No, no quoting out of context here, please. Or, like, not, you know, like, clipping the quote so it becomes out of context. But, like, it is, like, I have definitely poor judgment about how things will be, um, like, taken, uh, by the people of the Internet, people of the world. I, like, legitimately, I think I'm below not. Probably not first percentile, probably below 50th percentile, at least among broadly western educated, liberal-ish people. And so, yes, as for hiring me for head of communications. I mean, there's a reason I'm not. I wouldn't say that I'm not applying to be a communications person anywhere, but I don't know, it's not crazy that I would. If you want to. Yeah, you should. Like, it is, like, correct information. Like, I'm not trying to trick anybody here. Well, okay. Is there anything else that's on your mind? Like, I don't know, salient or, like.
ARTHUR
That's what I should have done before I came over here, but nothing, like, on the top of my head, but I feel like there's, I don't know, there's all kinds of, well, like, there's something you've, like, wandered into.
AARON
Yeah, like, I think you have bad cause prioritization takes.
ARTHUR
Oh, right.
AARON
Like, maybe we shouldn't just, like, have the AI versus, like, I don't know, it's like my, like, the "AI is a big deal" tribe is, like, yeah, not only winning, but, like, pretty obviously and for obvious reasons. So, like, I don't know, I don't, like, really need to have, like, the, you know, the 70th, like, debate ever about, like, oh, it's like, AI.
ARTHUR
Wait, sorry. You mean they're winning for obvious reasons insofar as, like, the victories are apparent or that you think, like, the actual arguments leading to them.
AARON
Oh, yeah.
ARTHUR
Becoming more prominent are obvious.
AARON
Yeah. Setting aside the. In the abstract, what, non, like, empirical or empirical, but, like, only using data, like, pre-ChatGPT release, like, setting aside that whole cluster of arguments, there is the fact that, like, I don't know, it seems very, very apparent to, like, the chattering classes of people who care about this stuff that, like, the Overton window around AI has both expanded tremendously and, like, also moved. It seems like AI is as big of a deal, like, as the Internet is, like, the lower bound, and, like, much, much more important than that is, like, the upper bound. And so, like, and, like, that's a. That's like, a significant shift, I guess. One thing is just, like, there have been a lot of conversations, like, in EA spaces, and, like, I'm just, like, thinking about the 80K podcast. I feel like I've heard it multiple times, but maybe I'm making that up, where it's like, one person is, like, makes the case for, like, I don't know, taking AI or, like, thinking that, like, yeah, AI broadly is, like, the most important altruistic area, right? And then the other person says no, and then they do the same, like, five discussion points back and forth.
ARTHUR
Yeah.
AARON
So, like, I don't think we should do that.
ARTHUR
Sure.
AARON
That was a really long winded way of saying that.
ARTHUR
I see. So, so, but, but you're, you're trying to emphasize that, like, the kind of, like, reality of the pace of, you know, improvement in artificial intelligence and the fact that it is going to be, like, an incredibly important technology. Like you said, the lower bound being, like, as important as the Internet. I think the upper bound is, like, I don't know, something like electricity, provided we're not gonna, you know, all die or something. Or maybe something more transformational. But I guess what you're trying to say is that, like, the Overton window has, like, shifted so much that, like, everyone kind of agrees this is a really transformative technology. And, like, you know, therefore.
AARON
Well, I guess I. Sorry, wait, I interrupted. I'm an interrupting person. I'm sorry.
ARTHUR
That's good. It's a natural part of conversation, so I don't feel bad.
AARON
Continue.
ARTHUR
Oh, oh, no, no. I just. I like, like, yeah, maybe we don't need to rehash the, like, whether or not AI is important, but I'm curious, like, what you think. Yeah, like, what do you think is sort of wrong about my.
AARON
No, I was just about to ask that, like, when I interrupted you. I actually don't fully know what you believe. I know we, like, go into different, like, vibe camps or, like, there's another. There's like, a proper noun, vibe camp. This is like a lowercase letters.
ARTHUR
Vibe camp, vibe sphere.
AARON
Yeah, yeah. And, like. But, like, I don't know, do you have, like, a thesis?
ARTHUR
Yeah, see, okay. I don't. I think in many ways, like, maybe just to lay out, like, I think my lack of a thesis is probably the biggest distinction between the two of us when it comes to these kind of cause prioritization things.
AARON
Right.
ARTHUR
Because, like, I think I, like, over the years have, as I became more interested in effective altruism, have sort of changed my views in many different directions and iterations in terms of, like, my basic moral philosophy and, like, what I think the role of EA is. And I think over time, like, I've generally just become, like, more kind of pluralistic. I know it's a bit of a hand-wavy word, but, like, I think I have sufficient uncertainty about, like, my basic moral framework towards the world that, like, this is just a guess. Maybe we'll discover this through conversation. But I think, like, perhaps the biggest disagreement between you and I that, like, leads us in different directions is just that I am, like, much more willing to do some kind of, like, worldview diversification sort of move where, like, just, you know, going from, like, a set of assumptions, you know, something like hedonistic utilitarianism and, like, viewing EA as, like, how can I as an individual make the greatest, like, marginal contribution to, like, maximizing this global hedonistic welfare function, right. I think, like, I hold that entire project with a little bit of, like, distance and a little bit of uncertainty. So, like, even if, you know, like, granting the assumptions of that project that spits out, like, okay, AI and animals are, like, the only things that we should care about. I think, like, I'm willing to, like, grant that that might follow from those premises. But I think, like, I hold the premise itself about, like, what the kind of EA project is or what I, as an individual who's, like, interested in these ideas, should do with my career at, like, sufficient, you know, distance that I'm, like, willing to kind of, like, entertain other sets of assumptions about, like, what is valuable. And, like, therefore, I'm just, like, far less certain in committing to any particular cause area. 
I think before, before we get deeper into the weeds about this, just to put, like, a sharper point on the, like, more meta point that I'm trying to make is that, like, so I think, like, I don't know if there was this 80K episode from, like, a long time ago about solutions to the Fermi paradox. And I know this sounds unrelated, but I'm gonna try.
AARON
No, no, that's cool.
ARTHUR
And one of the things he talked about was like, you know, basically, like, the Fermi paradox isn't actually a paradox if you, like, understand the ways that, like, essentially, like, when you have uncertainty in, like, a bunch of different point estimates, those uncertainties, like, when combined, should yield, like, a probability distribution rather than just, like, the headline is often, like, the point estimate of, like, oh, we should expect there to be, like, so many aliens, right? But it's like when you have uncertainties on, like, each decision, you know, like, each assumption that you're making in all of the parameters of the equation, right? Like, so I think, like, I guess to apply that a little bit to kind of my, like, sort of moral philosophy is, like, I think, like, the reason why I just am very kind of, like, waffly on my cause prioritization and I'm, like, open to many different things is just that, like, I start from the basic assumption that, like, the, you know, the grounding principle of the EA project, which is, like, we should try to do good in the world and we should try to do, like, you know, good in the world in ways that are, like, effective and actually, like, you know, have the consequences that we, we want. Right. That, like, I am very bought into that, like, broad assumption, but I think, like, I have sufficient uncertainty at, like, every chain of reasoning from, like, what does good mean? Like, what, you know, what is the role of, like, me as an individual? Like, what is my comparative advantage? What does it mean to be cause neutral, like, at all of these points of decision? I feel like I just have, like, a sufficiently high level of uncertainty that, like, when you get to the end of that chain of reasoning and you arrive at some answer of, like, what you ought to do. Like, I think I hold it sort of very lightly, and I think I have, like, very low credence on any, like, one, you know, conclusion from that chain of reasoning.
AARON
Yeah, yeah. Sorry, I cut you off too much. But, like, but, like, I think there's, like, a very, like, paradigmatic conversation which is like, oh, like, should we be pluralistic? And it's happened seven bazillion times. And so, like, I know I want to claim something different. So. Sorry. I guess there's two separate claims. One is like. And you can tell if, like, I sort of. I was sort of assuming, like, you would disagree with this, but I'm not sure is, um. Yeah, like, even if you just, like, purely restrict, um, your, like, philosophizing or, like, restrict your ethics just to, like, um, humans who are alive right now and, like, like, basically, like, have the worldview that, like, implies malaria nets. Yeah, um, I, like, think it's, like, very unlikely that, like, actually, like, the best guess intervention right now, like, is the set of, like, standard EA interventions or whatever. And, like, another, like, very related, but, like, some, I guess, distinct claim is, like, I don't know exactly. I don't. Yeah, I really don't know at all what this would look like. But, like, it seems very plausible to me that even under that worldview, so not a longtermist worldview at all, like, probably doing something related to, like, artificial intelligence. Like, is, like, checks out under. Yeah, under, like, the most, like, norm, like, normal person version, like, restricted version of EA. And, like, I don't know.
ARTHUR
I think I. Yeah, so I think I am inclined to agree with the first part and disagree with the second part. And that's why I want you to spell this out for me, because I. I actually am sympathetic to the idea that, like, under sort of near-termist restricting our class of individuals that we want to help to human beings, like, who are alive today. Right. Under that set of assumptions, I similarly think that there's, like, relatively low likelihood that, like, the standard list of sort of, like, GiveWell interventions are the best. Right.
AARON
Well, not.
ARTHUR
Or.
AARON
Yeah, yeah, or, like, I'm telling you, like, yeah, if you think. Sorry. Um, yeah, my claim was, like, stronger than, like, that. That. Or, like, what one would interpret that as, like, if you just, like, take it, like, super literally. So, like, I think that, like, um, not only ex post, like, they're not even our, like, real best guesses, like, like, an actual effort would, like, yield other best guesses. Not only like, oh, yeah. Like, this is our, like, this is, like, a minority, but, like, a plurality of the distribution, if that makes sense.
ARTHUR
Okay, then. Then I do think we disagree because I think where I was going to go from that is that I think to me, like, I'm not as informed on these arguments as I should be. So, like, I will fully admit, like, huge degree of, like, epistemic limitation here, but, like, I think my response was just going to be that I think, like, the case for AI would be sort of even weaker than those GiveWell-style interventions. So even though they're, like, unlikely to be, you know, the best, like, you know, like, ex post in some, like, future where we, like, have more information about other kinds of ways that we could be helping people. Right. They're, like, still, you know, better than the existing alternatives and.
AARON
Yeah, yeah, I'm gonna.
ARTHUR
So what is the case for, like, near termist case for AI? Like, what if you could.
AARON
Yeah, yeah. Just to, sorry. I, like, promise I will answer that. But, like, just to clarify. Yeah, so I'm, like, more confident about, like, the GiveWell charities are, like, not the ex ante best guess than I am that the better, like, one of the best. Like, in fact, ways to help only humans alive right now would involve AI. So, like, these are related, but, like, distinct, and the AI one I'm, like, much less confident in and haven't, I guess, in some sense, just because it's so much more specific.
ARTHUR
Actually, let's do both parts because I realized earlier also what I meant was not ex ante, but ex post. Like, with a much larger amount of information about other potential interventions, we might determine that something is better than GiveWell. Right. But nonetheless, in the world that we actually live in, with the information that we currently have, the evidence is sufficiently strong for impact under the kinds of assumptions we're saying we're operating under. Right. That, like, you know, other, other competing interventions, like, have a very high bar to clear. Like, maybe they're worthwhile in, like, a hits-based giving kind of way. Like, in that, like, it's worth, like, trying a bunch of them to, like, see if one of them would outperform GiveWell. But, like, for the time being, you know, whatever the GiveWell spreadsheet says at any current time, I think is pretty, like, is pretty compelling in terms of, like, you know, higher-certainty ways to help individuals.
AARON
Yeah. So, um.
ARTHUR
So, so one, I want to hear, like, why you disagree with that. And then two, I want to hear, like, your case for, like, AI.
AARON
Yeah, okay. I think I'm responding to this. Like, you can cut me off or whatever. Um, so, like, fundamentally, I want to, like, decouple. Haha. Or. Yeah, this is something I like doing, and decouple, um, the, like, uh, yeah, who we care about. And, like, um, how, like, how aesthetically normal are we gonna be? So, like, I want to say, like, okay, even, yeah, if you. If you're, like, still in the realm of, like, doing analytic philosophy about the issue. And, like, you just, like, say, like, okay, we're just, like, gonna restrict, like, who we care about to, like, humans alive right now. There's, like, still a lot of weird shit that can, like, come out of that. And so, like, my claim, I think actually, like, what's what. Maybe this is, like, somewhat of a hot take, whatever. But I think, like, actually what's happening is, like, there is, like, a, quote, unquote, like, worldview that, like, vibe associates and to some extent, like, explicitly endorses, like, only just, like, for whatever reason, like, trying to help humans who are alive right now, or, like, maybe, like, who will become alive in the near future or something. But, like, this is always paired with, like, a default, like, often non-explicit assumption that, like, we have to do things that look normal. Or, like. And to some extent you can. Some extent you can, like, formalize this by just, like, saying you, like, care about certainty of impact. I think there's, like, not even that technical, but, like, mildly, like, technical reasons why. Like, if you're still in the realm of, like, doing analytical philosophy about the issue, like, that doesn't check out, like, for example, you don't actually know, like, which specific person you're gonna help. I'm, like, a big fan of, like, the recent Rethink Priorities report. So I spent, like, five minutes, like, rambling and, like, doing a terrible job of explaining what I. What I mean. 
And so the idea I'm getting at is that there's a natural tendency to think of risk aversion, in an EA or generally altruistic context, as basically meaning: we understand a chain of causality, there are professional economists doing RCTs, and they know what works and what doesn't. And there's something there that's valuable; doing good is hard, so careful analysis really matters. But there's a tendency to ignore the fact that these GiveWell-style charities, and the GiveWell-style analysis used to identify the top charities, basically, as far as I know, almost exclusively look at just one of the most salient or intended, basically first-order, effects of an intervention. So it's just not true that we know what the impact of giving $3,000 to the Against Malaria Foundation is. Maybe there are compelling reasons to think it all basically washes out, and that reducing deaths from malaria and sickness is the absolute single core effect. But as far as I know, that mostly seems to just be taken as a given, and I don't think it's justified. So I don't think this really checks out as a type of risk aversion that stands up to scrutiny. And I found a tweet of mine with, I think, good wording: the way to formalize this conception is just to have narrow confidence intervals on the magnitude of one first-order effect of an intervention. And that's an awfully specific type of risk aversion; it's not what people generally mean in all walks of life.
And then I mentioned this Rethink Priorities report written by Laura Duffy, the first Pigeon Hour guest. She basically lists three different types of risk aversion that she uses in some Rethink Priorities analyses. Number one: avoiding the worst. This is the s-risk style or modality of thinking: the thing we really, really want to avoid is the worst states of the world coming about, and I think to many people that means a lot of suffering. Number two: difference-making risk aversion. Basically, we want to avoid not doing anything, or causing harm; the focus isn't on the state of the world that results from some action, but on your causal effect. And then finally, number three: ambiguity aversion. Basically, we don't like uncertain probabilities. For what it's worth, I think the GiveWell-style leaning can sort of be understood as an attempt to address two and three, difference-making and ambiguity aversion. But basically for the reasons I said before, there's really no comprehensive analysis there; it might seem like there is, and we do have decent point estimates and uncertainty ranges for one effect. But that isn't the core desire. I don't think anyone thinks we should intrinsically value small confidence intervals, you know what I mean? And this stands in contrast to, as I said before, s-risk research organizations, which are also, in a very real sense, doing risk aversion. In fact, they use the term risk a lot, so it makes sense.
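Aaron's "narrow confidence intervals on one first-order effect" point can be made concrete with a toy Monte Carlo sketch. This is my own illustration, not something computed in the episode, and every number in it is made up: the idea is just that even when the measured first-order effect of an intervention is tightly estimated, the spread of total impact can be dominated by unmodeled second-order effects.

```python
import random

random.seed(0)

def simulate(n=100_000):
    """Toy model: total impact = tightly-estimated first-order effect
    plus a high-variance unmodeled second-order term."""
    totals = []
    for _ in range(n):
        # First-order effect (the thing an RCT pins down): narrow CI.
        first_order = random.gauss(mu=1.0, sigma=0.05)
        # Second-order effects: zero in expectation, but wide uncertainty.
        second_order = random.gauss(mu=0.0, sigma=1.0)
        totals.append(first_order + second_order)
    mean = sum(totals) / n
    sd = (sum((t - mean) ** 2 for t in totals) / n) ** 0.5
    return mean, sd

mean, sd = simulate()
# The point estimate of total impact stays near the first-order estimate,
# but the spread is dominated by the term the first-order analysis never measured.
```

Under these made-up parameters, the standard deviation of total impact is roughly twenty times that of the first-order effect alone, which is one way to see why a narrow confidence interval on one effect is a very specific, and arguably unsatisfying, kind of risk aversion.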
The GiveWell vibe and the s-risk research organization vibe are very different, but in a real sense they're both at least attempting to address some kind of risk aversion, although the kinds are very different. And honestly, I think the s-risk one is the most legitimate. Okay, so that was sort of a lemma. Now, the case for AI for near-termists who only care about humans. Here's one example; this is not the actual full claim I have, but one example of a type of intervention: seeing what institutions need to be in place for, basically, a world UBI, and actually trying to get that policy, setting up the infrastructure to get it in place. Even now, even if you think longtermism is false, don't care about animals, don't care about future people at all, it seems like there's work we can do, in the realm of writing PDFs and affecting, not building, political institutions, via both domestic and international politics. Sorry, I kind of lost the grammatical structure of that sentence, but it seems plausible that this is actually better than the GiveWell interventions, if you do an earnest best-guess point estimate. And the reason I think this is plausible is that all the people willing to do that kind of analysis aren't restricting themselves to only helping humans in the near future. So there's a weird missing middle of sorts, which, depending on what the counterfactual is, may be bad or good.
But I'm claiming that it exists and there's at least a plausible gap that hasn't really been ruled out in any explicit sense.
ARTHUR
Okay, yeah, great. No, that's all very useful. So, setting x-risky things aside, because I think that's a useful way to get at the crux of our disagreement. It's funny: on the one hand, I'm very sympathetic to your claim that the kinds of things GiveWell-style interventions, and the RCTs coming out of development economics, are interested in aren't implied by the basic near-termist EA philosophical presuppositions.
AARON
Thank you for just summarizing my point in like 1% of the words.
ARTHUR
Yeah, yeah. So I actually strongly agree with that. And it's precisely why I'm more open to things that aren't GiveWell-style interventions, why I'm very sympathetic to the economic growth side of the perennial growth-versus-RCTs debate, all that. That's maybe an interesting side discussion. But to stay on the AI point, putting existential risk aside, I want to make the standard economist argument for AI optimism and against what you were just saying. I think it's plausible enough that increasing AI progress and the dissemination of AI technologies decreases the returns to labor in the global economy; plausible enough that we should care about it and not dismiss it out of hand. But, and I want to be careful here, I think it's potentially more likely that almost exactly the opposite is true. So, if I look at the big-picture history of global economic growth, the classic hockey-stick graph where GDP per capita for the world is totally flat until about 200 years ago: this is a super interesting, rich topic that I've been learning a lot more about over the last few years, and the devil is very much in the details. But nonetheless, I think the classic postcard-length summary of why that happened is basically correct. It happened because the productivity of individual workers dramatically increased, by orders of magnitude, due to technological progress. And to what degree that technological progress is political-institutional technology versus directly labor-augmenting technology is way too deep to get into in this discussion; I don't have well-informed takes on that.
But nonetheless, I think the basic lump-of-labor fallacy is strongly at play in these worries that AI is going to displace workers. If you look at all these previous technologies, the Luddites destroying the power looms, or they weren't really power looms, but improved looms of some kind, right? I think the worry people have always had, and again, I'm giving the standard economist soapbox thing that everyone has heard before, but I just don't see why AI is categorically different from these other technological advancements. At a glance, for me as an individual trying to build a research career and get a job and stuff, my ability to access GPT-4 and Claude has, I think, dramatically increased my marginal productivity, and would presumably also increase my wage in the long term, because I can just do a lot more in the same amount of time. So it seems to me just as, if not more, likely that the better AI technology gets, people will be able to produce more economic value with the same amount of labor, and will therefore increase economic growth and increase their wages, rather than somehow being displaced out of the labor market. And here's something EAs should maybe be paying more attention to, though maybe they're too concerned with existential risk: there's already some interesting experimental economics research looking at this question, having people who work in standard operations and middle-management office jobs use AI in their work.
And I think one of the interesting findings is that a lot of these experiments see a sort of equalizing effect: for the most productive employees at a given task, productivity is only very modestly improved by having access to large language models, but the least productive employees see very large improvements in their productivity from these technologies. So in my opinion, it seems plausible that better access to these sorts of technologies would, if anything, make your standard employee in the global economy not only more productive, but would have this leveling-the-playing-field effect, right? Where people who don't currently have the capacity to produce a lot of value are brought up to the same level as...
AARON
Yeah. So, I think these are all reasonable points. I think I have three points, I guess. On the object level, I don't think I have anything to add to this discussion. The one thing I would point out is that there seems to be, as far as I can tell, no disagreement that in principle you can imagine a system that is better, better and cheaper, than all humans at all tasks, and that does not have the effect you're talking about.
ARTHUR
Right. With no human input required.
AARON
Yeah, in principle, yeah.
ARTHUR
Okay.
AARON
Yeah, yeah. I don't think this is a radical claim. So, now moving away from the object level: okay, we've established this. The normal default thing would be to have a debate where you make some more points in the direction you just described, and I make more points in mine. But the thing I want to point out is that this discussion is absent from near-termist EA, because all the people who are taking ideas seriously have already moved on to other areas. And there was one more thing, but...
ARTHUR
To jump on that for a second. I totally take your point that maybe a lot more people should be thinking about this, right? But to me, whether that's possible in principle, and I think you're obviously going to agree with me on this, how relevant that is depends on whether we're living in a world where those systems are on the horizon, or are going to exist in the near-term future, right? And to what degree that in-principle possibility represents the actual path we're heading down is sort of the real crux of the issue.
AARON
Oh, yeah. Okay. Maybe I actually wasn't sure. Yes, because we're living in a more.
ARTHUR
Standard story, where this just increases the marginal product of labor, because everyone gets more productive when they learn how to use these technologies. And it doesn't mean it's not going to be disruptive, because there's a lot of interesting IO research on how, with the implementation of computer technologies in a lot of workplaces, it was very difficult to train older employees on the new systems, so really the only solution for a lot of firms was essentially to fire all of their old employees and hire people who actually knew how to use them. But presuming we get past the disruptive transition, where the old people get screwed or have to learn how to adapt, and the young people who grew up learning how to use AI technologies enter the workforce, it seems very possible to me that those people are just going to be the most productive generation of workers ever, accordingly.
AARON
Yeah. Yeah. Again, I guess I was about to make the same point I made before, but let me be a little clearer about what I mean by "this debate isn't happening." Maybe I'm wrong, but I'm reasonably confident that GiveWell isn't doing the thing the longtermist team at Open Philanthropy is doing, where they try to answer this question because it's really fucking important and really informs what the best near-term interventions are. And I don't want to pick on GiveWell, because maybe it's in their charter, or in some sense everybody just assumes they're going to do the econ-RCT stuff or whatever. But still, there'd be value...
ARTHUR
That would be my defense of GiveWell: comparative advantage is real, and there's value in having an organization that says, we're just not going to worry about those things. They don't even do animal stuff, you know? And I think that's a good decision. I care a lot about animal stuff, but I'm glad there's an organization that's defined its mission narrowly enough to say: we are going to do the best sort of econ-development RCT kind of work, and if you're into this project, we're going to tell you the best way to use your money.
AARON
Yeah, I don't know; in the abstract I'm pretty 50-50 on whether that's good. Actually, if anybody's deciding whether or not to give a dollar to GiveWell within EA, I'd say don't give the dollar to GiveWell. I don't think they should get any EA funding, and I can defend that. But okay, maybe set that particular organization aside. Insofar as we're willing to treat near-termist EA as an institution, my stronger claim is that this isn't happening anywhere.
ARTHUR
Yeah, well, you're right at one level; I more or less agree with you that it should be happening within that institution. But at least to me, where the discussion and debate over your broad sketch of this near-termist case for AI is really happening is in labor economics. You know what I mean? It's not that there aren't people interested in this. I just think the people who are interested, and I don't think this is a coincidence, are the people who don't think the paperclip bots are going to kill us all, right? They're the people who have a much more normie set of priors about what this technology is going to look like.
AARON
Yeah, I do.
ARTHUR
And they're the ones having the debate about what the impact of AI is going to be on the workforce, on inequality, on global economic growth. But, in a funny way, it seems like what you're advocating for is actually a much more normie research project, where you just have a bunch of economists, funded by Open Philanthropy or something, answering these questions.
AARON
I think the answer is sort of, to some extent, yeah. Actually, I don't know. I don't follow econ as a discipline closely enough, so I believe you. And clearly there's some empirical research; I've seen examples of papers thrown around. I just don't know how much research is dedicated to the question. I guess the question is: is anybody trying, with reasonable parameters, to estimate how the labor share, or the returns to labor, will change in the next five or ten years? And not just with GPT-3, not assuming that GPT-4 is going to be the status quo.
ARTHUR
Yeah, I mean, to my knowledge, I have no idea, basically. All the stuff I'm thinking of is from, shout out Erik Brynjolfsson, everyone should follow him on Twitter, but there are some economists in the IO and labor econ space doing much more micro-level stuff about existing LLM technologies: what are their effects on the knowledge-work workforce, for lack of a better word. But yeah, I grant that that's a very much more narrow and tangible project than trying to build some kind of macroeconomic model that makes certain assumptions about the future of artificial intelligence.
AARON
Yeah, and maybe someone is doing that.
ARTHUR
And, I mean, I.
AARON
No, yeah, I'm interested; people should comment or DM me on Twitter. I mean, I think we're just in agreement. I have some pretty standard concerns about academia and its incentives, which have also been rehashed everywhere. But it's an empirical question that we both agree is an empirical question we don't know the answer to. I would be pretty surprised if labor economics has a lot to say about it, because, I guess the claim I'm making is that the class of research where you look at how ChatGPT affects the productivity of workers in 2023 or 2024 is, I mean, not zero evidence, but really not very strong evidence about what the share of labor income will be in five to ten years. And I think it's relevant that the people actually building this technology think it's going to be transformative; at least as far as I can tell, it is broadly a consensus opinion among people working on frontier AI systems that it's going to be more transformative, or substantially more transformative, than the Internet, probably beyond electricity as well. And if you premise on that assumption, I would be very surprised if there's much academic labor economics that really has a lot to say about what the world will be like in five to ten years.
ARTHUR
Yeah, I was just going to say that I'm sufficiently skeptical that the people working directly on these technologies are well positioned to make those kinds of forecasts. I'm not saying the labor econ people are better positioned than them to make those forecasts, but...
AARON
No, that's totally fair. Yeah, that really is fair.
ARTHUR
Also, I think some of this is coming from a prior that I definitely should completely update after the recent post-GPT-3 explosion of these technologies. But I just think, if you look at the history, and I'm not saying I endorse this, but if you look at the history of AI, not optimism per se, but enthusiasm about the pace of progress and all that: historically it had a many-decade track record of promising a lot and failing to deliver, which was only very recently falsified by GPT-3 and...
AARON
I mean, I think this is basically just wrong. It's a common misconception, and totally reasonable; it's what I would have thought, it seems like the kind of thing that happened. But I'm pretty sure there have been some actual looking-back analyses. It's not that there are zero instances, but it is not the case that the same level of AI enthusiasm has persisted forever, and that only now are we getting results that maybe justify it. Hmm, what am I, sorry. The actual thing I'm trying to say here is that I basically think this is just not true.
ARTHUR
Meaning, the consensus was that...
AARON
Like, people didn't think AGI was ten years away in 1970 or 1990.
ARTHUR
Well, I mean, some people did. Come on.
AARON
Yeah. So I can't.
ARTHUR
You mean just that the consensus of the field as a whole was not like that.
AARON
So, this is the problem with arguing a cached opinion. My cached opinion is: I've seen good, convincing evidence that the very common-sense thing, "oh, there's always been AI hype," is at least misleading and more or less wrong. But I don't actually remember the object-level evidence for it, so I can try to, yeah, that's fine.
ARTHUR
And, to be clear, I also don't have a strongly informed take on the "AI hype is overblown" thing. But putting that aside, the other thing I'd wonder is: even if individuals who work on these technologies correctly have predictions about the future that are pretty outside the window, or that people aren't taking sufficiently seriously in terms of what they think progress is going to be, and maybe this is some lingering, more credentialist intuition of mine, I'm skeptical that those same people would also be in a good position to make economic forecasts about the impacts of those technologies.
AARON
Yeah, I basically agree. I guess the weak claim I want to make is: you don't have to put that high a percentage on "maybe these people are broadly right," it doesn't have to be above 50%, to think, per my original claim, that standard labor economics as a subfield isn't really doing a ton to answer the core questions that would inform my original question: is UBI a better use of money than the Against Malaria Foundation or whatever? Maybe I'll be pleasantly surprised. But we could also, I don't know. Do you want to move on?
ARTHUR
Yeah, yeah, sure.
AARON
Sorry, I didn't mean to cut you off. You can have the last word on this.
ARTHUR
No, no, I don't think I have the last word. I mean, it's funny how this has progressed, in that I don't completely disagree, but I also don't feel like my mind has been changed in a big way, if that makes sense. Maybe we're in one of those weird situations where we kind of broadly agree on the actual object-level questions, but there's some slight difference in personality or disposition, or some background beliefs we haven't fully fleshed out, such that, at least in terms of how we present and emphasize our positions, we end up in different places even if we're not actually that far apart.
AARON
No, something I was thinking about bringing up earlier was basically this point. My version of your defense of the GiveWell class is my defense of donating to the Humane League or whatever, and maybe it doesn't check out; I don't know. Sorry, I just did a bunch of episodic jumps in my head, and I always forget that people can't see my thought patterns on the podcast. It seems pretty possible that a formal analysis would say that even under a suffering-focused worldview, donating to s-risk prevention organizations beats, or at least beats, for example, the Humane League or the Animal Welfare Fund, which we recently raised funds for.
ARTHUR
Do you want to talk about... so, there are many things we could talk about. One potential thing that comes to mind: I have a not very well worked out, but sort of lingering, skepticism of longtermism in general, which I think doesn't actually come from any philosophical objection to longtermist premises. So I think the...
AARON
Yeah, I think.
ARTHUR
I don't know what you want to talk about.
AARON
I mean, if you really want. If you're, like, really enthusiastic about it.
ARTHUR
I'm not.
AARON
Honestly, I feel like this has been beaten to death on 80K, in Cold Takes, all over. Sorry, I feel like we're not going to add anything; I'm not going to add anything, either.
ARTHUR
Okay. I don't feel like I would.
AARON
I mean, we can come back to it. Another thing: this doesn't have to be super intellectual. We could talk about climbing; we've talked about having a whole episode on climbing, so maybe we should do that. Or anything, really; it doesn't have to be these super serious topics.
ARTHUR
No, no. That was something that came to mind too, and I was like, oh, the longtermism thing. But it would be fun to just talk about something much less related to any of these topics. And in some ways, given both of our limitations in terms of contributing to these object-level EA things, and that's not a criticism of either of us, just a statement about our knowledge and expertise, it could be fun to talk about something more personal.
AARON
Yeah. I don't know, what's interesting to you?
ARTHUR
I'm trying to think whether we should talk about some other area of disagreement. This is random, and maybe we'll cut it from the podcast, which is a weird thing to say, but I feel like Laura Duffy is one of the few people I've met where we just have a weird amount of the same opinions on many different topics that wouldn't seem to correlate with one another whatsoever. And it's funny, I remember ages ago listening to y'all's discussion on this podcast and just being like: God, Laura is so right. What the fuck does Aaron believe about all these things?
AARON
And I'm willing to relitigate some of it, if it's something that hasn't been beaten to death elsewhere.
ARTHUR
So I think we should either talk about something more personal, like rock climbing or something, or we should...
AARON
Now I have to defend myself. You can't just say that, you know. Yeah, was it the "old philosophy is bad" take?
ARTHUR
Old philosophy.
AARON
Old philosophy is fucking terrible. And I'm guessing you don't like this take.
ARTHUR
I do not. Well, I find this take entertaining, and, this totally sounds like a huge backhanded compliment, but I actually think it's super useful to hear something you think is so deeply wrong, when you'd otherwise take the opposite for granted because you surround yourself with people who also think it's deeply wrong. So it's actually very useful and interesting for me to understand why one would hold this opinion.
AARON
Also, I guess I should clarify. This is the kind of thing that's kind of part of my vibe disposition, and it's not the most high-stakes thing in the world to talk about in the abstract. So when I said it's fucking terrible, I was being hyperbolic.
ARTHUR
Oh, I know.
AARON
I know. No, but in all seriousness, or not in all seriousness, just without using any figurative language at all: there are definitely things I'm much more confident about than this. I wouldn't say I'm at, like, 90%.
ARTHUR
I'm, like, pretty open to being wrong on this. Like, I don't think I have, like, a deep personal vested stake.
AARON
Yeah, no, I don't think.
ARTHUR
It's just, I think... okay, actually, maybe an interesting topic that we are by no means experts on, but that could be interesting to get into: I think a lot of the debates about the role of higher education in general are somewhat hard to separate from these questions about old texts. And I'm of two minds on this. On the one hand, I buy a lot of the criticisms of the higher-ed model, and this general story, which is not novel to me in any way, shape, or form: we have this weird system where the American university system comes in a lot of ways from the British university system, which, if you look at it historically, was sort of a finishing school for elites, right? You have this elite class of society, and you go to these institutions because you have a certain social position, and you learn how to be an educated, erudite member of the elite class in your society. There's no pretense that it's any kind of practical, skills-based education that'll prepare you for the labor force; you're just learning how to be a good member of the upper class, essentially, right? And that model was very successful and, I think, in many ways actually important to the development of lots of institutions and ideas that matter today. So it's worth taking seriously, I suppose.
But I think there's some truth to asking: why the hell is this now how we certify and credential people, in a more meritocratic world with more social mobility? Why this liberal arts model where you go to learn how to be an erudite person who knows about the world and the great texts of the Western tradition, or whatever? I think there's something to the sense that this whole thing is weird, and if what college is now supposed to do is train people to be skilled workers in the labor force, we ought to seriously rethink it. But at the same time, I do have some emotional attachment to the more flowery ideals.
AARON
Of the liberal arts? Oh — you know, I'm sorry.
ARTHUR
You're good, you're good. So I think that could be interesting to talk about, because in some ways it's very related to the old-texts thing. Some of my attachment to the idea that we should read these great works from the past is very difficult to cash out in terms of some EA-style or otherwise practical, concrete value that one gains from it. More of it is my lingering intuition that there's something inherently enriching about engaging with this tradition of knowledge.
AARON
Oh yeah, I think we probably agree on this way more than you suspect — more than the things you just said might imply. Relative to what you might guess knowing my other views, I'm sort of anti-anti-flowery-liberal-arts; I'm sympathetic to a bunch of ideas in that space. But to be concrete about the philosophy thing: my claim is that if what you personally want to do is read and think about philosophy — take that as a given, and look for the object-level content, the ideas and stuff — then on the merits, it's fucking terrible.
ARTHUR
This we totally disagree about.
AARON
Okay.
ARTHUR
Yeah, okay.
AARON
Yeah, so we can double-team convincing the audience that, no — flowery liberal arts stuff is probably good.
ARTHUR
I think it might be more interesting to talk about what we disagree about, then.
AARON
Yeah.
ARTHUR
Which is, like.
AARON
I mean, yeah. So do you have an example? Because it seems plausible that we just disagree on the merits about philosophy, and it's being hashed out in this weird way about old philosophy in particular. I don't know — Aristotle seems wrong about everything, and his reasoning doesn't seem very good. And — I mean, this is a hobby horse of mine; I don't need to beat a dead horse too much — but Kantian ethics is just totally... not incoherent, it's just so obviously wrong. This is a hobby horse of mine on Twitter too, but I'm truly willing to say that I have not identified a single reasonable person who will bite the bullets on what Kantianism actually implies. People say they're Kantians, say they have Kantian intuitions, but Kantian ethics is just absolutely batshit insane. Right, sorry.
ARTHUR
Okay. Well, I'm not going to defend Kantian ethics, but a few things. One — since Kant is maybe an interesting example — granting that Kantian ethics is weird and incoherent and no one is actually willing to bite those bullets, it's kind of funny to me that you chose this example, because Kant has this incredibly large and rich body of work that goes far beyond—
AARON
Yeah, totally — beyond his ethical view.
ARTHUR
Right. And reading the Critique of Pure Reason, or the lesser-known Critique of the Power of Judgment — his stuff on aesthetics — and his metaphysics... as utterly frustrating as it is to read Kant, what I'm trying to say is that Kant is a bad choice for the claim "old philosophy is bad because at the object level it's wrong," because I think a lot of Kant's work is not obviously wrong at all.
AARON
Okay, yes. So part of this is that I'm really not familiar with most of it — I'm totally not claiming anything there; I just don't have an opinion on most things Kant wrote. What I am willing to claim, though, is that even if he's right, or broadly right, or whatever — I guess there are a couple of claims here. One is the more object-level claim, which restates what I said before: I think broadly there's a correlation between time and takes getting more correct. But also, the whole idea of reading foundational texts just doesn't make a lot of sense if what you care about is the content of the philosophy.
ARTHUR
Okay, good. So yes, I think this is where we really disagree. And this actually still ends up being related to the flowery justifications for a liberal arts education. On the one hand, I totally agree when it comes to philosophical pedagogy for intro-level courses, because the vast majority of people taking an intro philosophy course are not going to be professional philosophers — they're probably not even going to take a second philosophy course. So I'm sympathetic to the idea that secondary sources that more succinctly and accessibly summarize the ideas of old philosophers may be better suited than this sort of weird valorization of the primary text. I think that's probably true. Nonetheless, if you buy into these liberal arts ideals to any degree, I think an inextricable part of that project is engagement with a historical tradition. Well, maybe let me separate two things. Claim one: insofar as you're interested in ideas at all, you should be interested in the history of ideas. And if you're interested in the history of ideas, then reading primary sources actually does matter, because it's very hard to understand how ideas are situated within a long-running historical conversation — how they're products of the cultural, social, and historical context they came out of — without reading them in the original, or in a translation of the original. So that's claim one, the history point. But even at the object level, if you want to go past a Philosophy 101 introduction to certain historical ideas — I think Laura tried to make this point in the podcast, and this is where I was like: yes, Laura, yes.
AARON
Like, get him.
ARTHUR
You know, I think she was trying to make a point about translation — she said something about wisdom, or eudaimonia, one of these concepts from Aristotelian ethics. She made the point, which I want to reiterate, that when you abstract these ideas from the context of their original text, it's much harder to understand particular concepts that aren't easily amenable to translation, either into English or into contemporary vernacular ideas. Part of why this seems more obvious or more true to me is that a lot of what I studied as an undergraduate was the Buddhist tradition. Especially when you step outside the ideas that directly influenced later developments in Western philosophical thought, and you go to a very different culture and tradition, it's obvious to me that if you want to understand Buddhism and Buddhist ideas in English translation, you have to leave certain terms untranslated, because certain things have a very complicated semantic range that does not neatly map onto English concepts. And when that's the case, it's much harder to understand those terms and concepts from an accessible secondary source, and much easier to get a sense of a term's true semantic range when you read it in its original context. Yeah — I can give an example—
AARON
If you want. But — a couple of things. One is there's a self-reinforcing... sorry. I think some of the justification you just laid out, even if true, is sort of begging the question. Or — I think "begging the question" is a terrible phrase, but actually, yeah, this is sort of relevant: I think we should change "begging the question" to just mean what it actually sounds like.
ARTHUR
Yeah, yeah — because it's a useful concept, but it's so easy to misunderstand if you don't know the jargon.
AARON
Yeah, okay. No — so what I actually mean is that it's somewhat circular. It should be—
ARTHUR
—called just "assuming the conclusion," or something like that.
AARON
Yeah. So if the state of the world is such that you'll be expected to know what Aristotle's ideas are in a meaningful, content-sensitive way, then... I guess what I mean is: once you've accepted the premise that there is virtue in reading old stuff, then yes, sometimes the original words are indeed good at helping you understand the old stuff, or whatever structure is there. Separately, I am not against philosophers coining terms. I'm pretty happy for a contemporary analytic philosopher to — not even out of respect, but out of tradition — sort of co-opt or recycle a term like eudaimonia, and then give as good a definition as they possibly can. It won't even be a fully explicit definition; it'll also be pointing at examples — "that's a central example of eudaimonia," et cetera. But they don't want to repeat an entire chapter trying to define the word every time they use it, so they use it as a variable. So yeah, I'm not against analytic philosophy coining terms and using them that way. And if we want to do a nod to history by using old terms from another language — Latin here and there — that's cool. Those are my points. Wait — what was your first point?
ARTHUR
Well, I was just saying — okay, let me maybe offer a more full-throated defense, since you were saying it begs the question to some degree. I think that's true. But to me, part of the "ideas for their own sake" vibe, or inclination, or whatever, is that history ought to matter, right? I take your point that it's question-begging — that if history matters, then you have to read the historical texts to understand the history. Right.
AARON
No, no — but there's a separate point here.
ARTHUR
Okay.
AARON
Yeah, sorry — brief interruption. I think this is substantively different, and I think I just disagree. Sorry — I think we're both paying lip service to the term "flowery liberal arts," but my conception of it is seemingly different from yours, and yours is more history-based. So — sorry, keep going.
ARTHUR
Yeah. Okay, well, maybe I'll defend that a little. This is one of the ways in which I feel not vibe-associated with EA, even though I love EA in terms of the merits of its ideas: I think a lot of people with that EA, SBF, "every book should be a six-paragraph blog post" orientation are really, really mistaken about something. People cloak themselves in this rhetoric of epistemic humility that's all about confidence intervals and credences and how sure you are about your beliefs, but they miss the meta-level point, which is that that entire way of thinking is totally alien to how humans have thought about many topics for most of human history. It would be really weird — weird in the social-psychology sense — if we Western, educated, industrialized, democratic citizens, who went through the hoops of getting super educated or whatever in the 2020s, happened across the correct higher-level framework for investigating questions of value and morality and ethics. I think people should just be a lot more humble about whether that project itself is justified. And that leads me to: history matters not just for its own sake, but for doing philosophy in the present — if you think we ought to have a lot of humility about whether our current frames of thinking are correct.

And doing the almost postmodern, Foucauldian-deconstruction thing of just pointing out that those frames are really historically contingent and weird, and have a certain lineage — that there's a story to how we arrived at this set of assumptions — I think people should take that move seriously. That move exposes that a lot of what we think of as natural and correct, as just the best way to come to our views of the world, is in fact a product of our time and era and circumstance and culture. And if you take that really seriously, you become much more interested in history, and in the actual genesis of the ideas we take for granted. I know I've been rambling for a little bit, but to put a sharper point on this with a particular issue — we don't have to discuss it at the object level, though maybe that would be interesting — take the whole debate over free will. A lot of the discussion of free will takes for granted the idea that we intuitively view ourselves as a solitary, unified self that can act in the world and make free, agential decisions; and that if our scientific image of ourselves as naturally evolved creatures — products of evolution, governed by the same laws of physics as everything else — is right, then there's this fundamental metaphysical problem: it seems like we are determined beings, yet in our personal experience it feels like we are choosing beings, somehow outside the causal nexus of scientific processes.

I use this as an example because there's some interesting work in cross-cultural philosophy suggesting that this framing is actually the product of a Christian intellectual heritage trying to solve a certain theodicy problem: how can we be viewed as moral agents who make decisions worthy of salvation and damnation when we live in a world governed by a benevolent God? It's about reconciling God's benevolence and omnipotence with the idea that we as humans are free agents who can make good and bad decisions — because if God were truly just, why wouldn't he simply make us make all the right decisions? I'm not presenting the problem very well, but there's this philosopher, Jay Garfield, who does a lot of work on Buddhism, who basically argues that this whole framing of the free will debate comes from a Christian cultural and intellectual legacy. If we actually want to understand the problem of free will, it's useful to know where this framing of the problem came from and how other cultures and traditions have thought about it historically in very different ways — the Buddhist tradition being the example in this case, which just doesn't seem to think there's a there there, so to speak. There's not really some kind of problem to be solved, because they start from a different set of metaphysical assumptions. I think that's interesting, and it should matter when we're trying to answer questions about what we should value and how to be good, and all these things.
AARON
Okay, you're really taxing my working memory — wait, sorry, that was supposed to be lighthearted. A lot there. I think the most fundamental point is: I disagree. I reject that primary texts are generally a good way of doing the good thing you just argued for, which is learning about intellectual history. I think intellectual history is great, and a terrible way to do it — at least if you're a finite being, like all beings are, with finite time and mental energy — is to read the primary texts. I'm not going to say never. My claim is: for intellectual history, you're better off reading a book about the texts, where the texts — or rather the ideas — are the object of study. It's just not very efficient to read primary texts. Now, if you're for whatever reason really interested in one specific sub-area of something — yes, if you're interested in a very specific slice of intellectual history, the history of ideas, then I agree, that is a good reason to read primary texts. That is very uncommon.
ARTHUR
Sure. Yeah, I want to respond to that, but I realized — hey, maybe you can edit this in or something — I slipped out of conversation-in-a-podcast mode and thought, fuck, I did a terrible job of explaining the theodicy problem as the origin of the free will debate. The theodicy problem is the problem of evil: how do we reconcile God's omnipotence and benevolence with the existence of evil? The specific free will move is then postulating this libertarian — not in the political sense — free will, which comes from Christian theologians saying: this is how we get around the problem — humans have been endowed by this omnipotent God with the ability to make choices. Why a benevolent God would want us to make choices is obviously still a deep problem; I don't really understand how you would possibly arrive at that place, but something about how we don't fully understand God's plans — it's all very mysterious, yada yada. Anyway, going back to what you just said: again, I'm inclined to agree at some level that for many people in many circumstances, reading the primary text is a very inefficient way to learn about these ideas. I totally agree. But in some ways, the farther back you go in history, the less true that is, in a weird way. And the reason is the language-translation thing I mentioned earlier. To bring up Buddhism again, because it's useful here: one of the central ideas in Buddhism is dukkha, which often gets translated as suffering. So you hear the first noble truth translated as "life is suffering," or "is pervaded by suffering." But dukkha doesn't really mean suffering, because — surprise, surprise — ancient Pali words aren't easily translatable into English.

Maybe something like "unsatisfactoriness" would be a better translation, but the word is much more subtle and multifaceted than that; there isn't a single English word it maps well onto. And I understand that some really good scholar who's a good writer could maybe give a few paragraphs about what dukkha really means. But over years of study, I felt I got a much better sense of it by reading a lot of the original texts and seeing where—
AARON
—the word came up? In English, or in—
ARTHUR
Yeah, in English translation, but with the technical terms left untranslated, right? And it was only through understanding that context that I felt I could wield the word appropriately. This comes back to my more Wittgensteinian view of what language is: I don't think every term has some correct set of necessary and sufficient conditions. My account of language would be much more — excuse me, my throat — would be much more social-practice- and usage-based: meaning is defined by use in actual language. And once you grant that premise, given how much language has changed and evolved over the years, it's hard to see how a skillful secondary-source interpreter is going to clearly lay out "this is what eudaimonia means" in three paragraphs. If you hold this philosophical view about what language is, it becomes much more necessary to actually understand how a term is used in its original context in order to grasp its meaning. I just think there are limits to how much secondary sources can really tell you.
AARON
I somewhat agree. The thing I agree about: yes, if your goal is to get the most genuine, unbiased — and I'm actually thinking unbiased in an econometric sense, where you want a true point estimate of the real meaning at the heart of the Wittgensteinian, multidimensional linguistic subspace or whatever — if you want to accurately target that, then yes, it's probably best to read the original texts. But there's the question of what a really good attempt would look like at defining it — not in three paragraphs, but maybe in half the length of the total corpus, in the terms of analytic philosophy. How close could you get to an unbiased point estimate of what dukkha means? I don't really know the answer. I suspect that you don't—
ARTHUR
I like your putting this in econometric terms, because it gives me an easy way to conceptualize how we can converge on agreement. I think we both agree that to get the least biased estimate possible, you need the largest sample size, which in this case just means reading the entire text. But there's going to be some optimal trade-off between how biased your estimate of a particular term's meaning is and how much time you have to invest in actually reading those primary texts. So maybe there's some optimal mix of extended excerpts and secondary exegesis that finds the correct balance for any given person at a certain level of interest and time commitment. So yeah, it's not a hard-and—
AARON
—fast rule. Yeah. No, mint tea is perfect. It's lukewarm — do you want it hot or cold or lukewarm?
ARTHUR
Lukewarm is fine.
AARON
Okay.
ARTHUR
I've already way overdone it on caffeine for the day, so I was so tempted by the tea — but mint tea, that's perfect.
AARON
Yeah.
ARTHUR
I think a funny thing about — or, I don't know, also on the vibes—
AARON
Yeah.
ARTHUR
—like, how EA might... I think this is loosely related to the other vibe-based way in which I feel different from a lot of the more rat-type EA people.
AARON
Yeah.
ARTHUR
—is that I feel like there's a certain philistine — how the fuck do you say that word? philistinism? — vibe.
AARON
I don't actually know what that means.
ARTHUR
A philistine is one who eschews arts and aesthetics — who doesn't care about literature, music, cultural stuff like that. And I feel like that's another vibe-aligned way in which I'm like, what the hell are these people doing? I just feel like there's a lot of philistinism in rat culture.
AARON
I feel like that's more a stereotype, I think. I think there is a — sorry, sorry, keep going.
ARTHUR
No, it's good. Because, I don't know, this all sounds so incredibly pretentious to say, but I do think there's a grain of truth here: when people really, really strongly believe that the world is going to end in ten years, or that the goal of their life ought to be maximizing their personal contribution to global utility, I think that naturally leads you to care a lot less about these other things.
AARON
Yeah, maybe. I mean, I think that's a stereotype — it's what you would think if the only thing you knew was "oh, there's a crazy group of people who call themselves the rationalists." By and large, people are mostly pretty normal. On the other hand, it's true — and this is actually something I've been thinking about about myself — that I'm not a total philistine. In fact, I think music is a pretty important part of my life. Not in an interesting way, just in the sense that I like music and listen to it a fair amount. I think Spotify is great, even though I have very basic tastes. But besides that, I think I have really weak aesthetic intuitions, and I was thinking about whether that means I'm slightly autistic or whatever.
ARTHUR
Yeah, this just reminded me — we can cut this out or whatever, but this fucking Liam Bright tweet was so awesome. I just thought of it because one of the things I find funny about rationalists calling themselves rationalists is that it's kind of—
AARON
—kind of pretentious or something.
ARTHUR
Yeah, I mean, it's pretentious, but also the philosopher in me is so annoyed that they've co-opted a term that has a very specific meaning in epistemology. You know what I mean? But this tweet is so funny — it's something like: the argument for rationalism is that its practitioners have easily the best track record of success as scientists, whereas the argument for empiricism is that, when you think about it, rationalism is kind of wacky bullshit.
AARON
I'd need to be way more philosophical to truly appreciate that.
ARTHUR
Yeah, yeah. Well, the joke is that if you look historically, a lot of the early modern, sort of Renaissance-era people like Descartes — it was the rationalists who did all the good empirical science.
AARON
I didn't know that.
ARTHUR
And then the empiricists — most of their critiques of rationalism are, well, from first principles, rationalism doesn't really make a lot of sense, you know? So, I don't know — it was just a good joke, the irony of that, honestly.
AARON
I think you just know way more about intellectual history than me — for better or worse. The worse part, which I was sort of half planning to say, is that there was probably some opportunity cost there — but maybe not; some people just know way more than me about everything. Totally possible. You were a philosophy major, right? Okay, so that's my one out: I wasn't. I was a philosophy minor. Big difference, I know.
ARTHUR
Nice.
AARON
In fact, actually, that's sort of endogenous — ha ha — because one of the reasons I chose not to do the major was that I didn't want to do the old philosophy stuff. You had to—
ARTHUR
—do some history requirement or—
AARON
Yeah, a couple of things like that — maybe two or three classes of, basically, old bullshit. Sorry — I don't mean to reopen that. Also, I'm happy to go... actually, I'm not happy to go for literally—
ARTHUR
As long as you want.
AARON
I'll, like, conk out sooner than that, I guess. But, like, yeah, just, like, saying, like, you don't have to. You don't have to, like... no, I'm not a slave here.
ARTHUR
I'm having a good time. Okay.
AARON
Yeah. Okay. I feel like I actually don't really want to talk about climbing, in part because I... I'm just... yeah. Honestly, like... so I, like, made this clear to you. Maybe I'll cut this part out. But, like, yeah, I'm not, like, doing amazing, and there's, like, a lot of baggage there. Not that I think anything would, like... it's really not, like, anything deep, or, like, "oh no, I'm gonna, like, handicap myself for life by, like, having a conversation about climbing." I just, like, honestly, like, don't really feel like it right now. That's okay. Yeah.
ARTHUR
We can call it. Or if there's some, like, I don't know, some, like, very light hearted topic we could conclude on.
AARON
Um. I don't know. I'm trying... like, I'm sort of drawing a blank. There's, like... oh, the opposite: there's, like, too many options. Like, one thing that, like, ran through my head was, like, uh... like, I don't know. Like, what's your, like, life story? Slash, like, is there anything in your... we've been talking about ideas, like, not. Not ourselves.
ARTHUR
Like, right.
AARON
Yeah. I don't know. Is there an interesting... do you want to tell your whole life story, or maybe a sub-part, or just, like, a single episode that's, like... yeah, I don't know. We can also just... yeah, this is just, like, one of, like, I don't know, N thoughts.
ARTHUR
Yeah. Yeah, I think, like, how do I even summarize? It's always funny when people... this is so mundane. We can just cut all of this out if it's really boring and mundane, but, like, one of the things I've noticed is, when, like, I talk to people and people ask, like, where are you from, like, I feel like... I, like, moved when I was in middle school, so, like, I never know exactly how to answer the question of, like, where I'm from, because it's not like I'm some, like, you know, like, diplomat's kid who, like, moved around my whole life. It's more just, like, I, like, grew up in, like, the, like, San Francisco Bay Area, like, until I was twelve and then, like, moved to Denver. So it's like... but yeah, I feel like I don't.
AARON
Have an answer for you because I'm really boring. I basically lived in DC forever.
ARTHUR
Yeah, but, you know, I think Denver was a really nice place to grow up. I feel like this inherently gets me down the climbing path.
AARON
No, that's fine. There's no like, there's no like oh big, like there's no like trauma or whatever.
ARTHUR
But I feel very lucky that, like, my, like, parents are very, like, outdoorsy people. Like, my mom was, like, a professional ski racer for some time. Like, she grew up, like, skiing a lot. And then I had an uncle who... both of my parents grew up in the Denver area, so, like... and have a bunch of siblings, so, like, almost all of our extended family, like, lives there. And one of them, my dad's brother... like, their parents weren't, like, really that outdoorsy or, like, rock climbers or anything, but he, like, loved rock climbing, and when he was 18 he, like, went to CU Boulder and, like, started climbing and stuff. So, like, yeah, I feel like, for my, like, life story, the big thing for me was just, like, having a sort of, like, mentor who had, like, been, like, rock climbing for a long time, like, well before the age of, like, gyms and stuff, and was just, like, very experienced in the outdoors. And, like, you know, after, like, a few times of going to the climbing gym when I was twelve, like, I went, like, to Eldorado Canyon and he, like, took me up some, like, big, like, multi-pitch, like, routes and stuff. And I feel like just, like, I don't know, having that experience in the outdoors was, like, so formative for me, because then, when I, like, got a certain level of, like, experience and competence and was able to go do that on my own... like, I had a friend who... his dad was a rock climber, and he kind of had a somewhat similar background of, like, doing it when he was little, and then we were both at the level where we could, like, safely, like, sport climb on our own, and I got, like, quickdraws for, like, my birthday, and then, like, you know, just, like, started going rock climbing with him. And I feel like a lot of my, like... in high school. I don't know. In high school, like, I don't know if this comes off as, like, surprising or unsurprising, but I was, like, actually a fucking terrible student for, like.
AARON
No, that's surprising. Yeah, that's surprising because you're... I don't know, for the obvious reason: you're, like, obviously smart. And, like, usually being smart correlates with not being a terrible student.
ARTHUR
Right. Yeah, but, yeah, I was like. I feel like. I don't know, to be honest. Like, kind of the classic, like, underachieving, like, smart kid for, like, most of my life. Like, I just, like, something about the. Just, like, discipline and, like, having to learn all this stuff that, like, wasn't that interesting to me. I feel like I had the very stereotypical story of just kind of, like, blowing off school a little bit and, like, I kind of got my shit together, like, halfway through high school because I was, like, you know, very much had the expectation, like, I'm gonna go to college and, like, realize, oh, shit, if I want to get into, like, a good college, I have to, like, do well in high school. So I kind of, like, got my shit together and got, like, good grades for two years. But, like, for most of my life, I did not get very good grades. And, like. But I think, like, throughout all of that, like, much more important or, like, thing that I was, like, focused a lot more on was literally just, like, going climbing with my friend Eric.
AARON
Yeah, no, fuck it. Let's talk. Let's talk about it. Although, like... yeah, no, no, I don't know. Maybe we'll have to supplement or whatever with, like, another, like, half episode or something. Yeah, no, no, because I remember on the hike, we, like... yeah, I don't know. Have we mentioned this? That, like... yeah, I think we've met in person. Yeah, that is true. The hike. Which was fun, which we should do... not the same thing, but, like, another hike again. Yeah, if you are so inclined. Agreed. Oh, yeah. So I was just like... that's interesting. I, like... I feel like I'm bad... I don't want to be, like, "oh, wow, thanks for sharing your answer, let me share mine." But, like, one thing, like, I don't know, maybe people will, like, find interesting is that, like, I was on a very similar, like, timeline, and, like, climbing was, like, similarly important to me. Like, yeah, I think I literally started when I was twelve, just like you or whatever, but I was a total gym rat. And so, like, I don't know. Did you... I don't know. Sorry, I feel like I've been in, like, the modality of, like, saying, like, oh, convince me that climbing outside is better, but, like, I don't know, like, what are you... yeah, yeah. Like, um.
ARTHUR
I certainly don't think it's better or worse, I think.
AARON
No, no.
ARTHUR
Very different experiences.
AARON
Yeah, yeah, totally. Like, I don't know. I don't even have a... like, I don't know if I, like, have an actual question, though. Like, do you think you know, like, why you were, like, drawn to climbing outside?
ARTHUR
Yeah, I think for me I do. Which was just that, like... it's always hard to disentangle, like, you know... I'm sure you've thought too about, like, how much does parenting matter versus, like, genes and environment and things like that. So it's like, you know, it's hard to know how much of it was, like, literally my parents' influence versus just, like, the fact that I am my parents' child genetically. But, like, I think I was always, like, drawn to kind of, like, outdoor, sort of, like, adventuring, just, like, generally, because, like, even prior to climbing, I, like, grew up, like, skiing and, like, hiking.
AARON
Were you in the Boy Scouts, by any chance?
ARTHUR
No, I was not. Yeah, I think part of my, like, bad-student-ness was, like, wrapped up in the same reason why I never, like, did, like, Boy Scouts or, like, team sports, which is that I was just, like, very much, like, a stubborn kind of individualist. Like, I want to do my own thing, and, like, yada, yada, yada, you know, from a young age. So I think, like, organized things like the Boy Scouts were, like, not that appealing to me. And that was part of what was appealing about climbing outside in particular: that sense of, like, incredible, like, freedom and independence, you know? So, yeah, I think, like, at first I just fell in love with, like, the movement of rock climbing through the gym, like, just like you did, like, the first few times I climbed in the gym. But then, when my uncle took me trad climbing, I was like, oh, no, this is the thing, like, you know what I mean? Like, getting to the top of these, like, giant walls. Like, I think it's super, like, cliche whenever, like, people talk about this stuff, but it's like, there's something about that experience of, like, when you climb a big route outside and you're, like, standing on the top of, like... you're halfway up, like, a 400-foot cliff, and you just have this deep sense of, like, humans are, like, not supposed to be here, in some sense.
And that, like, without this, like, modern equipment and shit, like, you would never be in this position. Like, this is so dangerous and crazy, even though it's not necessarily dangerous and crazy if you're, like, doing it correctly. And that sense of, like, oh, very few people are able to, like, be in these kinds of positions. Like, there was something, like, very, like, aesthetically appealing to me about that, honestly. And, like, I think so. That was a big aspect of it. Just, like, the actual places that you get to see and go was really inspiring. I think I love just, like, being in nature in that way, in a way that's very interactive. You know? It's not just like, you're, like, looking at the pretty trees, but you're, like, really getting to understand, like, especially in, like, trad climbing. Like, oh, like, this kind of, like, part of the rock is like something that, like, with my gear and abilities and skills, I can, like, safely travel. And it, like, gives you this whole, like, interactive sense of, like, understanding this part of nature, which is like a rock wall, you know? And that was quite beautiful to me.
AARON
That's awesome. We're totally the opposite people in every possible way. Not every possible way, but the other.
ARTHUR
Thing that I was gonna say... oh, so there's, like, the aesthetic side of things. And then also a big part of it for me was, like, this, like, risk sort of thing. Which is not, like... I think a lot of this... especially, whenever you tell anyone you're into rock climbing, they're like, oh, have you seen Free Solo? You know, it's, like, the meme or whatever. But, like, I think when, like, people think about rock climbing, like, they just think of, like, sort of a reckless, kind of, like, adrenaline-junkie sort of pursuit.
AARON
Yeah.
ARTHUR
I think what was really beautiful about rock climbing to me, and, like, spoke to me on, like, both an intellectual and aesthetic level, was that, like... something that's interesting about it is, like, gym climbing, right, is, like, extremely safe, right? Like, way safer than, like, driving to the rock climbing gym, right? But there's this whole spectrum in climbing: from slightly more dangerous but still relatively safe, and probably safer than the drive to and from, which is sport climbing outside on modern bolted routes where the falls are safe, all the way to free soloing hard routes or whatever on the far other end. And then somewhere in the middle would be, like, trad climbing, like, well-traveled, established routes that are, like, within your ability but are, like, scary. And, like, I think what I loved about it was, like, there's this whole spectrum of, like, risk level, right?
AARON
Yeah.
ARTHUR
And, like, a lot of what, like, becoming a better rock climber in, like, this, like, outdoor context is, is, like, learning how to, like, rationally analyze and assess risk, like, on the fly, right? Like, because you have to learn, like, on a mental level, to overcome this inbuilt fear of falling and fear of heights. And you have to look at it and be like, okay, my little lizard brain is like, this is absolutely insane. I'm 200ft off the ground looking straight down at the river. I am totally freaked out right now. You have to learn how to override that and think more rationally and be like, how far below me is my last piece of gear? Like, is that piece of gear solid? Do I, like, trust the cam that I placed in that crack, or whatever, you know? And, like, I think, like, I love that, like, mental skill of, like, learning how to, like, work with your mind in that way and, like, learning how to, like, overcome these, like, instinctual feelings and, like, put them in this, like, rational context, and, like, do that on the fly, you know? And one of those, like, parameters is, like, your, like, physical ability, right? Like, that's something that happens in trad climbing is, like, sure, like, maybe some routes are, like, kind of sketchy and not well protected, but they're, like, you know, 5.7, and you're, like... you know, you can climb 5.7 with your eyes closed, you know, in your sleep. And, like, you have to, like, learn to, like, trust that physical ability. And, like, I don't know, I'm just rambling, but, like, I think at an aesthetic level, like, both the beauty of where you are, and the pursuit itself being, like, something where, like, your safety is very much in your own hands... and, like, you can, like, totally be safe and secure, but, like, no one can draw that line for you. Like, you have to decide, like, what risk you're willing to tolerate and, like, how you're willing to manage it.
Like, I, like, was so addicted.
AARON
That's so... that's awesome. No, I can't say it all, like, resonates, but, like... I don't have anything interesting to say. That's just really cool. Like, I'm glad you... I'm glad you got to experience that. I, like... yeah, I mean, it's kind of, like... this is sort of a dumb conversational move, just, like, oh, just, like, doing the same thing, saying all that, but for me. But, like, I, like, interjected before and said that we were, like, very different. I forget what, like, prompted that exactly. Yeah. I mean, for one thing, I, like, guess I... yeah, so, I mean... I mean, like, I don't want to overplay, like, the difference. I liked climbing outside. It was always, um, definitely, like, much more bouldering, though. Um, yeah.
ARTHUR
Side note: I think bouldering is a lot more dangerous than people realize.
AARON
Oh, yeah. Outdoors at least. Yeah, yeah.
ARTHUR
Just like, a lot of, like, I think, like, the most common, like, climbing injuries are, like, bouldering injuries because it's just like.
AARON
And there's a huge range. From, like, you know... if it's, like, a nine-foot-tall thing and you have seven crash pads, you're fine.
ARTHUR
Yeah. Like a very flat.
AARON
Yeah, yeah. Straightforward landing. No, you are also just, like, evoking, like, memories of, like... yeah, both. Like, one thing I never engaged in is what you're just talking about, which is dealing with fear in the context of an actual, like... something where you actually do have to evaluate safety. So, like, the fear I was fighting past... I was, like, a scaredy-cat in the sense of, like, getting used to, like, climbing in the gym, where it's, like, really safe. Actually, I was, like, looking at my leg at one point... I did get injured once, which is, like... at one point, like, I fell with, like, a rope, like, wrapped around my leg. And that was really fucking painful. But, like, I can't really see it anymore. At one point, it was... it was, like, a scar for a while, but it was, like... it was, like, wrapped around, like, the bottom of my leg or whatever. Um, I was fine. Um.
ARTHUR
Don't put your leg behind the rope.
AARON
Yeah, yeah, no, yeah. Back-stepping. Don't do that. No, but, like, it was like... I mean, there was, like, some mental challenge there. And also, the hardest, like, actual, as in outdoor, like, route that I did was called, uh... I don't know, no one's gonna, like, recognize this out of, like, the nine people who are listening. Yeah, but Buckets of Blood, which is local, actually. It's, like, a local boulder. I don't think I did... I forget if I did it. Like, it's, like, a V10 in Car... yeah, like, near the.
ARTHUR
Holy shit.
AARON
Like, in Carderock. It's, like, a couple hard moves, right? Yeah. So the hard moves are short, but then there's, like, a V5 top-out, which, like, honestly... yeah, probably one of the most dangerous... like, I guess not dangerous, but, like, I guess, like, dangerous in terms of, like, expected value... things I did. Yeah. In fact, topping that... and this is literally only one time, I don't want to, like, overplay this, and it was not, like, a life-or-death thing. It was, like, I don't know... I also had friends who were spotting me. I, like... they told me after. Sorry, let me get back to the point, which is that, yeah, basically, like, after, like, the hard part, like, doing the V5 top-out, which was, like, you know, not a super highball, which means, like, I don't know, about 20ft or something. But, like, yeah, I don't know. Probably on the order of... I mean, I can just go and measure... like, 15 or 20ft, but, like, not over a super solid landing. Or... I don't remember the exact circumstances, but anyway, yeah. But, like, that was... that was, like, a one-time thing. I guess maybe I'm also, like... this is my signaling, like, brain, like, flexing my credentials or whatever. No, but for the... for the most part, like, safety... like, being scared was just, like... it just went on, like, the cost side of the... of the equation. I didn't like... I mean, I definitely did, like, pushing, like, physical boundaries. Yeah. Um, and also, I mean, yeah, maybe this is, like, uh, I don't know, like, the cynical side of my brain, which I think is actually, like, correct in this case. It's just like, okay, like... as much as, like... like, I'm not talking about this for you, but, like, for me... you can always, like, put things in, like, high-minded terms, and in fact, sometimes, like.
Like, you really do enjoy stuff, but, I mean, a lot of that, like, is derived from your brain, like, trying to do, like, status-y things and, like, signaling things or whatever. Um, but, like, yeah, with that said, um... uh, yeah, like... like, one thing I found, in, like, especially, actually, indoor rope climbing, and, like, pushing my boundaries... like, I had never, like, previously been really good at anything physical. And, like, I was not... like, I wasn't... like, I was, like, pretty, like, average or whatever. Like, I played baseball. I was, like, okay. And, like, I mean, I was, like, kind of short, so, like, I don't think I ever had, like, big basketball... I was on, like, a rec basketball team, you know. I wasn't, like, super, like, overweight. I wasn't, like, super, you know... I actually was tiny at, like, some point. That's a whole nother story. But, like, you know, by the time I was, like, 14, 15 or whatever, yeah, I was, like, pretty average. Um, I mean, yeah, this is, like, sort of beside the point, but, like, I guess the opposite of you... like, I was always sort of, like, a try... you know, tryhard kid in school, like, did pretty well. So, like, kind of, like, used to, you know, like, success, for, like, lack of a better word, in, like, that domain or whatever. But, like, finally, like, one thing I guess I figured out is that I was... yeah, I guess it was a combination of, like, really liking it and, like, working, like, pretty hard, and also... I'm, like, pretty sure... you can get into, like, the genetic aspect or whatever, but, like, to some extent just, like, being, like, lucky... is that, like, I was really good at endurance, especially if I trained it. I think I was, like, naturally good. Yeah, yeah. And so, like, I don't know. I don't have, like, a ton of, like, insight here. It was just cool to be, like, yeah.
To be actually, like, quite good at even, like, this one, like, subtype of climbing, which is, like, basically indoor... not only competitions. I mean, I did, like, competitions, but also just, like, in general being, like, pretty good at, like, you know, the 40-foot, like, indoor routes or whatever. And, yeah, I mean, like... maybe just, like, take this part out, and, like, the last thing I'm trying to do is, like... I don't think this actually comes across as, like, a brag, because at this point, like, I am, like, not in nearly as good shape as I was. Like, look, when I was, like, going to the competitions, like, there were a couple years where, like, I was able to make, like, the national, like, level competition, which is, like, a couple, like, levels... I guess two, like, competitions, but, like, those decompose into, like, three, like, cutoffs or whatever that you, like, have to get through or whatever. And no, like, I don't think, honestly, there was anything, like, super deep here. I could, like, make something up, but it was just cool. Like, yeah, like, being, like, um.
ARTHUR
Uh, yeah, I think that resonates with me, too. And, like, in some ways, probably that status-y sort of stuff explains part of why.
AARON
Oh, I mean, who cares?
ARTHUR
Like, no, no, no. But I'm saying it maybe explains part of why I, like, wasn't that interested in competition climbing. Which was that, like... I mean, one, like, it was the sort of contrarian, like, whatever, individualist streak I talked about earlier. Like, because I think very early on I had this much more kind of, like, whatever, high-minded, aesthetic, climbing-is-beautiful kind of, like, notion. Like, I think that made competitions less interesting to me. But also, the other thing was that, like, similar to you, I was like, holy shit, this is a sport that I'm actually really good at. But, like, I wasn't insanely good at, like, competition climbing. Like, I was, I was, you know.
AARON
Did you do any, like, competitions, anything like that?
ARTHUR
I... yeah, like, some. I did some competitions, yeah, but, like, very little. Like, I was on, like, the competitive team at, like, my gym for, like, a little while, and then I was like, nah, and I noped out of that. But, you know, I mean, I climbed, like, you know, solidly, like, 5.12+, yeah, some, like, you know, low 5.13s in the gym or whatever.
AARON
Yeah, that's, that's like, that's like seriously talented.
ARTHUR
Yeah, but, but, like, I wasn't, like... I think, because I was... it's certainly, like, in part, like, largely influenced by some of those status things as well. I think part of why I was, like, oh, competitions aren't for me, was that I found that, like, everything that I was rambling about earlier, the sort of, like, cognitive skill, like, managing-risk sort of stuff... like, I realized that was actually the skill that I was, like, on the much farther right tail of the distribution on. And, like, you know, there's so much just, like, luck and privilege in, like, having the, like, you know, like, parents and mentors and friends and connections to, like, be able to, like, get into that kind of, like, outdoor climbing at a young age, that a lot of people just, like, don't have access to. So it's, like, a very limited sample. But I felt like, of the, like, gym kids that I knew my age, like, a good chunk of them were, like, stronger than me at climbing in the gym, but I was like, but can you guys, yeah, like, you know, climb, like, you know, 5.11 trad?
AARON
Climb.
ARTHUR
You know what I mean?
AARON
Like, an example is me. Like, I definitely... there was never a point in time when I was able to climb 5.11 trad.
ARTHUR
Yeah, so I think, like, some of that... some of that was certainly part of, like, what motivated me. It was like, oh, I'm actually really good at this, like, sport or whatever... or at least compared to, like, you know... not compared to people who, like, make it their lives or whatever, but in terms of, like, casual hobbyists, like, teenagers, early twenties. So I think, like, that was... that was really appealing to me about it as well. Like, no doubt.
AARON
Yeah, there's, like, a... yeah, I, like... maybe I'll just, like, end up, like, keep talking until, like, I just, like, say everything there is to be said. But, like, there's, like... I don't know. I feel like I have, like, a lot to say about, like, climbing and, like, my climbing history or whatever. But, like, part of this... and, like, I really don't mean this to be, like, oh, like, woe is me or whatever, but, like, another interesting... yeah, so, like, I guess at least I find it interesting, maybe you all will too, Pigeon Hour listeners. Like, the thing I just said was... this is such a... this sounds so, like, melodramatic or something, but, like, I really, I think, like, learned, like, for the first time, and in a way that... sorry. Without hedging, like, sorry. Basically, I, like, discovered, I guess, for the first time, that, like, you can't just hard-work your way to relative success. And actually, something, like, very close to that is in, like, my list of, like, hot takes, or, like, my hot takes thread, like, ongoing hot takes, on Twitter or whatever. And it sounds... I don't know, I don't want to come across as, like, cynical. I am, like, a skeptical person. I don't think I'm, like, broadly a cynical person. Um, and, like, I actually don't think this is cynical. I think this is just, like, I don't know... like, maybe it, like, kind of reads that way, and it's the kind of thing that, like, a lot of people, like, implicitly believe. But, like, I don't know, it sounds, like, very pessimist-coded to say, but, like, um, I think it's, like, very important in, like, a lot of domains. Um... uh, but, like, yeah, I do think at some point... and maybe this had to do with... this is getting way back, but, like, maybe it had to do with, like, baseball at least, like, being largely a skill sport in some sense.
Like, yeah, like, most healthy adult humans... like, if they could, like, figure out how to swing a bat in, like, such a way that they, like, made... like, there is no, like, hardcore, obvious physical constraint on, like, any arbitrary human being, like, being, like, the best baseball player in the world. And that's not literally true, because of course, like, you know, hand-eye coordination and willpower and, like, a lot of things like that are in fact, like, largely genetically determined, probably... even if you, like, take away the willpower part, or, like, just go to, like, you know, hand-eye coordination, reaction time, whatever, etc., etc. But it's, like... it's not as salient to you or whatever. And, like, I think it's true that, like, definitely, I remember I was, like, getting frustrated with, like, just not being as good as I wanted to be. And I do think, to some extent, like, if you push it to the limit... like, I think, you know, if a random, like, mediocre, like, twelve-year-old or a 14-year-old or whatever tries, like, insanely hard to, like, be a really good baseball player, that, um, either they can, or they're, like, getting wrong information somehow. And I think climbing... yeah, there was just, like, a real... it made it a lot more salient, or, like, the physical constraints became a lot more salient, especially in the sense of just watching other people, like... again, this sounds, like, melodramatic... like, not work nearly as hard, yeah, and just really, like, naturally being able to do a lot, like, harder climbs. And, like, there, I think... hmm. Yeah, again, this, like, sounds melodramatic or whatever. Like, I think there was, like... maybe there was a time where it kind of was, in, like, an emotion... like, some sort of, like, emotional sense, like, hard. Like, it was both, like, non-obvious, so I was, like, learning something new, but also, like, wasn't, like, super obviously, like, true or something. There's, like, a... like, a phase, like, a...
Yeah, maybe, like, a, you know, one-to-three-year-long phase where, like, um, I was sort of, like, watching other people get bigger and stronger, and yet me trying to push the limits in terms of doing what I could to make myself stronger... and at the end of the day, they were still just a lot better than me. And... yeah, I don't know. I keep saying, like, oh, I don't want to come across as cynical. I'm like... and, like, I don't... yeah, I don't know. But that's, that's the whole story. Or not the whole story... that's, like, the whole... it's not like there's some, you know, bigger lesson in there, right?
ARTHUR
I think it... part of what's fun about it to me as a sport is, like... I mean, I guess being lucky enough to, like, have that natural talent is one thing, and, like, clearly both of us did to some degree, you know? But, like, also, I think being just, like, a sort of, like, naturally very skinny, like, not super, like, athletic-build person... like, part of what I loved about climbing was, like, despite all of what you're saying, I feel like there's, like, a reasonable, like, diversity of, like, body types that are, like, able to be very successful, because it's all, like, very relative, like, strength, like, to your body weight and all that. And, like, I think that aspect of it, plus the fact that I think there's a much higher, like, level of achievement that's possible than people realize with, like, very little physical training. Like, just an enormous amount of climbing at the, like, typical range of, like, hobbyist skill level... like, so, so much of it is, like, technique-based.
AARON
Oh, yeah, I totally disagree.
ARTHUR
Really? Yeah, I mean, like, I think... I don't know. I want to hear what you have to say, but, like, I also think I'm talking about, like, a particular range of the distribution. Like, I think, like, up until... I don't know, to put it in concrete terms, like, in typical gym grades, I think you can boulder, like, up to, like, V6, maybe even V7, with, like, very little, like, serious climbing strength. Like, I think it's only when you get into that upper echelon of, like, serious competitors and, like, strong amateurs where you, like... you just need to hangboard, you know what I mean? But, like, I think there you can get much farther than people realize on, like, technique alone. And, like, this is at least obvious to me, because over the last, like, two, two and a half years or so, I have, like, basically not rock climbed at all. And just, like, a month ago, I realized that I live now a very, like, physically inactive lifestyle and that I should start being physically active again. And I've, like, gone on, like, probably, like, five gym sessions in the last month. Like, I've been finally, like...
AARON
Oh, like, Crystal City?
ARTHUR
Yeah, yeah, yeah. Cause I was like, oh, I'll just, like, start going to the climbing gym again. And I was, like, almost a little bit. Like, I had this almost ego thing where I was a bit embarrassed to start climbing again because I was like.
AARON
Oh, man, this is.
ARTHUR
I'm gonna be so bad. But, like, people aren't gonna know that. Like, really, I'm, like, good. You know what I mean? And it was so stupid. But, like, I got over that and, like, to be honest, like, I have done no serious training. I literally just, like, go and boulder until I'm tired, and then I go home and, like, already on my, like, fifth session, I'm climbing at, like, 85% of my maximum ever bouldering level. You know what I mean? And, like, I'm not in good climbing shape, so, like, I think. I don't know. It's just surprising, like, how far you can get when, like, you know, you don't forget your technique. Right. And at the end of the day, like, I've been climbing for, like, 13 years now, you know? And it's like, I have, like, just built up a lot of, like, skill and experience that's, like, still there.
AARON
Yeah. This is. Man, you're saying, like, the perfect things to, like, get me to, like, keep talking, and I pre. Like, I, like, I'm like, that's, like, a good thing or whatever. Yeah. So maybe. I know. I know I'm not. I'm sort of inviting myself here. Maybe I'll. Maybe I'll join you one of these days. I have been thinking. This has been on my mind on and off, you know, for. Yeah. Like, yeah, I don't know how much I'll cut, but, like, long story short, I. Sorry. Yeah, so. Bunch of, like, connected thoughts. Yeah. Like, one thing I want to say is, like, I don't think that will be true for me. It's an empirical question. Right. Like, um. And I think that's largely just because of, like, the, um. Now. Yeah. Connecting this back, like, to, like, the body type thing. Like. Like what? Um. Yeah, one of the reasons other people were, like, more talented and also something that, like, I have. Has been, like, you know, just. Just, like, I have, like, I guess struggled with is such, like, a. I don't know, like, therapy term or whatever. Sorry. I'm, like, dancing around just saying that, like, I was, like, there's a lot of pressure. Not from anyone, like, explicitly, but, like, at least in my case, just, like, for myself to, like, be good and, like, that means being light and there's. But I think there's been, like, a fair amount of, like, hand-wringing in, like, especially, like, youth competitive circles about. About, like, eating disorders and stuff. And, like, um. I was. I don't. I don't think I was, like, ever, like, full-on anorexic. I was definitely, uh, not doing, like. I was definitely, like, in effect, yeah, pretty consciously trying to, like, limit my weight in, like, a time where, like, I really wasn't supposed to be. And, like, combine that with, like, a couple other, like, situational, like, things, like, I can get into. But, like, yeah, I can just, like, mention there were, like, other factors or whatever. Like, it was probably, like, a pretty bad idea for me. 
And all that is to say is, like, yeah, it would have been nice to be one of the people who was, like, naturally very, like, you know, you know, very, very skinny, frankly. But, like, and that was something I was like, I guess, like, yes, to some extent, like, jealous of. And also, like, again, like, super matter of factly, like, yeah, I am, like, probably a good almost double my. I mean, I've grown vertically in this time period, but probably almost double the weight when I was climbing at my best right now, which is like, yeah, I mean. I mean, like, one thing that, like.
ARTHUR
I. I feel bad now because I realize that, like, I think my general point about, like, technique being really important is totally true. But, like, when I put that point estimate on it.
AARON
No, no, no.
ARTHUR
That should have come with, like, the enormous asterisks of, like, I am naturally an extremely skinny person. Like, so much so that I, like, literally, like, this is something I struggled with a lot, actually, as a teenager was, like, given our, like, body expectations for men or whatever, it just, like, became obvious to me that, like, I, like, literally could not gain weight by eating unless I, like, severely over ate to the point of discomfort at, like, every meal. Right? And because of that, like, when I go through times where I don't climb or do any other form of fitness, like, all that happens is I lose weight because I lose muscle, you know? So, like, every time I take time off from climbing and come back, like, I am starting in the position of, like, actually weighing less than I did when I was really into climbing and, like, building muscle on top of that. So, like, enormous asterisks to my, like, no, no, no. Three weeks back, 80% of my original endurance is going to take longer to build. So I don't think I could sport climb very hard, but, like, bouldering strength or whatever, like, a huge part of that is, like, I stay very skinny and, like, thankfully not because I have, like, eating problems, but just because I, like, don't gain weight.
AARON
Yeah, yeah, yeah, sure, sure. No, no, I mean, like, yes, I think you're actually. I think you're totally, like, right? Like, you didn't need to add the disclaimer, but the disclaimer is, like, definitely true. And actually, like, I don't know, this is, like, whatever. At first I said, I was like, oh, I don't really want to talk about this. Like, whatever. Let's just like, yeah, like, dive in or whatever. I do think, like, there are. I don't. I can't remember, like, specific examples of this. I'm pretty sure, like, somehow, like, either, like, media sources or, like, specific people basically said something along the lines of, like, no, actually being skinny, like, doesn't help. And this is, like, basically just, like, gaslighting or whatever. Oh, no, no, it totally does. Like, yeah, yeah.
ARTHUR
I mean, the way.
AARON
Not just being skinny. I mean, like, I guess, you know, to some extent, that's, like, a little, like, it's like, the Laffer curve or whatever. Like, the ideal amount of, like, muscle weight isn't literally zero.
ARTHUR
Right.
AARON
You know, but, like, it's pretty. You know, it's pretty skinny is, like, it's, like, ideal.
ARTHUR
And the way I think about, like, how, like. And thankfully, I think there's been, like. I was never super deep in, like, the competition world, but I think a lot of, like, prominent people have, like, spoken out about this in the last few years. I think there's, like, more attention on, like, the risk of eating disorders now. Like, the problem is, I'm sure, by, like, no means solved. I'm sure it's still a huge issue. But I think part of what's so tricky about it and, like, why it'll continue to be an issue is, like, it just is a fact that, like, at a physical level, like, one of the most important things in rock climbing is, like, how strong are you in some very particular modalities versus how much do you weigh? And, like, you're saying there's some curve and the optimum muscle weight is not zero, but starting from a baseline of being relatively fit and in good climbing shape and having strong muscles in those very particular muscle groups and modalities that are necessary for climbing, when you're looking at that equation. Right. It's much easier to just like. Or not like, again, I'm like.
AARON
No, it's easier to lose weight.
ARTHUR
Yeah.
AARON
Oh, yeah.
ARTHUR
Very practical level. Virtually everyone, especially when you get to the point that you're, like, pretty close, you're, like, pretty far in the diminishing returns of, like, training hard, right?
AARON
Yeah, yeah.
ARTHUR
Once you get well into that diminishing returns, part. Part of, like, the curve of, like, training climbing strength, it's just going to be much easier for people to, like, lose and again, easier in a very, like, circumscribed, like.
AARON
I mean, for most people, though.
ARTHUR
Yeah. I mean, the reason why I'm adding that caveat is just, like, I want to be sensitive to the, like, there are, like, very real, like, psychological and like, other health costs to that. So it's not, like, easy.
AARON
Like, oh, yeah.
ARTHUR
But, like. But in terms of, like, if you are single-mindedly focusing on. On getting better at rock climbing, like, there becomes a point where it's much easier for people to lose weight than it is to get stronger. And I think, like, that, like, fundamental fact, like, people are gonna have to, like, figure out how to, like, grapple with that for this sport to not be incredibly unhealthy for a lot of people that take it really seriously, you know? Like, I think it's just, to me, like, that's the reason why there's, like, sadly, no, like, mystery as to why there's this eating disorder problem.
AARON
Yeah, and, like, yes. Something you said before. It's like, oh, like, you were, you know, I forget which word it was, like, maybe, like, embarrassed. Like, when you went to the gym for the first time or whatever, and, like, yeah, and you were, like, pleasantly surprised. But, like, this sort of. There's been a lot of reasons why I haven't gone climbing since. I think I forget the exact. I think it was, like, I want to say, like, October '22 or '21 or something.
ARTHUR
Yeah.
AARON
Could be 2021, something like that. And there's, like, a proximate cause, which is like, oh, I was also, like, doing. I was, like, climbing in college. But, like, actually, yes, it's, like, worth mentioning. Like, one thing, I think I was like, I. It's like, oh, you know, you never know the counterfactual or whatever, but, like, I think something, like, very fortunate that happened was that I wound up at a college where, at least coming in as a freshman, I was so far and away the best person on the very casual climbing team that I was able to sort of detach a little bit, like, without it incurring such a, like, social psychological cost or whatever. And then eventually I, like, in fact, this is relevant, you know. I went to EA Global, came back, got really sick, and, like, was, like, very, very sick for like, a week or two. And, like, by that time, like, because of, like, the being sick and EA Global, I think at some point I had been away, like, not climbed for, like, three weeks or something, which is, like, by far the most. Yeah, or it was like a whole month or something. Yeah, yeah, the most I had, like, been away from climbing, like, you know, in, like, years and years, and then I just, like, never went back. And I might, I don't know? Yeah, I. Honest to God, I'm sounding so melodramatic. But, like, I don't know. Yeah, yeah, I might. I think I do. Like, but, like, the dynamic you were mentioning, which, like, you were pleasantly surprised and, like, maybe it's not impossible that, like, I will be too, but, like, no, it's, like, not gonna be as fun or whatever. Like, you know, being like, oh, yeah, I used to be, like, the best person in the gym, and, like, now I'm really, really, really not. Yeah. And, like, much worse than I used to be. I mean, yeah, people like improvement. You know, it's psychologically, I mean.
ARTHUR
But that is the flip side, right? Well, I guess, one, it's interesting how much my personal story in some ways, like, mirrors yours. In that, like, I started taking climbing a lot less seriously when I was in college. And similarly, like, very similar to you, like, I wasn't the strongest person on the casual climbing team. There was one guy who was, like, a super strong boulderer, but I was, like, definitely the most experienced rock climber. And, like, we would do outdoor excursions and stuff. And, like, you know, it was just like, I had a lot of other things going on. I was focusing on life, and I was casual. Right. It was like, it was something I still did regularly, but I wasn't, like, like, you know, really, really serious about, like, training and, like, fitness and all that. And then similarly, I went and studied abroad the fall of my junior year in, like, you know, northeastern India, where there are no rock climbing gyms and no rock climbing to be found and, like, didn't rock climb at all for months and then came back and then, like, probably went to the gym a couple times, then COVID happened.
AARON
Yeah. Yeah.
ARTHUR
And, like, it was just all these things. Like, I just sort of, like, stopped doing it. Kind of, like, I kind of never came back, you know? And then I tried to come back to rock climbing after my kidney surgery, but I was, like, too.
AARON
Arthur donated his kidney. How did we not talk about that? Sorry, man, we're gonna have to cut this part from the podcast.
ARTHUR
But, like, I came back too soon after that and, like, sort of, like, injured my abdomen and then, like, took a long time off after that because I was like, I want to make sure when I start climbing again that I won't injure myself.
AARON
Yeah.
ARTHUR
Thankfully hasn't happened. I think I'm fine. Fine. But I will say on this psychological point, like, if you really internalize, like, if you really try to, like, let go of that, like, self comparison.
AARON
Yeah.
ARTHUR
Like how you used to be.
AARON
Yeah.
ARTHUR
I think part of what's been fun about coming back to climbing again is that, like, I am already so much noticeably stronger on like, the fifth trip to the climbing gym than I was on the first trip. Right. Because when you're starting from, like, a baseline of, like, no rock climbing fitness, like, especially when you already know sort of what to do to, like, get that back, like, you progress very quickly at the beginning. So, like, I think, I don't know. If I was to, like, lightly encourage you, I would say that if you're able to, like, let go of that, like, comparison to your previous self or whatever and just like, have fun kind of being a beginner again, like, at least in terms of fitness, like, and just like starting from that zero baseline, like, it's pretty cool how fast you improve, you know? Like, and I see that with my friends who, like, have gotten into it. Like, they get strong really fucking quickly because it's like these very weird, specific muscles that you just don't have if you don't do it.
AARON
Yeah.
ARTHUR
Anyways, I think TLDR we have solved everything. All of the problems of ethics and AI and reading old philosophy. Indeed.
-
Intro
Around New Years, Max Alexander, Laura Duffy, Matt and I tried to raise money for animal welfare (more specifically, the EA Animal Welfare Fund) on Twitter. We put out a list of incentives (see the pink image below), one of which was to record a drunk podcast episode if the greater Very Online Effective Altruism community managed to collectively donate $10,000.
To absolutely nobody's surprise, they did ($10k), and then did it again ($20k) and then almost did it a third time ($28,945 as of March 9, 2024).
To everyone who gave or helped us spread the word, and on behalf of the untold number of animals these dollars will help, thank you.
And although our active promotion on Twitter has come to an end, it is not too late to give!
I give a bit more context in a short monologue intro I recorded (sober) after the conversation, so without further ado, Drunk Pigeon Hour:
Transcript
(Note: very imperfect - sorry!)
Monologue
Hi, this is Aaron. This episode of Pigeon Hour is very special for a couple of reasons.
The first is that it was recorded in person, so three of us were physically within a couple feet of each other. Second, it was recorded while we were drunk or maybe just slightly inebriated. Honestly, I didn't get super drunk, so I hope people forgive me for that.
But the occasion for drinking was that this, a drunk Pigeon Hour episode, was an incentive for a fundraiser that a couple of friends and I hosted on Twitter, around a little bit before New Year's and basically around Christmas time. We basically said, if we raise $10,000 total, we will do a drunk Pigeon Hour podcast. And we did, in fact, we are almost at $29,000, just shy of it. So technically the fundraiser has ended, but it looks like you can still donate. So, I will figure out a way to link that.
And also just a huge thank you to everyone who donated. I know that's really cliche, but this time it really matters because we were raising money for the Effective Altruism Animal Welfare Fund, which is a strong contender for the best use of money in the universe.
Without further ado, I present me, Matt, and Laura. Unfortunately, the other co-host Max was stuck in New Jersey and so, tragically, was unable to participate.
Yeah so here it is!
Conversation
AARON
Hello, people who are maybe listening to this. I just, like, drank alcohol for, like, the first time in a while. I don't know. Maybe I do like alcohol. Maybe I'll find that out now.
MATT
Um, All right, yeah, so this is, this is Drunk Pigeon Hour! Remember what I said earlier when I was like, as soon as we are recording, as soon as we press record, it's going to get weird and awkward.
LAURA
I am actually interested in the types of ads people get on Twitter. Like, just asking around, because I find that I get either, like, DeSantis ads, American Petroleum Institute ads, Hillsdale College.
MATT
Weirdly, I've been getting ads for an AI assistant targeted at lobbyists. So it's, it's like step up your lobbying game, like use this like tuned, I assume it's like tuned ChatGPT or something. Um, I don't know, but it's, yeah, it's like AI assistant for lobbyists, and it's like, like, oh, like your competitors are all using this, like you need to buy this product.
So, so yeah, Twitter thinks I'm a lobbyist. I haven't gotten any DeSantis ads, actually.
AARON
I think I might just like have personalization turned off. Like not because I actually like ad personalization. I think I'm just like trying to like, uh, this is, this is like a half-baked protest of them getting rid of circles. I will try to minimize how much revenue they can make from me.
MATT
So, so when I, I like went through a Tumblr phase, like very late. In like 2018, I was like, um, like I don't like, uh, like what's happening on a lot of other social media.
Like maybe I'll try like Tumblr as a, as an alternative.
And I would get a lot of ads for like plus-sized women's flannels.
So, so like the Twitter ad targeting does not faze me because I'm like, oh, okay, like, I can, hold on.
AARON
Sorry, keep going. I can see every ad I've ever.
MATT
Come across, actually, in your giant CSV of Twitter data.
AARON
Just because I'm a nerd. I, like, download. Well, there's actually a couple of things. I just download my Twitter data once in a while. I actually do have a little web app that I might try to improve at some point, which is like, you drop it in and it gives you a CSV, like a spreadsheet of your tweets, but that doesn't do anything with any of the other data that they put in there.
MATT
I feel like it's going to be hard to get meaningful information out of this giant CSV in a short amount of time.
AARON
It's a giant JSON, actually.
MATT
Are you just going to drop it all into Clong and tell it to parse it for you, or tell it to give you insights into your ads?
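(Editor's note: the kind of conversion Aaron's web app does isn't shown, but a minimal sketch might look like the following, assuming the standard Twitter data-export layout, where `tweets.js` is a JavaScript assignment wrapping a JSON array of `{"tweet": {...}}` objects. The function name and chosen columns are hypothetical.)

```python
import csv
import io
import json

def twitter_archive_to_csv(tweets_js: str) -> str:
    """Convert the tweets.js file from a Twitter data export into CSV text.

    Hypothetical sketch: assumes the archive's usual shape, a JS assignment
    like `window.YTD.tweets.part0 = [ ... ]` around a JSON array.
    """
    # Strip the JavaScript assignment prefix so the rest parses as plain JSON.
    json_text = tweets_js[tweets_js.index("["):]
    entries = json.loads(json_text)

    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["id", "created_at", "full_text"])
    for entry in entries:
        tweet = entry.get("tweet", entry)  # tolerate unwrapped entries
        writer.writerow([
            tweet.get("id_str", ""),
            tweet.get("created_at", ""),
            tweet.get("full_text", ""),
        ])
    return out.getvalue()
```

Everything beyond tweets (ad impressions, interests, and so on) lives in other files in the same export, which is the "other data" the app ignores.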
AARON
Wait, hold on. This is such a.
MATT
Wait. Do people call it “C-Long” or “Clong”?
AARON
Why would it be long?
MATT
Well, because it's like Claude Long.
LAURA
I've never heard this phrase.
MATT
This is, like, Anthropic's chatbot with a long context window. So, like, you can put. Aaron will be like, oh, can I paste the entire group chat history?
AARON
Oh yeah, I got Clong. Apparently that wasn't acceptable so that it.
MATT
Can summarize it for me and tell me what's happened since I was last here. And everyone is like, Aaron, don't give our data to Anthropic, that's already suss.
LAURA
What impressions do you guys have about the Internet privacy stuff? Are you instinctively weirded out by them farming out your personal information, or is it just like, it gives me good ads or whatever, I don't care?
MATT
I lean a little towards feeling weird having my data sold. I don't have a really strong, and this is probably like a personal failing of mine, of not having a really strong, well-formed opinion here. But I feel a little sketched out when I'm like, all my data is being sold to everyone. There is this vibe on Twitter that the EU cookie prompts are like, destroying the Internet, that this is regulation gone wrong. I don't share that instinct. But maybe it's just because I have average tolerance for clicking no cookies or yes cookies on stuff. And I have this vibe that will.
AARON
Sketched out by data. I think I'm broadly fine with companies having my information and selling it to ad targeting, specifically. I do trust Google a lot to not be weird about it, even if it's technically legal. And by be weird about it, what do I mean? Like, I don't even know what I mean exactly. If one of their random employees, I don't know, if I got into a fight or something with one of their random employees, it would be hard for this person to track down and just see my individual data. And that's just a random example off the top of my head. But yeah, I could see my view changing if they started, I don't know, or if it started leaching into the physical world more. But it seems just like for online ads, I'm pretty cool with everything.
LAURA
Have you ever gone into the ad personalization and tried to see what demographics they peg you as?
AARON
Oh yeah. We can pull up mine right now.
LAURA
It's so much fun doing that. It's like they get me somewhat like the age, gender, they can predict relationship status, which is really weird.
AARON
That's weird.
MATT
Did you test this when you were in and not in relationships to see if they got it right?
LAURA
No, I think it's like they accumulate data over time. I don't know. But then it's like, they say that you work in mid-sized finance. Fair enough.
MATT
That's sort of close.
LAURA
Yeah.
AARON
Sorry. Keep on podcasting.
LAURA
Okay.
MATT
Do they include political affiliation in the data you can see?
AARON
Okay.
MATT
I would have been very curious, because I think we're all a little bit idiosyncratic. I'm probably the most normie of any of us in terms of. I can be pretty easily sorted into, like, yeah, you're clearly a Democrat, but all of us have that classic slightly. I don't know what you want to call it. Like, neoliberal project vibe or, like, supply side. Yeah. Like, some of that going on in a way that I'm very curious.
LAURA
The algorithm is like, advertising DeSantis.
AARON
Yeah.
MATT
I guess it must think that there's some probability that you're going to vote in a republican primary.
LAURA
I live in DC. Why on earth would I even vote, period.
MATT
Well, in the primary, your vote is going to count. I actually would think that in the primary, DC is probably pretty competitive, but I guess it votes pretty. I think it's worth.
AARON
I feel like I've seen, like, a.
MATT
I think it's probably hopeless to, like, find your demographic information from Twitter. But, like.
AARON
Age 13 to 54. Yeah, they got it right. Good job. I'm only 50, 99.9% confident. Wait, that's a pretty General.
MATT
What's this list above?
AARON
Oh, yeah. This is such a nerd snipe. For me, it's just like seeing y'all. I don't watch any. I don't regularly watch any sort of TV series. And it's like, best guesses of, like, I assume that's what it is. It thinks you watch Dune, and I haven't heard of a lot of these.
MATT
Wait, you watch cocaine there?
AARON
The Big Bang Theory? No, I definitely have watched The Big Bang Theory. Like, I don't know, ten years ago. I don't know. Was it just, like, random Korean script.
MATT
Or whatever, when I got Covid real bad. Not real bad, but I was very sick and in bed in 2022. Yeah, The Big Bang Theory was like, what I would say.
AARON
These are my interest. It's actually pretty interesting, I think. Wait, hold on. Let me.
MATT
Oh, wait, it's like, true or false for each of these?
AARON
No, I think you can manually just disable and say, like, oh, I'm not, actually. And, like, I did that for Olivia Rodrigo because I posted about her once, and then it took over my feed, and so then I had to say, like, no, I'm not interested in Olivia Rodrigo.
MATT
Wait, can you control f true here? Because almost all of these. Wait, sorry. Is that argentine politics?
AARON
No, it's just this.
MATT
Oh, wait, so it thinks you have no interest?
AARON
No, this is disabled, so I haven't. And for some reason, this isn't the list. Maybe it was, like, keywords instead of topics or something, where it was the.
MATT
Got it.
AARON
Yes. This is interesting. It thinks I'm interested in apple stock, and, I don't know, a lot of these are just random.
MATT
Wait, so argentine politics was something it thought you were interested in? Yeah. Right.
AARON
Can.
MATT
Do you follow Maya on Twitter?
AARON
Who's Maya?
MATT
Like, monetarist Maya? Like, neoliberal shill two years ago.
AARON
I mean, maybe. Wait, hold on. Maybe I'm just like.
MATT
Yeah, hardcore libertarianism.
LAURA
Yeah. No, so far so good with him. I feel like.
AARON
Maia, is it this person? Oh, I am.
MATT
Yeah.
AARON
Okay.
MATT
Yeah, she was, like, the neoliberal shill two years ago.
AARON
Sorry, this is, like, such a nerd snipe. It got my gender right. Maybe. I don't know if I told you that. Yeah. English. Nice.
MATT
Wait, is that dogecoin?
AARON
I assume there's, like, an explicit thing, which is like, we're going to err way on the side of false positives instead of false negatives, which is like. I mean, I don't know. I'm not that interested in Amy Klobuchar, which.
MATT
You're well known for throwing staplers at your subordinates.
AARON
Yeah.
LAURA
Wait, who did you guys support in 2020 primary?
MATT
You were a Pete stan.
LAURA
I was a Pete stan. Yes, by that point, definitely hardcore. But I totally get. In 2016, I actually was a Bernie fan, which was like, I don't know how much I was really into this, or just, like, everybody around me was into it. So I was trying to convince myself that he was better than Hillary, but I don't know, that fell apart pretty quickly once he started losing. And, yeah, I didn't really know a whole lot about politics. And then, like, six months later, I became, like, a Reddit libertarian.
AARON
We think we've talked about your ideological evolution.
MATT
Have you ever done the thing of plotting it out on the political compass? I feel like that's a really interesting.
LAURA
Exercise. That doesn't capture the online stuff. I was into Ben Shapiro.
MATT
Really? Oh, my God. That's such a funny lore fact.
AARON
I don't think I've ever listened to Ben Shapiro besides, like, random clips on Twitter that I like scroll?
MATT
I mean, he talks very fast. I will give him that.
LAURA
And he's funny. And I think it's like the fast talking plus being funny is like, you can get away with a lot of stuff and people just end up like, oh, sure, I'm not really listening to this because it's on in the background.
AARON
Yeah.
MATT
In defense of the Bernie thing. So I will say I did not support Bernie in 2016, but there was this moment right about when he announced where I was very intrigued. And there's something about his backstory that's very inspiring. This is a guy who has been just extraordinarily consistent in his politics for close to 50 years, was saying lots of really good stuff about gay rights when he was, like, Burlington mayor way back in the day, was giving speeches on the floor of the House in the nineties that sound very similar to the things he's saying today, which reflects, you could say, maybe a very myopic, closed-minded thing, but also an ideological consistency that's admirable. And I think it's pointing at problems that are often real. And so I think there is this thing that's, to me, very much understandable about why he was a very inspiring candidate. But when it came down to nitty-gritty details and also to his decisions about who to hire as subordinates and stuff, very quickly you look at the Bernie campaign alumni and the nuances of his views and stuff, and you're like, okay, wait, this is maybe an inspiring story, but does it actually hold up?
AARON
Probably not.
LAURA
Yeah, that is interesting. It's like Bernie went woke in 2020, kind of fell apart, in my opinion.
AARON
I stopped following or not following on social media, just like following him in general, I guess. 2016 also, I was 16. You were not 16. You were.
MATT
Yeah, I was in college at that time, so I was about 20.
AARON
So that was. You can't blame it. Anything that I do under the age of 18 is, like, just erased when I turn 18.
LAURA
Okay, 2028 draft. Who do we want to be democratic nominee?
AARON
Oh, Jesse from pigeonhole. I honestly think he should run. Hello, Jesse. If you're listening to this, we're going to make you listen to this. Sorry. Besides that, I don't know.
MATT
I don't have, like, an obvious front runner in mind.
AARON
Wait, 2028? We might be dead by 2028. Sorry, we don't talk about AI.
MATT
Yeah.
AARON
No, but honestly, that is beyond the range of planability, I think. I don't actually think all humans are going to be dead by 2028. But that is a long way away. All I want in life is. Not all I want. This is actually what I want out of a political leader, not all I want: somebody who is good on AI and also tells the Justice Department to not sue California or whatever about their gestation crates. Or maybe it's, like, New Jersey or something, about the gestation crates.
MATT
Oh, yeah. Prop 12.
AARON
Yeah. Those are my two criteria.
MATT
Cory Booker is going to be right on the latter.
AARON
Yeah.
MATT
I have no idea about his views on.
AARON
If to some extent. Maybe this is actively changing as we speak, basically. But until recently it wasn't a salient political issue and so it was pretty hard to tell. I don't know. I don't think Biden has a strong take on it. He's like, he's like a thousand years old.
LAURA
Watch what Mitch should have possibly decided. That's real if we don't do mean.
AARON
But like, but his executive order was way better than I would have imagined. And I, like, I tweeted about, know, I don't think I could have predicted that necessarily.
MATT
I agree. I mean, I think the Biden administration has been very reasonable on AI safety issues and that generally is reflective. Yeah, I think that's reflective of the.
AARON
Tongue we know Joe Biden is listening to.
MATT
Okay.
AARON
Okay.
MATT
One topic we have not done is, like. This is a reward for the fundraiser. Do we want to talk about the fundraiser and do a retrospective on that?
AARON
Sure.
MATT
Because I feel like, I don't know. That ended up going at least, like, one sigma above expectations.
AARON
How much? Wait, how much did we actually raise?
MATT
We raised, like, $22,500.
LAURA
Okay. Really pissed that you don't have to go to Ava.
AARON
I guess this person, I won't name them, but somebody who works at a prestigious organization basically was seriously considering donating a good amount of his donation budget specifically for the shrimp costume. And, and we chatted about it over Twitter, DM, and I think he ended up not doing it, which I think was like the right call because for tax reasons, it would have been like, oh. He thought like, oh, yeah, actually, even though that's pretty funny, it's not worth losing. I don't know, maybe like 1000 out of $5,000 tax reasons or whatever. Clearly this guy is actually thinking through his donations pretty well. But I don't know, it brought him to the brink of donating several, I think, I don't know, like single digit thousands of dollars. Exactly.
LAURA
Clearly an issue in the tax.
AARON
Do you have any tax take? Oh, wait, sorry.
MATT
Yeah, I do think we should, like. I mean, to the extent you are allowed by your employer to, in public spaces.
AARON
All people at think tanks, they're supposed to go on podcasts and tweet. How could you not be allowed to do that kind of thing?
MATT
Sorry, keep going. But yeah, no, I mean, I think it's worth dwelling on it a little bit longer because I feel like, yeah, okay, so we didn't raise a billion dollars as you were interested in doing.
AARON
Yeah. Wait, can I make the case for like. Oh, wait. Yeah. Why being slightly unhinged may have been actually object level good. Yeah, basically, I think. This didn't end up. Ex post, we learned this didn't actually end up happening. I think almost all of the impact. Money, because it's basically one and the same in this context. Sorry. Most of the expected money would come in the form of basically having some pretty large, probably billionaire, account just, like, deciding like, oh, yeah, I'll just drop a couple of mil on this funny fundraiser or whatever, or maybe less, honestly. Listen, $20,000 is a lot of money. It's probably more money than I have personally ever donated. On the other hand, there's definitely some pretty EA-adjacent or broadly rationalist/AI-adjacent accounts whose net worth is in at least tens of millions of dollars, for whom $100,000 just would not actually affect their quality of life or whatever. And I think, yeah, there's a non-trivial chance going in that somebody would just decide to give a bunch of money.
MATT
I don't know. My view is that even the kinds of multimillionaires and billionaires that hang out on Twitter are not ever going to drop that much on a random fundraiser. They're more rational.
AARON
Well, there was proof of concept for rich people being insane: Balaji giving, like, a million dollars to James Medlock.
MATT
That's true.
AARON
That was pretty idiosyncratic. Sorry. So maybe that's not fair. On the other hand, I don't know, people do things for clout. And so, yeah, if somebody was like, oh yeah, here's $100,000, guys, I would have quote-tweeted the shit out of them. They would have gotten as much exposure as possible. I don't know. I would guess if you have a lot of rich people friends, they're also probably on Twitter, especially if it's broadly, like, tech money or whatever. And so there's that. There's also the fact that, I don't know, at least some subset of rich people, even if they don't identify as EAs themselves, think, like, oh yeah, EA is broadly legit and correct or whatever. And so it's not just, like, a random ask.
MATT
That's true. I do think the choice of the Animal Welfare Fund made that harder. Right. I think if it's, like, bed nets, it's more likely that sort of random EA-adjacent rich person would be like, yes, this is clearly good. And I think we chose something that we could all get behind.
AARON
Because we have, there was a lot of politicking around.
MATT
Yeah, we all have different estimates of the relative good of different cause areas and this was the one we could very clearly agree on, which I think is very reasonable and good. And I'm glad we raised money for the animal welfare fund, but I do think that reduces the chance of, yeah.
LAURA
I think it pushes the envelope towards the Animal Welfare Fund being more acceptable as a mainstream EA org, just like GiveWell. And so by forcing that issue, maybe we have done more good for the.
AARON
That there's, like, that second-order effect. I do just think, even though, like, choosing this over AMF or whatever global health fund decreased the chance of a random person. Not a random person, but probably decreased the total amount of expected money being given. I think that was just trumped by the fact that the animal welfare number, the number I pull out of thin air, well, not necessarily out of thin air, but very uncertain, is like 1000x or whatever relative to the standard global health options. Quote, let it be known that there is a rabbit on the premises. Do they interact with other rodents?
MATT
Okay, so rabbits aren't rodents. We can put this on the pod. So rabbits are lagomorphs, which is.
AARON
Fuck is that?
MATT
It's a whole separate category of animals.
AARON
I just found out that elk are, like, a type of deer. This is another world-shattering insight.
MATT
No, but rabbits are evolutionarily not part of the same. I guess it's a family on the classification tree.
AARON
Nobody taught us that in 7th grade.
MATT
Yeah, so they're not part of the same family as rodents. They're their own thing. What freaks me out is that guinea pigs and rabbits seem like pretty similar, they have similar diet.
AARON
That's what I was thinking.
MATT
They have similar digestive systems, similar kind of like general needs, but they're actually like, guinea pigs are more closely related to rats than they are to rabbits. And it's like a convergent evolution thing that they ended up.
AARON
All mammals are the same. Honestly.
MATT
Yeah. So it's, like, super weird, but they're not rodents, to answer your question. These kinds of rabbits, like, all pet rabbits, are descended from European rabbits. They're not descended from American rabbits because.
LAURA
American rabbits, like cottontails. Oh, those are different.
MATT
Yeah. So these guys are the kinds of rabbits that will live in warrens, so, like, tunnel systems that they dig. Like Elizabeth Warren. Yeah. And so they'll live socially with other rabbits, and they'll dig warrens. And so they're used to living in social groups. They're used to having a space they need to keep clean. And that's why they can be, like, litter box trained: they're used to having a warren where you don't just want to leave poop everywhere. Whereas American rabbits are more solitary. They live above ground, or my understanding is they sometimes will live in holes, but only occupying a hole that another animal has dug. They won't dig the hole themselves. And so they're just not social. They're not easily litter box trained, that kind of stuff. So all the domestic rabbits are bred from European ones.
AARON
I was thinking, if you got a guinea pig, would they become friends? Okay.
MATT
So apparently they have generally similar dispositions and can get along, but people don't recommend it because each of them can carry diseases that can hurt the other one. And so you actually don't want to do it. But it does seem very cute to have a rabbit and a guinea pig.
AARON
No, I mean, yeah. My last pet was a guinea pig, circa 20. Died like, a decade ago. I'm still not over it.
MATT
Would you consider another one?
AARON
Probably. Like, if I get a pet, it'll be like a dog or a pig. I really do want a pig. Like an actual pig.
MATT
Wait, like, not a guinea pig? Like a full size pig?
AARON
Yeah. I just tweeted about this. I think that they're really cool and we would be friends. I'm being slightly sarcastic, but I do think if I had a very large amount of money, then the two luxury purchases would be, like, a lot of massages, and a caretaker and space and whatever else a pig needs. And so I could have a pet.
MATT
Like, Andy organized a not-EA DC, but EA DC-adjacent trip to Rosie's Farm Sanctuary.
AARON
Oh, I remember this. Yeah.
MATT
And we got to pet pigs. And they were very sweet and seemed very cute and stuff. They just, like, feel dense. Not, like, stupid, but when you pet them, you're like, this animal is very large and heavy for its size. That was my biggest surprising takeaway from interacting with them. The hair is not soft either. No, it's pretty coarse. But they seem like sweeties; they are just very robust.
LAURA
Have you guys seen Babe?
AARON
Yes.
LAURA
That's like one of the top ten movies of all time.
AARON
You guys watch movies? I don't know. Maybe when I was like four. I don't like.
LAURA
Okay, so the actor who played farmer Hoggett in this movie ended up becoming a vegan activist after he realized, after having to train all of the animals, that they were extremely intelligent. And obviously the movie is about not killing animals, and so that ended up going pretty well.
AARON
Yeah, that's interesting. Good for him.
MATT
Okay, sorry. Yeah, no, this is all on track. No, this is great. We are doing a drunk podcast rather than a sober podcast, I think, precisely because we are trying to give the people some sidetracks and stuff. Right. But I jokingly put on my list of topics, like, we solve the two envelopes paradox once and for all.
AARON
No, but it's two boxing.
MATT
No. Two envelopes. No. So this is the fundamental challenge to, I think, one of the fundamental challenges to the approach of, like, you multiply out the numbers and the numbers tell you what to do.
AARON
Yeah, I feel like I don't have, like, a cached take. So just, like, tell me the thing.
MATT
Okay.
AARON
I'll tell you the correct answer. Yeah.
MATT
Okay, great. We were leading into this. You were saying, like, animal charity is 1000 x game, right?
AARON
Conditional. Yeah.
MATT
And I think it's hard to easily get to 1000 x, but it is totally possible to get to 50 x if you just sit down and multiply out numbers and you're like, probability of sentience and welfare range.
AARON
I totally stand by that as my actual point estimate. Maybe like a log mean or something. I'm actually not sure, but. Sorry, keep going.
MATT
Okay, so one line of argument raised against this is the two envelopes problem, and I'm worried I'm going to do a poor job explaining this. Laura, please feel free to jump in if I say something wrong. So two envelopes is like, it comes from the thing of, like, suppose you're given two envelopes and you're told that one envelope has twice as much money in it as the other.
AARON
Oh, you are going to switch back and forth forever.
MATT
Exactly. Every time, you're like, if I switch to the other envelope and it has half as much money as this envelope, then I lose 0.5, but if it has twice as much money as this envelope, then I gain 1. And so I can never decide on an envelope, because it always looks like it's positive EV to switch to the other. So that's where the name comes from.
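The switching argument Matt describes can be written out in a few lines. A minimal sketch (the function name is mine, not from the episode): if you naively treat "the other envelope has half" and "the other envelope has double" as equally likely no matter what amount you hold, switching always looks 25% better, which is exactly why you'd flip-flop forever.

```python
# Naive two-envelopes reasoning: given that my envelope holds `a`,
# the other envelope is assumed to hold a/2 or 2a with probability 0.5 each.
def naive_switch_ev(a: float) -> float:
    """Expected value of the other envelope under the naive 50/50 assumption."""
    return 0.5 * (a / 2) + 0.5 * (2 * a)

# No matter what amount you observe, switching "looks" better:
for a in (10.0, 20.0, 100.0):
    print(a, naive_switch_ev(a))  # always 1.25 * a
```

The paradox dissolves once you fix a prior over the actual pair of amounts (x, 2x): averaged over both envelopes, switching gains nothing, and the 50/50 conditional assumption cannot hold for every observed amount under any proper prior.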
AARON
I like a part that you're like, you like goggles?
MATT
So let me do the brief summary, which is that, basically, depending on which underlying units you pick, whether you work in welfare range units that use one human as the baseline or one chicken as the baseline, you can end up with different outputs of the expected value calculation. Because it's basically: is it the big number of chickens, times some fraction of the human welfare range, that dominates? Or is it the small probability that chickens are basically not sentient, in which case a human's welfare range is huge in chicken units? And which of those dominates is determined by which unit you work in.
AARON
I also think, yeah, this problem is not conducive to alcohol. Or alcohol is not conducive to this problem, or whatever. In the maximally abstract envelope thing, I have an intuition that something weird, kind of probably fake, is going on. But I don't actually see what the issue is here. I don't believe you yet that there's an actual issue here. It's like, okay, just do the better one. I don't know.
MATT
Okay, wait, I'll get a piece of paper. Talk amongst yourselves, and I think I'll be able to show this is like.
LAURA
Me, as the stats person, just saying I don't care about the math at some point. Where it's like, look, I looked at an animal and I'm like, okay, so we have evolutionarily pretty similar paths. It would be insane to think that it's not capable of feeling hedonic pain to pretty much the same extent as me. So I'm just going to ballpark it. And I don't actually care about the welfare ranges.
AARON
I feel like I've proven my pro-animal bona fides. I think it's "bona fide." But here, I don't share that intuition, and I still think we can go into the megapig discourse. Wait, yeah, sort of. Wait, not exactly megapig discourse. Yeah, I remember. I think I got cyberbullied, even though they didn't cyberbully me, because I was informed of offline bullying via cyber. Sorry, this is going to sound absolutely incoherent, so we'll take this part out. Yeah. I was like, oh, I think it's some metaphysical appeal to neuron counts. You specifically told me, like, oh yeah, Mr. So-and-so didn't think this checked out, or whatever. Do you know what I'm talking about?
LAURA
Yeah.
AARON
Okay. No, but maybe I put it in dumb or cringey or pretentious terms, but I do think I'm standing by my metaphysical neurons claim here. Not that I'm super confident in anything, but just that we're really radically unsure about the nature of sentience and qualia and consciousness. And probably it has something to do with neurons, at least; they're clearly related in a very boring sciency way. Yeah. It's not insane to me that the thing that produces, or is directly, one-to-one associated with, a particular amount, for lack of a better term, of conscious experience is some sort of physical thing. The neuron jumps out as the unit that might make sense. And then there's, like, oh yeah, do we really think all the neurons that control the motor function of the tongue make you a quadrillion times more important than a seal or whatever? And then I go back to, okay, even though I haven't done any research on this, maybe it's just the neurons directly involved in pretty low-level hedonic sensations. The most obvious ones would be literal opioid receptors. Maybe those are the ones that matter. This is, like. I feel like we've sort of lost the plot a little.
MATT
Okay, this is like weird drunk math.
AARON
But I think your handwriting is pretty good.
MATT
I think I have it. So suppose we work in human units. I have a hypothetical intervention that can help ten chickens or one human, and we assume that when I say help, it helps them the same amount. So if I work in human units, I say maybe there is a 50% chance that a chicken is 0.01, one one-hundredth, of a human, and a 50% chance that a chicken and a human are equal. Obviously this is a thought experiment; I'm not saying these are my real-world probabilities, but suppose these are my credences. So I do out the EV of the thing that helps ten chickens. I say that, okay, in half of the worlds, chickens are one hundredth of a human, so helping ten of them is worth 0.1, and so 0.5 times 0.01 times ten is 0.05. And then in the other half of the worlds, I say that a chicken and a human are equal. So then my intervention helps ten chickens, which is like helping ten humans, so the benefit in that set of worlds, with my 0.5 probability, is five. And so in the end, the chicken intervention wins, because it has, on net, an EV of 5.05 versus one for the human intervention, because the human intervention always helps one human. Then I switch it around, and I say my base unit of welfare range, or moral weight, or whatever you want to say, is chicken units: one chicken's worth of moral weight. So in half of the worlds, a human is worth 100 chickens, and in the other half of the worlds, a human is worth one chicken. So I do out the EV for my intervention that helps the one human, now in chicken units. In chicken units, half of the time that human is worth 100 chickens, and so I get 0.5 times 100 times one, which is 50. And then in the other half of the worlds, the chicken and the human are equal, so it's 0.5 times one times one, because I'm helping one human, so that's 0.5. The EV is 50.5. And then I do out my EV for my chicken welfare thing. 
That's ten chickens, and I always help ten chickens, and so it's ten of my units of good. So when I worked in human units, I said that the chickens won, because it was 5.05 human units versus one human unit for helping the human. When I did it in chicken units, it was 50.5 to help the humans versus ten to help the chickens. And so now I'm like, okay, my EV is changing just based on which units I work in. And I think this is the two envelopes problem as applied to animals. Brian Tomasik has a long post about this, but I think this is a statement, or an example, of the problem.
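Matt's worked example can be checked with a few lines of arithmetic. This is a sketch using only the hypothetical credences from the episode (50% that a chicken's welfare range is 1/100 of a human's, 50% that they're equal), not anyone's real estimates:

```python
# Two-envelopes-style unit dependence, using the episode's toy numbers.
p = 0.5  # credence on each hypothesis

# Hypothesis A: chicken = 0.01 human. Hypothesis B: chicken = 1 human.
# Intervention 1 helps 10 chickens; intervention 2 helps 1 human.

# Working in HUMAN units (one human's welfare range = 1):
chickens_in_human_units = p * (10 * 0.01) + p * (10 * 1.0)  # 0.05 + 5 = 5.05
human_in_human_units = 1.0                                   # always helps 1 human

# Working in CHICKEN units (one chicken's welfare range = 1):
human_in_chicken_units = p * (1 * 100) + p * (1 * 1.0)       # 50 + 0.5 = 50.5
chickens_in_chicken_units = 10.0                             # always helps 10 chickens

print(chickens_in_human_units / human_in_human_units)        # chickens look ~5x better
print(human_in_chicken_units / chickens_in_chicken_units)    # humans look ~5x better
```

Each intervention looks about five times better than the other depending on the unit, because taking the expectation of the ratio chicken/human is not the same as taking the expectation of human/chicken: by Jensen's inequality, E[X] * E[1/X] >= 1, with equality only when the ratio is certain.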
AARON
Cool.
LAURA
Can I just say something about the moral weight project? It's like, really. We ended up coming up with numbers, which I think may have been a bit of a mistake in the end, because I think the real value of that was going through the literature and finding out the similarities in traits between animals and humans, and there are a surprising number of them that we have in common. And so at the end of the day, it's a judgment call. And I don't know what you do with it, because that is a legit statistical problem that arises when you put numbers on stuff.
MATT
So I'm pretty sympathetic to what you're saying here, of, like, the core insight of the moral weight project is: when we look at features that could plausibly determine capacity to experience welfare, we find that a pig and a human have a ton in common. Obviously, pigs cannot write poetry, but they do show evidence of grief behavior when another pig dies, and they show evidence of vocalizing in response to pain, and all of these things. I think coming out of the moral weight project being like, wow, under some form of utilitarianism it's really hard to justify harms to pigs, harms to pigs really morally matter, makes complete sense. I think the challenge here is when you get to something like black soldier flies or shrimp, where, when you actually look at the welfare range table, you see that the number of proxies that they likely or definitely have is remarkably low. The shrimp number is not hinging on a ton. They share a few things, and because there aren't that many categories overall, that ends up being, in the median case, like, a moral weight of one thirtieth of a human. And so I worry that your articulation of the benefit starts to break down when you get to those animals. And I don't know what you do without numbers there. And I think those numbers are really susceptible to this kind of two envelopes problem.
AARON
I have a question.
MATT
Yeah, go.
AARON
Wait. This is supposed to be, like, 5.05 versus one?
MATT
Yeah.
AARON
And this is 50.5 versus ten? Yeah. It sounds like the same thing to me.
MATT
No, but they've inverted. In this case, the chickens won. So it's like, when I'm working in human units, right? Like, half the time, I help.
AARON
If you're working in human units, then the chicken intervention looks 5.05 times better. Yes. Wait, can I write this down over here?
MATT
Yeah. And maybe I'm not an expert on this problem. This is just like something that tortures me when I try and sleep at night, not like a thing that I've carefully studied. So maybe I'm stating this wrong, but, yeah. When I work in human units, the 50% probability in this sort of toy example that the chickens and the humans are equal means that the fact that my intervention can help more chickens makes the ev higher. And then when I work in the chicken units, the fact that human might be 100 times more sentient than the chicken or more capable of realizing welfare, to be technical, that means the human intervention just clearly wins.
AARON
Just to check that I have this right: the claim is that in human units, the chicken intervention looks 5.05 times better than the human intervention. But when you use chicken units, the human intervention looks 5.05 times better than the chicken intervention. Is that correct?
MATT
Yes, that's right.
AARON
Wait, hold on. Give me another minute.
MATT
This is why doing this drunk was a bad idea.
AARON
In human.
LAURA
No, I think that's actually right. And I don't know what to do about the flies and shrimp and stuff like this. This is like where I draw my line of like, okay, so lemonstone quote.
MATT
Tweeted me, oh, my God.
LAURA
I think he actually had a point: there's a type of EA that is like, I'm going to set my budget constraint and then maximize within that, versus start with a blank slate and allow reason to take me wherever it goes. And I'm definitely in the former camp. My budget constraint is, like, I care about humans and a couple of types of animals, and I'm just drawing the line there. And I don't know what you do with the other types of things.
MATT
I am very skeptical of arguments that are like, we should end Medicare to spend it all on shrimp.
AARON
No one's suggesting that. No, there's like a lot of boring, prosaic reasons.
MATT
I guess what I'm saying is there's a sense in which I'm totally agreeing with you. But I think the challenge is at the object level.
AARON
Yeah, you set us up. The political economy point, I, like, totally buy it.
MATT
I think that there is. This is great. Aaron, I think you should have to take another shot for.
AARON
I'm sorry, this isn't fair. Come on, guys, I don't even drink, so I feel like one drink is, like, infinity times more than normal, if you normalize it. So it's a little bit hard to handle.
MATT
I think there has to be room for moral innovation, in my view. I think that your line of thinking, that we don't want to do radical things based on sort of out-there moral principles in the short term. Right. We totally want to be very pragmatic and careful when our moral ideas put us really far outside of what's socially normal. But I don't think you get to where we are without it. What We Owe the Future was a book that maybe was not perfect, but I think it eloquently makes the point that the first person to be like, hey, slavery in the Americas is wrong, or I should say really the first person who was not themselves enslaved, because of course the people who were actually victims of this system were like, this is wrong from the start. But the first people, random white people in the north, being like, hey, this system is wrong, looked super weird. And the same is true for almost any moral innovation. And so, I think, saying my budget constraint is totally fixed seems wrong to me, because it leaves no room for being wrong about some of your fundamental morals.
LAURA
Yeah, okay. A couple of things here. I totally get that appeal 100%. At the same time, a lot of people have said this about things that now we look back at as being really bad, like the USSR. I think communism ends up looking pretty bad in retrospect, even though I think there are a lot of very good moral intuitions underpinning it.
AARON
Yeah, I don't know. It's like, mostly an empirical question in that case, about what government policies do to human preference satisfaction, which is like, pretty. Maybe I'm too econ. These seem like very different questions.
LAURA
It's like we let our reason go astray, I think.
MATT
Right, we, as in some humans.
AARON
No, I think. Wait, at first glance, I think communism and things in that vicinity seem way more intuitively appealing than they deserve to be, basically. And the notion of, who is it, like, Adam Smith? Something Smith? Yeah, the free hand of the market or whatever. Invisible hand. Invisible free hand of the bunny ear of the market. Maybe it just feels intuitive to me at this point because I've heard it a lot. But no, I totally disagree that people's natural intuition was that communism can't work. I think that just isn't true.
MATT
I'm not sure you guys are disagreeing with one another.
AARON
Yeah.
MATT
Like, I think, Laura, if I can attempt to restate your point, it's that to at least a subset of the people in the USSR at the time of the Russian Revolution, communism plausibly looked like the same kind of moral innovation as lots of stuff we look back on as being really good, like the abolition of slavery or women's rights or any of those other things. And so you need heuristics that will defend against these false moral innovations.
AARON
Wait, no, you guys are both wrong. Wait, hold on. No, the issue there isn't that people, I don't know exactly who's responsible for what, but the issue isn't that people disregarded some sort of deserving heuristic that would have guarded against communism. The issue was that after a couple years of communism or whatever, we had lots of good evidence, in this case empirical evidence, to think, oh no, look, this doesn't actually help people, and then they didn't take action on that. That's the problem. If we were sitting here in 1910 or whatever, I think it's totally possible I would be convinced communism is, in fact, the right thing to do. But the thing that would be wrong is if, okay, five years later, you have kids starving, or people starving or whatever, and maybe you can find intellectuals who claim, and seem reasonably correct, that they can explain how this is downstream of your policies, then doubling down is the issue, not the ex ante hypothesis that communism is good. I don't even know if that made any sense, I think.
LAURA
But we're in the ex ante position right now.
AARON
Yeah, totally. Maybe we'll find out some sort of, whether it's empirical or philosophical or something like maybe in five years or two years or whatever, there'll be some new insight that sheds light on how morally valuable shrimp are. And we should take that into account.
LAURA
I don't know. Because it's really easy to get good feedback when other fellow humans are starving to death. Versus, how are you supposed to judge whether we've made an improvement?
AARON
Yeah, I do think. Okay. Yes. That's like a substantial difference. Consciousness is, like, extremely hard. Nobody knows what the hell is going on. It kind of drives me insane.
MATT
Whomst among us has not been driven insane by the hard problem of consciousness.
AARON
Yeah. For real. I don't know. I don't have to say. It's like, you kind of got to make your best guess at some point.
MATT
Okay, wait, so maybe tacking back to how to solve it, did you successfully do math on this piece of paper?
AARON
Mostly? No, mostly I was wordcelling.
MATT
I like the verb form there.
AARON
Yeah. No, I mean, I don't have, like, a fully thought-out thing. I think in part this might be because of the alcohol. I'm pretty sure that what's going on here is just that there actually is an asymmetry between chicken units and human units, which is that we have a much better idea of the human one. The real uncertainty here is how valuable a chicken is. There's probably somebody in the world who doubts this, but I think the common sense thing, the thing that everybody assumes, is that we basically have it, because we're all humans, and there's a lot of good reasons to think we have a decent idea of how valuable another human life is. And if we don't, it's going to be a lot worse for other species. And so, taking that as a given, the human units are the correct unit, because the thing with the unit is that you take it as given or whatever. The real uncertainty here isn't the relationship between chickens and humans; the real question is how valuable a chicken is. And so the human units are just the correct ones to use.
LAURA
Yeah, there's something there, which is that the right theory is kind of driving a lot of the problem in the two envelopes stuff. Because if you just chose one theory, then it wouldn't really matter which units you used. The equality theory is like, you've resolved all the intertheoretic uncertainty, and so wouldn't that get rid of.
AARON
I don't know. I'm not exactly sure what you mean by theory.
LAURA
Like, are they equal, the equality theory, versus the one-hundredth theory? And we're assuming that each of them has a 50% probability. So if we resolved that, like, we decide upon the one-hundredth theory, then the problem goes away.
AARON
Yeah, I mean, that's true, but you might not be able to.
MATT
Yeah, I think it doesn't reflect our current state or, like.
AARON
No, just, like, taking as given the numbers we invented, which I think is fine for the illustration of the problem. Maybe a better example is, like, another pair: a chicken versus a rabbit or whatever. I don't know.
MATT
Chicken versus shrimp. I think it's like a real one. Because if you're the animal welfare fund, you are practically making that decision.
AARON
Yeah. I think that becomes harder, but it's not fundamentally different. And it's the question of, okay, which actually makes more sense to use as a unit. And maybe, if you can just come up with two different species for which, on the merits, they're equally valid as a unit, then there's no issue anymore. It really is 50-50 in the end.
MATT
Yeah. I don't know. I see the point you're making. With humans, we know, in some sense, we have much more information about how capable of realizing welfare a human is. But I guess I treat this as, like, man, I don't know. The reason all of my confidence intervals are just massive on all these things is that I'm just very confused by these problems.
AARON
Seems like I'm confused by this one. Sorry, I'm, like, half joking. Maybe. I don't know, maybe I'll be less confident once the alcohol wears off or so.
MATT
Yeah, I don't know. I think it's maybe much more concerning to me the idea that working in a different unit changes your conclusion radically.
AARON
Than it is to you.
LAURA
Sometimes. I don't know if this is, like, too much of a stoner take or something like that.
AARON
Bring it on.
LAURA
I kind of doubt working with numbers at all.
MATT
Okay. Fit me well.
LAURA
It's just like when he's.
AARON
Stop doing that.
LAURA
I don't know what to do, because expected value theory. Okay, so one of the things that, when we hired a professional philosopher to talk about uncertainty.
MATT
Pause for a sec. Howie is very sweetly washing his ears in the background, which is very cute. You see how he licks his paws and squeezes his ear?
AARON
Is it unethical for me to videotape?
MATT
No, you're more than welcome to videotape it, but I don't know, he might be done.
AARON
Yeah, that was out.
MATT
Laura, I'm very sorry. No, yeah, you were saying you hired the professional philosopher.
LAURA
Yeah. And one of the first days, she was like, okay, well, is it the same type of uncertainty if we, say, have a one in ten chance of saving the life of a person we know for sure is conscious, versus a certain chance of saving the life of an animal that has a one in ten probability of being sentient? These seem like different types.
AARON
I mean, maybe in some sense they're like different types. Sure. But what are the implications? It's not obviously the same.
LAURA
It kind of calls into question as to whether we can use the same mathematical approach for analyzing each of these.
AARON
I think my main take is, like, you got a better idea? That was like, a generic.
LAURA
No, I don't.
AARON
Yeah. It's like, okay, yeah, these numbers are probably. It seems like the least bad option, even if you're partly going by intuition. I don't know. I think, all things considered, sometimes using numbers is good because our brains aren't built to handle getting moral questions correct.
MATT
Yeah, I mean, I think that there is a very strong piece of evidence for what you're saying, Aaron, which is.
AARON
There's a whole paper on this. It's called "The Unreasonable Effectiveness of Mathematics in the Natural Sciences."
MATT
Or this is. This is interesting. I was going to make sort of an easier or simpler argument, which is just, like, I think the global health EA pitch of, like, we tend to get charity radically wrong.
AARON
Often.
MATT
Charities very plausibly do differ by 100x or 1000x in cost effectiveness. And most of the time, most people don't take that into account, and end up helping people close to them, or an issue that's salient to them, or whatever they've heard about most, and leave what are very difficult to argue are not radically more effective opportunities on the table as a result. Now, I led into this saying that I have this very profound uncertainty when it comes to human versus animal trade-offs. So I'm not saying that, yes, we should just shut up and multiply. But that is sort of the intuition for why the stoner take is very hard for me to endorse: we know, in other cases, that actually bringing numbers to the problem leads to saving many more lives of real people, who have all of the same hopes and dreams and fears and feelings and experiences as the people who would have been saved in alternate options.
LAURA
Isn't it just that still underlying this is the assumption that all humans are equal? And that's, like, the theory that we have endorsed.
AARON
Wait, what?
MATT
Or like on welfare ranges, the differences among different humans are sufficiently small in terms of capacity to realize welfare. That plausibly they are.
AARON
Yeah, I don't think anyone believes that. Does anyone believe that? Wait, some people think that everybody's hedonic range is the same?
LAURA
Randomly select a person who lives in Kenya. You would think that they have the same welfare range, a priori, as somebody.
MATT
Who lives in the description. The fundamental statistics of describing their welfare range are the same.
AARON
Yeah, I think that's probably correct. It's also at an individual level, I think it's probably quite varied between humans.
LAURA
So I don't think we can say that we can have the same assumption about animals. And that's where it kind of breaks down, is we don't know the right theory to apply it.
AARON
Well, yeah, it's a hard question. Sorry, I'm being like kind of sarcastic.
LAURA
I think you have to have the theory right. And you can't easily average over theories with numbers.
MATT
Yeah, no, I mean, I think you're right. I think this is the challenge. The two envelopes problem is exactly this kind of thing. I'm like four chapters into Moral Uncertainty. The book.
AARON
By Will.
MATT
Yeah. MacAskill, Ord and Bykvist. I'm probably getting that name wrong. But they have a third co-author who is not as much of like an.
AARON
Yeah, I don't know. I don't have any super eloquent take except that to justify the use of math right now. Although I actually think I could. Yeah, I think mostly it's like, insofar as there's any disagreement, it's like we're both pointing at the issue, pointing at a question, and saying, look at that problem. It's, like, really hard. And then I'm saying like, yeah, I know. Shit. You should probably just do your best to answer it. Sorry, maybe I'm just not actually adding any insight here or whatever, but I agree with you that a lot of these problems are very difficult, actually. Sorry, maybe this is, like, a little bit of a nonsense. Whatever. Getting back to the hard problem of consciousness, I really do think it feels like a cruel joke that we have to implicitly, we have to make decisions about potentially gigantic numbers of digital lives or, like, digital sentience or, you know, whatever you want to call it, without having any goddamn idea, like, what the fuck is up with consciousness. And, I don't know, it doesn't seem fair. Okay.
MATT
Yeah, wait, okay, so fundraiser. This is great. We've done all of these branching off things. So we talked about how much we raised, which was, like, amount that I was quite happy with, though. Maybe that's, like, selfish because I didn't have to wear a shrink costume. And we talked about. Cause prio. We haven't talked about the whole fake OpenAI thing.
AARON
Fake OpenAI.
MATT
Wait. Like the entire.
AARON
Oh, well, shout out to I really. God damn it, Qualy. I hope you turn into a human at some point, because let it be known that Qualy made a whole ass Google Doc to plan out the whole thing and was, like, the driving. Yeah, I think it's fair to say Qualy was the driving force.
MATT
Yeah, totally. Like, absolutely had the concept, did the Google Doc. I think everybody played their parts really well, and I think that was very fun.
AARON
Yeah, you did. Good job, everybody.
MATT
But, yeah, that was fun. It was very unexpected. Also, I enjoyed that. I was still seeing tweets and replies that were like, wait, this was a bit. I didn't get this after the end of it, which maybe suggests. But if you look at the graph I think I sent in, maybe we.
AARON
Should pull up my. We can analyze my Twitter data and find out which things got how many views have.
MATT
Like, you have your text here. I think the graph of donations by date is, like, I sent in the text chat between.
AARON
Maybe I can pull it like, media.
MATT
Like you and me and Max and Laura. And it's very clear that that correlated with a. I think it's probably pretty close to the end.
AARON
Maybe I just missed this. Oh, Laura, thank you for making.
MATT
Yeah, the cards were amazing cards.
AARON
They're beautiful.
MATT
Oh, wait, okay, maybe it's not. I thought I said. Anyway, yeah, we got, like, a couple grand at the start, and then definitely at least five grand, maybe like, ten grand, somewhere in the five to ten range.
AARON
Can we get a good csv going? Do you have access to. You don't have to do this right now.
MATT
Wait, yeah, let me grab that.
AARON
I want to get, like, aerospace engineering grade cpus going to analyze the causal interactions here based on, I don't know, a few kilobytes of data. It's a baby laptop.
MATT
Yeah, this is what the charts looked like. So it's basically like there was some increase in the first. We raised, like, a couple of grand in the first couple of days. Then, yeah, we raised close to ten grand over the course of the quality thing, and then there was basically flat for a week, and then we raised another ten grand right at the end.
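(A minimal sketch of the kind of analysis being gestured at here: computing a running donation total by date from a CSV. The dates and amounts below are placeholders that only mirror the rough shape Matt describes, not the actual fundraiser data.)

```python
import csv
import io

def cumulative_donations(rows):
    """Given (iso_date, amount) pairs, return (date, running_total) pairs in date order."""
    running = 0.0
    out = []
    for day, amount in sorted(rows):
        running += float(amount)
        out.append((day, running))
    return out

# Placeholder CSV: a couple grand early, a bump mid-campaign, a jump at the end.
raw = """date,amount
2024-01-02,2000
2024-01-10,8000
2024-01-24,10000
"""
reader = csv.DictReader(io.StringIO(raw))
rows = [(r["date"], r["amount"]) for r in reader]
print(cumulative_donations(rows))  # final running total: 20000.0
```

From there, plotting the running total against date is what would reveal the flat week and the end-of-campaign spike.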
AARON
That's cool. Good job, guys.
MATT
And I was very surprised by this.
AARON
Maybe I didn't really internalize that or something. Maybe I was sort of checked out at that point. Sorry.
MATT
I guess. No, you were on vacation because when you were coming back from vacation, it's when you did, like, the fake Sama.
AARON
Yeah, that was on the plane.
LAURA
Okay, yeah, I remember this. My mom got there the next day. I'm like, I'm checking out, not doing anything.
AARON
Yeah, whatever. I'll get RStudio revving later. Actually, I'm gradually turning it into my worst enemy or something like that.
MATT
Wait, how so?
AARON
I just use Python because it's actually faster and catchy and I don't have to know anything. Also, wait, this is like a rant. This is sort of a totally off topic take, but something I was thinking about. No, actually, I feel like a big question is like, oh, are LLMs going to make it easy for people to do bad things that make it easier for me to do? Maybe not terrible things, but things that are, like, I don't know, I guess of dubious or various things that are mostly in the realm of copyright violation or pirating are not ever enforced, as far as I can tell. But, no, I just couldn't have done a lot of things in the past, but now I can, so that's my anecdote.
MATT
Okay, I have a whole python.
AARON
You can give me a list of YouTube URLs. I guess Google must do, like, a pretty good job of policing how public websites do for YouTube-to-MP3 sites, because nothing really just works very well very fast. But you can just do that in Python in, like, five minutes. But I couldn't do that before, so.
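(For the curious, the sort of five-minute script being described might look like the sketch below, which hands a list of YouTube URLs to the `yt-dlp` command-line tool. `yt-dlp` and the flags shown are real, but the URLs and output directory are placeholders, and mp3 conversion additionally requires ffmpeg.)

```python
import subprocess
from typing import List

def build_audio_download_cmd(urls: List[str], out_dir: str = "audio") -> List[str]:
    """Build a yt-dlp command that extracts mp3 audio for each URL."""
    return [
        "yt-dlp",
        "-x",                     # extract audio only
        "--audio-format", "mp3",  # convert to mp3 (requires ffmpeg)
        "-o", f"{out_dir}/%(title)s.%(ext)s",  # output filename template
        *urls,
    ]

# Placeholder URLs; uncomment the subprocess line to actually download.
urls = ["https://youtu.be/EXAMPLE1", "https://youtu.be/EXAMPLE2"]
cmd = build_audio_download_cmd(urls)
# subprocess.run(cmd, check=True)
print(cmd[0], len(urls), "urls")
```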
MATT
I feel like, to me, it's obvious that LLMs make it easier for people to do bad stuff, exactly as you said, because they in general make it easier for people to do stuff, and they have some protections on this, but those protections are going to be imperfect. I think the much more interesting question in some sense is: is this a step change relative to the fact that Google makes it way easier for you to do stuff, including bad stuff, and the printing press made it way easier for you to do?
AARON
I wouldn't even call it a printing press.
MATT
I like think including bad stuff. So it's like, right, like every invention that generally increases people's capability to do stuff and share information also has these bad effects. And I think the hard question is, are LLMs, wait, did I just x.
AARON
No, I don't think, wait, did I just like, hold on. I'm pretty sure it's like still wait, how do I have four things?
LAURA
What is the benefit of LLMs versus.
AARON
You can ask it something and it tells you the answer.
LAURA
I know, but Google does this too.
AARON
I don't mean, I don't know if I have like a super, I don't think I have any insightful take. It's just, in some sense, maybe these are all not the same, but maybe they're all of similar magnitude. But like object level, now we live in a world with viruses, CRISPR. Honestly, I think to the EA movement's credit, indefinite pause, stop AI is just not, it's not something that I support. It's not something like most people support, it's not like the official EA position, and I think for good reason. But yeah, going back to whatever it was, like 1416 or whatever, who knows? If somebody said somebody invented the printing press and somebody else was like, yeah, we should, well, I think there's some pretty big disanalogies, just because of, I guess, biotech in particular, but just like how destructive existing technologies are now. But if somebody had said back then, yeah, let's wait six months and see if we can think of any reason not to release the printing press, I don't think that would have been a terrible thing to do. I don't know, people. I feel like I'm saying something that's going to get coded as pretty extreme. But like ex ante, hard ex ante. People love thinking ex post, nobody. Like, I don't know. I don't actually think that was relevant to anything. Maybe I'm just shit faced right now.
MATT
On one shot of vodka.
AARON
$15 just to have one shot.
MATT
I'll have a little.
AARON
Yeah. I think is honestly, wait. Yeah, this is actually interesting. Every time I drink I hope that it'll be the time that I discover that I like drinking and it doesn't happen, and I think that this is just because my brain is weird. I don't hate it. I don't feel, like, bad. I don't know. I've used other drugs, which I like. Alcohol just doesn't do it for me. Yeah, screw you, alcohol.
MATT
Yes. And you're now $15.99 cheaper. Or, $15.99 poorer.
AARON
Yeah, I mean, this will last me a lifetime.
MATT
You can use it for, like, cleaning your sink.
AARON
Wait, this has got to be the randomest take of all time. But, yeah, actually, like, isopropyl alcohol, top-tier disinfectant. Because you don't have to do anything with it. You leave it there, it evaporates on its own.
MATT
Honestly. Yeah.
AARON
I mean, you don't want to be in an enclosed place or whatever. Sorry. To keep. Forget. This is like.
MATT
No, I mean, it seems like a good take to me.
AARON
That's all.
MATT
Yeah, this is like a very non sequitur.
AARON
But what are your guys' favorite cleaning supplies?
MATT
Okay, this is kind of bad. Okay, this is not that bad. But I'm, like, a big fan of Clorox wipes.
AARON
Scandalous.
MATT
I feel like this gets looked down on a little bit because it's like, in theory, I should be using a spray cleaner and sponge more.
AARON
If you're like, art porn, what theories do you guys.
MATT
If you're very sustainable, very like, you shouldn't just be buying your plastic bucket of Clorox infused wet wipes and you're killing the planet.
AARON
What I thought you were going to say is like, oh, this is like germaphobe coating.
MATT
No, I think this is fine. I don't wipe down my groceries with Clorox wipes. This is like, oh, if I need to do my deep clean of the kitchen, what am I going to reach for? I feel like my roommate in college was very much like, oh, I used to be this person. No, I'm saying he was like an anti wet wipe on sustainability reasons person. He was like, oh, you should use a rag and a spray cleaner and wash the rag after, and then you will have not used vast quantities of resources to clean your kitchen.
AARON
At one point, I tweeted that I bought regular. Actually, don't do this anymore because it's no longer practical. But I buy regularly about 36 packs of bottled water for like $5 or whatever. And people actually, I think it was like, this is like close to a scissor statement, honestly. Because object level, you know what I am, right. It's not bad. For anything. I'm sorry. It just checks out. But people who are normally pretty technocratic or whatever were kind of like, I don't know, they were like getting heated on.
MATT
I think this is an amazing scissor statement.
AARON
Yeah.
MATT
Because I do.
AARON
I used to be like, if I were to take my twelve year old self, I would have been incredibly offended, enraged.
MATT
And to be fair, I think in my ideal policy world, there would be a carbon tax that slightly increases the price of that bottled water. Because actually it is kind of wasteful to. There is something, something bad has happened there and you should internalize those.
AARON
Yeah, I think in this particular case, I think, like, thin plastic is just, like, not. Yeah, I don't think it would raise it by a very large amount, I guess.
MATT
I think this is probably right that even a relatively high carbon tax would not radically change the price.
LAURA
It's not just carbon, though. I think because there is land use implicated in this.
AARON
No, there's not.
LAURA
Yeah, you're filling up more landfills.
AARON
Yeah, I'm just doing like hearsay right now. Heresy.
MATT
Hearsay. Hearsay is going to be whatever. Well, wait, no, heresy is, if you're arguing against standardly accepted doctrine. Hearsay is like, well, it's both. Then you're just saying shit.
AARON
I'm doing both right now. Which is that actually landfills are usually like on the outskirts of town. It's like, fine.
LAURA
They're on the outskirts of town until the town sprawls, and then the elementary school is on a phone.
AARON
Yeah, no, I agree in principle. I don't have a conceptual reason why you're wrong. I just think basically, honestly, the actual heuristic operating here is that I basically outsource what I should pay attention to, to other people. And since I've never seen a LessWrong post or GiveWell post about how actually landfills are filling up, it's like, fine, probably.
LAURA
No, this is me being devil's advocate. I really don't care that about personal waste.
MATT
Yeah, I mean, I think plausibly here, there is, right? So I think object level, the things that matter, when we think about plastic, there is a carbon impact. There is a production impact of like, you need to think about what pollution happened when the oil was drilled and stuff. And then there is like a disposal impact. If you successfully get that bottle into a trash can, for what it's worth.
AARON
My bottles are going into their goddamn trash can.
MATT
Ideally a recycling. No, apparently recycling, I mean, recycling is.
AARON
Well, I mean, my sense is like apparently recycling. Yeah, I recycle metal. I think I do paper out of convenience.
MATT
If you successfully get that bottle into a waste disposal system that is properly disposing of it, rather than, like, you're throwing it on a slap, then I think my guess is that the willingness to pay, or if you really crunch the numbers really hard, it would not be, once again, a huge cost for the landfill costs. On the flip side, if you throw it in a river, that's very bad. My guess is that it would be right for everyone on Twitter to flame you for buying bottles and throwing them in a river if you did that.
AARON
What is the impact on wild animal welfare in equilibrium? No, just kidding. This is something. Yeah, don't worry, guys. No, I was actually the Leave No Trace coordinator for my Boy Scout troop. It's actually kind of ironic because I think it's probably like a dumb ideology or.
LAURA
Whatever, it's a public good for the other people around you to not have a bunch of garbage around on that trail.
AARON
Yeah, I do think. I went to an overnight training for this. They're very hardcore, but basically conceptually incoherent people. I guess people aren't conceptually incoherent. Their concepts are incoherent, who think it's really important that you don't impact, that you walk through mud instead of expanding the trail or whatever. This is not even worth the time right now. Let's figure out how many digital shrimp we need by the heat death of the universe.
MATT
Yeah, I mean, I will say it's probably worth mentioning here, right, that in practice, your carbon and land use footprint is actually really small relative to the average. Yes, you buy a bunch of bottled water, but you live in a dense, walkable community and you rarely drive and all of these things. So in practice, all the people who are roasting you on Twitter for buying.
AARON
All the water, what they should roast me for is buying grass-fed beef. My diet is. This is actually plausibly, like, the worst thing that I do from a climate standpoint. Yeah, I think this is probably, I mean, I've given my take on this. Listen, if you're one of the seven people listening to this, you probably know what it is.
MATT
Yeah, it is true that you have. Two of your regular listeners are here on the podcast with you, which reduces the audience.
AARON
Yeah, I think my sister, the episode of my sister is going to get some. It's going to get some people.
MATT
Oh, yeah, me too.
LAURA
I want to hear a normie also.
AARON
I don't know. I get along great, honestly. Should I call her up? But her friend is like, she's the perfect intermediary my sister and I. Yeah, I guess we don't talk about. I don't know, she's like much more, like happy go lucky. Like less, I don't know, like nerdy person or whatever. But yeah, like our friend. You know what? I'm friends with Annie, too. Annie is like a good intermediary.
MATT
Can I just say, I think my favorite Pigeon Hour moment. Okay, I have a couple favorite Pigeon Hour moments. One is when you literally said, like, hedonic utilitarianism and stuff. That's like in the transcript of the Un Max episode. And it's just like the most perfect distillation of Pigeon Hour is that line. But then also when you just left in an episode with you and Sarah, where you just left in the part where you're discussing whether she could sleep on your couch if she visits DC.
AARON
How weird.
MATT
And I think at some point you're like, we're going to edit this out.
AARON
I always do that and then don't do it. Whatever.
MATT
Talking about travel logistics.
AARON
It's like not a big deal.
MATT
You can tell that last bit of vodka really got me.
AARON
But, yeah, I feel like this isn't especially, like, insane. I don't know. Well, no, mine was like, I took it out eventually, but I left in Nathan's 15-minute bath or, sorry, five-minute bathroom break. I actually felt kind of bad because it was honestly as normal and as minimally embarrassing as it could have been. Like, oh, yeah, I'm going to use the bathroom now. And there was like five minutes of silence. Yeah.
MATT
And I have been amused by the fact that this is like, Sarah has started her own podcast now. You have inspired others with your hedonic utilitarianism, honestly, and your travel.
AARON
I really shouldn't be. As people can tell, I'm not the most eloquent person. Alcohol doesn't help, but I'm never the most eloquent person. But you know what it's like I'm creating awareness of people with.
MATT
Well, I mean, what I was going to say is, in some sense, it is this very unique take on the genre to leave in the bathroom break and the discussion of travel arrangements. Right. I'm laughing now, but I sort of genuinely got a kick out of those things. And in some sense, you're subverting the norms. This is actually art.
AARON
Yes, it is. Yeah. Honestly, I think I feel like I mentioned this before, but all this, my ethos here, I feel like very self important. Discussing my ethos as an artist, as a creator, as they say, is basically doing the opposite of what I tried to do the first time when I started the podcast, which is when I interviewed Rob Wiblin and spent hundreds, honestly, it was a good episode. It was a two hour episode, probably spent not something in the hundreds of hours. I think I guesstimated maybe like 250 hours or something in total for two hour episode. It was cool, right? I think the output was awesome, but that was a lot of effort.
MATT
Wait, okay, so how do you get to 250 hours?
AARON
So I did a bunch of research.
MATT
That's like six weeks of work.
AARON
So the stages, I pulled 250 out of my, like. But I do remember, like, I do remember like the, like the hundreds thing, like it's probably at least 100 hours. But good question. No, I think most of it was like going, reading all of Rob's old blog posts and every time he was interviewed on another podcast and taking notes and coming with questions based on that stuff. And then.
MATT
What even, like then you presumably recorded and then you edit. Well, did you edit or did.
AARON
No, they edited. And then there was a whole back and forth before we were putting together what questions, basically having sketching an outline, like a real talk about.
MATT
Sure, I've wondered about this for 80k, like how much the guests prep, the specific questions they are asked.
AARON
I mean, I don't think it's a secret. Maybe, maybe, I don't know, if somebody, like an anonymous 80k Gmail account or email account, sends, says, better take that shit out, this is top secret, then I will. But no, I don't think it's secret to say that at least the questions are decided ahead of time. Well, not decided unilaterally by one party. I think the main thing is, just like they're not trying to, the ebook is not conducive to what you would do if you wanted to see if a politician was lying or whatever. It's supposed to be like eliciting people's views. They do this selection, I'm being a stan because I don't like a lie, but I really do think it's like a good show or whatever. And they do this selection at the level of taking good people to come on. They want to hear the expected stuff. Honestly, I do think maybe it's a little bit on the margin, it should be a little bit more pushback or whatever during the interview to stuff. But no, I think in general, and maybe the more important thing is making it so that people can take out stuff after the fact. I don't know, this makes total sense. I don't see how people don't do that. You know what I mean? Why would you not want to make everybody chill and try to catch them saying something they don't want to say on a live feed or whatever?
MATT
I mean, I think it all depends on what context, right? Because clearly there are venues where you're trying to. Okay, I'm going to make an analogy with a job interview, right? Where it's like you wouldn't want a candidate to be able to edit the transcript, the job interview after the fact, to make themselves look good. Because the goal of the job interview is to understand the candidate's strengths and weaknesses. And that requires sort of pushing them and putting them on the spot a little bit in ways that they may, after the fact, be like, I wish I hadn't said this thing. Because your goal is to elicit an understanding of how they will do in a workplace where they're going to be asked to do various challenging things that they can't go back and edit afterwards. So, too, with a politician where you're trying to decide, do I want this person to be president? You don't want them to present. To get to present their best face in all.
AARON
No. Yeah, I totally agree. It seems like most podcast episodes don't have those features or whatever. There's good reason to get an unbiased view of. It's important to convey full information about what actually went down during the physical. But I guess real live recording instead of write. Nobody thinks that authors have a duty to public. If they wrote a sentence and then take it out, nobody thinks that that should go in a footnote or something. You know what I mean? So it seems like very intuitive to me that. And I think honestly, substantively, the most important thing is that people are just more relaxed, more willing to say things that are closer to the line, because then they can think about it later, say, like, oh, yeah, maybe we can take this out instead of just ignoring whole topic sections of topic entirely. I. It. I might have to cut this out. Yeah.
MATT
No, it's like getting late. We have recorded.
AARON
This is actually kind of sad. That's only 11:00 p.m. I feel like we should be, like, raging right now.
LAURA
But except for on behalf of the machine, likewise.
AARON
Actually, not even on behalf of the.
MATT
Machine, honestly, you're not a Mayor Pete stan.
AARON
Wait, what does that have to do with.
MATT
Oh, the classic meme is Mayor Pete standing in front of a whiteboard. And the whiteboard has been edited to say, what if we rage on behalf of the machine? It's like a commentary on Mayor Pete's campaign.
AARON
My mom definitely I had do not deserve one. She definitely clicked send. Anyway. It's not even that important. It's not like say, tell me if you're alive. It's like a general.
MATT
Are we going to leave that bit in the podcast?
AARON
Yeah, I'm actually hoping to interview her about tennising me. Just kidding about her. Like I probably shouldn't say that in case it never actually happens.
MATT
Do we want to do a wrap here and do we have any final things we want to say that we.
AARON
Can, like we can always do?
LAURA
We want to put confidence intervals on the number of people who are going to listen.
MATT
Yes.
AARON
Should I pull up the data? Episode data or whatever about how many people. Oh, no, because. Sorry. Wait, I do have access to that, moving to Substack from Spotify, in general.
MATT
People always underestimate how big the variance is in everything. So I think I need to put some probability on this going, like, weirdly viral. It's like some subreddit dedicated to sexy men's voices discovers Aaron. Yes, correct.
AARON
In fact, they're already doing it right, as we see.
MATT
And so then this gets like reposted a thousand times.
AARON
Wait, it doesn't give me. Oh, wait, no, sorry. Podcast. So best of pigeon hour, mine's been an episode, has 65 downloads.
LAURA
Nice.
AARON
Wait, what's that? Wait, so some of the old ones are like, one of the ones that I was on, like a while ago, like before, pre-Pigeon Hour, had two and 161. I guess it's the peak. I think that was the, even though it's like mostly hosted elsewhere, I guess. Oh, Daniel Filan has one, I think. Laura, you were. Wait, no, this isn't, I think it must be wrong because it says that you only have 29, but that must be since we moved. I moved it to Substack. So I don't think this is actually correct.
MATT
I can only say there was at least one because I listened to it. That's the only data I have daily.
AARON
What happened? Yeah, so the most downloads in a single day was 78 on January 24.
MATT
That's surprisingly many.
LAURA
Yeah.
AARON
What happened? Okay, wait, why was it on January 20? Oh, that was Best of Pigeon Hour. It's not insane. Whatever.
LAURA
All right, my CI is like 15 to 595.
MATT
Okay. My 95 is going to be wider for weird tail things. So I'm going to say like ten to 5000. Wait, I don't think there's a 5% chance, or I don't think there's a 2.5% chance, that there are more than 5000 listens. I'll go ten to 1000.
AARON
I will go eleven to 990 so I can sound better. Wait in fact we're both right. Mine is more right.
MATT
Okay, I just want to note Aaron that it is a crime that you don't use the fuck your life bing bong throwing the pigeon video as your intro.
AARON
What is anybody talking about right now?
MATT
Okay, give me a sec, give me a sec. I sent this to you at some right here we are on the perfect and throws a pigeon at them.
AARON
I wonder. Hopefully the pigeon is doing well. Wait, really hope. Yeah, this is all working, et cetera.
MATT
Honestly if this wasn't recording I will be so happy.
AARON
No, that's what happened in my first episode with Nathan. Paul Nathan or UK Nathan? Yes, forecaster Nathan. Yes, prediction market Nathan. Yeah, no, I mean, I felt bad, like it was totally my fault. I was like, yeah, that's why I pay $10.59 a month for fancy Google Meet and two terabytes. Cool. Can I subscribe?
MATT
Yes, you can press stop all.
Table of contents
Note: links take you to the corresponding section below; links to the original episode can be found there.
* Laura Duffy solves housing, ethics, and more [00:01:16]
* Arjun Panickssery solves books, hobbies, and blogging, but fails to solve the Sleeping Beauty problem because he's wrong on that one [00:10:47]
* Nathan Barnard on how financial regulation can inform AI regulation [00:17:16]
* Winston Oswald-Drummond on the tractability of reducing s-risk, ethics, and more [00:27:48]
* Nathan Barnard (again!) on why general intelligence is basically fake [00:34:10]
* Daniel Filan on why I'm wrong about ethics (+ Oppenheimer and what names mean in like a hardcore phil of language sense) [00:56:54]
* Holly Elmore on AI pause, wild animal welfare, and some cool biology things I couldn't fully follow but maybe you can [01:04:00]
* Max Alexander and I solve ethics, philosophy of mind, and cancel culture once and for all [01:24:43]
* Sarah Woodhouse on discovering AI x-risk, Twitter, and more [01:30:56]
* Pigeon Hour x Consistently Candid pod-crossover: I debate moral realism with Max Alexander and Sarah Hastings-Woodhouse [01:41:08]
Intro [00:00:00]
To wrap up the year of Pigeon Hour, the podcast, I put together some clips from each episode to create a best-of compilation. This was inspired by 80,000 Hours, a podcast that did the same with their episodes, and I thought it was pretty cool and tractable enough.
It's important to note that the clips I chose range in length significantly. This does not represent the quality or amount of interesting content in the episode. Sometimes there was a natural place to break the episode into a five-minute chunk, and other times it wouldn't have made sense to take a five-minute chunk out of what really needed to be a 20-minute segment. I promise I'm not just saying that.
So without further ado, please enjoy.
#1: Laura Duffy solves housing, ethics, and more [00:01:16]
In this first segment, Laura Duffy and I discuss the significance and interpretation of Aristotle's philosophical works in relation to modern ethics and virtue theory.
AARON: Econ is like more interesting. I don't know. I don't even remember of all the things. I don't know, it seems like kind of cool. Philosophy. Probably would have majored in philosophy if signaling wasn't an issue. Actually, maybe I'm not sure if that's true. Okay. I didn't want to do the old stuff though, so I'm actually not sure. But if I could. Aristotle? It's all wrong. Didn't you say you got a lot out of Nicomachi or however you pronounce that?
LAURA: Nicomachean Ethics, a guide to how you should live your life. About ethics as applied to your life, because you can't be perfect. Utilitarians, there's no way to be that.
AARON: But he wasn't even responding to utilitarianism. I'm sure it was a good work given the time, but like, there's like no other discipline in which we care. So people care so much about like, what people thought 2000 years ago because like the presumption, I think the justified presumption is that things have iterated and improved since then. And I think that's true. It's like not just a presumption.
LAURA: Humans are still rather the same and what our needs are for living amongst each other in political society are kind of the same. I think America's founding is very influenced by what people thought 2000 years ago.
AARON: Yeah, descriptively that's probably true. But I don't know, it seems like all the whole body of philosophers have they've already done the work of, like, compressing the good stuff. Like the entire academy since like, 1400 or whatever has like, compressed the good stuff and like, gotten rid of the bad stuff. Not in like a high fidelity way, but like a better than chance way. And so the stuff that remains if you just take the state of I don't know if you read the Oxford Handbook of whatever it is, like ethics or something, the takeaways you're going to get from that are just better than the takeaways you're going to get from a summary of the state of the knowledge in any prior year. At least. Unless something weird happened. And I don't know. I don't know if that makes sense.
LAURA: I think we're talking about two different things, though. Okay. In terms of knowledge about logic or something or, I don't know, argumentation about trying to derive the correct moral theory or something, versus how should we think about our own lives. I don't see any reason as to why the framework of virtue theory is incorrect and just because it's old. There's many virtue theorists now who are like, oh yeah, they were really on to something and we need to adapt it for the times in which we live and the kind of societies we live in now. But it's still like there was a huge kernel of truth in at least the way of thinking that Aristotle put forth in terms of balancing the different virtues that you care about and trying to find. I think this is true. Right? Like take one virtue of his humor. You don't want to be on one extreme where you're just basically a meme your entire life. Everybody thinks you're funny, but that's just not very serious. But you don't want to be a boar and so you want to find somewhere in the middle where it's like you have a good sense of humor, but you can still function and be respected by other people.
AARON: Yeah. Once again, I agree. Well, I don't agree with everything. I agree with a lot of what you just said. I think there were, like, two main points of either confusion or disagreement. And the first one is: I definitely think, no, Aristotle shouldn't be discounted, or like his ideas or virtue ethics or anything like that shouldn't be discounted, just because the canonical texts were written a long time ago. I guess it's just, like, a presumption that I have, a pretty strong presumption, that conditional on them being good, they would also be written about today. And so you don't actually need to go back to the founding texts, and in fact, you probably shouldn't, because the good stuff will be explained better and not in, like, weird terms. The terms are used differently, and they're, like, translations from Aramaic or whatever. Probably not Aramaic, probably something else. And yeah, I'm not sure if you.
LAURA: Agree with this, because we have certain assumptions about what words like "purpose" mean now, whereas the old conceptions of them, like telos or happiness, were probably a bit richer. Right, eudaimonia is a much better concept, and to read the original text and see how those different concepts work together is actually quite enriching compared to how people use these words now. I think there just is a lot of value in looking at how these were originally conceived, because with popularizers of the works now, or people who are seriously doing philosophy using these concepts, you just don't have the background knowledge that's necessary to understand them fully if you don't read the canonical texts.
AARON: Yeah, I think that would be true. If you are a native speaker. Do you know Greek? If you know Greek, this is like dumb because then you're just right.
LAURA: I did take a quarter of it.
AARON: Oh God. Oh my God. I don't know if that counts, but that's like more than anybody should ever take. No, I'm just kidding. That's very cool. No, because I was going to say, if you're a native speaker of Greek and you have the connotations of the word eudaimonia and you were living in that time and place, I would say, yeah, that's true actually. That's a lot of nuance, connotation, and context that definitely gets lost with translation. But once you take the jump of reading English translations of the texts, you may as well; there's nothing super special. You're not getting any privileged knowledge from saying the word "eudaimonia" as opposed to just saying some other term as a reference to that concept or something. You're absorbing the connotation and the context via English, I guess, via the mind of literally the translators.
LAURA: Yeah, well see, I tried to learn virtue theory by any other route than reading Aristotle.
AARON: Oh God.
LAURA: I took a course specifically on Plato and Aristotle.
AARON: Sorry, I'm not laughing at you. I'm just like the opposite type of philosophy person.
LAURA: But keep going. Fair. But she had us read his Physics before we read the Nicomachean Ethics.
AARON: I think he was wrong about all that stuff.
LAURA: But it made you understand what he meant by his teleological theory so much better, in a way that I could not get if I was reading some modern thing.
AARON: I don't know, I feel like you probably could. No, sorry, that's not true. I don't think you could get what Aristotle the man truly believed as well via a modern text. But is that what you? Depends. If you're trying to be a scholar of Aristotle, maybe that's important. If you're trying to find the best or truest ethics and learn the lessons of how to live, that's like a different type of task. I don't think Aristotle the man should be all that privileged in that.
LAURA: If all of the modern people who are talking about virtue theory are basically Aristotle, then I don't see the difference.
AARON: Oh, yeah, I guess. Fair enough. And then I would say, like, oh, well, they should probably start. Is that in fact the state of the things in virtue theory? I don't even know.
LAURA: I don't know either.
#2 Arjun Panickssery solves books, hobbies, and blogging, but fails to solve the Sleeping Beauty problem because he's wrong on that one [00:10:47]
All right, next, Arjun Panickssery and I explore the effectiveness of reading books in retaining and incorporating knowledge, discussing the value of long form content and the impact of great literary works on understanding and shaping personal worldviews.
ARJUN: Oh, you were in the book chat, though. The book rant group chat, right?
AARON: Yeah, I think I might have just not read any of it. So do you want to fill me in on what I should have read?
ARJUN: Yeah, it's group chat of a bunch of people where we were arguing about a bunch of claims related to books. One of them is that most people don't remember pretty much anything from books that they read, right? They read a book and then, like, a few months later, if you ask them about it, they'll just say one page's worth of information or maybe like, a few paragraphs. The other is that what is it exactly? It's that if you read a lot of books, it could be that you just incorporate the information that's important into your existing models and then just forget the information. So it's actually fine. Isn't this what you wrote in your blog post or whatever? I think that's why I added you to that.
AARON: Oh, thank you. I'm sorry I'm such a bad group chat participant. Yeah, honestly, I wrote that a while ago. I don't fully remember exactly what it says, but at least one of the things that it said, and that I still basically stand by, is that reading is basically increasing the salience of a set of ideas more so than just filling your brain with more facts. And I think this is probably true insofar as the facts support a set of common themes or ideas that are kind of like the intellectual core of it. It would be really hard... okay, so this is not a book, but okay. I've talked about how much I love the 80,000 Hours podcast, and I've listened to, I don't think every episode, but at least 100 of the episodes. Definitely I've forgotten almost all of the actual propositional pieces of information said, but you're just not going to convince me that it's completely not affecting either my model of the world or stuff that I know or whatever. I mean, there are facts that I could list. I think maybe I should try.
ARJUN: Sure.
AARON: Yeah. So what's your take on book other long form?
ARJUN: Oh, I don't know. I'm still quite confused. I think the impetus for the group chat's creation was actually Hanania's post, where he wrote the case against most books, or "most" was in parentheses or something. I mean, there's a lot of things going on in that post. He just goes off against a bunch of different categories of books that are sort of not closely related. Like, he goes off against great books. I mean, this is not the exact take he gives, but it's something like: the books that are considered great literature are considered great for some sort of contingent reason, not because they're the best at getting you information that you want.
AARON: This is, like, another topic. But I'm, like, anti-great books. In fact, "great" usually just means old and famous. So insofar as that's what we mean by it, I'm like, I think this is a bad thing. Or, like, I don't know, Aristotle is basically wrong about everything and stuff like that.
ARJUN: Right, yeah. Wait, we could return to this. I guess this could also be divided into its component categories. He spends more time, though, I think, attacking a certain kind of nonfiction book that he describes as the kind of book that somebody pitches to a publisher and basically expands a single essay's worth of content into with a bunch of anecdotes and stuff. He's like, most of these books are just not very useful to read, I guess. I agree with that.
AARON: Yeah. Is there one that comes to mind as, like, an example? I mean, I think of Malcolm Gladwell as, like, the kind of... I haven't actually read any of his stuff in a while, but I did, I think, when I started reading nonfiction with any sort of intent, read a bunch of his stuff or whatever, and I vaguely remember that this is basically what he does, for better or worse.
ARJUN: Um, yeah, I guess so. But he's almost, like, trying to do it on purpose. This is the experience that you're getting by reading a Malcolm Gladwell book. It's like Taleb, right? It's just him ranting. I'm thinking, I guess, of books that are about something. So if you have a book that's about negotiation or something, it'll be filled with a bunch of anecdotes that are of dubious usefulness. Or if you get a book that's just about some sort of topic, there'll be historical trivia that's irrelevant. Maybe I can think of an example.
AARON: Yeah. So the last thing I tried to read, maybe I still am reading it but haven't in a couple of weeks or whatever, is the Derek Parfit biography. And part of this is motivated because... I don't even like biographies in general, for some reason, I don't know. But I don't know. He's, like, an important guy. Some of the anecdotes that I heard were shockingly close to home for me, or not close to home, but close to my brain or something. So I was like, okay, maybe I'll see if this guy's like the smarter version of Aaron Bergman. And it's not totally true.
ARJUN: Sure, I haven't read the book, but I saw tweet threads about it, as one does, and I saw things that are obviously false. Right. It's the claims that he read, like, a certain number of pages while brushing his teeth. That's, like, anatomically impossible or whatever. Did you get to that part? Or I assumed no, I also saw.
AARON: That tweet and this is not something that I do, but I don't know if it's anatomically impossible. Yeah, it takes a little bit of effort to figure out how to do that, I guess. I don't think that's necessarily false or whatever, but this is probably not the most important.
ARJUN: Maybe it takes him a long time to brush his teeth.
#3: Nathan Barnard on how financial regulation can inform AI regulation [00:17:16]
In this next segment, Nathan Barnard and I dive into the complexities of AI regulation, including potential challenges and outcomes of governing AI in relation to economic growth and existential security. And we compare it to banking regulation as well.
AARON: Yeah, I don't know. I just get gloomy for, I think justified reasons when people talk about, oh yeah, here's the nine step process that has to take place and then maybe there's like a 20% chance that we'll be able to regulate AI effectively. I'm being facetious or exaggerating, something like that, but not by a gigantic amount.
NATHAN: I think this is pretty radically different to my mainline expectation.
AARON: What's your mainline expectation?
NATHAN: I suppose I expect AI to become an increasingly important part of the economy, and to grow to a very large fraction of the economy, before really crazy stuff starts happening. It'd be very unusual if this extremely large sector of the economy, which impacted a very large number of people's lives, remained broadly unregulated.
AARON: It'll be regulated, but just maybe in a stupid way.
NATHAN: Sure, yes, maybe in a stupid way. I suppose, critically, do you expect the stupid way to be too conservative or too lenient? On the specific question of AI existential risk, will it basically be too conservative or too lenient, or will it just not interact with this at all?
AARON: I guess generally too lenient, but also mostly on a different axis, where, just, like, I don't actually know enough. I don't feel like I've read or learned enough about various governance proposals to have a good object-level take on this. But my broad prior is that, for anything, there are just a lot of ways to regulate something poorly. And insofar as anything isn't regulated poorly, it's because of a lot of trial and error.
NATHAN: Maybe.
AARON: I mean, there's probably exceptions, right? I don't know. Pax Americana, maybe: we just kept winning wars starting with World War II. That's maybe, like, a counterexample or something like that.
NATHAN: Yeah, I think I still mostly disagree with this. Oh, cool. Yeah. I suppose I see a much broader spectrum between bad regulation and good regulation. I agree the space of optimal regulation is very small. But I don't think we have to hit that space for regulation to be helpful, especially if you buy the AI existential safety risk. Then it's not this quite fine balancing act between consumer protection on one side and stifling competition and stifling innovation on the other. It's trying to avert this quite specific, very bad outcome, which is maybe much worse than somewhat slowing economic growth, particularly if we think we're going to get really quite explosive rates of economic growth quite soon. And the cost of slowing economic growth, even by quite a large percentage, is very small compared to the cost of an accidental catastrophe. I sort of think of slowing economic growth as the main way regulation goes wrong currently.
AARON: I think in an actual sense that is correct. There's the question of like okay, Congress in the states like it's better than nothing. I'm glad it's not anarchy in terms of like I'm glad we have a legislature.
NATHAN: I'm also glad the United States.
AARON: How reasons-responsive is Congress? I don't think reasons-responsive enough to make it so that the first big law that gets passed, insofar as there is one, is on the Pareto frontier trading off between economic growth and existential security. It's going to be way inside of that production frontier or whatever. It's going to suck on every axis, maybe not every axis, but at least some relevant axes.
NATHAN: Yeah, that doesn't seem obviously true to me. I think Dodd-Frank was quite a good law.
AARON: That came after 2008, right?
NATHAN: Yeah, correct. Yeah, there you go. No, I agree. I'm not especially confident about doing regulation before there's some quite bad warning shot. And yes, if we're in a world where we have no warning shots and we're just blindsided by everyone getting their atoms stripped within 3 seconds, this is not good. But in worlds where we do have one of those warning shots, I think Glass-Steagall is good law. Not "good law" as a technical term; I think Glass-Steagall was a good piece of legislation. I think Dodd-Frank was a good piece of legislation. I think the 2008 stimulus bill was a good piece of legislation. I think the Troubled Asset Relief Program was a good piece of legislation.
AARON: I recognize these terms and I know some of them and others I do not know the contents of.
NATHAN: Yeah, so Glass-Steagall was the financial regulation passed in 1933 after the Great Depression. The Troubled Asset Relief Program was passed in, I think, 2008, maybe 2009, to help recapitalize banks. Dodd-Frank was the sort of landmark post-financial-crisis piece of legislation, passed in 2010. I think these are all good pieces of legislation. I think financial regulation is probably unusually good amongst US legislation. This is quite a weak take, I guess, that it's unusually good.
AARON: So, I don't actually know the pre-Depression financial history at all, but I feel like the more relevant comparison to the 21st century era is: what was the regulatory regime in 1925 or something? I just don't know.
NATHAN: Yeah, I know a bit. I haven't read this stuff especially deeply, so I don't want to be overconfident here. But sort of the core piece which was important in the Great Depression going very badly was, yeah, no distinction between commercial banks and investment banks, such that a bank could do much riskier things with customer deposits than it could from 1933 until the repeal of Glass-Steagall. And combine that with no deposit insurance: if you have the combination of banks being able to do quite risky things with depositors' money and no deposit insurance, this is quite dangerous, as we now know.
AARON: I'm an expert in the sense that I have the Wikipedia page up. Well, yeah, there was a bunch of things, basically. There's the First Bank of the United States. There's the Second Bank of the United States. There's the free banking era. There was the era of national banks. Yada, yada, yada. It looks like 1907, there was some panic. I vaguely remember this from, like, AP US History, like, seven years ago or whatever.
NATHAN: Yes. I suppose, in short, I sort of agree that the record of non-post-crisis legislation is not very good, but I think the record of post-crisis legislation, at least in the financial sector, really is quite good. I'm sure lots of people disagree with this, but this is my take.
#4 Winston Oswald-Drummond on the tractability of reducing s-risk, ethics, and more [00:27:48]
Up next, Winston Oswald Drummond and I talk about the effectiveness and impact of donating to various research organizations, such as suffering-focused S-risk organizations. We discuss tractability, expected value, and essentially where we should give our money.
AARON: Okay, nice. Yeah. Where to go from here? I feel like largely we're on the same page, I feel like.
WINSTON: Yeah. Is your disagreement mostly tractability, then? Maybe we should get into the disagreement.
AARON: Yeah. I don't even know if I've specified, but insofar as I have one, yes, it's tractability. This is the reason why I haven't donated very much to anywhere, for money reasons. But insofar as I have, I have not donated to CLR or CRS, because I don't see a theory of change that connects the research currently being done to actually reducing s-risks. And I feel like there must be something, because there's a lot of extremely smart people at both of these orgs or whatever, and clearly they've thought about this. And maybe the answer is it's very general and the outcome is just so big in magnitude that anything kind of...
WINSTON: That's part of it, I think. Yeah, part of it is like an expected value thing, and also it's just very neglected. So you want some people working on this, I think, at least, even if it's unlikely to work. Yeah, even that might be underselling it, though. I mean, I do think there's people at CRS and CLR talking to people at AI labs and some people in politics and these types of things. And hopefully the research is a way to know what to try to get done at these places. You want to have some concrete recommendations, and obviously people have to also be willing to listen to you, but I think there is some work being done on that. And research is partially just like a community building thing as well. It's a credible signal that you're smart and have thought about this, and so it gives people reason to listen to you, and maybe that mostly pays off later on in the future.
AARON: Yeah, that all sounds reasonable. And I guess one thing is that I just don't... there's definitely things. I mean, first of all, I haven't really stayed up to date on what's going on, so I've done zero research for this podcast episode, for example. Very responsible. And insofar as I know things about these orgs, it's just based on what's on their website at some given time. So insofar as there's outreach going on, not like behind the scenes, but just not in a super public way, or I guess you could call that behind the scenes, I just don't have reason to know about that. And I guess, yeah, I'm pretty comfortable, I don't even know if this is considered biting a bullet for the crowd that will be listening to this, if that's anybody, with just saying that a very small chance of a very large magnitude just checks out. You can just do expected value reasoning, and that's basically a correct way of thinking about ethics. But, and I don't know how much you know specifically or how much you're allowed or want to reveal, if there was a particular alignment agenda that, I guess, you in a broad sense, like the suffering-focused research community, thought was particularly promising relative to other tractable, generic alignment recommendations, and you were doing research on that and trying to push it into the alignment mainstream, which is not very mainstream, with the hope that that jumps into the AI mainstream, even if that's kind of a long chain of events, I think I would be a lot more enthusiastic about, I don't know, that type of agenda. Because it feels like there's a particular story you're telling where it cashes out in the end. You know what I mean?
WINSTON: Yeah, I'm not the expert on this stuff, but I do think there are some things about influencing alignment at powerful AI labs, for sure. Maybe not, like, a full-on "this is our alignment proposal and it also handles s-risks." But there are some things we could ask AI labs that are already building AGI. We could say: can you also implement these sorts of safeguards, so that if you fail alignment, you fail sort of gracefully and don't cause lots of suffering?
AARON: Right?
WINSTON: Yeah. Or maybe there are other things too, which also seem potentially more tractable. Even if you solve alignment in some sense, like aligning with whatever the human operator tells the AI to do, you can also get the issue that malevolent actors take control of the AI, and then what they want also causes lots of suffering, which that type of alignment wouldn't prevent. Yeah, and I guess I tend to be somewhat skeptical of coherent extrapolated volition and things like this, where the idea is sort of like, it'll just figure out our values and do the right thing. So, yeah, there's some ways to push on this without having a full alignment plan, but I'm not sure if that counts as what you were saying.
AARON: No, I guess it does. Yeah, it sounds like it does. And it could be that I'm just kind of mistaken about the degree to which that type of research and outreach is going on. That sounds like it's at least partially true.
#5: Nathan Barnard (again!) on why general intelligence is basically fake [00:34:10]
Up next, Nathan Barnard is back for his second episode. And we talked about the nature of general intelligence, its relationship with language and the implications of specialized brain functions on the understanding of human cognitive abilities.
NATHAN: Yes. This, like, symbolic reasoning stuff. Yeah. So I think if I was making the case for general intelligence being real, I wouldn't lean on symbolic reasoning, but I would have the language stuff. I'd have this hierarchical structure thing, which.
AARON: I would probably so I think of at least most uses of language and central examples as a type of symbolic reasoning because words mean things. They're like yeah. Pointers to objects or something like that.
NATHAN: Yeah, I'm, like, pretty confident this isn't a good enough description of general intelligence. So, for instance, there's a bit in your brain called (and I'm using a checklist so I don't fuck this up) Wernicke's area. Lots of the ability to use words as pointers, as these arbitrary signs, happens mostly in this area of the brain called Wernicke's area. But very famously, you can have Wernicke's aphasics who lose the ability to do language comprehension and lose the ability to consistently use words as pointers, as signs to point to things, but still have perfectly good spatial reasoning abilities. And conversely, people with Broca's aphasia, whose Broca's area fucks up, will not be able to form fluent sentences and will have some problems with syntax, but they'll still be able to have very good spatial reasoning. They could still, for instance, be good engineers and solve many problems that come up in engineering.
AARON: Yeah, I totally buy that. I don't think language is the central thing. I think it's like an outgrowth of, like I don't know, there's like a simplified model I could make, which is like it's like an outgrowth of whatever general intelligence really is. But whatever the best spatial or graphical model is, I don't think language is cognition.
NATHAN: Yes, this is a really big debate in psycholinguistics, as to whether language is an outgrowth of other abilities the brain has, or whether there are very specialized language modules. Yeah, this is just a very live debate in psycholinguistics at the moment. Actually, I should just explain this hierarchical structure thing, since I keep talking about it. So one theory for how you can comprehend new sentences, like, the dominant theory in linguistics for how you can comprehend new sentences, is that you break them up into chunks, and you form these chunks together in this tree structure. So if you hear a totally novel sentence, like "the pit bull mastiff flopped around deliciously" or something, you can comprehend what the sentence means despite the fact you've never heard it. The theory behind this is, yes, this can be broken up into this tree structure, where the different bits of the sentence are chunks. So, like, "the mastiff" would be one bit, and then you have another bit (I can't remember, I said "flopped around," so that'd be another bit), and then you'd have connectors joining them.
AARON: Okay.
NATHAN: So, the mastiff rolling around. One theory is that one of the distinctive abilities humans have is this quite general ability to break things up into these tree structures. This is controversial within psycholinguistics, but I broadly buy it, because we do see harms to other areas of intelligence. You get much worse at, like, Raven's Progressive Matrices, for instance, when you have an injury to Broca's area, but not worse at, like, tests of spatial reasoning, for instance.
AARON: So is there, like, a main alternative to this for how humans understand language?
NATHAN: As far as the specifics of how we parse completely novel sentences, this is just, like, the academic consensus. Okay.
AARON: I mean, it sounds totally like right? I don't know.
NATHAN: Yeah. But yeah, I suppose, going back to the question: how far is language an outgrowth of general intelligence, versus having much more specialized language modules? Yeah, I lean towards the latter. I still don't want to give too strong of a personal opinion here, because I'm not a linguist.
AARON: This is a podcast, you're allowed to give takes. No one's going to say this is, like, the academic consensus. We want takes.
NATHAN: We want takes. Well, the take off the top of my head is: I think language is not an outgrowth of other abilities. I think the main justification for this is the loss of other abilities we see when you have damage to Broca's area and Wernicke's area.
AARON: Okay, cool. So I think we basically agree on that. And also, I guess one thing to highlight is I think outgrowth can mean a couple of different things. I definitely think it's plausible. I haven't read about this. I think I did at some point, but not in a while. But outgrowth could mean temporarily or whatever. I think I'm kind of inclined to think it's not that straightforward. You could have coevolution where language per se encourages both its own development and the development of some general underlying trait or something.
NATHAN: Yeah. Which seems likely.
AARON: Okay, cool. So why don't humans have general intelligence?
NATHAN: Right. Yeah. As I was sort of talking about previously.
AARON: Okay.
NATHAN: I think I'd like to go back to, like, a high-level argument: there appear to be much higher levels of functional specialization in brains than you'd expect. You can lose much more specific abilities than you'd expect to be able to lose. A famous example is face blindness, actually: you can lose specifically the ability to recognize faces. You can also lose the ability to specifically recognize things which you're, like, an expert in.
AARON: Who loses this ability?
NATHAN: If you've damaged your fusiform face area, you'll lose the ability to recognize faces, but nothing else.
AARON: Okay.
NATHAN: And there's this general pattern that your brain is much more you can lose much more specific abilities than you expect. So, for instance, if you sort of have damage to your ventral, medial, prefrontal cortex, you can say the reasoning for why you shouldn't compulsively gamble but still compulsively gamble.
AARON: For instance okay, I understand this not gambling per se, but like executive function stuff at a visceral level. Okay, keep going.
NATHAN: Yeah. Some other nice examples of this. I think memory is quite intuitive. So there's a very famous patient called Patient HM, who had his hippocampus removed and, as a result, lost all declarative memory: all memory of specific facts and things which happened in his life. He just couldn't remember any of these things, but was still perfectly functioning otherwise. At a really high level, I think this functional specialization is probably the strongest piece of evidence against the general intelligence hypothesis. I think, fundamentally, the general intelligence hypothesis implies that if you harm a piece of your brain, if you have some brain injury, you'd generically get worse at all tasks that use general intelligence. But instead, the things people include in general intelligence, like the ability to write, the ability to speak, maybe not speak, the ability to do math: you can lose these quite specifically.
AARON: You do have this, it's just not as easy to analyze in a cog-sci paper as IQ or whatever. So there is something where, if somebody has a particular cubic centimeter of their brain taken out, that's really excellent evidence about what that cubic centimeter does or whatever, but non-spatial modification is just harder to study and analyze. I guess we can give people drugs, right? Set aside the psychometric stuff, but suppose that general intelligence is mostly a thing or whatever, and you actually can ratchet it up and down. This is probably just true, right? You can probably give somebody different doses of, like, various drugs. I don't know, like laughing gas, like, probably weed. Like, I don't know.
NATHAN: So I think this just probably isn't true. Your working memory correlates quite strongly with g, and having better working memory can generically make you much better at lots of tasks.
AARON: Yeah.
NATHAN: Sorry, but this is just, like, a specific ability. It's just specifically your working memory which is improved if you take working memory drugs. I think there are a few things, like memory, attention, maybe something like decision making, which are all extremely useful abilities and improve how well other cognitive abilities work. But they're all separate things. If you improved your attention abilities and your working memory, but you had some brain injury which meant you'd lost the ability to parse syntax, you would not get better at parsing syntax. And you can also improve things separately: you can improve attention and improve working memory separately. It's just not this one dial which you can turn up.
AARON: There's good reason to expect that we can't turn it up, because evolution is already sort of maximizing, given the relevant constraints. Right. So you would need to be looking at, like, injuries. Maybe there are studies where they try to add a cubic centimeter to someone's brain, but normally it's the opposite: you start from some high baseline and then see what faculties you lose. Just to clarify, I guess.
NATHAN: Yeah, sorry, I think I've lost the thread. You still think there probably is some general intelligence ability to turn up?
AARON: Honestly, I think I haven't thought about this nearly as much as you. I kind of don't know what I think at some level. If I could just write down all of the different components, and there are like 74 of them, and that's what I think general intelligence consists of, then I guess in some sense, yeah, that does make it less of an ontologically legit thing or something. The motivating thing here is that with humans, we know humans range in IQ, and, setting aside a very tiny subset of people with severe brain injuries or developmental disorders or whatever, almost everybody has some sort of symbolic reasoning that they can do to some degree. Whereas the smartest, maybe I'm wrong about this, but as far as I know, the smartest squirrel is not going to be able to have something semantically represent something else. And that's what I intuitively want to appeal to, you know what I mean?
NATHAN: Yeah, I know what you're guessing at. So I think there's like two interesting things here. So I think one is, could a squirrel do this? I'm guessing a squirrel couldn't do this, but a dog can, or like a dog probably can. A chimpanzee definitely can.
AARON: Do what?
NATHAN: Chimpanzees can definitely learn to associate things in the world with arbitrary signs.
AARON: Yes, but maybe I'm just adding epicycles here, and correct me if I'm wrong, maybe I'm just wrong about this, but I would assume that chimpanzees cannot use that sign in a domain that is qualitatively different from the ones they've learned it in. Right. So, like, a dog will know that a certain sign means sit or whatever, but maybe that's not a good...
NATHAN: I don't know. I think this is basically not true.
AARON: Okay.
NATHAN: And we sort of know this from teaching.
AARON: Teaching.
NATHAN: There's, famously, Koko the gorilla, and also a bonobo whose name I can't remember, who were taught sign language. And the thing they were consistently bad at was putting together sentences. They could learn quite large vocabularies, learning to associate signs with things, and by large I mean in the low hundreds of words, which they could consistently use correctly.
AARON: What do you mean? In what sense? What is the bonobo using?
NATHAN: A very famous and quite controversial example is that Koko the gorilla saw a swan outside and signed "water bird." That's a controversial example. But the controversial part is the syntax, the putting water and bird together. It's not the controversial part that she could see a swan and call that a bird.
AARON: Yeah, I mean, this is kind of just making me think, okay, maybe the threshold for g is just, like, at the chimp level or something, or whatever species is the most like that. Sure. If a species really can generate, from a prefix and a suffix or whatever, a concept that they hadn't learned before.
NATHAN: Yeah, this is a controversial example of that, and the combination is the controversial part. Yeah, I suppose this brings back why I think this matters: will there be this threshold which AIs cross, such that their reasoning after it is qualitatively different to their reasoning previously? And this means two things. One, a much faster increase in AI capabilities, and two, alignment techniques which worked on systems which didn't have g will no longer work on systems which do have g. That's why I think this actually matters. But if we're accepting this, I think elephants probably also qualify. If we're saying g is at, like, the level of chimpanzees, chimpanzees just don't look that qualitatively different to other animals. Lots of other animals live in similarly complex social groups. Lots of other animals use tools.
AARON: Yeah, sure. For one thing, I don't think there's going to be a discontinuity, in the same way that there wasn't a discontinuity at any point in humans' evolution from the first prokaryotic cells, or eukaryotic, one of those two, or both, I guess. My train of thought: yes, I know it's controversial, but let's just suppose that the sign language thing was legit with the water bird and that's not, like, a random one-off fluke or something. Then maybe this is just some sort of weird vestigial evolutionary accident that actually isn't very beneficial for chimpanzees, and they just stumbled their way into it, and it enabled evolution to bootstrap chimp genomes into human genomes. Because at some point the smartest... actually, I don't know. Honestly, I don't have a great grasp of evolutionary biology or evolution at all. But, yeah, it could just be not that helpful for chimps and helpful for an extremely smart chimp that looks kind of different or something like that.
NATHAN: Yeah. I suppose the other thing going on here, and I don't want to keep banging on about this, but you can lose linguistic ability. This happens in stroke victims, for instance; it's not that rare. You just lose linguistic ability but still have all the other abilities which we think of as general intelligence, which I think would be included in the general intelligence hypothesis.
AARON: I agree that's, like, evidence against it. I just don't think it's very strong evidence, partially because I think there is a real school of thought that says that language is fundamental. Like, language drives thought. Language is, like, primary to thought or something. And I don't buy that. If you did buy that, I think this would be, like, more damning evidence.
#6 Daniel Filan on why I'm wrong about ethics (+ Oppenheimer and what names mean in like a hardcore phil of language sense) [00:56:54]
[Note: I forgot to record an intro segment here. Sorry!]
AARON: Yeah. Yes. I'm also anti-scam. Right, thank you. Okay, so the thing that we were talking about last time we talked, which is the thing I think we actually both know stuff about instead of just repeating New York Times articles, is my nuanced ethics takes and why you think I'm wrong. We can talk about that and then just branch off from there.
DANIEL: Yeah, we can talk about that.
AARON: Maybe see where that leads. Luckily I have a split screen up, so I can pull up things. Maybe this is kind of egotistical or something to center my particular view, but you've definitely given me some of the better pushback, or whatever. I haven't gotten that much feedback of any kind, I guess, but it's still interesting to hear your take. So basically my ethical position, or the thing that I think is true, which I think is not the default view (I think most people think this is wrong), is that total utilitarianism does not imply that for any amount of suffering that could be created, there exists some other extremely large, arbitrarily large amount of happiness that could also be created which would morally justify the former. Basically.
DANIEL: So you think that even under total utilitarianism there can be big amounts of suffering such that there's no way to morally tip the calculus. However much pleasure you can create, it's just not going to outweigh the fact that you inflicted that much suffering on some people.
AARON: Yeah, and I'd highlight the word inflicted if something's already there and you can't do anything about it, that's kind of neither here nor there as it pertains to your actions or something. So it's really about you increasing, you creating suffering that wouldn't have otherwise been created. Yeah. It's also been a couple of months since I've thought about this in extreme detail, although I thought about it quite a bit. Yeah.
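[Note: one toy way to make Aaron's position structurally coherent, which is my illustration and not anything either of us said, is an aggregative axiology where created happiness contributes value with a bounded total weight while inflicted suffering subtracts without bound. The cap of 100 and the exponential shape are arbitrary assumptions.]

```python
import math

def world_value(happiness, suffering):
    """Toy axiology (illustrative only, not standard total utilitarianism):
    happiness contributes value that asymptotically caps at 100 units,
    while inflicted suffering subtracts linearly without bound. Under this
    function, any suffering above 100 cannot be offset by ANY finite
    amount of happiness."""
    return 100 * (1 - math.exp(-happiness / 100)) - suffering

print(world_value(10**9, 150))  # -50.0: unoffsettable by any happiness
print(world_value(10**9, 50))   # 50.0: offsettable, large happiness wins
```

This is only meant to show that "aggregative but non-offsettable" is a mathematically consistent combination, which is the structural point at issue between Aaron and Daniel.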
DANIEL: Maybe I should say my contrary view, I guess, when you say that, I don't know, does total utilitarianism imply something or not? I'm like, well, presumably it depends on what we mean by total utilitarianism. Right. So setting that aside, I think that thesis is probably false. I think that yeah. You can offset great amounts of suffering with great amounts of pleasure, even for arbitrary amounts of suffering.
AARON: Okay. I do think that position is the much more common, and even, I'd say, default view. Do you agree with that? It's sort of the implicit position of people who are self-described total utilitarians who haven't thought a ton about this particular question.
DANIEL: Yeah, I think it's probably the implicit default. I think it's the implicit default in ethical theory or something. I think that in practice, when you're being a utilitarian, I don't know, normally, if you're trying to be a utilitarian and you see yourself inflicting a large amount of suffering, I don't know. I do think there's some instinct to be like, is there any way we can get around this?
AARON: Yeah, for sure. And to be clear, I don't think this would only look like a thought experiment. I think what it looks like in practice, and I will throw in caveats as I see necessary, is spreading either wild animals or humans or even sentient digital life through the universe. Or, in a less risky way, say, making multiple copies of humanity or something like that. An example of creating suffering would be, for example, just creating another duplicate of Earth. Okay.
DANIEL: And that would be so much suffering that not even the pleasures of Earth outweigh it?
AARON: Not necessarily, which is kind of a cop-out. But my inclination is that if you include wild animals, the answer is yes, that creating another Earth, especially... yeah. But I'm much more committed to the claim that some such amount exists than that this particular time and place in human history is like that, or whatever.
DANIEL: Okay, can I get a feel of some other concrete cases to see?
AARON: Yeah.
DANIEL: So one example that's on my mind is the atomic bombing of Hiroshima and Nagasaki, right? So the standard case for this is like, yeah, what, a hundred-odd thousand people died? Quite terrible, quite awful. And some people were sort of instantly vaporized, but a lot of people died in extremely painful ways. But the counter-case is, well, the alternative to that would have been an incredibly grueling land invasion of Japan, where many more people would have died. Or, you know, regardless of what the actual alternatives were: if you think about the atomic bombings, do you think that's the kind of infliction of suffering where there's just not an offsetting amount of pleasure that could make it okay?
AARON: My intuition is no, that it is offsettable, but I would also emphasize that given the actual historical contingencies, the alternative, the implicit case for the bombing includes reducing suffering elsewhere rather than merely creating happiness. There can definitely be two bad choices that you have to make or something. And my claim doesn't really pertain to that, at least not directly.
#7: Holly Elmore on AI pause, wild animal welfare, and some cool biology things I couldn't fully follow but maybe you can [01:04:00]
Up next, Holly Elmore and I discuss the complexities and implications of AI development and open sourcing. We talk about protests and ethical considerations around her campaign to pause the development of frontier AI systems until we can tell that they're safe.
AARON: So what's the plan? Do you have a plan? You don't have to have a plan. I don't have plans very much.
HOLLY: Well, right now I'm hopeful about the UK AI Summit. PauseAI and I have planned a multi-city protest on 21 October to encourage the UK AI Safety Summit to focus on safety first and to have arranging a pause, or negotiating one, as a topic. There's a little bit of upsetting advertising for that thing that's like, we need to keep up capabilities too, and I just think that's really a secondary objective; I want it to be focused on safety. So I'm hopeful about the level of global coordination that we're already seeing. It's going so much faster than we thought. Already the UN Secretary General has been talking about this, and there have been meetings about it. At the beginning of this year, nobody was thinking we'd be talking about this as a mainstream topic, and then actually governments have been very receptive. So right now, other than just influencing opinion, the targets I'm focused on are things like encouraging these international efforts. And I have a protest on Friday, my first protest that I'm leading, and I'm kind of nervous. It's against Meta, at the Meta building in San Francisco, about their sharing of model weights. They call it open source. It's not exactly open source, but I'm probably not going to repeat that message because it's pretty complicated to explain. I really love the pause message because it's just so hard to misinterpret, and it conveys pretty clearly what we want, very quickly. And you don't have a lot of bandwidth in advocacy. You write a lot of materials for a protest, but mostly what people see is the title.
AARON: That's interesting, because I sort of have the opposite sense. I agree that in terms of how many informational bits you're conveying in a particular phrase, "pause AI" is simpler, but in some sense it's not nearly as obvious. At least, maybe I'm more of a tech-brain person or whatever, but why that is good, as opposed to "don't give an extremely powerful thing to the worst people in the world." That's, like, longer, but everyone gets it.
HOLLY: Maybe I'm just weird. The feedback I've gotten from open source ML people, the number one thing, is like, it's too late, there are already super powerful models, there's nothing you can do to stop us. Which sounds so villainous, I don't know if that's what they mean. Well, actually the number one message is: you're stupid, you're not an ML engineer. Which, okay. Number two is: it's too late, there's nothing you can do. There are all of these others, and Meta is not even the most powerful sharer of open source models. I was like, okay, fine. And I don't think that protesting too much is really the best in these situations; I just mostly let that lie. I could give my theory of change on this and why I'm focusing on Meta. Meta is a large company I'm hoping to have influence on. There is a Meta building in San Francisco near where... yeah. Meta is the biggest company that is doing this, and I think there should be a norm against model weight sharing. I was hoping it would be something that employees of other labs would be comfortable attending, since it's a policy that is not shared across the labs; obviously the biggest labs don't do it. OpenAI is called OpenAI but very quickly decided not to do that. Yeah, I kind of wanted to start in a way that made it more clear than "pause AI" does that anybody's welcome. I thought a one-off issue like this that a lot of people could agree on and form a coalition around would be good. A lot of the open source ML people think that what I'm saying is secretly an argument for tyranny: that I just want centralization of power, that I just think there are elites better qualified to run everything. It was even suggested, because I didn't mention China, that I was racist, because apparently I didn't think foreign people could make better AIs than Meta.
AARON: I'm grimacing here. The intellectual disagreeableness, if that's an appropriate term or something like that. Good on you for standing up to some pretty bad arguments.
HOLLY: Yeah, it's not that bad. I'm lucky that I truly am curious about what people think about stuff like that. I just find it really interesting. I spent way too much time understanding the alt-right, for instance. I'm kind of sure I'm on a list somewhere because of the forums I was on, just because I was interested. And it is something that serves me well with my adversaries. I've enjoyed some conversations with people, because my position on all this is that, look, I need to be convinced, and the public needs to be convinced, that this is safe before we go ahead. So I kind of like not having to be the smart person making the arguments. I kind of like being like, can you explain like I'm five? I still don't get it. How does this work?
AARON: Yeah, no, I was thinking not long ago about open source. The phrase has such a positive connotation, and in a lot of contexts it really is good. I don't know, I'm glad that random tech things from 2004 or whatever, like the Reddit source code, are open source; seems cool. I don't actually know if that's how that went, right. But yeah, I feel like maybe even just breaking down where the positive connotation comes from, and why it's in people's self-interest, this is really what I was thinking about: why is it in people's self-interest to open source things that they made? And that might break apart the allure or sort of ethical halo that it has around it. And I was thinking it probably has something to do with, oh, if you're a tech person who makes some cool product, you could try to put a gate around it by keeping it closed source and maybe trying to get intellectual property or something. But probably you're extremely talented already, or pretty wealthy, and definitely can be hired in the future. And if you're not wealthy yet, I don't mean to put things in just materialist terms, but basically it could easily just be... yeah, I think I'll probably take that bit out, because I didn't mean to put it in strictly monetary terms, but basically it just seems pretty plausibly in an arbitrary tech person's self-interest, broadly construed, to in fact open source their thing. Which is totally fine and normal.
HOLLY: I think that's like 99%... it's a way of showing magnanimity, but...
AARON: I don't mean to make this sound so, like... I think 99.9% of human behavior is like this. I'm not saying it's some secret, terrible, self-interested thing; I'm just making it more mechanistic. Okay, it's a status thing. It's an advertising thing. It's like, okay, you're not really in need of direct economic rewards, or it sort of makes sense to play the long game in some sense, and this is totally normal and fine, but at the end of the day, there are reasons why it makes sense, why it's in people's self-interest, to open source.
HOLLY: Literally, the culture of open source has been able to bully people into, like, oh, it's immoral to keep it for yourself, you have to release it. So it's set the norms in a lot of ways. "Bully" sounds bad, but I mean, there is a lot of pressure. It looks bad if something is closed source.
AARON: Yeah, it's kind of weird that Meta... does Meta really think it's in their... I don't know. The most economic take on this would be, oh, they somehow think it's in their shareholders' interest to open source.
HOLLY: There are a lot of speculations on why they're doing this. One is that, yeah, their models aren't as good as the top labs', but if it's open source, quote unquote, then people will integrate Llama 2 into their apps, or people will use it and become... I don't know, it's a little weird, because I don't know why using Llama 2 commits you to using Llama 3 or something, but it's just a way for their models to get into places where, if you had to pay for their models, people would go for better ones. That's one thing. Another is... I guess these are too speculative. I don't want to be seen repeating them since I'm about to do this protest. But there's speculation that it's in their best interests in various ways to do this. I think it's possible also that... so what happened with the release of Llama 1 is they were going to allow approved people to download the weights, but then within four days somebody leaked Llama 1 on 4chan, and then they just were like, well, whatever, we'll just release the weights. And then they released Llama 2 with the weights from the beginning. And it's not 100% clear that they intended to do full open source, or what they call open source. And I keep saying it's not open source, but this is a little bit of a tricky point to make, so I'm not emphasizing it too much. They say that they're open source, but they're not: the algorithms are not open source. There are open source ML models that have everything open sourced, and I don't think that's good; I think that's worse, so I don't want to criticize them for not doing that. But they're saying it's open source because there's all this goodwill associated with open source. Actually what they're doing is releasing the product for free, or, you could even say, trade secrets, things that should be trade secrets. And yeah, they're telling people how to make it themselves.
So it's a little bit of... they're intentionally using this label that has a lot of positive connotations, but probably, according to the Open Source Initiative, which maintains the open source licenses, it should be called something else, or there should just be a new category for LLMs. But I don't want things to be more open, and it could easily sound like a rebuke that it should be more open if I make that point. And I also don't want to call it open source, because I think open source software probably does deserve a lot of its positive connotation, but they're not releasing the software part, because that would cut into their business. I think fully open sourcing would be much worse; I think they shouldn't do it. But I'm also not clear on this, because the open source ML critics say that everyone does have access to the same data set as Llama 2. But I don't know. Llama 2 had 7 billion tokens, and that's more than GPT-4, and I don't understand all of the details here. It's possible that the tokenization process was different or something, and that's why there were more. But Meta didn't say what was in the Llama 2 data set, and usually there's some description given of what's in the data set. That led some people to speculate that maybe they're using private data; they do have access to a lot of private data that shouldn't be in there. It's not just, like, the Common Crawl backup of the Internet, which everybody's basing their training on, plus maybe some works of literature they're not supposed to have. There's a data set there that is in question. Meta's is bigger than... sorry, I don't have a list in front of me, I'm not going to get stuff wrong, but it's bigger than kind of similar models, and I thought that they have access to extra stuff that's not public. And it seems like people are asking if maybe that's part of the training set.
But yeah, the open source ML people that I've been talking to believe that anybody who's decent can just access all of the training sets that they've all used.
AARON: As an aside, I tried to download it. I'm guessing, I don't know, it depends how many people listen to this, but in one sense, for a competent ML engineer, I'm sure open source really does mean that. But then there's people like me. I knew a little bit of R, I think. I feel like I caught the very last boat where I could know just barely enough programming to try to learn more, I guess, coming out of college. A couple of months ago, I tried to do the thing where you download Llama 2, but I tried it all and it didn't work; now I just have a bunch of empty folders, and I forget what error message I got. Then I tried to train my own model on my MacBook. It just printed the most common token in the training set over and over; that's, like, the only thing that language model would do. So anyway, sorry, this is not important whatsoever.
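[Note: the failure mode Aaron describes, a barely-trained language model emitting the same token over and over, is the classic degenerate baseline: a model that has learned nothing from context minimizes its loss with a constant prediction, namely the corpus's most frequent token. A minimal sketch, with a made-up toy corpus:]

```python
from collections import Counter

def most_common_token_model(corpus_tokens):
    """A degenerate 'language model': with no learned structure at all,
    the loss-minimizing constant prediction is simply the most frequent
    token in the training corpus, regardless of context."""
    counts = Counter(corpus_tokens)
    top_token, _ = counts.most_common(1)[0]
    return lambda _context: top_token

tokens = "the cat sat on the mat and the dog sat too".split()
model = most_common_token_model(tokens)

# Whatever context you give it, it predicts the corpus's top token:
print(model(["cat", "sat"]))  # the
```

Real training collapses to this behavior whenever the model hasn't yet learned to use its context, which is plausibly what happened on the MacBook run.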
HOLLY: Yeah, I feel torn about this, because I used to be a genomicist. I did computational biology, and it wasn't machine learning, but I used a highly parallel GPU cluster, so I know some stuff about it. Part of me wants to mess around with it, but part of me feels like I shouldn't get seduced by this. I am kind of worried that this has happened in the AI safety community. From the beginning, it was people who were interested in the singularity and then realized there was this problem. And so it's always been people really interested in tech and wanting to be close to it. And I think our direction has been really influenced by wanting to be where the action is with AI development. And I don't know that that was right.
AARON: Not personally, but I guess at the individual level, I'm not super worried about people like you and me losing the plot by learning more about ML on our personal...
HOLLY: You know what I mean? But I guess, yeah, this is maybe more of a confession than a point. It does feel a little bit like it's hard for me to enjoy, in good conscience, the cool stuff.
AARON: Okay. Yeah.
HOLLY: I just see people be so attached to this as their identity. They really don't want to go in a direction of not pursuing tech, because this is kind of their whole thing. What would they do if we weren't working toward AI? This is a big fear that people express to me about a pause. They usually don't say it in so many words, but they say things like, well, I don't want AI to never get built. Which, by the way, just to clear up: my assumption is that, unless society ends for some other reason, a pause would eventually be lifted. It couldn't be forever. But some people are worried that if you stop the momentum now, people are just so Luddite in their insides that we would just never pick it up again, or something like that. And, yeah, there's some identity stuff that's been expressed, again not in so many words, about who will we be if we're just sort of activists instead of working on...
AARON: Maybe one thing that we might actually disagree on, and it's kind of important: I think we both agree that an AI pause is better than the status quo, at least broadly, whatever; I know that can mean different things. But maybe I'm not super convinced, actually, that... what am I trying to say? At least right now, if I could just imagine the world where OpenAI and Anthropic had a couple more years to do stuff and nobody else did, that would be better. I kind of think that they are reasonably responsible actors. I don't think that's an actual possibility. But maybe we have a different idea about the degree to which the problem is, like, a thousand different actors having increasingly powerful models, versus the actual state of the art right now being plausibly near a dangerous threshold or something. Does this make any sense to you?
HOLLY: Both those things are... yeah. And this is one thing I really like about the pause position: unlike a lot of proposals that try to allow for alignment, it's not really close to a bad choice. It's just more safe. I mean, it might be forgoing some value if there is a way to get an aligned AI faster. But I like the pause position because it's robust to this. I can't claim to know more about alignment than OpenAI or Anthropic staff; I think they know much more about it. But I have fundamental doubts about the concept of alignment that make me concerned about, even if things go nominally right, what perverse consequences could follow from that. I have, I don't know, a theory of psychology that's not super compatible with alignment. Like, humans living in society together are aligned with each other, but the society is a big part of that.
#8: Max Alexander and I solve ethics, philosophy of mind, and cancel culture once and for all [01:24:43]
Now, up next, Max Alexander and I talk about Twitter, how it's changed as a platform, and many fun things besides.
AARON: Wait, maybe. Like, what else? I don't know. What do you think about Twitter, like, in general? I don't know. Because this is how we met.
MAX: Yeah.
AARON: We have not met in real life.
MAX: I don't think it's much worse as a platform than it was two years ago or something.
AARON: Okay.
MAX: Stability-wise, there are small changes that make it worse or something, but largely my experience is unchanged, I think.
AARON: Do you think it's good, bad? I don't know. Do you think people should join Twitter on the margin?
MAX: I think EAs should join EA Twitter. I'm not sure you should join Twitter rather than other social media or something. I think the area of social media we're on is uniquely quite good or something.
AARON: I agree.
MAX: And some of this is, like, you get interactions with people, which is good, and people are very nice or something, and very civil where we are. And it's less clear that you get the sort of personability and niceness we have where we are elsewhere on Twitter, because I don't go elsewhere. But basically, you should join Twitter, I guess, if you're going to enter a small community or something. If you're just going to use it to browse memes or something, it's not clear this is better than literally any other social media.
AARON: Yeah, I agree. Well, I guess our audience of all of maybe four people is largely from Twitter. But you never know; there's a non-zero chance that somebody from the wider world will be listening. I think it's at least worth an experiment, right? Maybe you could tell me something that I should experiment with. Is there anything on Twitter that we don't have in common, that you do and maybe I don't, where it's like, oh, he's an idiot for not doing that?
MAX: Oh, probably not. I mean, I'm sure you do better things than I do. Probably.
AARON: Well, I mean, probably this is a large. Right? Like, I don't know.
MAX: I think a benefit of using Twitter is, like, it kind of opens you up or something. It probably does literally build your social skills or something. I mean, maybe not in an obviously useful way, because you're probably not necessarily that much better at doing in-person stuff or something as a result of using Twitter. Maybe it improves you very slightly or something, but it's a different skill, texting versus talking.
AARON: Actually, here's something I want your thoughts on recently. Maybe this is outing me as a true Twitter addict, but no, I, by and large, have had a really good experience and I stand by that. I think it's net on net. Not just on net, but just in general, it's added value to my life and stuff. And it's great, especially given the community that I'm in. The communities that I'm in. But yeah, this is going to be kind of embarrassing: I've started thinking in tweets. Not 100% of the time, it's not like my brain is only stuck on Twitter mode, but I think on the margin there's been a shift toward a thought verbalizing in Aaron's brain as something that could be a tweet. And I'm not sure this is a positive.
MAX: Like it is the case. I've had my friends open Twitter in front of me, like my Twitter, and go through and read my tweets. Actually, many people in my life do this. I don't know why. I don't really want them to do that. And it does change the way you talk. Certainly part of that is probably the character limit, and part of it is probably like culture or something. So that's the case. I don't know if I experienced that, or I do sometimes: if I thought of a really stupid pun, normally you don't do anything with that, but now I can or something. Right. It's worth holding on for the 6 seconds it takes to open my phone. But I think I actually kind of maybe think in tweets already or something. Like, if you read my writing, I've gotten feedback that it's both very poetic or something. And poems are short or something. It's like very stanza-like or something, which is kind of how Twitter works also. Right. I think if you looked at the formatting of some of my writing, you would see that it's very Twitter-like or something. In some sense, there's no character limit, and so maybe this is just the sort of thing you're experiencing or something. Or maybe it's more intense.
AARON: Yeah, probably not exactly. Honestly, I don't think this is that big of a deal. One thing is, I think this is a causal effect: I've blogged less, and I think it's not a direct replacement. Like, I think Twitter has been an outlet for my ideas that actually feels less effortful and takes less time. So it's not like a one-for-one thing; it's not that other, more worky things have filled in the gap for blogging. But I think it has been a causal reason that I haven't blogged as much as I would like to, or really would have liked to, or something. Yeah, I can see that being a thing. With tweets, there's no strong signal that a particular tweet is an important idea that's worth considering. Whereas if you've written a whole blog post on it and you have 200 subscribers or whatever, and you put in a lot of effort, people are at least going to say, oh, this is at least plausibly an important idea, when they're coming into it or something like that.
MAX: Yeah. And if you think something is valuable or something, maybe this is different for you or something. But I get like three likes on all my tweets. It's very rare I get ten likes or something. The number of followers. It's just stuck there forever.
#9: Sarah Woodhouse on discovering AI x-risk, Twitter, and more [01:30:56]
Here, Sarah Woodhouse and I talked about how she got into the field of AI safety, starting from being concerned about job automation, and how this led her to become a Twitter influencer.
SARAH: Well, I realized that a chatbot could very easily do my job and that my employers either hadn't noticed this or they had noticed, but they were just being polite about it and they didn't want to fire me because they're too nice. And I was like, I should find out what AI development is going to be like over the next few years so that I know if I should go and get good at some other stuff.
SARAH: I just had a little innocent Google. And then within a few clicks, I'd completely doom-pilled myself. I was like, we're all going to die. I think I found Geoffrey Hinton because he was on the news at the time, because he just quit his job at Google. And he was there saying things that sounded very uncertain, very alarming. And I was like, well, he's probably the pessimist, but I'm sure that there are loads of optimists to counteract that because that's how it usually goes. You find a doomer and then you find a bunch of more moderate people, and then there's some consensus in the middle that everything's basically fine.
SARAH: I was like, if I just keep looking, I'll find the consensus because it's there. I'm sure it's there. So I just kept looking and looking for it. I looked for it for weeks. I just didn't find it. And then I was like, nobody knows what's going on. This seems really concerning. So then I started lurking on Twitter, and then I got familiar with all the different accounts, whatever. And then at some point, I was like, I'm going to start contributing to this conversation, but I didn't think that anybody would talk back to me. And then at some point, they started talking back to me and I was like, this is kind of weird.
SARAH: And then at some point, I was having an existential crisis and I had a couple of glasses of wine or something, and I just decided to type this big, long thread. And then I went to bed. I woke up the next morning slightly grouchy and hungover. I checked my phone and there were all these people messaging me and all these people replying to my thread being like, this is so relatable. This really resonated with me. And I was like, what is going on?
AARON: You were there on Twitter before that thread, right? I'm pretty sure I was following you.
SARAH: I think, yeah, I was there before, but no one ever really gave me any attention prior to that. I think I had a couple of tweets that blew up before that, but not to the same extent. And then after that, I think I was like, okay, so now I have an audience. When I say an audience, like, obviously a small one, but more of an audience than I've ever had before in my life. And I was like, how far can I take this?
SARAH: I was a bit like, people obviously started following me because I'm freaking out about AI, but if I post an outfit, what's going to happen? How far can I push this posting, these fit checks? I started posting random stuff about things that were completely unrelated. I was like, oh, people are kind of here for this, too. Okay, this is weird. So now I'm just milking it for all it's worth, and I really don't know why anybody's listening to me. I'm basically very confused about the whole thing.
AARON: I mean, I think it's kind of weird from your perspective, or it's weird in general because there aren't that many people who just do that extremely logical thing at the beginning. I don't know, maybe it's not obvious to people in every industry or whatever that AI is potentially a big deal, but there's lots of truckers or whatever. Maybe they're not the best demographic or the most conducive demographic for, like, getting on Twitter or whatever, but there's other jobs where it would make sense to look into that. It's kind of weird to me that only you followed the rabbit hole all the way down.
SARAH: I know! This is what I… Because it's not that hard to complete the circle. It probably took me like a day, it took me like an afternoon to get from "I'm worried about job automation" to "I should stop saving for retirement." It didn't take me that long. Do you know what I mean? No one ever looks. I literally don't get it. I was talking to some people. I was talking to one of my coworkers about this the other day, and I think it came up in conversation. She was like, yeah, I'm a bit worried about AI because I heard on the radio that taxi drivers might be out of a job. That's bad. And I was like, yeah, that is bad. But do you know what else? She was like, what are the AI companies up to that we don't know about? And I was like, I mean, you can go on their website. You can just go on their website and read about how they think that their technology is an extinction risk. It's not like they're hiding it. It's literally just on there and no one ever looks. It's just crazy.
AARON: Yeah. Honestly, I don't even know, if I was in your situation, if I would have done that. In some sense, I am surprised that it's very few people, maybe like one, but at another level, it's more rationality than most humans have or something. Yeah. Do you regret going down that rabbit hole?
SARAH: Yeah, kind of. Although I'm enjoying the Twitter thing and it's kind of fun, and it turns out there's endless comedic material that you can get out of impending doom. The whole thing is quite funny. It's not funny, but you can make it funny if you try hard enough. But, yeah, what was I going to say? I think maybe I was more primed for doom-pilling than your average person because I already knew what EA was and I already knew, you know what I mean. That stuff was on my radar.
AARON: That's interesting.
SARAH: I think had it not been on my radar, I don't think I would have followed the pipeline all the way.
AARON: Yeah. I don't know what browser you use, but it would be. And you should definitely only do this if you actually think it would be cool or whatever, but this could be in your browser history from that day and that would be hilarious. You could remove anything you didn't want to show, but if it's like Google Chrome, they package everything into sessions. It's one browsing session and it'll have like 10,000 links.
SARAH: Yeah, I think for non-sketchy reasons, I delete my Google history more regularly than that. I don't think I'd be able to find that. But I can remember the day and I can remember my anxiety levels just going up and up somewhere between 1:00 p.m. and 7:00 p.m. And by the evening I'm like, oh, my God.
AARON: Oh, damn, that's wild.
SARAH: It was really stressful.
AARON: Yeah, I guess props for… I don't know if props… is the right word. I guess, impressed? I'm actually somewhat surprised to hear that you said you regret it. I mean, that sucks though, I guess. I'm sorry.
SARAH: If you could unknow this, would you?
AARON: No, because I think it's worth it. Maybe selfishly, but not overall, because… Okay, yeah, I think that would plausibly be the selfish thing to do. Actually. No, actually, hold on. No, I actually don't think that's true. I actually think there's enough an individual can do selfishly such that it makes sense. Even with the emotional turmoil.
SARAH: It would depend how much you thought that you were going to personally move the needle by knowing about it. I personally don't think that I'm going to be able to do very much, that I was going to tip the scales. I wouldn't selfishly unknow it and sacrifice the world. But me being not particularly informed or intelligent and not having any power, I feel like if I forgot that AI was going to end the world, it would not make much difference.
AARON: You know what I mean? I agree that, yes, it is unlikely for either of us to tip the scales, but.
SARAH: Maybe you can't.
AARON: No, actually, in terms of… yeah, I'm probably somewhat more technically knowledgeable, just based on what I know about you. Maybe I'm wrong.
SARAH: No, you're definitely right.
AARON: It's sort of just like a probabilities thing. I do think that "doom" - that word - is too simplified, often too simple to capture what people really care about. But if you just want to say doom versus no doom or whatever, AI doom versus no AI doom. Maybe there's like a one in 100,000 chance that one of us tips the scales. And that's important. Maybe even, like, one in 10,000. Probably not. Probably not.
SARAH: One in 10,000. Wow.
#10: Pigeon Hour x Consistently Candid pod-crossover: I debate moral realism* with Max Alexander and Sarah Hastings-Woodhouse [01:41:08]
And for our last segment, here's a segment from the new podcast of Sarah, from the previous segment. It's called Consistently Candid - excellent name. Basically, Max Alexander and I are debating moral realism or something like that, as you'll see, with Sarah being the moderator and judge.
Aaron: Yeah. Max, do you want to go first or second?Max: I'll go second.Sarah: Okay.
Aaron: Well, the first thing is that. The thing I always say is that I simply deferred to Sharon Hewitt Rawlette, who was on the 80,000 Hours Podcast. They had a whole podcast episode about this, and she's an actual philosopher who made the actual case for this.
Aaron: And so everything I say is just basically, like… the actual case is that you just listen to her. Well, I guess one thing is that, okay, what's the base rate of people being correct given that they're on the 80,000 Hours Podcast? Pretty high. Probably, like, 99%.
Max: I don't know if that's right.Aaron: Not that.Sarah: Has no one ever been on the 80,000 Hours Podcast and argued the opposite?Max: Sam Bankman-Fried was on the 80,000 Hours Podcast.Sarah: Oh, yeah, that's true. That was embarrassing.Aaron: Well, that's why I said 99%, not 100%.Max: Yeah, that was their one episode.Aaron: Yeah, everything else - wait, I'm sorry. I was mostly joking about that, but no, I am serious: maybe I'll find a way to reference the URL in the show description or something, or just like Google "80,000 Hours podcast moral realism."
Aaron: First of all, my actual point of view is a weak version of moral realism. I believe that truly normative statements, such as "a person should do X or Y," are not objectively true or false. However, I do think that sometimes, at least occasionally, statements that objectively order worlds or actions can be true or false. For example, saying "world A is objectively better than world B."
Aaron: The most intuitive argument, or perhaps intuition pump, that I can gesture to in favor of my point of view is this idea of comparing and objectively ordering worlds or actions. It's just like, okay, so you have one world and then another world, which is exactly the same, except it also creates a person who's feeling a lot of pain, and that's the only difference. And I want to say that this world is objectively worse, and the reason why it's objectively worse is just because it is built into both, sort of, semantically, the meaning of what we say, pain or suffering, but also not mean. That's true.
Aaron: But another perspective on this: it's sort of like a brute fact of the universe, in the same way that facts about physics are, that suffering and pain are bad. And so if you just add some of this bad stuff, or on the other side, add some objectively good stuff, you get an objectively better state of the world. And so I will leave it there for Max to tell me why I'm wrong.Sarah: All right, well, okay, can I ask a question first? Max, you want to go. How do I phrase this? Sorry, I'm just clarifying. So you're basically saying that you can't make truth claims about what people ought to do, but you can about which states of affairs are better or worse than others.
Sarah: But if you can definitely say this circumstance is better than this one, objectively speaking, then if you could find some way of empirically determining which actions brought about more pleasure or pain, even if, I mean, maybe we can never actually determine which actions would do that. But say, if you could, then would those things not be like, would you not be able to make a claim about what you should do?Aaron: I think you can make a claim. In fact, I think they actually would make the claim. But then what I wouldn't be able to say, at least what I currently think, is that those wouldn't be objectively true or false in the same way. I'm less sure about this, for what it's worth. I'm like, less.Sarah: How can it be objectively the case that one situation could be better than the other, but it's not objectively true that you should do the thing that is most likely to bring about the better one?Aaron: No, this is a good question. I actually just had this debate on Twitter, sort of.Sarah: Okay.Aaron: Although I think the person ended up agreeing with me. One thing is, I think some people just have the sense those two statements are basically just saying are just like, rewording of the same thing. And that's just not my sense. But maybe I'm the weird one, and everybody else has the sense that when they say, oh, x is better than y, and the statement like, oh, you should act so as to bring about x, that these are just exactly the same thing. It's just reworded. Is that your sense?Sarah: I think they're exactly the same thing, but. Well, actually, no, I kind of do. I don't really understand how it can be true that X is better than y, and at the same time, it's not true that you should try and make X happen.
Outro [1:47:04]
All right, that's it; thank you so much to all my guests!
And if you're listening to this, like there's like a 99% chance that you are invited as a guest. It's just a matter of me working my way through some people who I'm being unnecessarily slow with.
Thanks so much for listening and you know where to find me - if you're listening to this, you almost certainly know where to find me.
Take care.
Get full access to Aaron's Blog at www.aaronbergman.net/subscribe
Intro
At the gracious invitation of AI Safety Twitter-fluencer Sarah Hastings-Woodhouse, I appeared on the very first episode of her new podcast "Consistently Candid" to debate moral realism (or something kinda like that, I guess; see below) with fellow philosophy nerd and EA Twitter aficionado Max Alexander, alongside Sarah as moderator and judge of sorts.
What I believe
In spite of the name of the episode, and contrary to my knowledge/understanding as of a few days ago, it turns out my stance may not be ~genuine~ moral realism.
Here's my basic meta-ethical take:
* Descriptive statements that concern objective relative goodness or badness (e.g., "it is objectively better for Sam to donate $20 than to buy an expensive meal that costs $20 more than a similar, less fancy meal") can be and sometimes are true; but
* Genuinely normative claims like "Sam should (!) donate $20 and should not buy that fancy meal" are never objectively true.
Of course the label per se doesn't really matter. But for a bunch of reasons it still seems wise to figure out which label really does work best.
Some definitions
Stanford Encyclopedia of Philosophy:
Moral realists are those who think that, in these respects, things should be taken at face value - moral claims do purport to report facts and are true if they get the facts right. Moreover, they hold, at least some moral claims actually are true. That much is the common and more or less defining ground of moral realism (although some accounts of moral realism see it as involving additional commitments, say to the independence of the moral facts from human thought and practice, or to those facts being objective in some specified way).
Wikipedia:
The robust model of moral realism commits moral realists to three theses:
* The semantic thesis: The primary semantic role of moral predicates (such as "right" and "wrong") is to refer to moral properties (such as rightness and wrongness), so that moral statements (such as "honesty is good" and "slavery is unjust") purport to represent moral facts, and express propositions that are true or false (or approximately true, largely false, and so on).
* The alethic thesis: Some moral propositions are in fact true.
* The metaphysical thesis: Moral propositions are true when actions and other objects of moral assessment have the relevant moral properties (so that the relevant moral facts obtain), where these facts and properties are robust: their metaphysical status, whatever it is, is not relevantly different from that of (certain types of) ordinary non-moral facts and properties.
And to settle the matter, here's what some LLMs have to say.
My prompt:
Suppose a person believes the following:
1: Descriptive statements that concern objective relative goodness or badness (e.g., "it is objectively better for Sam to donate $20 than to buy an expensive meal that costs $20 more than a similar, less fancy meal") can be and sometimes are true; but
2: Genuinely normative claims like "Sam should donate $20 and should not buy that fancy meal" are never objectively true.
Is this person best categorized as a moral realist?
ChatGPT
Given these two beliefs, the person's stance doesn't align completely with traditional moral realism, which typically asserts that there are objective truths about both moral facts (like the goodness or badness of actions) and moral duties (what one should or should not do). However, it also doesn't fully align with moral anti-realism, since the person believes in some form of objective truth regarding moral value.
One possible categorization for this stance could be a form of "value realism" combined with "normative anti-realism" or "normative subjectivism." This would mean that the person believes in objective truths about what is better or worse (value realism) but thinks that what one ought to do (normative claims) is not a matter of objective truth but perhaps of subjective judgment or societal conventions.
This position is quite sophisticated and might be best understood as a hybrid or intermediate position in the landscape of moral philosophy, rather than fitting neatly into the category of moral realism or its standard alternatives.
Great, so much for trying to fit things into nice little buckets.
Claude, fortunately, is more definitive:
No, the view described is not best categorized as moral realism. Here's why:
Moral realism is the view that there are objective moral facts or truths. A moral realist believes that some moral claims, including normative claims about what one should or shouldn't do, can be objectively and mind-independently true.
The view in the example accepts that there can be objective truths about relative degrees of goodness or badness (claim 1). However, it denies that there are objective normative truths about what one should or shouldn't do (claim 2).
By rejecting that genuine normative 'should/shouldn't' claims can be objectively true, the view denies a key commitment of moral realism - namely that moral claims can state objective normative facts. At best it allows for objective evaluative facts about goodness/badness, not prescriptive facts about obligations.
So this view would be more accurately categorized as a form of moral anti-realism like non-cognitivism or error theory. It does not qualify as moral realism because of its rejection of the existence of objective normative truths that could make strong normative claims justified or true. The commitment to objective evaluative facts alone does not sufficiently qualify the view as realist in the metaethical sense.
In summary, the view accepts objective "good/bad" facts but denies objective "should/shouldn't" facts. This combination puts it at odds with core commitments of moral realism, so it is better seen as an anti-realist view.
I'm gonna disagree with Claude here, at least in that the bolded claim above (objective normative facts are a key part of moral realism) just seems… wrong? Like, all the definitions could replace the term "moral fact" or "moral claim" with "normative fact" and "normative claim," but they do not!
So I really don't have a great answer to which label works best.
One of two ~human-level chatbots giving me a definitive-sounding "anti-realism" answer is too much evidence for me, whose only formal philosophy training is an undergrad minor, to rule that one out. There are also good arguments, I think, for the "realist" label, as well as for "neither" (i.e., "secret third thing"). In fact all of these seem pretty similar in terms of argument convincingness/correctness.
So, in sum, 🤷‍♂️.
Anyway, long story short, I believe something kinda sorta like moral realism, maybe, and that is what we discuss. Without further ado, here's the (very imperfect) transcript of the episode.
Please note that some of the long blocks of text have been cleaned up a bit via minor rewording. I am too lazy to do that for everything.
Transcript
Sarah: Hello, and welcome to my first experiment in low effort podcasting. In this episode of the podcast, which I'm now calling Consistently Candid because some people thought that was funny, I talked to Aaron Bergman and Max Alexander about moral realism.
Sarah: They kind of debate it. And I, having read the Wikipedia page about five minutes previously, occasionally chime in with some opinions that I hadn't thought out very well. So enjoy!
Sarah: Anyway, I guess this is my podcast now, but I don't have a name for it yet.Max: That's a good podcast name.Sarah: Introduce it.Aaron: Can I broadcast this on Pigeon Hour as well?Sarah: Yeah, sure.Max: Okay, cool.Aaron: Sweet.Sarah: But I also want to make my own thing because people.Aaron: No, totally. But yeah, you can say no, you can copyright it and then sue me.Sarah: No. Well, that's fine. This is totally, like, anyone can broadcast it anywhere they want.Max: You can text or whatever, get on the Trump website.Sarah: Yeah. So you guys have a disagreement, apparently, about moral realism. I have briefly skimmed the Wikipedia page, and I don't have an opinion, but I thought we'd have it out.Aaron: No, I feel like the format should be that we try to convince you…Sarah: So, yeah, you each try and convince me you're right, and I will come to a conclusion and let you know who I'm persuaded by. And if at any point I have, like, a thought that's worth articulating, I'll weigh in with that. But I think that's kind of unlikely because I don't really know anything; I'm playing a moderating role here.Max: Well, confusion is worth pointing out or something like that, right?Sarah: Yeah, I can do that at regular intervals. I can tell you how confused I am. That's definitely doable.Aaron: Maybe you should start with, like, do you have an initial take at all, or are you really 50/50?Sarah: I mean, from very briefly reading the Wikipedia, it like doesn't sound true to me.Max: Oh, hell yeah!Aaron: No. Okay, podcast over.Max: I went over the Wikipedia just to see what it says. Did you read the actual Wikipedia?Sarah: Wikipedia? Yeah, it says "moral realism (also, ethical realism) is the position that ethical sentences express propositions that refer to objective features of the world.
That is, features independent of subjective opinion."Aaron: Yeah, facts.Max: Good summary from Wikipedia.Sarah: Fake.Max: My job is going to be easy.Sarah: Then, but I'm totally open to being persuaded.Aaron: Okay. The first thing is that I recognize that it sounds fake, it sounds very sus, but then it actually surprisingly checks out. So I just want to get that on the table.Sarah: Okay, what about if each of you do, like, a little opening spiel about why you think you're right, and then you can yell at each other about it afterwards.Aaron: Yeah. Max, do you want to go first or second?Max: I'll go second.Sarah: Okay.
Aaron: Well, the first thing is that. The thing I always say is that I simply deferred to Sharon Hewitt Rawlette, who was on the 80,000 Hours Podcast. They had a whole podcast episode about this, and she's an actual philosopher who made the actual case for this.
Aaron: And so everything I say is just basically, like… the actual case is that you just listen to her. Well, I guess one thing is that, okay, what's the base rate of people being correct given that they're on the 80,000 Hours Podcast? Pretty high. Probably, like, 99%.
Max: I don't know if that's right.Aaron: Not that.Sarah: Has no one ever been on the 80,000 Hours Podcast and argued the opposite?Max: Sam Bankman-Fried was on the 80,000 Hours Podcast.Sarah: Oh, yeah, that's true. That was embarrassing.Aaron: Well, that's why I said 99%, not 100%.Max: Yeah, that was their one episode.Aaron: Yeah, everything else - wait, I'm sorry. I was mostly joking about that, but no, I am serious: maybe I'll find a way to reference the URL in the show description or something, or just like Google "80,000 Hours podcast moral realism."
Aaron: First of all, my actual point of view is a weak version of moral realism. I believe that truly normative statements, such as "a person should do X or Y," are not objectively true or false. However, I do think that sometimes, at least occasionally, statements that objectively order worlds or actions can be true or false. For example, saying "world A is objectively better than world B."
Aaron: The most intuitive argument, or perhaps intuition pump, that I can gesture to in favor of my point of view is this idea of comparing and objectively ordering worlds or actions. It's just like, okay, so you have one world and then another world, which is exactly the same, except it also creates a person who's feeling a lot of pain, and that's the only difference. And I want to say that this world is objectively worse, and the reason why it's objectively worse is just because it is built into both, sort of, semantically, the meaning of what we say, pain or suffering, but also not mean. That's true.
Aaron: But another perspective on this: it's sort of like a brute fact of the universe, in the same way that facts about physics are, that suffering and pain are bad. And so if you just add some of this bad stuff, or on the other side, add some objectively good stuff, you get an objectively better state of the world. And so I will leave it there for Max to tell me why I'm wrong.Sarah: All right, well, okay, can I ask a question first? Max, you want to go. How do I phrase this? Sorry, I'm just clarifying. So you're basically saying that you can't make truth claims about what people ought to do, but you can about which states of affairs are better or worse than others.
Sarah: But if you can definitely say this circumstance is better than this one, objectively speaking, then if you could find some way of empirically determining which actions brought about more pleasure or pain, even if, I mean, maybe we can never actually determine which actions would do that. But say, if you could, then would those things not be like, would you not be able to make a claim about what you should do?Aaron: I think you can make a claim. In fact, I think they actually would make the claim. But then what I wouldn't be able to say, at least what I currently think, is that those wouldn't be objectively true or false in the same way. I'm less sure about this, for what it's worth. I'm like, less.Sarah: How can it be objectively the case that one situation could be better than the other, but it's not objectively true that you should do the thing that is most likely to bring about the better one?Aaron: No, this is a good question. I actually just had this debate on Twitter, sort of.Sarah: Okay.Aaron: Although I think the person ended up agreeing with me. One thing is, I think some people just have the sense those two statements are basically just saying are just like, rewording of the same thing. And that's just not my sense. But maybe I'm the weird one, and everybody else has the sense that when they say, oh, x is better than y, and the statement like, oh, you should act so as to bring about x, that these are just exactly the same thing. It's just reworded. Is that your sense?Sarah: I think they're exactly the same thing, but. Well, actually, no, I kind of do. I don't really understand how it can be true that X is better than y, and at the same time, it's not true that you should try and make X happen.Aaron: Yeah, if they're not semantic and don't mean the exact same thing, then there's the question of what else you need to get from one to the other. If you've established one, what else do you need?
Aaron: For instance, if you're unsure, my somewhat unconfident perspective is that statements that are normative, like "you should do X," are their own thing. We might just not live in a universe where those have meaning outside of the social realm or above it. We can use them as useful fictions in the social world, but they're not fundamental in the same way that physics is or something like that.
Max: You're saying moral claims are. Sorry, moral claims are like this.Aaron: True normative claims. So you ought to do X. Yeah.Max: Well, I mean, depending on what you mean by them being different from, air quotes, physics or something like this, it kind of sounds like you might be an antirealist, maybe.Sarah: I was going to say it does kind of sound like you.Aaron: No, I know in some ways I'm like, maybe I am, but then if so, I just want to defend my actual position, which is that, okay, fine, you can call me whatever you want. But then I still think that we have an objective ordering of states of the world, and that's kind of what really.Max: As Sarah introduced, moral realism is about the truth of evaluative propositions. This is the core of it. But then there's the actual core, which is something like "murder is wrong" evaluates to true sometimes. So, if I said murder is wrong and I'm talking about Sarah shooting me with a gun, that's true; it's bad for her to shoot me with a gun, all things considered.
Max: This is what people really focus on. It doesn't matter if your position is that objectively, conscious states imply an ordering of preferability or something to that effect. Stubbing your toe is preferable to being shot with a gun. Objectively, this is just what it is to experience suffering. And the same thing applies to positive experiences. It's better to have a birthday party than it is to eat one cookie. This is just what it is to experience things, and it applies to ordering states of affairs.
Max: I can accept this under some definitions. It's not objective per se, but it is true based on my preference function. Objectively, I prefer these things this way. And maybe it's true more broadly than this. But if you don't have that bit about being shot by a gun being wrong for Sarah to do, then you're not practically a realist, in my opinion.Aaron: There's this sort of meta debate, which may be important. I kind of think it actually ends up with a couple different ways. There's the meta debate, and there's the meta-meta debate of, like, okay, is this actually an important question or not? Do you think it's not important to establish? Do you think it's sort of irrelevant unless you have true normativity?
Aaron: Sorry, honestly, I got distracted and kind of lost my train of thought in the last few seconds, so I'll let you take it wherever.
Max: Yeah, I mean, I guess, Sarah, where are you, having heard these very basic things? Though I guess Aaron hasn't done the best job defending moral realism, if you like. Or defending standard moral realism or something like that. I should say, defending the best version.Sarah: He hasn't defended.Aaron: Making the best argument, the most convincing argument for moral realism. I'm just trying to claim or defend what I think is true.Sarah: Yeah, no, I guess I still don't really understand how believing that there are states which are objectively more preferable than others is compatible with believing that there aren't actions that are objectively right and objectively wrong. I just don't really understand how those two things.Aaron: Okay, I feel like maybe we should just table that and set aside the question of whether normativity is legit or not. We can say, okay, maybe objective ordering implies normativity. Maybe it doesn't. But then we can settle this debate about objective ordering, which I think is our most core disagreement. Max does not think objective ordering is a thing.Max: I actually. Sorry. You go, Sarah.Sarah: No, that's a good clarification. So carry on.Max: I'm not sure that's our core disagreement, but I am happy to debate that. Just in that, I would say that if I bought Aaron's thing about objective ordering, this does not make me a moral realist or something, or at least not in a really strong sense of the term. But also if I can convince Aaron that the objective ordering thing isn't the case, I guess this also works.Aaron: Actually, I want to jump in and say I feel like the actual core reason why moral realism, the question about it, is sort of important, is that people want to be able to say, "Oh, these are just like my values or whatever. I can just assert them as the fundamental truth of the matter." And that's sort of like the core. 
I don't want to say that's the core, but that is certainly a core, an important part of the debate, which is like, I want to say no.
Aaron: Once we've decided that we want to try to do what's right or whatever, then it's like an investigative, not empirical, but quasi-empirical, sort of similar-ish to an empirical question. We have to uncover the truth, not just assert it or whatever. And that's like a core reason why realism is important.
Aaron: But if you have someone who's arguing like, "No, I just don't care what I ought to do, or I'm just going to reject any claims about whether I ought to act morally or not," I feel like it doesn't actually matter. Maybe in some sense it does, but in a practical sense, it doesn't matter whether they're objectively acting wrong or not. There's no good way to respond regardless of whether realism is true. You know what I mean?Max: Well, I suppose that's a slightly different thing. But in defending that or something, you kind of referenced ought statements, right?Aaron: Yeah.Max: Which I think is a bit to my point of, like, the relevant thing here is normativity, the thing people disagree about. Assuming somebody says, I will do what I ought to morally or something like that. Right. It's not the case that an ordering of preferable states matters, unless that then implies that they need to take certain actions. Okay. Yeah, there's this ordering of pleasure versus suffering. Like different states of that. But why shouldn't I just blow up a city or something like that? You need to be able to tell me the reason.Aaron: Do I?Max: If you don't want me to blow up the city.Aaron: No, but you haven't. Right. And so neither has anybody else.Max: This debate has an action relevant thing.Aaron: No. Okay. So I disagree there, because I think evidently, it is like a lucky, I mean, not exactly lucky, but I'll just say lucky, fact of the matter that a lot of people are, in fact, inclined to try to act morally. The relevant hard question is figuring out what that means, not trying to convince people whether they ought to or not.Max: Sure. But what I'm saying is that you're saying, here's this objective ordering thing or whatever. Basically, kind of what you're saying is, here's this thing that, actually, what you're not saying is, here's this thing that leads to actions you should take. 
But what you're kind of trying to say is, this is the thing for you. This is the thing that leads to what actions should be taken. Like this sort of objective ordering of pain.Aaron: What I'm saying is, let's just take as given, or if you happen to want to act morally, whatever that means, then I can lay out at least, like, a framework, or not a framework, but criteria.Max: There's a question or something, which is like, why shouldn't I be a, I guess it seems like you're a hedonist to some degree, right? Why shouldn't I be a preferencist? Like a preference utilitarian.Aaron: Oh, yeah. So this is another different.Max: But I can objectively.Sarah: What's a preference utilitarian?Aaron: Somebody who's wrong.Sarah: Okay.Aaron: Go ahead.Max: Aaron thinks something like, for a world state, so just a way the world could be, to be more preferable, or to be better, I guess I should say, is to have more happiness in it, which roughly is like the sort of doing-drugs happiness. I don't know if that's unfair to Aaron, but that is a type of.Aaron: No, that totally, yeah.Max: Okay. Doing more drugs would make it more preferable, and being stabbed, that kind of sharp pain, would make it less preferable. And a preference theory of welfare, you don't have to be utilitarian, is just about what it says welfare is. For a preference theory, when your preferences are satisfied, that is better. So if I have a preference for being covered in chocolate, jumping into a bathtub full of chocolate is like a really good state of affairs. Whereas for Aaron, it would be that doing a bunch of weed or something is the better thing.Aaron: I mean, that's like the worst. That's like a very inefficient pharmacological route.Max: Yeah, but it is legal where I am.Aaron: Okay.Sarah: So, Aaron, you think that people.Aaron: Sorry, go ahead. Okay.Sarah: Surely it's just the case that different people can achieve different states of welfare by doing different things. 
Not everyone wants to be high all the time.Aaron: No, totally.Sarah: What's your point?Aaron: My thing is that preferences don't intrinsically matter. They certainly matter in the real world as how we actually do things or whatever, but there's no intrinsic value. Or insofar as preferences matter, they only matter because at the end of the day, they increase happiness and decrease suffering.Max: I mean, Aaron might not be committed to this in various ways, but a very naive hedonic theory of welfare, that is kind of in the drug style of it, like, dopamine is pleasure or something like that, might say that you shouldn't go around force-feeding people drugs, but if it was the case that everybody magically ended up on drugs, this would actually be quite good, assuming they don't die as a result. If you could just bring that state of affairs about by pushing a button, you should do that, even though it's the case that many people have a preference not for.Sarah: This is like the AGI comes and plugs us all into weird endorphin machines, forever.Aaron: Adding in, if you could just stipulate that, also, maybe this inhibits population growth, that would be good. And maybe this is also, like, an x-risk in some ways, because, okay, then we can't defend against asteroids or whatever, but if you just set those aside, then, yeah.Max: And then, like, a preference theory would say, since people don't have a preference for this, this actually means it's not as good.Sarah: Okay, I get that. So if you're not a preference utilitarian, then you think that people might not actually understand what states they would experience the most happiness in, and therefore their preferences might not be the best metric. Like, people might think, oh, I would be happier if I did this thing, but they don't know what would actually make them happier. So it isn't necessarily right to try and bring about their preference.Aaron: I'm getting really tempted to decide to walk, maybe. 
Should I walk back?Sarah: I don't know. What's your preference?Max: I think, as, like, a vague quibble, probably a preference theorist would say it's not so much about happiness, it's about welfare, because those are different. For a preference theory, high welfare is like high preference satisfaction, whereas happiness might not be. Like, you might have a preference for making lots of paintings, but it's not the case that this makes you feel the emotion happy, or something like that.Aaron: In that scenario Max presented, I bite the bullet and say, yeah, if everybody could be magically happy, even if they didn't want to be, that would be good. But the other side of the coin is like, okay, you can also imagine just, like, a being that really wants to suffer and be tortured, in fact. And there's nothing incoherent about that.
Aaron: And so should we, in fact, cause this immense pain if we invent this creature? And I think, no, we shouldn't. And I think that's a pretty compelling objection to preference theories of welfare.Max: Wait, sorry, what's your objection again?Aaron: Wait, sorry. What was that?Max: What's the compelling objection?Aaron: If there's, like, a pig that wants to be tortured, should we torture the pig?Max: I see what you're saying. Yeah. And a preference theory might say you should.Aaron: Yeah. And I think, pretty obviously, I think it becomes more clear if you imagine that, okay, we have genetic control over the pig. We can choose whether to create the pig. Like, I want to create the pig, and then I do. It seems bad.Max: Yeah. I mean, for what it's worth, if I was a moral realist or whatever, I would not be a preference theorist, probably, but I might be a preference theorist because I'm not a realist.Aaron: This is interesting. Also, in a minute, I'm going to have to maybe not actually go, but stop being a good conversationalist or a bad conversationalist. I'm going to become an even worse conversationalist.Max: Nice.Aaron: For, like, I don't know, ten minutes at least.Max: Yeah.Aaron: So you guys can talk about. You can gossip about me.Max: Just in the middle of the podcast is gossip.Aaron: I mean, we can rearrange it.Max: Right. That's.Sarah: I can use those ten minutes to speed run developing some actual opinions. I didn't have any coming in. I still kind of don't.Max: If you want, I can give you a vague overview of the other prominent welfare theories.Sarah: Okay. That would be useful. 
And then I'll sort of pick one and run with it.Max: The third one is often something like objective list theory, which probably the easiest way to explain it is something like Aristotle's concept of eudaimonia, I think, is.Sarah: How you say it.Max: So, something that is good for humans to live can be thought of as living a very fulfilling life, which is the height of welfare. Some other intuitions about this are related to the ideas of higher and lower pleasures, as proposed by the utilitarian philosopher John Stuart Mill. The idea is that it's better to be a dissatisfied human than a very happy pig.
Max: The reason for this is that although rolling around in the mud is pleasurable for a pig, there's something about being human and experiencing a wide range of emotions and experiences that might be more intense than what a pig can feel. Of course, someone might argue that pigs have their own unique experiences, but humans can do things like math and create TV shows, which are fulfilling on a higher order level. Engaging in these higher order pleasures contributes to a good human life.
Max: Living a fulfilling life involves engaging in objectively welfare-enhancing activities, such as loving well and embodying virtues like compassion and honor. Instead of welfare being limited by how many drugs one can consume or how many preferences one can fulfill, it's about living an "objectively good" human life. Many people have intuitions about this concept, and it resonates with their understanding of what it means to lead a meaningful existence.Sarah: So is the thing that makes us able to live more fulfilling lives than pigs? Does that hinge on intelligence?Max: Kind of depends who you ask. The thing about the pigs is not from Aristotle. As I was taught, Aristotle would say something like, there is this thing that it is to live a very good life as a pig. And this is different from what it's.Sarah: Like to live a very good life as a human.Max: I don't know where he comes down on, like, is it more preferable to be a pig or a human? But most people think it's more preferable to be a human than a pig. And by most people, I mean most humans, not most pigs.Sarah: Yeah. That seems like a ridiculously difficult thing to ascertain. Fair enough. Okay.Max: And I guess the reason you might doubt this a bit is that it can be a bit elitist. Well, this isn't necessarily a reason to doubt it, but consider this: you could make an objective list theory that says it's really important to listen to opera. To be a good human and live a good human life is to kind of go to the opera and really get into it.
Max: It's like, I don't know, why would that be the case? That's kind of weird, right? You might be like, oh, to live a really good human life is to have a monogamous, straight relationship with four children or something. It's like, I don't know, why would that be the case?
Max: I mean, maybe I'm kind of just making up obviously wrong objective lists, but why is it the case that what it is to be good is to have these very specific sorts of things? Like, how do you actually go about making a list? Because a hedonist or a preference theorist can say, oh, it's by having lots of utility, like joy, or it's by having lots of fulfilled preferences. And I'm not quite sure how an objective list theorist goes about justifying their list outside of appeals to intuition or something like that.Sarah: Okay, got you. Okay, so we've got pump yourself full of drugs, follow your preferences, and make a random, arbitrary list of things that you think a good person or a fulfilled person might do, and then do those and also project that onto everyone else in a sort of, like, controlling, elitist type way.Max: I like how you've described all of them unfairly. That's not really the right word. Caricature. You've, like, caricatured them all. That's how I describe them in private.Sarah: Cool. Now I've got to pick one. Which one of these resonates with me? I mean, I guess the preference one seems like the way that most people actually go about life. Maybe it's that one.Max: To be honest, I think what people think is probably some mix of all of them. That's why they all have obvious issues if you take them to the extreme. At the baseline, they might seem appealing. For example, what does it mean to live a good life? I don't know. Fulfilling my preferences seems pretty good. But wait, I have some bad preferences sometimes.
Max: Maybe it's to be really happy all the time. However, I don't want to be on drugs constantly. Well, perhaps it's to live a fulfilling human life. But now I don't know how to define this, and I kind of just end up defining it by my preferences, and now I'm back there, and you can go in a circle like this, around and around.Sarah: Yeah. Difficult being a person, isn't it?Max: Yeah.Sarah: What a.Max: You know, Aaron would say it beats the alternative?Sarah: Being a pig.Max: Well, being a person.Sarah: Being a person is preferable to being a pig.Max: No, I meant to, like, not being.Sarah: To not being a person.Max: To just being. Not existing.Sarah: Dead.Max: Yeah.Sarah: Right. Yeah.Max: I mean, I think it's preferable to be me than it is to be dead, but not objectively, I guess.Sarah: Yeah, this reminds me of a conversation I had with someone on Twitter when I made a poll about whether people thought that the world was net negative or not. A surprising percentage of people did, by the way, which is depressing. Someone made a point about people having a survival instinct, and if people want to survive, then surviving is good.
Sarah: Someone else responded by saying that people have preferences that don't accurately reflect the best state of the world. So, people have a survival instinct, but actually, they ought not to have a survival instinct because they think it's better to exist. But actually, they're wrong about that. That's kind of what I thought they were saying. I feel like if you've backed yourself into that corner, something deep in my gut tells me that you've gone wrong somewhere.Max: What I would say is something like what they're running into is the fact that moral realism might be fake. Right. Well, I guess this isn't strictly the case. Maybe the issue you're pointing out is what I'm describing here, though.
Max: If moral realism is correct, then there's a fact of the matter about some of this stuff. Like there's a right theory of welfare, if there's realism about welfare anyway. But if there's not a right theory of welfare, then you might be like, "Oh, the world is kind of worth it because people have this survival instinct," and someone's like, "Well, no, because of this other thing," and you're like, "Oh, that's kind of convincing."
Max: But then someone can just say, "Oh, but this other convincing thing," and you just go around in a circle forever because there's nothing to really ground yourself on since there's no actual fact of the matter.
Sarah: Well, that sounds like all of philosophy.Max: I mean, that's what I'd say.Sarah: I'm starting to get that unmoored feeling right now. It's kind of disorientating, though.Max: It is the case that you can get better at articulating and maybe even thinking what you think or something like that. Maybe you just have to be kind of careful about it or something. Maybe if you thought a lot about what welfare is or something, you kind of get a sense about the life you want to lead or something like that. And then you just have to be a bit careful about people coming in and, gaslighting isn't the right word, but maybe gaslighting you about what you want to do with your life or something like that.Sarah: Open to being gaslit as long as someone's telling me what to do.Max: I wouldn't have said that online if I were you.Sarah: Yeah, probably not. Please, I didn't mean that. Nobody gaslight me. I don't want to be gaslit. Do you want to give me any more philosophy crash courses?Max: I mean, I can. I do think it's interesting how people think about realism, and the number of people who are "right" is higher than I'd expect. I say "right" because I'm an antirealist. So, you might think that people would have a default towards realism.
Max: However, this gets technical. I think religion is a subjectivist realist theory. By subjectivist, it means that God is the one telling you what to do, and it's realist because he's real. Some philosophers may argue that religion is also antirealist, but that's debatable. Many people are religious and believe it's objectively bad to kill your mom, for example.
Max: It's interesting that I encounter people my age who think antirealism seems right, or that realism seems false after some thought. This is especially strange considering many philosophers are realists, so it might be a bit weird.Sarah: So realism is like a well-subscribed position amongst professional philosophers.Aaron: Let me google that. I'm back.Max: Hello. There is some data on this. Is this the right thing? Okay, so meta-ethics, moral realism or moral antirealism? 62% accept or lean towards moral realism. So 24% lean towards it, 37% accept it. And then 11.5% accept antirealism, and 14.5% lean towards antirealism, and then, like, 12% do some other stuff, whatever that means. So, like, agnostic.Sarah: That's pretty surprising.Max: Yeah. I mean, one reason would be, like, why are you doing moral philosophy as your job if you are an antirealist? Right. Is like a relatively compelling question, especially when you could be paid more to do something else with your life. So there might be, like, a selection effect to some degree.Sarah: Yeah. I feel like I wish I had done philosophy at uni instead of doing English. That probably would have been more fun, and then I would know more things and I could hold my own in a conversation. Well, okay.Max: So part of it is like, this is my hot philosophy take, I'm going to say, comes a bit from antirealism. So if you're an antirealist, right, you think there's kind of no fact of the matter about whatever you're an antirealist about. So you could be an antirealist about art or something. So there's no objectively good art. Right.Sarah: Yeah.Max: So, whatever domain you're an antirealist about, there's no fact of the matter. It comes down to who can squabble best or something like that. This really advantages people who have advantages, like me. I can squabble best, I think, because I've thought about this stuff a lot, know the terms, and can be confusing if I need to. 
Maybe I can talk somewhat well or something, so I can outmaneuver others.
Max: If you've ever seen Ben Shapiro debate, like in "Ben Shapiro owns liberals," what he's really doing isn't winning an argument. He's stumping them and making it difficult for them to respond. If you can do this in a conversation, you can just kind of win. A lot of philosophy, or conversations about philosophy, is actually about that.
Sarah: Yeah. Although the Ben Shapiro thing just depends what your algorithm is giving you, because if you're on the other side of TikTok, then you get the liberals owning Ben Shapiro. It's always just like the same interviews with different clips cut out of them or whatever.Max: I'm glad to know there are clips of that happening.Sarah: I've seen at least a few. I don't know. There was one where he went to the Cambridge Union or something, and some girl yelled at him about how he's wrong about being pro-life or something, and then everyone cheered. I don't know. To be honest, I obviously agreed with her. I don't think her arguments were that good. I think she was mostly just shouting at him. But given that she was right, it was still kind of satisfying to watch anyway. Yeah. I don't know. Sometimes I'm good at arguing with people, other times not. It just depends whether I'm in the zone or not. I'm kind of not right now, really.Max: What I'm saying is you should be more confident or something, and not because you're really good or something like that. I mean, that's probably true as well, but more because if there's no fact of the matter, it's better to stand your ground, usually, rather than let somebody override what you think by just talking louder at you or something. If you find yourself thinking, oh, I disagree with this, but I don't know how to say it, you should kind of make a mental note to be like, I'm going to stick to my position, or something like that.Sarah: So you just always double down on everything, is what you're saying? Yeah, because nobody's right anyway.Max: Well, so this isn't true about some empirical things. Kind of like, if you were like, I think it's the case empirically that dogs weigh twelve pounds on average. I don't know, maybe you don't have to double down on that. 
If somebody's like, actually, the National Dog Institute studied this, and then hands you a study, fine. But otherwise you can double down, I give you permission.Sarah: Okay, excellent. I will be doing that.Max: It makes convincing people harder, actually.Sarah: As long as you're fine with me referring people back to you. When I'm a stubborn, belligerent pain in the ass, I can be like, well, Max told me that I was allowed to double down on every stupid take that I just came up with five minutes ago. So you can take it up with.Max: Him. For business inquiries, please see my Twitter.Aaron: I'm sort of back. I'm only sort of back.Max: Hello?Sarah: Okay.Aaron: Honestly, I should be banned from any podcast forever.Sarah: No, this is really funny.Max: They should make you, like, the 80k host, but you never go to the studio. You just do your errands.Aaron: No, I applied to be the CEO.Max: Did they reject you already?Aaron: Not yet. I don't think they should choose me, but I wasn't joking. You never know.Max: Nice.Sarah: It's a shame that this wasn't live when you submitted your application, because it would have been such excellent evidence.Aaron: Well, I did interview Rob Wiblin and their producer Keiran Harris, like, a year ago.Sarah: You did?Aaron: Yeah. It's like the coolest thing I've ever done. And so I'm always, like, smug when I tell people.Sarah: That's so cool.Max: Brag about it more, or like a resume.Aaron: It's a line on my resume.Sarah: That should be more prominent in your online presence. That's so cool.Aaron: Maybe I should put it on Pigeon Hour. Yeah, I'll send you guys. No, it's weird because it's not that big of a deal. But the other shit, it's cool. I'm not going to deny that.Sarah: That is a massive deal. What are you talking about? How did you swing that?Max: You don't get to know.Sarah: Well, I don't know. I don't know who won this argument. 
I don't know if you guys finished having your argument, but I would say we really did.Max: But I think you started in favor of what I thought, and then Aaron didn't convince you. So that's kind.Sarah: I don't think he has convinced me. I'm confused about how the objective ordering thing can exist independently of there being things that it is right or wrong to do. I still don't get that.Max: I actually can. Well, I don't know if I can give Aaron.Aaron: Hello? Yeah, I don't know. I'll just be a fly on the wall, I guess. Keep going.Max: Yeah, so think about it like this: you can rank every person based on height. That's something you can actually do. I can give you the ordering of heights of all people alive right now. However, there are two things that just giving you this ordered set of elements doesn't provide.
Max: One, it doesn't tell you what tallness is. It tells you who is taller than other people, like five foot nine is taller than five foot eight. You get that information, but you don't know who's tall, because what counts as tall? Is it six feet? Is it four feet? You don't know. That's an additional thing.
Max: It also doesn't tell you how tall you should want to be. It just tells you how tall you are. So, it's entirely possible to have this ordering of states of the world based on suffering or something like that, right? And it just doesn't have the next thing, which is what states of affairs are good. You can rank them, but you just don't get that information from that.Sarah: If you were having to choose between two actions, and each of them would bring about a different state of the world, and it was objectively true that one was better than the other, and you knew that, even though the tallness analogy doesn't map onto that, because then you wouldn't have to determine the cutoff point of where things stop being bad and when they start being good. If you had two states of affairs and you knew one was better than the other, then surely you would be able to say, oh, it is right for me to do the one that brings about state B as opposed to.Aaron: State A. I think that makes a lot of sense. But if somebody was just committed to not arguing with you, or just saying, you know what, I really don't care, then I don't know. That's where you get into the question of whether you have any response, or whether there is a legitimate response. Well, maybe Max disagrees.Max: So I guess, first, to Sarah's point, it's a bit like, so really what Aaron might be claiming, I guess it depends on what he thinks he's claiming. He could be claiming, you can objectively rank states based on a betterness relationship, and that betterness relationship is, like, the objective one you ought to use. Or what he's saying is you can rank states based on a betterness relationship of suffering. You could also rank them based on a betterness relationship of preference satisfaction as well. Right. 
And so there's kind of this question about which is the right ordering.Aaron: I think there's, like, one true betterness function.Max: Okay, but why is it the one true one? I guess, say that.Aaron: Oh, it's just, like, built into the meaning of, like, wait, is there one true one? So that's actually something I don't have. I think it's possible there are, like, some indeterminate ones, but there's, like, approximately one or something like that. Okay, so why is that? Yeah, it's just, like, the fact of the matter that suffering is bad, happiness is good, and nothing else is either intrinsically good or intrinsically bad. That is the short answer.Max: Um, yeah. I think this makes Sarah's question really important, though. Like, why this just.Aaron: I feel like this is sort of a bad thing for me to be arguing about, because, hey, I think it's great. Nobody wants to take this as a normative thing. I think it makes a lot of sense. It's like, okay, two thumbs up. You know what I mean? At some level, maybe. I don't think I can defend it at some intense metaphysical, fundamental level. Kind of, who cares?Max: Well, you should care, probably, because it's okay. Probably important for your worldview or something would be like, okay, darn.Sarah: Okay. I feel like I'm still confused, but that's what I expected.Max: Just know that your starting intuitions were right.Sarah: I think they were right.Aaron: No, I think your starting intuitions are, like, sensible and wrong.Sarah: Sensible and wrong.Max: I remembered your thing, Aaron. The question was something like, if someone's like, oh, I don't care about what's objectively moral or good, what do you say to them? And so I guess what a philosopher would say is, you can make mistakes, right? Like, I could say, if you want to earn $100, invest in bitcoin or whatever. That's a horrible example. Sorry. If you want to earn $100, invest in US Treasury bonds.
Max: And I've done all the math or whatever, and this is just objectively the case. This is like the way you could earn $100, and you could just be like, I don't give a shit, I'm going to go invest in Bitcoin instead, and then lose all your money. It's like, sure, whatever, but you've made a mistake, right? It's the case you did the wrong action.Aaron: If your goal was to bring whatever.Max: And you only have these two options available to you, this is the set of things. And I kind of, like, supposed that also investing in Treasury bonds actually does net you $100.Sarah: But then isn't that, like, in that situation, the person has the goal of making more money, and in Aaron's scenario, they don't care about whether they're bringing about the best state of affairs.Max: Yeah, that's fair. I think kind of what you say there is like, what would we say about a murderer or something, I guess. Especially if you're like a realist. They don't care about the fact they shouldn't murder. They're murdering, but they're still making a mistake or something.Sarah: You might say it's different if, like, someone murdered someone under the impression that by doing that they were, I don't know, say, saving five other people. That would be different to the person that murdered someone just because they didn't care whether or not that person got to live or not.Max: The way to say this would be that a realist theory is kind of necessarily paternalistic or something. It's saying you need to be a certain way or something like that. And that makes it sound bad or something like that. But there are times when parents are right. Like parents are paternalistic in saying you should eat your vegetables. And in terms of health outcomes, this is true or something.
Max: And so moral theories are paternalistic about what actions you should take to better the world. And you don't have to want to better the world, I guess, but you ought to want to better the world or something. Just like we often kind of think.Aaron: Like, well, I mean, then you get into just recursion. I do feel like that's why realism.Max: Doesn't work or whatever.Aaron: I do think we're going off or not going off. But this particular question about whether the relation.Max: Sorry. One final thing is something like, I think what philosophers will often say is something like what it means to be rational or to be a rational agent, or to be a certain, whatever synonym or qualifying word, type of person, like a moral agent. Or whatever.
Max: To be like a person is to do this sort of stuff, like a sort of Kantian view of ethics or whatever says by being a properly rational agent, you kind of wrong about everything. I mean, yeah, but that's the answer, right? You could say, if you're Aristotle, it'd be like to be a proper human or whatever, you kind of have to care about these sorts of things. But, yeah, I mean, you're right. You do just kind of get into a recursion thing, or you can.Aaron: I know this isn't a direct response. I just keep thinking that this sub question about whether objective orderings imply, like, normativity is interesting. I just don't think it's like that. It's not, like, the main thing; I feel like it's a little bit of a sideshow. And also one that's like, I feel like I'm trying to argue no, but also I'm sort of like thumbs up, no, but I'm trying to make my view maximally defensible or something. Whereas if the answer was yes, that would sort of be convenient or something that would make me a full-throated moral realist. You know what I mean?Sarah: But I feel like the answer just is yes. I don't get how it isn't. It just seems, like, incoherent for the answer to not be yes. So maybe you should just take the strong version, because it kind of seems like. Well, that's the logical.Aaron: I just want to ask where you think - okay, yeah - where you think. I think, remembering back, you said you don't equate those two statements: "X would be better" and "you ought to do X." So, it seems like there's a gap between the former and the latter. The question is, how do you bridge that gap? Where do you get the extra bit of information from?
Aaron: Honestly, I look around and I don't see it. It's sort of like my juice. I'm just curious. They vibe associate. They don't merely vibe associate, but associate very strongly or something. However, there's still a little gap or something. I'm not doing a good job of explicating.Sarah: Yeah. So how do you get from "X would be better than Y" to "you should bring about X"? Maybe you're right and I am, in fact, wrong.Aaron: That was directed at Max.Max: Nice.Sarah: No, but Max, can you say something smart, please?Max: Yeah, sometimes I can.Sarah: Can you do it now and respond to that? I don't have a response. I'm getting.Max: I mean, like, it's kind of the case that some orderings imply normativity and some don't, I guess, is the case.Aaron: Like what.Max: Utility functions in the preference theory sense of that implies normativity.Aaron: That's like what we're talking about.Max: Yeah, but not all orderings imply normativity. You can order a set of numbers, and that doesn't have any normativity. And you could order colors in the world based on wavelength or something, and that doesn't tell you which colors you ought to bring about or something. Right. But an ordering of preferences based on your preference function or, sorry, states of the world based on your preference function does tell you what you should bring about or something.Aaron: No, maybe. But then there's the question of how do you get the really substantive should bit? You're an anti realist. I feel like the natural thing in an anti realist world, I think, is to say, it's like, where does this normativity come from? It's like, is there a God? Like, maybe. But setting that aside, what I would.Max: Say is there's normativity insofar as you want there to be some, like what ought you to do or want to do, like whatever you want to ought to do. Was that sufficiently smart, Sarah?Sarah: That sounded very sus, because, hang on, I'm confused again. Maybe my question was more like, I don't know.
I don't have the technical language to express this, but it seems like, Aaron, what you're saying is there's like this unbridgeable gap between making a claim about states that are better than others and making a claim about a normative claim about what people ought to do. And there's like two fundamentally different categories, and it's impossible to kind of bridge the gap between them.
Sarah: But then I don't understand how if you believe that, then that seems like that would make you definitely not a moral realist, because you're basically saying that you can't use the. What's the opposite of normative? There's a word for that, right?Max: Somebody give me the word.Aaron: Sometimes it's positive.Sarah: But yeah, you can't use the descriptive claims to make the normative claims. So if you think that, then doesn't that just make you not a realist?Aaron: Well, I'm claiming that there are descriptive moral claims, and, for example, x is morally better than y.Sarah: It's like a descriptive x is better than y. Doesn't really seem like a moral claim. But you should do x seems like.Aaron: A moral claim, right?Sarah: If you're saying one thing is better than the other. But that doesn't imply anything about what you should do. To me, that's not a moral.Aaron: This is like a semantic question. Yeah, I guess I honestly don't really know how most people use the words. If I want to say, all things considered, you have these two worlds, and one is better and one is better. One is morally better. I think it would be great if people brought about the better one, but I don't think in some ultimate fundamental sense, there's this normativity thing.
Aaron: I think it's a very useful social thing to invoke. I think it would be great or whatever. Personally, I prefer that. But it's not like some fundamental truth of the universe. That seems like, substantively seems like I'm talking about something that's at least related to morality.Sarah: I feel like I'm getting more and more confused the further into this conversation.Aaron: Welcome to philosophy.Max: Yeah.Sarah: This is horrible. Why do you guys do this?Aaron: Max is getting a PhD?Max: Not actually. That's not literally it. I mean, I'm not currently enrolled in a PhD program. But if you're on a hiring committee, feel free to hit me up.Aaron: Yes. If Will MacAskill will listen to this, and I feel like there's at least a .01% chance he is, that's plausible, I guess.Sarah: Come on. It's at least 0.2.Aaron: Yeah. If we have a campaign to get them to listen.Max: On average.Aaron: No, there's definitely a couple of philosophy people who will be listening.Sarah: Wait, Will, if you're listening, I have a copy of What We Owe the Future under my TV, right on top of my High School Musical 2 DVD.Max: Is that you ranking them by quality?Sarah: No, I would have put the DVD on top, obviously, if I was objectively ordering them.Max: Okay, yeah, inversely ordered.Aaron: I want to debate Will MacAskill. If you can get him on the podcast, that would be so sweet.Sarah: Maybe I should finish his book first. I only read.Aaron: I actually have critiques about What We Owe the Future. And I feel like for non EA people that critique Will MacAskill, I am, like, the ultimate Will MacAskill defender. But I also think he's wrong about stuff in What We Owe the Future. So if you can get him on the pod, that would be so sweet.Max: Did you ever put that on the.Aaron: No, I only have 16 million drafts that will never see the light of day.Max: Well, if you put one of them up the chance.Aaron: Okay. I have, like, miscellaneous. It's in my list of.
I have a Twitter thread that's like, EA takes, and it's, like, in there.Sarah: That's it?Aaron: Yeah.Max: At some point, maybe you could just DM him the Twitter thread and say, want to go on a podcast to debate this? The next time he's on 80K, send it to Rob Wiblin and just have him read out the takes.Aaron: Yeah, no, I definitely will.Max: Okay, good. I'll hold you to that. It's in writing now, but audibly written.Sarah: Cool. Well, I feel like we should wrap up soon. I have to decide who I agree with. Actually, I thought the whole point was for me to.Aaron: Oh, wait. Yes, you do. Sorry.Sarah: I thought we were going to solve this whole question.Aaron: Yes.Sarah: Just get on this.Aaron: I was wrong. You're objectively right, obviously.Sarah: I'm always. Never been wrong. Not even once.Max: Okay.Sarah: Who do I think is right?Max: I don't know.Sarah: I'm really confused. I don't know who's right.Aaron: Do you think Max has made, like, a more compelling case? Although I do feel like I've been mildly handicapped by my traveling circumstances.Max: Yeah, you probably did that because you knew I'd win no matter what.Sarah: Yeah, it seems like you guys don't really disagree about that much. So it seems like, Aaron, you think there's, like, this objective ordering thing that is true, but doesn't have any practical implications because it doesn't actually tell anyone anything about what they should.Aaron: No, no. Can I defend my. Okay, you got to add the normativity thing. I'm sorry to be annoying, but just to jump in, I think it has practical implications. Whenever somebody says, oh, I just assert that I'm a moral subjectivist, and I think that I just intrinsically value, say, human lives over other animal lives for reasons that go above and beyond, like sentience or something like that. And I want to be able to pull this out and say you're objectively. It's like the kind of thing that I'm interested.Sarah: To. I didn't mean to say that it didn't have it.
It clearly does have practical implications, like the one you just named. So I'm sorry for misrepresenting you. And then, Max, you think that there is no objective ordering of worlds, and also that you can't make any normative claims about what people should or shouldn't do objectively. Objectively speaking, who do I agree with? I don't know. I don't feel qualified to make the call because my brain feels a bit scrambled.Aaron: As a reminder. Unfortunately, the world doesn't have to obey what you declare to be correct, so the stakes might be lower than you think.Sarah: Oh, I thought I was solving this issue once and for all, and now you're telling me it doesn't even matter what I say.Aaron: You were like, I'm not qualified. And I was like, well, maybe, but luckily the world, unfortunately, Oxford philosophy isn't going to stamp it solved, for better and for worse.Sarah: Unbelievable.Aaron: I know it's bad because you have less power, but it does lower the stakes.Sarah: Okay, well, no, that's a relief. I feel a little bit less pressure now. I guess we never even really discussed how you do the world ordering or how you justify which worlds are better than others. Maybe that would have been important to get into, but let's not do that right now. I think I agree with Max.Max: No.Sarah: Sorry.Aaron: No. Can we edit in, I don't know, some bad sound effects of me, like falling or something?Max: Oh, yeah.Sarah: I don't know how to do that. If you can teach me how to.Aaron: Do that, then sure, yeah. That is one thing. Well, I can do it on my end. I don't know what you're going to do on your Chromebook.Sarah: Yeah, I didn't think that through. Really? There's got to be something I can. Oh, I'll figure it out. I'll figure it out.Max: It all works.Sarah: Yeah. Okay, cool. Thanks for arguing about moral realism.Aaron: It's been really, anytime, literally just wake me up in the middle of the night.
I'm, like, barely even joking.Sarah: I also think what I might have to do is sort of timestamp all the moments where I said a coherent sentence because there weren't that many of them.Max: At least four or five.Sarah: I actually don't know if I said a single coherent thing this whole time.Aaron: No, you definitely did.Sarah: Yeah.Aaron: Awesome. Not more than I was expecting, but more than I would have expected from an arbitrary person. If we just pick a random person, this is, like, much better than that.Sarah: You did better than I thought you would.Aaron: No, you did better than the fake median human.Sarah: Thank you.Aaron: In fact, probably. Definitely better than the 75th percentile. Probably, like, better than that, in fact.Sarah: Wow. High praise.Aaron: Once we're getting into, like, 99.99. We can debate that.Sarah: Yeah. We can quibble over the. Quibble over whether it's 99.8 or 99.9, but I'll take that. Okay, cool. Thanks, guys.Aaron: Lovely. All right. Pip pip cheerio.Sarah: Have a good rest of your days. I'm going to go to bed. Yeah.Aaron: Okay, cool. Adios.Sarah: Cool. Bye.
Note: I can't seem to edit or remove the "transcript" tab. I recommend you ignore that and just look at the much higher quality, slightly cleaned up one below.
Most importantly, follow Sarah on Twitter!
Summary
(Written by ChatGPT, as you can probably tell)
In this episode of Pigeon Hour, host Aaron delves deep into the world of AI safety with his guest, Sarah Woodhouse. Sarah shares her unexpected journey from fearing job automation to becoming a recognized voice on AI safety Twitter. Her story starts with a simple Google search that led her down a rabbit hole of existential dread and unexpected fame on social media. As she narrates her path from lurker to influencer, Sarah reflects on the quirky dynamics of the AI safety community, her own existential crisis, and the serendipitous tweet that resonated with thousands.
Aaron and Sarah's conversation takes unexpected turns, discussing everything from the peculiarities of EA rationalists to the surprisingly serious topic of shrimp welfare. They also explore the nuances of AI doom probabilities, the social dynamics of tech Twitter, and Sarah's unexpected viral fame as a tween. This episode is a rollercoaster of insights and anecdotes, perfect for anyone interested in the intersection of technology, society, and the unpredictable journey of internet fame.
Topics discussed
Discussion on AI Safety and Personal Journeys:
* Aaron and Sarah discuss her path to AI safety, triggered by concerns about job automation and the realization that AI could potentially replace her work.
* Sarah's deep dive into AI safety started with a simple Google search, leading her to Geoffrey Hinton's alarming statements, and eventually to a broader exploration without finding reassuring consensus.
* Sarah's Twitter engagement began with lurking, later evolving into active participation and gaining an audience, especially after a relatable tweet thread about an existential crisis.
* Aaron remarks on the rarity of people like Sarah, who follow the AI safety rabbit hole to its depths, considering its obvious implications for various industries.
AI Safety and Public Perception:
* Sarah discusses her surprise at discovering the AI safety conversation happening mostly in niche circles, often with a tongue-in-cheek attitude that could seem dismissive of the serious implications of AI risks.
* The discussion touches on the paradox of AI safety: it's a critically important topic, yet it often remains confined within certain intellectual circles, leading to a lack of broader public engagement and awareness.
Cultural Differences and Personal Interests:
* The conversation shifts to cultural differences between the UK and the US, particularly in terms of sincerity and communication styles.
* Personal interests, such as theater and musicals (like "Glee"), are also discussed, revealing Sarah's background and hobbies.
Effective Altruism (EA) and Rationalist Communities:
* Sarah points out certain quirks of the EA and rationalist communities, such as their penchant for detailed analysis, hedging statements, and the use of probabilities in discussions.
* The debate around the use of "P(Doom)" (probability of doom) in AI safety discussions is critiqued, highlighting how it can be both a serious analytical tool and a potentially alienating jargon for outsiders.
Shrimp Welfare and Ethical Considerations:
* A detailed discussion on shrimp welfare as an ethical consideration in effective altruism unfolds, examining the moral implications and effectiveness of focusing on animal welfare at a large scale.
* Aaron defends his position on prioritizing shrimp welfare in charitable giving, based on the principles of importance, tractability, and neglectedness.
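The importance/tractability/neglectedness framing mentioned above is often treated as a rough multiplicative heuristic in effective altruism writing. A minimal sketch of that idea follows; the cause names and 1-to-10 scores are illustrative assumptions for the sake of the example, not claims made in the episode:

```python
# Rough sketch of the importance-tractability-neglectedness (ITN) heuristic
# sometimes used for cause prioritization. All scores below are made up
# purely for illustration; nothing here is from the episode itself.

def itn_score(importance: float, tractability: float, neglectedness: float) -> float:
    """Treat the three factors as multiplicative: a cause scoring zero on
    any single factor scores zero overall."""
    return importance * tractability * neglectedness

# Hypothetical 1-10 scores for two example causes.
causes = {
    "shrimp welfare": itn_score(importance=6, tractability=5, neglectedness=9),
    "global health": itn_score(importance=8, tractability=8, neglectedness=3),
}

# Rank causes from highest to lowest combined score.
ranked = sorted(causes, key=causes.get, reverse=True)
print(ranked)
```

The multiplicative form captures why a cause can rank highly despite middling importance scores: extreme neglectedness can dominate the product.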
Personal Decision-Making in Charitable Giving:
* Strategies for personal charitable giving are explored, including setting a donation cutoff point to balance moral obligations with personal needs and aspirations.
Transcript
AARON: Whatever you want. Okay. Yeah, I feel like you said this on Twitter. The obvious thing is, how did you learn about AI safety? But maybe you've already covered that. That's boring. First of all, do you want to talk about that? Because we don't have to.
SARAH: I don't mind talking about that.
AARON: But it's sort of your call, so whatever. I don't know. Maybe briefly, and then we can branch out?
SARAH: I have a preference for people asking me things and me answering them rather than me setting the agenda. So don't ever feel bad about just asking me stuff because I prefer that.
AARON: Okay, cool. But also, it feels like the kind of thing where, of course, we have AI. Everyone already knows that this is just like the voice version of these four tweets or whatever. But regardless. Yes. So, Sarah, as Pigeon Hour guest, what was your path through life to AI safety Twitter?
SARAH: Well, I realized that a chatbot could very easily do my job and that my employers either hadn't noticed this or they had noticed, but they were just being polite about it and they didn't want to fire me because they're too nice. And I was like, I should find out what AI development is going to be like over the next few years so that I know if I should go and get good at some other stuff.
SARAH: I just had a little innocent Google. And then within a few clicks, I'd completely doom pilled myself. I was like, we're all going to die. I think I found Geoffrey Hinton because he was on the news at the time, because he just quit his job at Google. And he was there saying things that sounded very uncertain, very alarming. And I was like, well, he's probably the pessimist, but I'm sure that there are loads of optimists to counteract that because that's how it usually goes. You find a doomer and then you find a bunch of more moderate people, and then there's some consensus in the middle that everything's basically fine.
SARAH: I was like, if I just keep looking, I'll find the consensus because it's there. I'm sure it's there. So I just kept looking and looking for it. I looked for it for weeks. I just didn't find it. And then I was like, nobody knows what's going on. This seems really concerning. So then I started lurking on Twitter, and then I got familiar with all the different accounts, whatever. And then at some point, I was like, I'm going to start contributing to this conversation, but I didn't think that anybody would talk back to me. And then at some point, they started talking back to me and I was like, this is kind of weird.
SARAH: And then at some point, I was having an existential crisis and I had a couple of glasses of wine or something, and I just decided to type this big, long thread. And then I went to bed. I woke up the next morning slightly grouchy and hungover. I checked my phone and there were all these people messaging me and all these people replying to my thread being like, this is so relatable. This really resonated with me. And I was like, what is going on?
AARON: You were there on Twitter before that thread, right? I'm pretty sure I was following you.
SARAH: I think, yeah, I was there before, but no one ever really gave me any attention prior to that. I think I had a couple of tweets that blew up before that, but not to the same extent. And then after that, I think I was like, okay, so now I have an audience. When I say an audience, like, obviously a small one, but more of an audience than I've ever had before in my life. And I was like, how far can I take this?
SARAH: I was a bit like, people obviously started following me because I'm freaking out about AI, but if I post an outfit, what's going to happen? How far can I push this posting, these fit checks? I started posting random stuff about things that were completely unrelated. I was like, oh, people are kind of here for this, too. Okay, this is weird. So now I'm just milking it for all it's worth, and I really don't know why anybody's listening to me. I'm basically very confused about the whole thing.
AARON: I mean, I think it's kind of weird from your perspective, or it's weird in general because there aren't that many people who just do that extremely logical thing at the beginning. I don't know, maybe it's not obvious to people in every industry or whatever that AI is potentially a big deal, but there's lots of truckers or whatever. Maybe they're not the best demographic or the most conducive demographic, like, getting on Twitter or whatever, but there's other jobs where it would make sense to look into that. It's kind of weird to me that only you followed the rabbit hole all the way down.
SARAH: I know! This is what I... Because it's not that hard to complete the circle. It probably took me like a day; it took me like an afternoon to get from, I'm worried about job automation to I should stop saving for retirement. It didn't take me that long. Do you know what I mean? No one ever looks. I literally don't get it. I was talking to some people. I was talking to one of my coworkers about this the other day, and I think it came up in conversation. She was like, yeah, I'm a bit worried about AI because I heard on the radio that taxi drivers might be out of a job. That's bad. And I was like, yeah, that is bad. But do you know what else? She was like, what are the AI companies up to that we don't know about? And I was like, I mean, you can go on their website. You can just go on their website and read about how they think that their technology is an extinction risk. It's not like they're hiding. It's literally just on there and no one ever looks. It's just crazy.
AARON: Yeah. Honestly, I don't even know, if I was in your situation, if I would have done that. It's like, in some sense, I am surprised. It's very few people, maybe like one, but at another level, it's more rationality than most humans have or something. Yeah. Do you regret going down that rabbit hole?
SARAH: Yeah, kind of. Although I'm enjoying the Twitter thing and it's kind of fun, and it turns out there's endless comedic material that you can get out of impending doom. The whole thing is quite funny. It's not funny, but you can make it funny if you try hard enough. But, yeah, what was I going to say? I think maybe I was more primed for doom pilling than your average person because I already knew what EA was and I already knew, you know what I mean. That stuff was on my radar.
AARON: That's interesting.
SARAH: I think had it not been on my radar, I don't think I would have followed the pipeline all the way.
AARON: Yeah. I don't know what browser you use, but it would be. And you should definitely only do this if you actually think it would be cool or whatever, but this could be in your browser history from that day and that would be hilarious. You could remove anything you didn't want to show, but if it's like Google Chrome, they package everything into sessions. It's one browsing session and it'll have like 10,000 links.
SARAH: Yeah, I think for non-sketchy reasons, I delete my Google history more regularly than that. I don't think I'd be able to find that. But I can remember the day and I can remember my anxiety levels just going up and up somewhere between 1:00 p.m. and 7:00 p.m. And by the evening I'm like, oh, my God.
AARON: Oh, damn, that's wild.
SARAH: It was really stressful.
AARON: Yeah, I guess props for... I don't know if props is the right word. I guess, impressed? I'm actually somewhat surprised to hear that you said you regret it. I mean, that sucks though, I guess. I'm sorry.
SARAH: If you could unknow this, would you?
AARON: No, because I think it's worth it, maybe selfishly, but not overall because. Okay, yeah, I think that would plausibly be the selfish thing to do. Actually. No, actually, hold on. No, I actually don't think that's true. I actually think there's enough an individual can do selfishly such that it makes sense. Even with the emotional turmoil.
SARAH: It would depend how much you thought that you were going to personally move the needle by knowing about it. I personally don't think that I'm going to be able to do very much. It's not like I was going to tip the scales. I wouldn't selfishly unknow it and sacrifice the world. But me being not particularly informed or intelligent and not having any power, I feel like if I forgot that AI was going to end the world, it would not make much difference.
AARON: You know what I mean? I agree that it's like, yes, it is unlikely for either of us to tip the scales, but.
SARAH: Maybe you can't.
AARON: No, actually, in terms of, yeah, I'm probably somewhat more technically knowledgeable just based on what I know about you. Maybe I'm wrong.
SARAH: No, you're definitely right.
AARON: It's sort of just like a probabilities thing. I do think that "doom" - that word - is too simplified, often too simple to capture what people really care about. But if you just want to say doom versus no doom or whatever, AI doom versus no AI doom. Maybe there's like a one in 100,000 chance that one of us tips the scales. And that's important. Maybe even, like, one in 10,000. Probably not. Probably not.
SARAH: One in 10,000. Wow.
AARON: But that's what people do. People vote, even though this is old 80K material I'm regurgitating, because they basically want to make the case for why even if you're not. Or in some article they had from a while ago, they made a case for why doing things that are unlikely to counterfactually matter can still be amazingly good. And the classic example is just voting if you're in a tight race, say, in a swing state in the United States, and it could go either way. Yeah. It might be pretty unlikely that you are the single swing vote, but it could be one in 100,000. And that's not crazy.
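The voting argument Aaron is gesturing at here is a standard expected-value one: a tiny probability times large enough stakes can still be a big number. A toy sketch of that arithmetic, with made-up numbers that come from nowhere in the episode:

```python
# Toy expected-value comparison behind the "voting in a swing state" argument:
# a small probability of a very large payoff can exceed a certain small one.
# Both scenarios below use illustrative, invented numbers.

def expected_value(probability: float, payoff: float) -> float:
    """Expected value of an all-or-nothing outcome."""
    return probability * payoff

# A 1-in-100,000 chance of being the decisive vote on something huge...
decisive_vote = expected_value(probability=1 / 100_000, payoff=10_000_000)

# ...versus a guaranteed but modest benefit.
sure_small_win = expected_value(probability=1.0, payoff=50)

# The long shot wins in expectation despite almost never paying off.
assert decisive_vote > sure_small_win
```

Sarah's follow-up point fits the same frame: expected value says nothing about effort, so the comparison also has to account for how costly the action is.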
SARAH: It doesn't take very much effort to vote, though.
AARON: Yeah, sure. But I think the core justification, also, the stakes are proportionally higher here, so maybe that accounts for some of it. But, yes, you're absolutely right. Definitely different amounts of effort.
SARAH: Putting in any effort to save the world from AI. I wouldn't say that. I wouldn't say that I'm sacrificing.
AARON: I don't even know if I like. No. Maybe it doesn't feel like a sacrifice. Maybe it isn't. But I do think there's, like, a lot. There's at least something to be. I don't know if this really checks out, but I would, like, bet that it does, which is that more reasonably, at least calibrated. I wanted to say reasonably well informed. But really what it is is, like, some level of being informed and, like, some level of knowing what you don't know or whatever, and more just like, normal. Sorry. I hope normal is not like a bad word. I'm saying not like tech bros, I guess. So more like non tech bros. People who are not coded as tech bros. Talking about this on a public platform just seems actually, in fact, pretty good.
SARAH: As long as we get, like, literally just people that aren't men as well. No offense.
AARON: Oh, no, totally. Yeah.
SARAH: Where are all the women? There's a few.
AARON: There's a few that are super. I don't know, like, leaders in some sense, like Ajeya Cotra and Katja Grace. But I think the last EA survey was, like, a third women. Or I could be butchering this or whatever. And maybe even within that category, there's some variation. I don't think it's 2%.
SARAH: Okay. All right. Yeah.
AARON: Like 15 or 20% which is still pretty low.
SARAH: No, but that's actually better than I would have thought, I think.
AARON: Also, Twitter is, of all the social media platforms, especially male. I don't really know.
SARAH: Um.
AARON: I don't like Instagram, I think.
SARAH: I wonder, it would be interesting to see whether or not that's changed much, if it's become more male dominated since Elon Musk took over.
AARON: It's not a huge difference, but who knows?
SARAH: I don't know. I have no idea. I have no idea. It would just be interesting to know.
AARON: Okay. Wait. Also, there's no scheduled time. I'm very happy to keep talking or whatever, but as soon as you want to take a break or hop off, just like. Yeah.
SARAH: Oh, yeah. I'm in no rush.
AARON: Okay, well, I don't know. We've talked about the two obvious candidates. Do you have a take or something you want to get out to the world that's not about AI or obesity? Or just a story you want to share.
SARAH: These are my two pet subjects. I don't know anything else.
AARON: I don't believe you. I know you know about house plants.
SARAH: I do. A secret, which you can't tell anyone, is that I actually only know about house plants that are hard to kill, and I'm actually not very good at taking care of them.
AARON: Well, I'm glad it's house plants in that case, rather than pets. Whatever.
SARAH: Yeah. I mean, I have killed some sea monkeys, too, but that was a long time ago.
AARON: Yes. So did I, actually.
SARAH: Did you? I feel like everyone has. Everyone's got a little sea monkey graveyard in their past.
AARON: New cause area.
SARAH: Are there more shrimp or more sea monkeys? That's the question.
AARON: I don't even know what they even are. I mean, are they just plankton?
SARAH: No, they're not plankton.
AARON: I know what sea monkeys are.
SARAH: There's definitely a lot of them because they're small and insignificant.
AARON: Yeah, but I also think we don't. It depends if you're talking about in the world, which I guess probably like sea monkeys, or farmed for food, which is basically like. I doubt these are farmed either for food or for anything.
SARAH: Yeah, no, you're probably right.
AARON: Or they probably are farmed a tiny bit for this niche little.
SARAH: Or theyâre farmed to sell in aquariums for kids.
AARON: Apparently. They are a kind of shrimp, but they were bred specifically to, I donât know, be tiny or something. Iâm just skimming that, Wikipedia. Here.
SARAH: Sea monkeys are tiny shrimp. That is crazy.
AARON: Until we get answers, tell me your life story in whatever way you want. It doesnât have to be like. I mean, hopefully not. Donât straight up lie, but wherever you want to take that.
SARAH: Iâm not going to lie. Iâm just trying to think of ways to make it spicier because itâs so average. I donât know what to say about it.
AARON: Well, itâs probably not that average, right? I mean, it might be average among people you happen to know.
SARAH: Do you have any more specific questions?
AARON: Okay, no. Yeah, hold on. I have a meta point, which is like, I think the people who are they have a thing on the top of their mind, and if I give any sort of open ended question whatsoever, theyâll take it there and immediately just start giving slinging hot takes. But thenOther people, I think, this category is very EA. People who arenât, especially my sister, theyâre like, âNo, I have nothing to talk about. I donât believe that.â But theyâre not, I guess, as comfortable.
SARAH: No, I mean, I have. Something needs to trigger them in me. Do you know what I mean? Yeah, I need an in.
AARON: Well, okay, hereâs one. Is there anything youâre like, âMaybe Iâll cut this. This is kind of, like narcissistic. I donât know. But is there anything you want or curious to ask?â This does sound kind of weird. I donât know. But we can cut it if need be.
SARAH: What does the looking glass in your Twitter name mean? Because Iâve seen a bunch of people have this, and I actually donât know what it means, but I was like, no.
AARON: People ask this. I respond to a tweet thatâs like, âWhat does that like?â At least, I donât know, once every month or two. Or know basically, like Spencer Greenberg. I donât know if youâre familiar with him. Heâs like a sort of.
SARAH: I know the know.
AARON: He literally just tweeted, like a couple years ago. Put this in your bio to show that you really care about finding the truth or whatever and are interested in good faith conversations. Are you familiar with the scout mindset?
SARAH: Yeah.
AARON: Julia Galef. Yeah. That's basically, like, the short version.
SARAH: Okay.
AARON: I'm like, yeah, all right. And there's at least three of us who have both a magnifying glass and a pause thing, which is like my tightest knit online community, I guess.
SARAH: I think I've followed all the pause people now. I just searched the emoji on Twitter, and I just followed everyone. And I also noticed when I was doing this that some people, if they've suspended their account or they're taking time off, then they put a pause in their thing. So I was looking, and I was like, oh, these are, like, AI people. But then in their bio they were, like, not tweeting until X date, or this is a suspended account. And I was like, I see we have a messaging problem here. Nice. I don't know how common that actually is.
AARON: I'm glad. That was, like, a very straightforward question. Educated the masses. Max Alexander said Glee. Is that, like, the show? You can also keep asking me questions, but again, this is like.
SARAH: Wait, what did he say? Is that it? Did he just say Glee? No.
AARON: Not even a question mark. Just the word Glee.
SARAH: Oh, right. He just wants me to go off about Glee.
AARON: Okay. Go off about. Wait, what kind of Glee are we talking about? Vaguely. This is like a show or a movie or something.
SARAH: Oh, my God. Have you not seen it?
AARON: No. I mean, I vaguely remember, I think, watching some TV, but maybe, like, twelve years ago or something. I don't know.
SARAH: I think it stopped airing in, like, maybe 2015?
AARON: Or '16. So go off about it. I don't know what I. Yeah.
SARAH: I don't know what to say about this.
AARON: Well, why does Max think you might have a take about Glee?
SARAH: I mean, I don't have a take about it. No, not even, like. I am just transparently extremely lame. And I really like cheesy things. I'm like a musical theater kid. Not even ironically. I just like show tunes. And Glee is just a show about a glee club at a high school where they sing show tunes and there's, like, petty drama, and people burst into song in the hallways, and I just think it's just the most glorious thing on Earth. That's it. There are no hot takes.
AARON: Okay, well, that's cool. I don't have a lot to say, unfortunately, but.
SARAH: No, that's totally fine. I feel like this is not a spicy topic for us to discuss. It's just a good time.
AARON: Yeah.
SARAH: Wait.
AARON: Okay. Yeah. So I do listen to Hamilton on Spotify.
SARAH: Okay.
AARON: Yeah, that's about it.
SARAH: I like Hamilton. I've seen it three times. Oh.
AARON: Live or ever? Wow. Cool. Yeah, no, that's okay. Well, what do people get right or wrong about theater kids?
SARAH: Oh, I don't know. I think all the stereotypes are true.
AARON: I mean, that's generally true, but usually it's either over-moralized, like there's a descriptive thing that's true but it's over-moralized, or it's just exaggerated.
SARAH: I mean, to put this in more context, I used to be in choir. I went every Sunday for twelve years. And then every summer we'd do a little summer school and we'd go away and put on a production. So we'd do a musical or something. So what have I been? I was in Guys and Dolls. I think I was just in the chorus for that. I was the reverend in Anything Goes. But he does unfortunately get kidnapped in, like, the first five minutes, so he's not a big presence. Oh, I've been Tweedledum in Alice in Wonderland. I could go on, but right now as I'm saying this, I'm looking at my notice board and I have two playbills from when I went to Broadway in April, where I saw Funny Girl and Hadestown.
SARAH: I went to New York.
AARON: Oh, cool. Oh yeah. We can talk about when you're moving to the United States. However.
SARAH: I'm not going to do that. Okay.
AARON: I know. I'm joking. I mean, I don't know.
SARAH: I don't think I'm going to do that. I don't know. It just seems like you guys have got a lot going on over there. It seems like things aren't quite right with you guys. Things aren't quite right with us either.
AARON: No, I totally get this. I think it would be cool. But also I completely relate to not wanting to. I've lived within 10 miles, not even 10 miles, 8 miles, of one location. Obviously I've gone outside of that, but my entire life.
SARAH: Youâve just always lived in DC.
AARON: Yeah, either in DC or, sorry, right now in Maryland, which is right next to DC on the Metro, or at Georgetown University. Trying to think, would I move to the UK? Like, I could imagine situations that would make me move to the UK. But it would still be annoying, kind of.
SARAH: Yeah, I mean, I guess they're two very similar places, but there are all these little cultural things which I feel like kind of trip you up.
AARON: I don't know. Do you want to say what they are?
SARAH: I just, like, I don't know. I don't have that much experience because I've only been to America twice. But people seem a lot more sincere in a way. You don't really get that here; people are just never really being upfront. And in America, I just got the impression that people have less of a veneer up, which is probably a good thing. But it's really hard to navigate if you're not used to it or something. I don't know how to describe that.
AARON: Yeah, I've definitely heard this at least. And yeah, I think it's for better and for worse.
SARAH: Yeah, I think it's generally a good thing.
AARON: Yeah.
SARAH: But it's like there's this layer of cynicism or irony or something that is removed, and when it's not there, everything feels weak. I can't describe it.
AARON: This is definitely, I think, also like an EA rationalist thing. I feel like I'm pretty far along the spectrum towards the end of: social niceties are fine, but, I don't know, don't obscure what you really think unless there's a really good reason to or something. But it can definitely come across as being rude.
SARAH: Yeah. No, but I think it's actually a good rule of thumb. It's good to try not to obscure what you think most of the time, probably. I don't know. But I would love to go over temporarily for, like, six months or something and just hang out for a bit. I think that'd be fun. I don't know if I would go back to New York again. Maybe. I like the bagels there.
AARON: I should have a place. Oh yeah. Remember, I think we talked at some point. We can cut this out if either of us doesn't want it in. But we discussed, oh yeah, I should be having a place you can stay. I emailed the landlord like an hour before this. Hopefully, probably more than 50%, that is still an offer. Yeah, probably not for all six months, but I don't know.
SARAH: I would not come and sleep on your sofa for six months. That would be definitely impolite and very weird.
AARON: Yeah. I mean, my roommates would probably grumble.
SARAH: Yeah. They would be like.
AARON: Although I don't know. Who knows? I wouldn't be shocked if people were actually like, whatever. Somebody asked a question. This is what he said: I might also be interested in hearing how different backgrounds. Wait, sorry. This is not good grammar. Let me try to parse this. Does not having a super hardcore EA AI rationalist background shape how you think about or how you view AI and rationality?
SARAH: Oh, that's a good question. I think it's more happening the other way around. The more I hang around in these circles, you guys are impacting how I think.
AARON: It's definitely true for me as well.
SARAH: It's seeping into my brain and my language as well. I've started talking differently. I don't know. That's a good question, though. Yeah. One thing that I will say is that there are certain things that I find irritating about the EA style of doing things. One specific thing is, I don't know, the kind of hand-wringing about everything. And I know that this is kind of the point, right? But it's kind of like, you know, when someone wants to take a stance on something, but then whenever they want to take a stance on something, they feel the need to write, like, a 10,000 word blog post where they're thinking about the second and third and fifth order effects of this thing. And maybe this thing that seems good is actually bad for this really convoluted reason. That's just so annoying.
AARON: Yeah.
SARAH: I also understand that maybe that is a good thing to do sometimes, but it just seems like, I don't know how anyone ever gets anywhere. It seems like everyone must be paralyzed by indecision all the time because they just can't commit to ever actually just saying anything.
AARON: I think this kind of thing is really good if you're trying to give away a billion dollars. Oh yes, I do want the billion dollar grantor to be thinking through second and third order effects of how they give away their billion dollars. But also, no. I am super, the word's on the tip of my tongue, not overwhelmed but intimidated when I go on the EA Forum, because none of the posts are like normal five paragraph essays. Some of them are like, I think one of them I looked up for fun, because I was going to make a meme about it and still will probably, was like 30,000 words or something. And the thing that really gets me kind of annoyed is that even the short form posts, which are sort of the EA Forum version of Twitter, are way too high quality, way too intimidating. And so maybe I should just suck it up and post stuff anyway more often. It just feels weird. I totally agree.
SARAH: I was also talking to someone recently about how I lurked on the EA Forum and LessWrong for months and months, and I couldn't figure out the upvoting system, and I was like, am I being stupid, or why are there four buttons? Eventually I had to ask someone because I couldn't figure it out. And then he explained it to me and I was like, that is just so unnecessary. Like, just do it.
AARON: No, I do know what you mean.
SARAH: I just think it's annoying. It pisses me off. I just feel like sometimes you don't need to add more things. Sometimes less is good. Yeah, that's my hot take. Nice things.
AARON: Yeah, that's interesting.
SARAH: But actually, a thing that I like that EAs do is the constant hedging and caveatting. I do find it kind of adorable. I love that, because it's like you're having to constantly acknowledge that you probably didn't quite articulate what you really meant and that you're not quite making contact with reality when you're talking. So you have to clarify that you probably were imprecise when you said this thing. It's unnecessary, but it's kind of amazing.
AARON: No, it's definitely. I am super guilty of this, because I'll give an example in a second. I think I've been basically trained to try pretty hard, even in normal conversation with anybody, to just never say anything that's literally wrong. Or at least if I do, caveat it.
AARON: I was driving home. Me and my parents had visited our grandparents and were driving back, and we drove past a cruise ship that was in a harbor. And my mom, who was driving at the time, said, "Oh, Aaron, can you see if there's anyone on there?" And I immediately responded like, "Well, there's probably at least one person." Obviously, that's not what she meant. But that was my technical best guess. It's like, yes, there probably are people on there, even though I couldn't see anybody on the decks or in the rooms. There's probably a maintenance guy. Felt kind of bad.
SARAH: You can't technically exclude that there are, in fact, no people.
AARON: Then I corrected myself. But I guess I've been trained into giving that as my first reaction.
SARAH: Yeah, I love that. I think it's a waste of words, but I find it delightful.
AARON: It does go too far. People should be more confident. I wish that, at least sometimes, people would say, "Epistemic status: want to bet?" or "I am definitely right about this." Too rarely do we hear, "I'm actually pretty confident here."
SARAH: Another thing is, people are too liberal with using probabilities. The meaning of saying there is an X percent chance of something happening is getting watered down by people constantly saying things like, "I would put 30% on this claim." Obviously, there's no rigorous method that's gone into determining why it's 30 and not 35. That's a problem and people shouldn't do that. But I kind of love it.
AARON: I can defend that. People are saying upfront, "This is my best guess, but there's no rigorous methodology." People should take their word for that. In some parts of society, giving a numeric probability is seen as implying that it came from a rigorous model. But if you say, "This is my best guess, but it's not formed from anything," people should take your word for that and not refuse to accept it at face value.
SARAH: But why do you have to put a number on it?
AARON: It depends on what you're talking about. Sometimes probabilities are relevant, and if you don't use numbers, it's easy to misinterpret. People would say, "It seems quite likely," but what does that mean? One person might think "quite likely" means 70%, while another person thinks it means 30%. Even though it's weird to use a single number, it's less confusing.
SARAH: To be fair, I get that. I've disagreed with people about what the word "unlikely" means, and someone's pulled out a scale that the government or intelligence services use to determine what "unlikely" means. But everyone interprets those words differently. I see what you're saying. But then again, I think people in AI safety talking about P(doom) was making people take us less seriously, especially because people's probabilities are so vibey.
AARON: Some people's are, but I take Paul Christiano's word seriously.
SARAH: Heâs a 50/50 kind of guy.
AARON: Yeah, I take that pretty seriously. Obviously, it's not as simple as him having a perfect understanding of the world, even after another 10,000 hours of investigation. But it's definitely not just vibes, either.
SARAH: No, I came off wrong there. I don't mean that everyone's understanding is just vibes.
AARON: Yeah.
SARAH: If you were looking at it from the outside, it would be really difficult to distinguish between the ones that are vibes and the ones that are rigorous, unless you carefully parsed all of it and evaluated everyone's background, or looked at the model yourself. If you're one step removed, it looks like people are just spitting out random, arbitrary numbers everywhere.
AARON: Yeah. There's also the question of whether P(doom) is too weird or silly, or if it could be easily dismissed as such.
SARAH: Exactly. The moment anyone unfamiliar with this discussion sees it, they're almost definitely going to dismiss it. They won't see it as something they need to engage with.
AARON: That's a very fair point. Aside from the social aspect, it's also a large oversimplification. There's a spectrum of outcomes that we lump into doom and not-doom. While this binary approach can be useful at times, it's probably overdone.
SARAH: Yeah, because when some people say doom, they mean everyone dies, while others mean everyone dies plus everything is terrible. And no one specifies what they mean. It is silly. But I also find it kind of funny and I kind of love it.
AARON: I'm glad there's something like that. So it's not perfect. The more straightforward thing would be to say P(existential risk from AI comes to pass). That's the long version, whatever.
SARAH: If I was in charge, I would probably make people stop using P(doom). I think it's better to say it the long way around. But obviously I'm not in charge. And I think it's funny and kind of cute, so I'll keep using it.
AARON: Maybe I'm willing to go along and try to start a new norm. Not spend my whole life on it, but say, I think this is bad for X, Y, and Z reasons. I'll use this other phrase instead and clarify when people ask.
SARAH: You're going to need Twitter Premium because you're going to need a lot more characters.
AARON: I think there's a shorthand, which is like P(x-risk) or P(AI x-risk).
SARAH: Maybe it's just the word doom that's a bit stupid.
AARON: Yeah, that's a term out of the Bay Area rationalists.
SARAH: But then I also think it kind of makes the whole thing seem less serious. People should be indignant to hear that this meme is being used to trade probabilities about the likelihood that they're going to die and their families are going to die. This has been an in-joke in this weird niche circle for years and they didn't know about it. I'm not saying that in a way to morally condemn people, but if you explain this to people… People just go to dinner parties in Silicon Valley and talk about this weird meme thing, and what they really mean is the odds that everyone's going to prematurely die. People should be outraged by that, I think.
AARON: I disagree that it's a joke. It is a funny phrase, but people really do stand by their beliefs.
SARAH: No, I totally agree with that part. I'm not saying that people are not being serious when they give their numbers, but I feel like there's something. I don't know how to put this in words. There's something outrageous about the fact that, for outsiders, this conversation has been happening for years and people have been using this tongue-in-cheek phrase to describe it, and 99.9% of people don't know that's happening. I'm not articulating this very well.
AARON: I see what you're saying. I don't actually think it's like. I don't know, a lot of jargon.
SARAH: But when I first found out about this, I was outraged.
AARON: I honestly just don't share that intuition. But that's really good.
SARAH: No, I don't know how to describe this.
AARON: I think I was just a little bit indignant, perhaps.
SARAH: Yeah, I was indignant about it. I was like, you guys have been at social events making small talk by discussing the probability of human extinction all this time, and I didn't even know. I was like, oh, that's really messed up, guys.
AARON: I feel like I'm standing by the rationalists here, because it was always out in the open. No one was stopping you from going on LessWrong or whatever. It wasn't behind closed doors.
SARAH: Yeah, but no one ever told me about it.
AARON: Yeah, that's like a failure of outreach, I suppose.
SARAH: Yeah. I think maybe the people that I'm mad at are the people who are actually working on capabilities and using this kind of jargon. Maybe I'm mad at those people. They're fine.
AARON: Do we have more questions? I think we might have more questions. We have one more. Okay, sorry, but keep going.
SARAH: No, I'm going to stop making that point now because I don't really know what I'm trying to say and I don't want to be controversial.
AARON: Controversy is good for views. Not necessarily for you. No, thank you for that. Yes, that was a good point. I think it was. Maybe it was wrong. I think it seems right.
SARAH: It was probably wrong.
Shrimp Welfare: A Serious Discussion
AARON: I don't know what she thinks about shrimp welfare. Oh, yeah. I think it's a general question, but let's start with that. What do you think about shrimp? Well, today.
SARAH: Okay. Is this an actual cause area or is this a joke about how if you extrapolate utilitarianism to its natural conclusion, you would really care about shrimp?
AARON: No, there's a charity called the Shrimp Welfare Initiative or Project. I think it's Shrimp Welfare Project. I can actually have a rant here about how it's a meme that people find amusing. It is a serious thing, but I think people like the meme more than they're willing to transfer their donations in light of it. This is kind of wrong and at least distasteful.
No, but there's an actual thing. If you Google "Shrimp Welfare Project," it's definitely a thing, but it's only a couple of years old. And it's also kind of a meme because it does work in both ways. It sort of shows how we're weird, but in the sense that we are willing to care about things that are very different from us. Not like we're threatening other people. That's not a good description.
SARAH: Is the extreme version of this position that we should put more resources into improving the lives of shrimp than into improving the lives of people just because there are so many more shrimp? Are there people that actually believe that?
AARON: Well, I believe some version of that, but it really depends on who the "we" is there.
SARAH: Should humanity be putting more resources?
AARON: No one believes that as far as I know.
SARAH: Okay. Right. So what is the most extreme manifestation of the shrimp welfare position?
AARON: Well, I feel like my position is kind of extreme, and I'm happy to discuss it. It's easier than speculating about what the more extreme ones are. I don't think any of them are that extreme, I guess, from my perspective, because I think I'm right.
SARAH: Okay, so what do you believe?
AARON: I think that for most people who have already decided to donate, say, $20, if they are considering where to donate it, it would be morally better if they gave it to the Shrimp Welfare Project than if they gave it to any of the commonly cited EA organizations.
SARAH: Malaria nets or whatever.
AARON: Yes. I think $20 of malaria nets versus $20 of shrimp. I can easily imagine a world where it would go the other way. But given the actual situation, the $20 of shrimp is much better.
SARAH: Okay. Is it just purely because there's just more shrimp? How do we know how much shrimp suffering there is in the world?
AARON: No, this is an excellent question. The numbers are a key factor, but no, it's not that simple. I definitely don't think one shrimp is worth one human.
SARAH: I'm assuming that it's based on the fact that there are so many more shrimp than there are people. I don't know how many shrimp there are.
AARON: Yeah, that's important, but at some level, it's just the margin. What I think is that when you're donating money, you should give to wherever it does the most good, whatever that means, whatever you think that means. But let's just leave it at that. The most good is morally best at the margin, which means you're not deciding how the world should expend its trillion-dollar wealth. All you're doing is adding $20 at the current level, given the actual world. And so part of it is what you just said, and also some new research from Rethink Priorities. Measuring suffering in reasonable ranges is extremely hard to do, but I believe it's difficult to do a better job than Rethink Priorities has done on that, given what I've seen. I can provide some links. There are a few things to consider here: the numbers, times the enormity of the suffering. I think there are a couple of key elements, including tractability.
Are you familiar with the three-pronged concept people sometimes discuss, which encompasses importance, tractability, and neglectedness?
SARAH: Okay.
AARON: Importance is essentially what we just mentioned. Huge numbers and plausible amounts of suffering. When you try to do the comparison, it seems like they're a significant concern. Tractability is another factor. I think the best estimates suggest that a one-dollar donation could save around 10,000 shrimp from a very painful death.
SARAH: In that sense…
AARON: You could imagine that even if there were a hundred times more shrimp than there actually are, we have direct control over how they live and die because we're farming them. The industry is not dominated by wealthy players in the United States. Many individual farmers in developing nations, if educated and provided with a more humane way of killing the shrimp, would use it. There's a lot of potential for improvement here. This is partly due to the last prong, neglectedness, which is really my focus.
SARAH: You're saying no one cares about the shrimp.
AARON: I'm frustrated that it's not taken seriously enough. One of the reasons why the marginal cost-effectiveness is so high is because large amounts of money are donated to well-known organizations, but individual donors often overlook this. They ignore their marginal impact. If you want to see even a 1% shift towards shrimp welfare, the thing to do is to donate to shrimp welfare. Not donate $19 to human welfare and one dollar to shrimp welfare, which is perhaps what they think the overall portfolio should be.
SARAH: Interesting. I don't have a good reason why you're wrong. It seems like you're probably right.
AARON: Let me put the website in the chat. This isn't a fair comparison since it's something I know more about.
SARAH: Okay.
AARON: On the topic of obesity, neither of us were more informed than the other. But I could have just made stuff up or said something logically fallacious.
SARAH: You could have told me that there were like 50 times the number of shrimp in the world than there really are. And I would have been like, sure, seems right.
AARON: Yeah. And I don't know, if I were in your position, I would say, "Oh, yeah, that sounds right." But maybe there are other people who have looked into this way more than me that disagree, and I can get into why I think it's less true than you'd expect in some sense.
SARAH: I just wonder if there's, like. This is like a deeply non-EA thing to say, so I don't know, maybe I shouldn't say it, but are there not any moral reasons? Is there not any good moral philosophy behind just caring more about your own species than other species? Sorry, but that's probably not right, is it? There's probably no way to actually morally justify that, but it feels intuitively wrong. If you've got $20 to be donating, 19 of them to shrimp and one to children with malaria, it feels like there should be something wrong with that, but I can't tell you what it is.
AARON: Yeah, no, there is something wrong, which is that you should donate all 20, because you're acting on the margin, for one thing. I do think that doesn't check out morally, but I think basically me and everybody I know in real life or whatever, we do just care way more about humans. I don't know, it's hard to formalize or specify what you mean by caring about something. But, yeah, I think you can definitely basically just be a normal human who cares a lot about other humans, and still that's not negated by changing your $20 donation or whatever. Especially because there's nothing else that I do for shrimp. I think you should be a kind person or something. I'm an honest person, I think. Yeah, people should be nice to other humans. I mean, you should be nice in the sense of not beating them. But if you see a pigeon on the street, you don't need to say hi or whatever, give it a pet, because. I don't know. But yeah, you should be basically, like, nice.
SARAH: You don't stop to say hi to every pigeon that you see on the way to anywhere.
AARON: I do, but I know most normal people don't.
SARAH: This is why I'm so late to everything, because I have to do it. I have to stop for every single one. No exceptions.
AARON: Yeah. Or how I think about it is sort of like a little bit of compartmentalization, which is just sort of a way to function normally and also sort of do what you think really checks out at the end of the day. Just like, okay, 99% of the time I'm going to just be a normal person who doesn't care about shrimp. Maybe I'll refrain from eating them. But actually, even that, I could totally see a person just still eating them and then doing this. But then during the 1% of the time when you're deciding how to give money away, and the beneficiaries are going to be totally out of sight either way, this is like a neutral point, I guess, but it's still worth saying, yeah, then you can be a hardcore effective altruist or whatever and give your money to the shrimp people.
SARAH: Do you have this set up as like a recurring donation?
AARON: Oh, no. Everybody should call me out as a hypocrite because I havenât donated much money, but Iâm trying to figure out actually, given that I havenât had a stable income ever. And maybe, hopefully I will soon, actually. But even then, itâs still a part-time thing. I havenât been able to do sort of standard 10% or more thing, and Iâm trying to figure out what the best thing to do or how to balance, I guess, not luxury, not like consumption on things that I⊠Well, to some extent, yeah. Maybe Iâm just selfish by sometimes getting an Uber. Thatâs totally true. I think Iâm just a hypocrite in that respect. But mostly I think the trade-off is between saving, investing, and giving. Beast of the money that I have saved up and past things. So this is all sort of a defense of why I donât have a recurring donation going on.
SARAH: Iâm not asking you to defend yourself because I do not do that either.
AARON: I think if I was making enough money that I could give away $10,000 a year and plan on doing that indefinitely, I would be unlikely to set up a recurring donation. What I would really want to do is once or twice a year, really try to prioritize deciding on how to give it away rather than making it the default. This has a real cost for charities. If you set up a recurring donation, they have more certainty in some sense of their future cash flow. But thatâs only good to do if youâre really confident that youâre going to want to keep giving there in the future. I could learn new information that says something else is better. So I donât think I would do that.
SARAH: Now Iâm just thinking about how many shrimp did you say it was per dollar?
AARON: Donât quote me. I didnât say an actual thing.
SARAH: It was like some big number, right? Because I just feel like that's such a brainworm. Imagine if you let that actually get into your head, and then every time you spend some unnecessary amount of money on something you don't really need, you think about how many shrimp you just killed by getting an Uber or buying lunch out. That is so stressful. I think I'm going to try not to think about that.
AARON: I don't mean to belittle this. This is like a core, new-to-EA type of thinking. It's super natural, and also troubling when you first come upon it. Do you want me to talk about how I, or other people, deal with that or take action on it?
SARAH: Yeah, tell me how to get the shrimp off my conscience.
AARON: Well, for one thing, you don't want to totally do that. But I think the main thing is that the salience of things like this just decreases over time. I would be very surprised if, even if you're still very engaged in EA-adjacent communities or EA itself in five years, it would be as emotionally potent. Brains make things less important over time. But I think the thing to do is basically to compartmentalize, in a sort of weird sense. Decide how much you're willing to donate. And it might be hard to do that, but that is sort of a process. Then you have that chunk of money and you try to give it away the best you can under whatever you think the best ethics are. But then on the daily, you have this other, separate pot of money. You just are a normal person. You spend it as you wish. You don't think about it; you try not to. And maybe if you notice that you might even have leftover money, then you can donate the rest of it. But I really do think picking how much to give should sort of be its own project. And then you have a pile of money you can be a hardcore EA about.
SARAH: So you pick a cut off point and then you donât agonize over anything over and above that.
AARON: Yeah. And then, I mean, the hard part is that if somebody says their cutoff point is, like, 1% of their income and they're making, like, $200,000, I don't know, maybe their cutoff point should be higher. So there is a debate. It depends on that person's specific situation. Maybe if they have a kid or some super expensive disease, it's a different story. If you're just a random guy making $200,000, I think you should give more.
SARAH: Maybe you should be giving away enough to feel the pinch. Well, not even that. I don't think I'm going to do that. This is something that I do actually want to do at some point, but I need to think about it more and maybe get a better job.
AARON: Another thing is, if you're aiming to earn to give as a path to impact, you could think and strive pretty hard. Maybe talk to people and choose your education or professional development opportunities carefully to see if you can get a better-paying job. That's just much more important than changing how much you give from 10% to 11% or something. You should have this macro-level optimization: how can I have more money to spend? It depends what life stage you're at, but if you had just graduated college, or maybe say you're a junior in college or something, it could make sense to spend a good amount of time figuring out what that path might look like.
AARON: I'm a huge hypocrite because I definitely haven't done all this nearly as much as I should, but I still endorse it.
SARAH: Yeah, I think it's fine to say what you endorse doing in an ideal world, even if you're not doing that. That's fine.
AARON: For anybody listening: I tweeted a while ago asking if anyone has resources on how to think about giving away wealth. I'm not very wealthy, but I have some amount of savings. It's more than I really need. At the same time, maybe I should be investing it, because EA orgs don't feel like they can, or think they can't, invest it, because there's potentially a lot of blowback if they make poor investments, even though it would be higher expected value.
There's also the question of: having some amount of savings allows me to take potentially somewhat higher-risk but higher-value opportunities, because I have a cushion. But I'm very confused about what I should do here. People should DM me on Twitter or anywhere if they have ideas.
SARAH: I think you should calculate how much you need to cover your very basic needs. Maybe you should work out, say, if you were working 40 hours a week in a minimum wage job, like how much would you make then? And then you should keep that for yourself. And then the rest should definitely all go to the shrimp. Every single penny. All of it.
AARON: This is pretty plausible. Just to make it more complicated, there's also the thing that my estimates, or my best guesses, of the best charities to give to have changed over time. And so there are two competing forces. One is that I might get wiser and more knowledgeable as time goes on. The other one is that, in general, giving now is better than giving later, all else equal, for a couple of reasons, the main one just being that the charities don't know that you're going to give later.
AARON: So they can plan for the future much better if they get money now. And also there are just higher-leverage opportunities, or higher value-per-dollar opportunities, now, in general, than there will be later, for a couple of reasons I don't really need to get into. This is what makes it really complicated. So I've donated in the past to places that I don't think, or didn't think even at the time, were the best to give to. So then there's a question of, okay, how long do I save this money? Do I sit on it for months until I'm pretty confident? Like a year?
AARON: I do think that, probably over the course of zero to five years or something, becoming more confident or changing your mind is the stronger effect compared to how much better it is for the charities to get the money now instead of later. But also, that's weird, because you're never committing at all. Sometimes you might decide to give it away, and maybe you won't. Maybe at that time you're like, "Oh, that's what I want. A car, a house, whatever." It's less salient or something. Maybe something bad happened with EA and you no longer identify that way. Yeah, there are a lot of really thorny considerations. Sorry, I'm talking way too much.
SARAH: Hang on, are you factoring AI timelines into this?
AARON: That makes it even more sketchy. But that could also go both ways. On one hand, if you don't give away your money now and you die with it, it's never going to do any good. On the other hand, it might be that especially high-leverage opportunities come up in the future. Potentially, I don't know, I could make something up: OpenPhil needs as much money as it can get to do X, Y, and Z. It's really important right now, but I won't know that until a few years down the line. So, just like everything else, it doesn't neatly wash out.
SARAH: What do you think the AGI is going to do to the shrimp? I reckon it's probably pretty neat, like one shrimp per paperclip. Maybe you could get more. I wonder what the shrimp-to-paperclip conversion rate is.
AARON: Has anyone looked into that morally? I think, like, one to zero. I don't think in terms of money. You could definitely price that. I have no idea.
SARAH: I don't know. Maybe I'm not taking this as seriously as I should be, because I'm…
AARON: No, I mean, humor is good. When people are giving away money or deciding what to do, they should be serious. But joking and humor is good. Sorry, go ahead.
SARAH: No, you go ahead.
AARON: I had a half-baked idea. At EA Global, they should have a comedy show where people roast everybody, but it's a fundraiser. You have to pay to attend, and they have a bidding contest for 100 people to get into the comedy show. That was my original idea. Or they could just have a normal comedy show. I think that'd be cool.
SARAH: Actually, I think that's a good idea, because you guys are funny. There is a lot of wit on this side of Twitter. I'm impressed.
AARON: I agree.
SARAH: So I think thatâs a very good idea.
AARON: Okay. Dear Events team: hire Aaron Bergman, professional comedian.
SARAH: You can just give them your Twitter as a source for how funny you are, and that clearly qualifies you to set this up. I love it.
AARON: This is not important or related to anything, but I used to be a good juggler, for entertainment purposes. I have this video. Maybe I should make sure the world can see it. It's like a talent show. So maybe I can do that instead.
SARAH: Juggling. You definitely should make sure the world has access to this footage.
AARON: It had more views than I expected. It wasn't five views. It was 90 or something, which is still nothing.
SARAH: I can tell you a secret right now if you want. That relates to Max asking in the chat about Glee.
AARON: Yes.
SARAH: This bit will also have to be edited out, but me having a public meltdown over AI was the second time that I've ever blown up on the Internet. The first time being… I can't believe I'm telling you this. I think I'm delirious right now. Were you ever in any fandoms as a teenager?
AARON: No.
SARAH: Okay. Were you ever on Tumblr?
AARON: No. I sort of know what the cultural vibes were. I sort of know what you're referring to. There are people who like Harry Potter stuff and bands, like K-pop, stuff like that.
SARAH: So people would make these fan videos where they'd take clips from TV shows and then edit them together to music. Sometimes people would edit the clips to make it look like something had happened in the plot of the show that hadn't actually happened. For example, say, what if X character had died? And then you edit the clips together to try and make it look like they've died, and you put a sad song, "How to Save a Life" by The Fray or something, over the top. And then you put it on YouTube.
AARON: Sorry, tell me what I should search, or just send the link here.
SARAH: Oh, no, this doesn't exist anymore. It does not exist anymore, right? So, say you're, like, eleven or twelve years old and you do this, and you don't even have a mechanism to download videos because you don't know how to do technology. Instead, you take your little iPod touch and you just play a YouTube video on your screen, and you literally just film the screen with your iPod touch, and that's how you're getting the clips. It's kind of shaky because you're holding the camera anyway.
SARAH: Then you edit it together in the iMovie app on your iPod touch, and then you put it on the Internet, and then you just forget about it. You forget about it. Two years later, you're like, oh, I wonder what happened to that YouTube account? And you log in, and this little video that you've made, with edited clips that you've filmed off the screen of your laptop, set to "How to Save a Life" by The Fray, with clips from Glee in it, has nearly half a million views.
AARON: Nice. Love it.
SARAH: Embarrassing, because this is, like, two years later. And then all the comments were like, oh my God, this was so moving. This made me cry. And then obviously, some of them were hating and being like, do you not even know how to download video clips? What? And then you're so embarrassed.
AARON: I could totally see it. Creative, but also a reasonable solution. Yeah.
SARAH: So that's my story of how I went viral when I was, like, twelve.
AARON: It must have been kind of overwhelming.
SARAH: Yeah, it was a bit. And you can tell that, my time, it's like twenty to eleven at night, and now I'm starting to really go off on one and talk about weird things.
AARON: It's been like an hour. So, yeah, we can wrap up. And I always say this, but it's actually true: there's a low standard, like, low stakes or low threshold, a low bar, for doing this again and recording some other time.
SARAH: Yeah, probably. We'll have to get rid of the part about how I went viral on YouTube when I was twelve. I'll sleep on that.
AARON: Don't worry. I'll send the transcription at some point soon.
SARAH: Yeah, cool.
AARON: Okay, lovely. Thank you for staying up late into the night for this.
SARAH: It's not that late into the night. I'm just, like, lame and go to bed early.
AARON: Okay, cool. Yeah, I know. Yeah, for sure. All right, bye.
Get full access to Aaron's Blog at www.aaronbergman.net/subscribe
* Follow Max on Twitter
* And read his blog
* Listen here or on Spotify or Apple Podcasts
* RIP Google Podcasts 🪦🪦🪦
Summary
In this philosophical and reflective episode, hosts Aaron and Max engage in a profound debate over the nature of consciousness, moral realism, and subjective experience. Max, a skeptic of moral realism, challenges Aaron on the objective moral distinction between worlds with varying levels of suffering. They ponder the hard problem of consciousness, discussing the possibility of philosophical zombies and whether computations could account for consciousness. As they delve into the implications of AI on moral frameworks, their conversation extends to the origins of normativity and the nonexistence of free will.
The tone shifts as they discuss practical advice for running an Effective Altruism group, emphasizing the importance of co-organizers and the balance between being hospitable and maintaining normalcy. They exchange views on the potential risks and benefits of being open in community building and the value of transparency and honest feedback.
Transitioning to lighter topics, Max and Aaron share their experiences with social media, the impact of Twitter on communication, and the humorous side of office gossip. They also touch on the role of anonymity in online discussions, pondering its significance against the backdrop of the Effective Altruism community.
As the episode draws to a close, they explore the consequences of public online behavior for employment and personal life, sharing anecdotes and contemplating the broader implications of engaging in sensitive discourses. Despite their digressions into various topics, the duo manages to weave a coherent narrative of their musings, leaving listeners with much to reflect upon.
Transcript
AARON: Without any ado whatsoever. Max Alexander and I discuss a bunch of philosophy things and more.
MAX: I don't think moral realism is true or something.
AARON: Okay, yeah, we can debate this.
MAX: That's actually an issue then, because if it's just the case that utilitarianism, as an axiology, is true or something, then whether or not I'm bothered by it or would make certain trade-offs personally doesn't actually matter. But if you had the godlike AI, and I need to give it my axiological system or something, and there's not an objective one, then this becomes more of a problem, that you keep running into these issues or something.
AARON: Okay, yeah, let's debate. Because you think I'm really wrong about this, and I think you're wrong, but I think your position is more plausible than you think. My position is probably... I'm at like 70% that some version of moral realism is true. And I think you're at, like, what? Tell me. Like, I don't know, 90 or something.
MAX: I was going to say probably 99% or something. I've yet to hear a thing that's plausible here.
AARON: Okay, well, here, let's figure it out once and for all. So you can press a button that doesn't do anything else. The only thing that happens is that it creates somebody in the world who's experiencing bad pain. There's no other effect in the world. And then you have to order these two worlds. There's no normativity involved. You only have to order them according to how good they are. This is my intuition pump. This isn't a formal argument. This is my intuition pump that says, okay, the one without that suffering person, and no other changes, is better. Subjectively... not subjectively. There's a fact of the matter as to which one is better. I mean, I feel like morally better and better here just are synonyms. All things considered better, morally better, whatever. Do you have a response, or do you just want to say, like, no, give me a formal argument?
MAX: What makes this fact of the matter the case or something like that?
AARON: Okay, I need to get into my headspace where I've done this or had this debate before. I do know. I'll defer to Sharon Hewitt Rawlette, a not-too-long-ago 80,000 Hours podcast guest who basically made the case for hedonic moral realism: hedonic value being the one thing that intrinsically matters, and a moral realist view based on that. And I basically disagree with her. Okay. It's like settling in now. Yeah. So it is just the fact of the matter that pleasure is morally good. And if you say that's not true, then you're wrong. And pain is bad. And if you say that that's not true, you're just wrong. That's kind of the argument. That's it. And then I can build on top of it: where do you get the ordering of the world from? But that's the core of the argument here.
MAX: Yeah. I think you need an explanation for why this is the fact of the matter or something.
AARON: Okay. I mean, do I need an explanation for why one equals one or something like that? Do you need an explanation?
MAX: Yes, I think yes. Really? Because we take this to be the case or something, but the symbols one plus one equals two or something is like by itself not true or something. It's like just a bunch of lines, really, or something. Like there's all these axioms and things we build on with the mathematical system and you could do other ones. There are like a bunch of other systems.
AARON: I guess if you're a true epistemological nihilist and you think there are no statements that are true, then I'm probably not going to convince you. Is that the case for you?
MAX: I don't think it's the case that there aren't things that are true or something.
AARON: Do you think there's anything that is true? Can you give me an example? Is there a bed behind you?
MAX: I'll say yes, but that's probably couched. You could probably, I think, represent the universe or something as a bunch of a matrix of atom positions or subatomic particle positions and these things, and maybe the rules that govern the equations that govern how they interact or something.
AARON: Yeah, I agree.
MAX: Make claims that are truth valuable based on that matrix or something. And then you could be like, we can then draw fuzzy concepts around certain things in this matrix and then say more true and true things or whatever.
AARON: So I think our disagreement here is that maybe, I don't know if it's a disagreement, but the hard problem of consciousness introduces the fact that that description of the world is just not complete. You have subjective experience also. Are you a phenomenal realist?
MAX: What's the definition of that again?
AARON: So you think qualia is like a real legit thing?
MAX: Is qualia just experience or something?
AARON: I'm sorry, I feel like I'm assuming that you know every single term that has ever been used in philosophy. The thing that I want to say is: qualia is, like... yeah, I'll say it's subjective experience, basically. But then people will say, like, oh, qualia exists, but not in the sense that people normally think. And I want to use the strong version of qualia that people argue about. It's real. It's genuine. There's no illusion going on. There's no such thing as functional qualia. If there's functional pain, it is a different thing than what people mean when they say pain. Most of the time, when most people say pain, they mean there's a real, genuine, legit subjective experience going on. Do you think this thing is real?
MAX: I would say yes, but it does seem like what I would say subjective experiences or something is like a type of computation or something.
AARON: I actually lean towards functionalism. Are you familiar with functionalism and theory of philosophy of mind or whatever?
MAX: Yeah. Wouldn't that just be the. I think that's what I said. Right. It's just computations or whatever.
AARON: So I'm actually not super sure about this. So I apologize to the philosophical community if I'm getting this wrong, but my sense is that when people say functionalism, sometimes they mean that, as in not exactly empirical, but sort of empirical fact of the world is that if you have some computations, you get Qualia, and other people mean that they are just identical. They are just the same thing. There's no, like, oh, you get one and you get the other. They're just the same thing. And I think to say that computations are identical just mean the same thing as qualia is just not true, because it's at least conceivable that. Tell me if you disagree with this. I claim that it's at least conceivable that you have computers that do some sort of computation that you hypothesize might be conscious, but they are not, in fact, conscious. And I think this is conceivable.
MAX: Yeah, I agree. Though I would say when it is the case, something, it doesn't seem that means there's something besides just computations going on, or, like, the specific type of computations, like, really, what you said there is. It's conceivable you could have computations that.
AARON: Look like we want. Okay, yeah, sorry. You're right. Actually, what I mean is that. Sorry, not what I mean, but what I should have said is that it is conceivable that functionalism is false. Meaning that you can get. Meaning that you can have two sets of systems doing computations, and one of them has qualia and the other one does not. One of them is conscious, the other one is not. Do you think this is conceivable?
MAX: Well, do you mean like identical computations or something? I think that'd be the necessary thing or something, because my computer is doing computations right now, but I don't think it's conscious. And I'm doing computations right now, and I do think I'm conscious.
AARON: So if you think that you could run your brain's program on an arbitrarily powerful large computer, do you think it's conceivable that that hypothetical computer would not have conscious experience?
MAX: I do, but I think this is like the real question. Or the reason I would say it's conceivable that's not having conscious experience is because I would think your simulation just isn't doing the right sort of thing. You think it is, but it's not. For whatever reason, carbon and atoms have interactions that we didn't realize, but they do.
AARON: Yeah, actually, this is a good point. I'm actually very sympathetic to this. When people say, I feel like functionalism and I forget what the term is, but substance based, at some point you're just going to get down to like, oh, no, it needs to be like, yeah, if you really want the quarks to be doing the exact same things, then you're just getting two identical physical systems. Physical. I don't like that word so much, but whatever, I'll use it. I think we just reinvented the zombies thing. Like, are zombies conceivable? And I claim yes.
MAX: I would think no, or something. Okay, I guess I don't know the P zombie example super well, but I would guess at a certain level of your ability to, like, if you knew all the physical things about the world, like all the physical things that were knowable, p zombies would not be possible for something like you'd be able to tell.
AARON: But are they conceivable? I think that's the crux. I think.
MAX: I think P zombies are, like, an epistemic problem or something, right? That's probably what I would say. It's a lack of knowledge or something. Like, if you knew all the relevant things, you would be able to tell.
AARON: Maybe. Yeah, I think the fact that it's an epistemic problem is, like, a demonstration of its conceivability. I don't think it's conceivable that there's, like... wait, what's a good example of something that's inconceivable? I feel like they're all very abstract, like a circular square or something. But I don't have a better example at hand. But one thing is, you don't know that I'm conscious. I guess you know that you're conscious, right? But if you just stipulate that there's another version of you that is, like... yeah, in another Everettian branch. Maybe that's not a good example. I don't know, I feel like it's just weird with physics. But make it as close as you could possibly get it to you, in whatever way; throw in all the stipulations you want. Do you think there's any chance it's not conscious, but you are?
MAX: I guess I'm pretty confident it would be conscious or something. Like, the Spock teleporter thing. Yeah, I think we're probably both conscious. Or I was wrong about me being conscious in the first place, I guess. Or there is a very small chance, probably not worth mentioning, but I'm already mentioning it, that I've lost my mind and the whole universe is just my brain making shit up or whatever, and so it's all just, like, I'm the only real thing.
AARON: Connecting this back to moral realism: yes, my original claim was that the matrix of quarks and positions and physical laws is just not a complete description of the universe. And I think that you want to say that it is, and that valence... so, like, valence is just encoded in that description somewhere.
MAX: Yeah. Or, like, the thing you're interested in is, like, because you have to zoom in or something. I don't know if you've heard of Spinoza's God or something, but he got excommunicated from Judaism for this.
AARON: I didn't even know you could get excommunicated.
MAX: I didn't know either, but he did, and they still aren't over it. Somebody wrote a letter to someone high up in whatever Jewish thing, for a documentary, and they were like, we'll never talk about Spinoza. So his claim is, like, God is the universe. So the totality of all physical interactions and matter is, like, the universe; he probably wouldn't put it that way, but that's his idea. And so if that's true, the rock over there is not conscious, but I am. And so it's like an emergent property of a smaller part of a system, consciousness, or something. It's not just, like, the universe being conscious. Yeah, I would say that it is the smaller... it's just a bunch of interactions, is what qualia is, or whatever.
AARON: Yeah. I don't think, unfortunately, that we're going to solve this here. I actually do now know what we disagree on more fundamentally. I don't know what the appropriate term is, but I think that you're wrong about valence just being, I guess, logically implied by the whole mathematical thing, like all the quarks and physical laws. You think it's, like, a logical necessity that falls out of that, that valence is real and happens, in fact.
MAX: I don't know if logical. I don't know enough logic, I guess, to say this is the case, but it is like given the physical rules governing our universe and all the.
AARON: Oh, no. You know, but they're not given. That was a part of the description.
MAX: What do you mean?
AARON: The description of how quarks interact, or, like, physical laws or whatever, is part of the model that I'm referring to. And I think you want to say that given this whole picture of the physical world, and I'll just use "physical" to mean what most people think of as physical, the thing that science is concerned with, things that are subject to observation and measurable causality, I guess you think that it's like, okay, we have a whole picture of the universe under this view, and then it just is definitely the case, given all this, that I know that you're sentient, I'm sentient. Both sentience and valence, wherever they are happening, are just directly implied by this whole description of the universe or whatever.
MAX: Yeah. Or like, insofar as it does exist or something.
AARON: Yeah. Okay. Unfortunately, I don't think we're going to resolve this debate, but I do think this is like the crux of disagreement.
MAX: I think probably that sort of approach that I take you to be taking is probably the most convincing way one could go about doing it.
AARON: Yeah, I'm not convinced of it, but.
MAX: I do think this is the case.
AARON: Can I get you to 2%?
MAX: Because I could give you this example, right? And what you're saying maybe gets you out of it, or can get you more out of it than someone else would be able to. And so, like, a typical way you might go about saying moral realism is true or something: there's this book, or it's a series of lectures that turned into a book, by Christine Korsgaard, I think is her name. She's, like, a known...
AARON: I don't believe she's a...
MAX: She is called Kantian. I don't know if she's literally a...
AARON: Okay.
MAX: But whatever. The idea is, sort of, the way you get normativity, which is the thing that tells you... you can imagine the space of possible ethical systems or something. There's a bunch. One tells you to stab everyone; we probably shouldn't do that. And the way you get to pick which one is normativity, which tells you what you ought to do. And the way you get normativity is, like, it follows from human reason or something like this, right? We have this natural reason, whatever Kant called it; he used a different term, I think. And you can make these arguments, and it follows from this, right? And then the question is, well, what if there are aliens, right? And then their reason leads them to pick a different ethical theory out of the available ones they could have picked. And these two ethical theories clash with each other; they say to do different things. Who gets to win that clash or something? You need now a meta-normative theory or something. And I think what your response would be is, like, oh, the answer is in qualia or whatever, or something.
AARON: No, I don't think normativity is. I'm actually not super sure about this, but I think, yeah, when I say moral realism, I mean, as far as I know, I'm, like, the only person who espouses this. I'm sure that's not true. There's definitely, like, 10,000 phds, but I don't know who they are. But my claim is that objective ordering of worlds exists, normativity per se. Not necessarily.
MAX: I would say that. Okay, it's not clear what the difference is here or something, because earlier I kind of claimed that ethical theories are just isomorphic to orderings of possible worlds or something.
AARON: More formally, the claim "it would be good if X happened," or even "it would be good if you did X," sometimes has an objective truth value, true or false. But the claim "you should do X"... I don't know. I'm actually pretty divided. Or not, I don't know. But it seems like there's an extra thing being added there when you go from "it would objectively be good if you did X" to "you should objectively do X." And I'm kind of not willing to make that jump.
MAX: I guess the problem is you have to or something at a certain point.
AARON: Or it's like.
MAX: That's fair. But what do you get out of being like, there's this objective ordering or something, but it's not the case that you have to do anything with it? It's just, like, I have this thing here.
AARON: Oh, no, actually, yeah, this actually makes a lot of sense, I guess. I haven't thought about that much. Yeah, you might think, okay, if you're a moral realist, that includes normativity, but you don't think there's any form of divine punishment, then maybe just we're describing the same thing and we're just using different words or something like that. Because. Yeah. It's like, okay, I don't know. At the end of the day, no one's going to make you do it. There's no punishment. You really should or something. And I want to claim you really should. Well, it would be good if you did x. You should. Maybe you should, but I don't know if you objectively should or something. Yeah, maybe this is just the same thing.
MAX: Yeah. It's, like, less clear where we disagree then. Because I might be willing to say you have a preference function. Right. Maybe you could call it objective. It emerges objectively from the fact that there's an Aaron, that there's this betterness ordering of possible worlds based on the.
AARON: Things for me or in my judgment of the world. Okay. Yeah, sure.
MAX: And then I could say that. Right. And if that's what we're calling moral realism, sure. But probably what people really mean by moral realism is like, you have one of those two orderings, whichever one.
AARON: Oh, no, that's not what I mean by moral realism. That's a psychological fact that is very contingent. What I mean is that one of those is true. My claim is that, in fact, my ordering of worlds. Well, there's two claims. One of them is, like, the normative ethical position, which is like, oh, I think it's true. But the other one is that, okay, conditional on it being true, it is objectively true. I feel like I'm kind of lost in word salad now.
MAX: Yeah, I guess, what does "objectively true" even mean here or something, at this point?
AARON: Yeah. I don't know.
MAX: Just, like. I'm trying to remember my McDowell or whatever, and my Mackie. These are, like, people who wrote a bunch of articles against each other about this sort of objective, subjective thing.
AARON: Maybe we should find another thing to discuss, argue about. I don't know. Do you have any ideas? It can be something like, really not deep.
MAX: I mean, the other thing I'm almost certain is true is like, there's no free will or something.
AARON: Yeah. Nobody thinks that that's not true.
MAX: People really like free will. Even people who think free will is false like free will.
AARON: Oh, yeah. I mean, I'm kind of sympathetic to, what's the word for it. Yeah. I think that compatibilism has, like, some. No, hold on, hold on. Before you block me on Twitter: I don't think it's saying anything fundamental about the universe or, like, the nature of reality in the way that moral realism is. I do think it's, like, a useful semantic distinction to make. I don't know if useful is right, maybe something, like, slightly stronger than useful. But the sense in which, okay, I can choose to pick up my phone here, even though there's no libertarian free will, that is meaningfully different than somebody, like my dad, running over here and making me do that physically or something like that.
MAX: I think you would just call that, like, coerced or uncoerced or something. I mean, the reason I say that is.
AARON: Yeah, sure.
MAX: When you say free will, most people think libertarian free will. And that carries things with it that compatibilism doesn't or something. And people basically just.
AARON: Motte-and-bailey. Yes, people do this. I agree with this. I think nobody. Yeah, basically nobody. Compatibilism is like a fake philosopher thing. Not entirely fake, but, like, a basically fake philosopher thing.
MAX: Or it's used to justify things that shouldn't be justifiable or something like that.
AARON: Yeah. Although honestly, I kind of like it as a psychological crutch. So I'm just going to keep on doing that. Well, you can't stop me. Okay. Sorry.
MAX: That's true. I could if I put like a button in your brain or a tumor or whatever.
AARON: Yeah. But you can't do that, at least for now. I hope you'll choose not to if you do get the ability to do that. Yeah. Okay. Are there any other deep philosophy takes before, I don't know, say something else?
MAX: No, I think you can move it.
AARON: No. Okay. There's actually nothing particular on my mind. As usual, I did not prepare for this conversation. So sorry about that. Yeah. So do you want to tell me your life story or like something interesting about your life story that you want to discuss on a podcast? Not necessarily the whole thing, an arbitrary part, but is there. And one of these could be your experience in college, like running an EA group, for example.
MAX: I guess I could give advice to EA group organizers or something.
AARON: Yes, do that.
MAX: One is get co-organizers, because it's really difficult to do alone, especially if you're a certain type of person, which I am, which is like, I don't like sending emails. I can do it, kind of, but I'll put them off sometimes too much, and sometimes you're just like, I have other things to do, I can't make this presentation. And if you have to do literally all of it yourself, and all the tabling and these things, it can get demoralizing and just difficult to do. So if you're like, oh, I should do this high-impact thing, start an EA group or something: make sure you have other people with you, and that they're, like. Value-aligned is probably the wrong way to say it, but committed or something. Or committed enough to put in at least the amount of time you're putting in or something.
AARON: No, I totally agree with this, even though I feel like I'm a much less committed community builder. I did have my stint in college, and luckily I had a very good, type-A co-organizer who's also extroverted, unlike me. So this was very, extremely good.
MAX: Yeah, I think probably. So there's kind of this community sentiment now, maybe. Or, like, maybe where we are in the community, I should say. That people should be more normal. Why can't you just be normal, or something like this?
AARON: I'll represent the other position here.
MAX: Well, I'm going to say something like, I think that's kind of bad to some degree. I think what you should do is be hospitable or something, which maybe means isn't that much different. But I think, I think lots of weird things. I may even say lots of weird things a lot of the time and these sorts of things, but one can be nice and understanding and welcoming while doing this. And that probably does mean there are certain types of weird you can't do. Like if you're a group organizer, probably don't start a group house with your members or something. I guess though if you're in college, maybe it's slightly different because dorms are kind of like a weird space. And is it bad if a bunch of EA's live in the same dorm or friends live in the same room? I don't know. That's fine. But if you're a group organizer, probably be careful about flirting with your members or something. Don't do power imbalances, I guess is the thing, but I think it's okay.
AARON: To be like, don't do power. Kids. Do not do power imbalances. Okay. No, I agree with all this. Sorry, keep going.
MAX: And maybe sometimes this does mean you shouldn't always say what you think is the case. Probably at the first EA meeting you don't go like, I think utilitarianism is true and we should donate the US budget to shrimp or something.
AARON: I don't think that's true, actually. Only 90%.
MAX: Okay. Yeah, see, that's reasonable. But most people need room. And I think it is the case that part of letting people explore things and think for themselves is not weighing them down with what you think or something. And so you can push back when they think things, to help. Basically, you might think there are certain objectively true philosophy things. I don't necessarily think there's philosophy stuff, but you might take this to be the case. A good philosophy professor will not indoctrinate their students or something, or very strongly convince them of this thing. They'll give them the space to sort of think through things rather than just tell them the right answer or something. And good group organizing looks like this too, I think. Especially when you're onboarding new EAs or something.
AARON: Yeah, I largely agree with this. I do think this can actually. And I'm not sure if we. Tell me what your thoughts are on this. I feel like people are going to say, oh, Aaron's pro-indoctrination. I am not pro-indoctrination. I love me a good critic. Everybody should write to me and tell me what I'm really wrong about. But I actually think that just being kind of upfront about what you think is actually sort of. Sorry, let me back up. Sometimes people will go really hard on the don't-indoctrinate thing, and what that looks like is kind of refusing to let on what they think is the case on a given subject. And I think this is actually not. I don't want to say literally in all cases that I can possibly imagine, but basically, yeah, as a rule of thumb, no, you should be okay telling people what you think. And then if it's in fact the case that people disagree, especially in the EA context, in the EA community, say that too. But yeah, I actually think. This is a generalization that I'm not super confident in, but I think that community builders are a little too hesitant to just say, like, oh yeah, no, I think this is true. Here's why. These are some of the other plausible views. Tell me why I'm wrong.
MAX: Yeah, I think my guess would be that's maybe true in some sense, but there's, like, a sort of line you want to walk or something, and being on one side of the line could be very bad or lead to certain issues. I don't read the EA Forum that much. I'm a fake EA. But there was a "doing community building better" or something post. I might have the title wrong, but it seems like the takeaway of the post I'm thinking of is, like, epistemics are sometimes bad in community things or something.
AARON: Was this kind of recent ish?
MAX: Yeah, I think it's like people are way too into AI or something.
AARON: Okay. I think this is like a good faith criticism that's absolutely wrong. And I actually have a twitter thread that I'll like. Maybe I'll actually, I'll see if I can find it right now, but if I can, like 2 seconds. But yeah, wait, keep going.
MAX: Yeah, I mean, like, what I would say is there are failure modes that look like that or something, and you might want to avoid it for various reasons or. Yeah.
AARON: Yes. Also, empirically, I did get like one round of feedback that was like, oh, no, thanks, Aaron, for not being so coy. So maybe I'm going hard. I'm like, the fact that I got ten people or whatever, I think probably.
MAX: It's like, you'd expect it to be a dynamic system or something. Like, the amount of coyness you should have goes down as the number of sessions you've interacted with this person goes up, or something like that. There's not a catch-all for how to run a discussion group or something. It's kind of based on the people you have in your group or something. And so I think you do actually want more coyness towards the start. You want to be more normal, more. Not just saying what you think is the case, or something, for, like, the first session, to get people warmed up, kind of understand where their boundaries are or something like this. Boundaries probably isn't the right word, but how they are. Because it can be intimidating, I think, if the group organizer is like, this is what I think is the case, tell me what is wrong. You just might not want to say why it's wrong, because you're scared or something. You don't want them to dislike you for normal human reasons.
AARON: Yeah. No, to be clear, I very much buy into the thing. Do not say what. Just because you think an arbitrary proposition to be true doesn't mean that it is a good idea to say that. There's trivial examples here. If you go to something I can make up, I don't know. You don't like your aunt's dress at a family gathering. You think that's like a true statement, that it's ugly? No one's going to say, oh yeah, well, maybe you shouldn't lie. If she really asks you, we can argue about that. And I think I'll probably defend that. You probably shouldn't lie, but you should be as nice as possible or whatever, but you don't like it if she in fact asks you point blank. But no, you shouldn't just say, oh, let me raise my hand like, hi, Aunt Mary, your dress is ugly, or whatever. On the other hand, then we could get into, okay, if somebody asks you, okay, what do you think about wild animals or whatever? I don't know. Yeah, you should be as polite and normal sounding as possible. But even if you're not directly lying, you shouldn't just say things that are like people would naturally take to be like a conclusion that you don't believe in or whatever.
MAX: Yeah, I agree with this, though. Probably there is another thing here. This is not really what you're saying or something, but maybe advice I would give, or something, which I think people often get, is also: don't derail conversations or something. Even in the pursuit of true things. Like, if you're doing a discussion on animal welfare or something, and in passing you give, like, a 90-second spiel of framing. You're like, yes, and people also kind of care about wild animal welfare here. You could read Brian Tomasik or something. If anyone's interested, I can send you links. And someone asks later on, what are your thoughts here? Maybe don't talk about wild animal welfare in that context, because maybe you just derail it, because you have to be like, here's all this stuff or something. It's just more productive to be like, we'll talk about it later or something.
AARON: Yeah, okay. No, I agree. We're sort of debating vibes, right?
MAX: We're like debating.
AARON: I think we're like. No, sorry, I was using that sort of, like. Maybe facetiously is the right word. Like, sarcastically or whatever. Unlike in analytic philosophy, where we get to have propositions, and then you say p and I say not-p, and then we yell at each other. Unfortunately, this isn't as conducive to that. Although I do think, also, for the case of EA fellowships per se, and maybe the general case of group discussions, you also shouldn't let. There's, like, a thing where, at least I've noticed, and maybe this is just, like, n equals three or whatever, but there will be people who are well-meaning and everything. They just don't know what they're talking about, and they'll derail the conversation. And then you're afraid to rerail the conversation, because. I don't know, because that would be, like, indoctrination or something. And I think sometimes rerailing is just, like, the right thing to do.
MAX: Yeah. I mean, I haven't experienced it that much, I think. But I see no issues with that or something.
AARON: I think rerailing is also. Maybe this is like a general point. I feel like I'm talking from the position as the leader or whatever, but no, just even in the other side of things, I feel like in general, in college, I feel like that's the biggest thing I could think of, like that category, taking college classes or whatever. I would want to know whether the professor thinks that the thing I said is a good point or a bad point. Right. I feel like a lot of times they would just say good point no matter what, and then it's no evidence as to whether it's actually a good point or not. Well, I'm being slightly overstating this a little bit.
MAX: I think you're conflating good and true or something. Maybe.
AARON: Yeah, sure. But, yeah, I don't remember any specific examples that I can give, but all I'm trying to say is that it's not just from the perspective of somebody who's trying to get people to lead them to the right answer or whatever. No. I don't know. I feel like it's also, well, in one sense, if you really value your epistemics, it might be like, help. You might want people to give you more people who, in fact, more knowledgeable and have thought a lot about some subject to be more upfront about what they think, but also as sort of a matter of just like, respect is not exactly the right word. I don't think the professors were being disrespectful to me, but it's like if we're just friends talking, or it puts you more on an equal playing field in some way. If the one person who is ostensibly and in fact more knowledgeable and ostensibly in a position of de facto power or whatever, leading this discussion or whatever, it puts them on a pedestal. If they try to be super coy about what they think and then not give you legible feedback as to whether a point you said is plausible or not. You can imagine, I don't know, just making an argument that is just kind of terrible. And then, I don't know, I would kind of want the pressure to say no. That's being polite about it, probably, but say, like, no, bad or something.
MAX: Yeah. I mean, I guess there's something where maybe I do think professors should not just say everything's good or something. They should only say it when it's good or something, where good doesn't mean true, in my mind. Yeah. But I think I probably lean towards this, which is like: risk aversion in sort of community building and classroom building is valuable or something. I mean, like, risk-averse versus risk-neutral. Yeah, it's more risky to sort of do the things you're describing, and maybe it's, like, risk-neutral or something. The EV is positive, because you'll get lots of really engaged people as a result, but you'll also push people out or something. I think I tend to favor being risk-averse here. So maybe you aren't making as many, really. You're losing out on some value at one end, but you are including more people. And I sort of anticipate this on net being better or something, like producing better results, at least in community health sorts of senses.
AARON: Yeah, I actually think. I agree with that. I do think that people in social situations are just, like, in fact. I don't know if it's exactly a bias, formally, but people are just risk-averse. Because of the whole evo-psych thing. It's like, oh, back when, if you angered somebody in your tribe, they might kill you or whatever. But now it's like, okay, somebody leaves your club, it's fine. No, maybe it's not fine. Right. That is in fact a loss, potentially. Maybe it's not. But I think we sort of have to make ourselves adjust a little bit more in the pro-risk direction.
MAX: Yeah. Though I think probably if you take the project of EA seriously or something, there are probably good reasons to want various types of diversity and that sort of approach of, I guess being more risk averse is more likely to get you diversity or something.
AARON: Yes, good point. Excellent. Good job. Everything you said is true, in fact and good. Any other hot or cold takes, in fact, or medium or lukewarm takes?
MAX: Um, I mean, certainly. Right is the question.
AARON: I don't know. It doesn't have to be related to. The space here is, like, wider than you probably initially think it is. As long as it's not an infohazard, not illegal to say. Yeah, you can kind of bring up whatever. And not, like, singling out random people, and not mean. Not mean.
MAX: Like, global priorities gossip. I, like, know what Toby Ord did last.
AARON: Done. Okay. Unfortunately, I'm not hip to that.
MAX: And I'm sure there is gossip, like somebody didn't change the coffee pot or something. But I don't work there, so I don't know.
AARON: Maybe. Hopefully you will soon. CEA, if you're listening to this.
MAX: Well, they're different organizations.
AARON: I don't know. Oxford, every EA org, if you're listening to this.
MAX: Yeah. They're all.
AARON: Wait, maybe. Like, what else? I don't know. What do you think about Twitter, like, in general? I don't know. Because this is how we met.
MAX: Yeah.
AARON: We have not met in real life.
MAX: Worse as a platform than it was two years ago or something.
AARON: Okay.
MAX: Stability wise, and there are small changes that make it worse or something, but largely my experience is unchanged, I think.
AARON: Do you think it's good, bad? I don't know. Do you think people should join Twitter on the market?
MAX: I think EAs should join EA Twitter. I'm not sure if you should join Twitter rather than other social medias or something. I think sort of the area of social media we're on is uniquely quite good or something.
AARON: I agree.
MAX: And some of this is like, you get interactions with people, which is good, and people are very nice or something, and very civil where we are. And it's less clear that the sorts of personability or something, and niceness, that you get where we are exist elsewhere on Twitter, because I don't go elsewhere. But basically, you should join Twitter, I guess, if you're going to enter a small community or something. If you're just going to use it to browse memes or something, it's not clear this is better than literally any other social media.
AARON: Yeah, I agree. Well, I guess our audience of, at most, maybe four people is largely from Twitter. But you never know. There's, like, a non-zero chance that somebody from the wider world will be listening. I think it's at least worth an experiment. Right. Maybe you could tell me something that I should experiment with. Is there anything else, like Twitter, that we don't have in common, that you do and maybe I don't do? It's like, oh, he's an idiot for not doing it.
MAX: Oh, probably not. I mean, I'm sure you do better things than I do. Probably.
AARON: Well, I mean, probably this is a large. Right? Like, I don't know.
MAX: I think a benefit of using Twitter is, like, it kind of opens you up or something. Probably is the case. It probably does literally build your social skills or something. I mean, maybe not in an obviously useful way, because you're probably not necessarily that much better at doing in-person stuff or something as a result of using Twitter. Maybe it improves you very slightly or something, but it's a different skill, texting versus talking.
AARON: Actually, here's something I want your thoughts on recently. Maybe this is outing me as a true Twitter addict, but no, I, by and large, have had a really good experience, and I stand by that. I think it's, on net. Not just on net, but just in general, added value to my life and stuff. And it's great, especially given the community that I'm in. The communities that I'm in. But yeah, this is going to be kind of embarrassing. I've started thinking in tweets. Not 100% of the time, not like my brain is only stuck on Twitter mode, but I think on the margin there's been a shift toward: a thought verbalizes in Aaron's brain as something that could be a tweet. And I'm not sure this is a positive.
MAX: Like, it is the case. I've had my friends open Twitter in front of me, like, my Twitter, and go through and read my tweets. Actually, many people in my life do this. I don't know why. I don't really want them to do that. And it does change the way you talk. Certainly part of that is probably the character limit, and part of it is probably, like, culture or something. So that's the case. I don't know if I experience that. Or I do, sometimes. If I thought of a really stupid pun, normally you don't do anything with that, but now I can or something. Right. It's worth holding on for the six seconds it takes to open my phone. But I think I actually kind of maybe think in tweets already or something. Like, if you read my writing, I've gotten feedback that it's very poetic or something. And poems are short or something. It's, like, very stanza-y or something, which is kind of how Twitter works also. Right. I think if you looked at the formatting of some of my writing, you would see that it's very Twitter-like or something, in some sense. There's no character limit, and so maybe this is just the sort of thing you're experiencing or something. Or maybe it's more intense.
AARON: Yeah, probably not exactly. Honestly, I don't think this is that big of a deal. One thing is, I think this is a causal effect: I've blogged less. And I think it's not a direct replacement. Like, I think Twitter has been an outlet for my ideas that actually feels less effortful and takes less. So it's not like a one-for-one thing; other, more worky things have filled in the gap for blogging. But I think it has been a causal reason that I haven't blogged as much as I would like to. Yeah, I can see that being a thing. With ideas, there's no strong signal that a particular tweet is an important idea that's worth considering. Whereas if you've written a whole blog post on it, and you have 200 subscribers or whatever, you put in a lot of effort. People are at least going to say, like, oh, this is at least plausibly an important idea, when they're coming into it or something like that.
MAX: Yeah. And if you think something is valuable or something. Maybe this is different for you or something, but I get, like, three likes on all my tweets. It's very rare I get ten likes or something. And my number of followers is just stuck there forever.
AARON: I feel like that's not true. Should I read all your bangers? I have a split screen going on. Should I search for Max Alexander?
MAX: Search for my bangers. That's 20,000 tweets or posts.
AARON: How many?
MAX: The ten-like ones you'll find. Ones over ten likes are, you know, a very small percentage of the total.
AARON: So why is your handle Absurdly Max? Is there a story? You don't have to answer, I mean.
MAX: It's very simple. So I think existentialism is right or something. And absurdism specifically. And my name is Max.
AARON: Wait, really? Wait, what even is absurdism?
MAX: Humans have this inherent search for meaning, and there's no inherent meaning in the universe.
AARON: This is just moral realism in continental flavor.
MAX: Well, and then you have to. So there is no moral truth. Right. And you have to make your own meaning or you could kill yourself.
AARON: I guess this is not true. Okay, whatever. This is just continental bullshit.
MAX: If you're a moral antirealist or something, you probably end up being an existentialist or don't care about philosophy, I suppose.
AARON: Oh, this is a good one. "If anyone at OpenAI follows me, I just want to say that I'd probably pay $20 a month, maybe even more, for a safely aligned superintelligence." I will actually second that. So, OpenAI, we can promise you $40 if you do that.
MAX: A month, in fact.
AARON: Yes. Yeah. There's lots of bangers here. You guys should all follow Max and look up his bangers.
MAX: I would be surprised if somebody's listening to this and isn't already.
AARON: You never know. I'm going to have to check my listening data after this, and we'll see how big our audience is. I kind of forget. Yeah. So once again, the space of. Also, I'm happy to take a break. There's no formalized structure here.
MAX: I mean, I'll go for hours.
AARON: Oh, really? Okay, cool. No. Are there any topics that are, like. I feel like. Yeah, the space is very large. Here's something. Wait, no, I was going to say, is there anything I do that you disagree with?
MAX: That's like a classic.
AARON: Yeah, I find you very unobjectionable. That's a boring compliment. I'm just kidding. Thank you. It's actually not.
MAX: I have, like, I suppose writing takes or something, but I don't know if I can find the book.
AARON: Oh, wait, no. We have those two similar blog posts, but you know what I'm talking about. Okay, can I just give the introduction to this?
MAX: Okay.
AARON: No, I think we just have similar blog posts that I will hopefully remember to link that I think are substantively very similar, except they have totally different vibes. And yours is very positive and mine is very negative. In fact, mine is called on suffering. I don't remember what yours is called.
MAX: It's a wonderful life.
AARON: There you go. Those are the vibes, but they're both like, oh, no. Hedonic value is actually super meaningful and important, but then we take it in opposite directions. That's it.
MAX: Yeah.
AARON: I don't know if your title actually refers to something. I feel like your title is bad, but besides that, the piece is good.
MAX: Well, there's a movie called it's a wonderful life and the post is, like Christmas themed.
AARON: Oh, I feel like I keep not getting things like that.
MAX: It's okay.
AARON: I feel like I vaguely knew that was, like, a phrase people said, but wasn't sure where it came from. I remember there's, like, an EA thing called Non-Trivial that I helped on a little bit, and I didn't realize it was a reference to Trivial Pursuit, which is, like, a board game or something. No, I think it was actually originally called Non-Trivial Pursuits, I think. I'm sorry, Peter McIntyre, if you're listening to this. I worked on it for, like, a reasonable amount of time, and I had no idea. And then the guy who I was working under brought this up, and I was like, wait, that's a board game or something. Anyway, this is not an important aside. I'm kind of a bad podcaster, because Dwarkesh Patel, who's about 100,000 times better at podcasting than me, goes for, like, six or eight hours. That stresses me out so much, even from the outside. Not even doing it. Doing it, I would just die. I would simply fizzle out of existence. But even thinking about it is very stressful.
MAX: It depends who the guest is, but I could probably talk to somebody for six continuous hours or something.
AARON: We don't have to discuss this at all, but one thing could be the virtues of being anonymous versus not anonymous.
MAX: Sure. Okay. I think the virtue is probably like comfortableness or something is the primary one, and maybe some risk aversion or something.
AARON: Yeah.
MAX: Probably. It's not that common or something, but it's probably more common than people think. But being socially ostracized or being very publicly canceled or something, and maybe for bad reasons, one might say, does occur. And you might be the sort of person who wants to avoid this. I mean, in some sense, you've kind of been piled on at various points, and it seems like you were just fine with this.
AARON: TBD. So I have not been formally hired by anyone since I've discussed this. No, there was, like, the tweet where I was like, we should bury poor people underground. Just kidding. That's not what I said. That is what a subset of people who piled on me said I said, which is not in fact what I said. I asked a question, which was like, why are we not building underground more? No, but yeah, I feel like this is definitely just, like, a personal taste thing. I don't know. I'm sure there are, but in, very broadly. I don't even want to say, like, EA Twitter, but, like, extended. I don't know, broadly, like, somewhat intellectual-ish, English-speaking Twitter or something like that. Are there examples of people with under, say, 5,000 followers who have said something that is not legitimately indicative. I mean, this is doing a lot of work here. I want to say not legitimately indicative of them being a terrible person. Right. That's doing a lot of work. Right. But I think that neither of us are. So then this is the question.
MAX: My guess would be, like, teachers or something, maybe, that sometimes happens to. Or, like, I think probably the sorts of jobs we end up wanting or something like this are more okay with online presence. Because, I don't know, everyone at, like, Rethink Priorities is terminally online, right? And so they're not going to mind if you are too.
AARON: I'm not counting on getting an EA org job.
MAX: Like, corporate and public-sector jobs, I think, are generally like, you should make your social media private or something.
AARON: Yeah.
MAX: They tell you that not just the.
AARON: The corporate world is like a big category, right? If you're a software engineer. First of all, I think if you actually have reprehensible opinions, I think it is in your self-interest to be an alt, if you want to say them out loud. "Not being a bad person" is doing a lot of work here. But for what it's worth, I really do think. I'm extending. There are communists that I would say are genuinely not bad people. I really think they're wrong about the whole communism thing, but that does not fall into my category of "automatically makes it such that you should be an alt to avoid being exposed."
MAX: I think probably there are good reasons in EA specifically to maybe make an alt that are not the case elsewhere. And this is like, the community is homogeneous and lots of people with lots of power are also on Twitter, basically. Maybe if somebody at Rethink gets a bad vibe of you or something, that's indicative they shouldn't hire you, I guess, maybe. But maybe it doesn't matter how I use Twitter. I don't think this is true, but maybe I use Twitter in a way that's abrasive, or maybe they just don't like my puns or something, and this is information that will make them change their opinion about me in an interview. And maybe it's not that impactful, and maybe it is a little impactful, and maybe sometimes it's positive and will make them like me more or something. But the community is so insular that this is more of a problem or something.
AARON: Yeah, I feel like my sense is just that it's more symmetrical than this is giving it credit for. Yeah, I guess if you have a lot of heterodox views on hot-button topics, I don't know. If you disagree with a lot of people in EA who are powerful, not in a weird way, just people who have hiring power or whatever, on topics that are really emotionally salient, then yeah, I would probably say: if you want to very frankly discuss your opinions and get hired at the orgs where these people have hiring power, probably make an alt. I just don't think that's true for that many people.
MAX: My guess is this is more important the smaller the org is or something. My guess would be at Open Philanthropy or, I don't know, the Global Priorities Institute, it doesn't matter if you make an alt, really, you're okay. But there are some EA orgs that are like three people or something, right? And there your online presence probably matters, but.
AARON: I mean, that's also, they also have fewer open spots. Yeah. I just feel like the upside is, or maybe the downside, I think we probably disagree on the downside somewhat. I mean, just stepping away from the abstract level, I just personally think that you, in fact. I can take this out, but can we just say that your last name is not in fact Alexander? Yeah, that's okay. I feel like you could just use your real name and your life would.
MAX: Well, it is my real name. It's just my middle name.
AARON: Okay. Wow. Okay. Very, very Slate Star Codex-esque.
MAX: That's why I did it, because.
AARON: Oh, nice.
MAX: But it was like really nice.
AARON: I actually don't know why I didn't connect the dots there. Yeah, I feel like, just for you personally, in expectation, it would probably not be a dramatic change to your life. If I were suddenly cast into your shoes, but with my beliefs, I would just use my real name or something like that.
MAX: Yeah, I think it's like at the point now it doesn't matter or something. There's no real upside to changing it and there's no real downside.
AARON: Yeah, it's not a huge deal.
MAX: Yeah, I think probably, well, some people like to be horny on Twitter or something and probably if you want to do that, you should be anonymous or.
AARON: I mean, but once again, I feel like it depends what you mean by be horny, right? If you're going to post nude photos on Twitter, yeah. Actually, I had a surprisingly good DM conversation with a porn account. I did not follow them, for what it's worth.
MAX: They're presumably run by real.
AARON: No, no, it was actually. No, she was very polite. Yes, I think I'm correctly assuming from the name. I won't say the name, but the account name was clearly indicative it was a woman. She basically objected to me suggesting that minimum wage jobs in the US are uncommon. And in fact, I deleted my tweet because it was, I think, giving a false impression. Yeah, I live in a wealthy area. It's true that service sector jobs here generally pay better than minimum wage, but elsewhere in the US that's not true. It was a very polite interaction. Anyway, sorry, a total side story. Oh yeah, horny on Twitter. Yes, probably. Yeah, I agree. I don't know, but that can mean multiple things. I guess I have participated somewhat in gender discourse. I don't think I've been extremely horny. Yeah, fair enough. I don't think I've been very horny on main.
MAX: Yeah. I think probably there's maybe something to the case that, and you see this as prestige increases or something. This isn't totally true, because I think Eliezer Yudkowsky posted about whatever or something, but Will MacAskill and Peter Wildeford are not engaging in gender discourse or whatever, right? Publicly, anyway.
AARON: Wait, do I want to put my money, or not my money, my social reputation on this? Will doesn't use Twitter very much, which, I don't know. I would in fact take this question much more seriously if I was him. Like the question of whether to just be more frank and open and just, I don't know, maybe he doesn't want to. I'm totally hypothesizing here. Wait, Peter totally does sometimes, at least in one case I know. No, I have disagreed with Peter on gender discourse and that's just, like, somewhat. I don't know. Right.
MAX: I just don't see. He's not starting gender discourse, I guess I should say.
AARON: Well, I mean, Adam and Eve started gender. No, but I don't know. I don't want to single anybody out here. I feel like, yeah, if you're the one face of a social movement, you should probably take it pretty seriously, at least the question of whether you should be more risk averse. I will buy into that. I don't think my position here is absolute whatsoever. For people with fewer than 5,000 followers, it's like a gradient, right? I just think on the margin, people in general maybe are more risk averse than they have to be or something like that.
MAX: Yeah. Though I'm not sure there are that many major examples. Because one reason you might be anonymous is not because you are scared about people who follow you finding you elsewhere. It's because you don't want the inverse to be the case or something.
AARON: Wait, what's the, I'm sorry, you don't.
MAX: You don't want your parents to google your Twitter account and read your tweets? Yeah, everyone I know knows my Twitter. I don't even know how they all found it, but.
AARON: That's interesting. Okay, well, I mean, yeah.
MAX: They read stupid tweets of mine in front of me.
AARON: Well, there you go. That's like a good real life example of like, okay. That's like a real downside, I guess. Or maybe it's not because it's kind of a funny thing, but.
MAX: I think you do get this. People will joke about this sometimes on Twitter or something, or I've seen it somewhere, where it's like, I hope my boss is not on Twitter today, because they'll be like, why weren't you working or something?
AARON: Oh, yeah.
MAX: Literally just tweeting or something.
AARON: Yeah. I know. If you've called in sick to work and you're, like, lying about that, you're.
MAX: Just tweeting a bunch, they might be kind of suspicious or something. If your boss could see you all the time, you open Twitter for like a minute to tweet or something, they'd probably be like a little judgy or something, right? Maybe they wouldn't.
AARON: Yeah, yeah, sure. I mean, it's just like a matter of degree.
MAX: These are probably the most real world scenarios, is like somebody, you know, gets information that isn't really that damaging but slightly inconveniences you or something.
AARON: Yeah. Oh, actually, I talked about this more with Nathan Young, but then I didn't record his audio. I'm sorry. I've apologized for this before, but I'm so bad at this. That was before; I really hope that I've solved this, and I think I have for this episode. But, yeah, on my one viral tweet or whatever, one thing is just like, oh, yeah, a lot of people, basically, I think they were earnest. Not earnest, exactly, that's maybe too generous, but they genuinely thought that I was saying something bad, like something immoral. I think they were wrong about that, but I don't think they were lying. But then, in fact, okay, several of my real life friends, it's come up somehow, and none of them actually thought I was saying anything bad. And in fact, I met somebody at a party who was talking about this, and then it was like, oh, that's me, I was the one who posted that. But basically the point here is that, as far as I can tell, maybe I'm wrong, but as far as I can tell, nothing bad in my life has come from a viral tweet that mainly went viral from people quote-tweeting it saying that I was saying something immoral. N equals one.
MAX: Yeah. I don't know how common it would be or something. I think, like, ContraPoints or something, I believe it's her, has videos about canceling and stuff like this. I think she was canceled at various points. Lindsay Ellis, you may have heard of, I think, kind of got run off the Internet.
AARON: Actually don't know this person.
MAX: She did, like, video essays about media and stuff. She's a fiction author now. They are orders of magnitude more well known than we are or something.
AARON: There's, like, canceling that really genuinely ruins people's lives. And then there's, like, Bari Weiss. I don't know, she has like a Substack, or, I don't even know, she's like an IDW person, or I think. I don't even know if she got. There's people like this. I'm sure there are other academics, and there are professors at universities who don't like them, but they have successful, profitable Substacks. And it seems to me like their lives aren't made a lot worse by being, quote unquote, canceled. No, but then there really are. Right. I'm sure that there's definitely cases of normalish people. Yeah.
MAX: I don't know, but maybe it's just more psychologically damaging, regardless of consequences, to be piled on when it's your real face or something.
AARON: Yeah. Also, I think at least 90%, probably not like 99%, but something in between those two numbers, of pile-ons just don't have any real world consequences whatsoever. Canceling is a much less common phenomenon.
MAX: Yeah, that seems right to me.
AARON: Yeah. Let it be known that we are two, we can also take this part out if you don't want this identifying information, two twenty-something white males discussing being canceled on a podcast, which is possibly the most basic thing that has ever happened in the universe.
MAX: I mean, I want to be known for this. When my children come up to me and say, we found the one podcast interview you did.
AARON: Yeah. I feel like I want to reiterate that there's a core thing here, which is, like, if you really hate jewish people and you think they should die, there's like, a fundamental thing there where if you're open and honest about your opinions, people are going to correctly think that you're a bad person. Whereas I think neither of us hold any views that are like that.
MAX: Agree. I think.
AARON: Yeah, no, maybe I wouldn't know, which is fine. But I'm sort of using an extreme example that I think very few people, at least who I would interact with, hold. But if you're in fact a person who has views that I think really indicate immoral behavior or something. Sorry, maybe I'll even take that part out because it could be clipped or whatever. But if you just say, like, oh yeah, I steal things from street vendors or whatever, I don't know, that's a bad thing, and you're being honest, don't do that. And then people are like, yeah. You have to, I guess, have some confidence that you're a person who doesn't do, or wouldn't do, very immoral things, or have views that legitimately indicate that you either do or would do things that are regarded as very immoral by a large number of people or whatever. That's doing a lot of the work here.
MAX: I mean, I think this goes back to earlier things or something, I just thought of it. It's not clear what the sign of this is or something, but I wrote a whole blog post about joining EA, which I think has some genuinely very good parts in it that are really good explanations of why someone should be motivated to do EA or something. But there's lots of really embarrassing stuff about life in high school or something in it.
AARON: Yeah, no, I'm pretty sure everyone in.
MAX: 80k, who works at 80k? Not everyone, but I have good reason to think many people there read this, which is like, I don't know what to make of that. It's just kind of weird, right?
AARON: Yeah, no, I agree, it's kind of weird. I think, like, object level, I don't even know if embarrassing is the right word. It's genuinely, I guess, more vulnerable than a lot of people are in their public blogs or whatever. But I don't really think it reflects poorly on you when you take it in aggregate. Right. I mean, maybe you disagree with that, but I don't think so.
MAX: Or if you think it does, you probably are judgy to a degree I think is unreasonable or something. It's from when I was like 13 or something.
AARON: Yeah, sure.
MAX: That bad? Really?
AARON: What bad things? Should I? No, there's something that I actually don't even think is immoral, but it's somewhat embarrassing. Maybe I'll even say it on the next podcast episode after I've thought about it for ten minutes instead of one minute or something like that. But I don't know, I think if I said it, it would be okay. Nothing that bad would happen, I guess. You're going to continue being Max Alexander instead of Max.
MAX: I mean, like it's the brand. Like I can't.
AARON: Wait. I feel like this is not a good reason. I feel like pretty quickly. Yeah, I feel like path dependence is a real thing, but in this particular case, it's just not as big as it maybe seems or something like that. I don't know. Yeah.
MAX: I just don't see what the upside is or something.
AARON: Yeah, true. Yeah. I'm thinking maybe we wrap up.
* Listen on Spotify or Apple Podcasts
* Be sure to check out and follow Holly's Substack and her org, Pause AI.
Blurb and summary from Clong
Blurb
Holly and Aaron had a wide-ranging discussion touching on effective altruism, AI alignment, genetic conflict, wild animal welfare, and the importance of public advocacy in the AI safety space. Holly spoke about her background in evolutionary biology and how she became involved in effective altruism. She discussed her reservations around wild animal welfare and her perspective on the challenges of AI alignment. They talked about the value of public opinion polls, the psychology of AI researchers, and whether certain AI labs like OpenAI might be net positive actors. Holly argued for the strategic importance of public advocacy and pushing the Overton window within EA on AI safety issues.
Detailed summary
* Holly's background - PhD in evolutionary biology, got into EA through New Atheism and looking for community with positive values, did EA organizing at Harvard
* Worked at Rethink Priorities on wild animal welfare but had reservations about imposing values on animals and whether we're at the right margin yet
* Got inspired by FLI letter to focus more on AI safety advocacy and importance of public opinion
* Discussed genetic conflict and challenges of alignment even with "closest" agents
* Talked about the value of public opinion polls and influencing politicians
* Discussed the psychology and motives of AI researchers
* Disagreed a bit on whether certain labs like OpenAI might be net positive actors
* Holly argued for importance of public advocacy in AI safety, thinks we have power to shift Overton window
* Talked about the dynamics between different AI researchers and competition for status
* Discussed how rationalists often dismiss advocacy and politics
* Holly thinks advocacy is neglected and can push the Overton window even within EA
* Also discussed Holly's evolutionary biology takes, memetic drive, gradient descent vs. natural selection
Full transcript (very imperfect)
AARON
You're an AI pause advocate. Can you remind me of your shtick before that? Did you have an EA career or something?
HOLLY
Yeah, before that I was an academic. I got into EA when I was doing my PhD in evolutionary biology, and I had been into New Atheism before that. I had done a lot of organizing for that in college. And while the enlightenment stuff and what I think is the truth about there not being a God were very important to me, I didn't like the lack of positive values. Half the people there were sort of people like me who were looking for community after leaving the religion they grew up in. And sometimes as many as half of the people there were just looking for a way for it to be okay for them to upset people and take away stuff that was important to them. And I didn't love that. I didn't love organizing a space for that. And when I got to my first year at Harvard, Harvard Effective Altruism was advertising for its fellowship, which became the Arete Fellowship eventually. And I was like, wow, this is everything I want, and it has this positive organizing value around doing good. And so I was totally made for it. And pretty much immediately I did that fellowship, even though it was for undergrads, and I was immediately doing a lot of grad school organizing, and I did that for, like, six more years. And yeah, by the time I got to the end of grad school, I realized I was very sick in my fifth year, and I realized the stuff I kept doing was EA organizing, and I did not want to keep doing academic work. And that was pretty clear. I had thought, oh, because I'm really into my academic area, I'll do that, but I'll also have a component of doing good. I took the Giving What We Can pledge in the middle of grad school, and I thought, I actually just enjoy doing this more, so why would I do anything else? Then after grad school, I started applying for EA jobs, and pretty soon I got a job at Rethink Priorities, and they suggested that I work on wild animal welfare.
And I have to say, from the beginning, it was a little bit like, I don't know, I'd always had very mixed feelings about wild animal welfare as a cause area. How much do we assume the audience knows about EA?
AARON
A lot, I guess. I think as of right now, it's a pretty hardcore dozen people. Also. Wait, what year is any of this approximately?
HOLLY
So I graduated in 2020.
AARON
Okay.
HOLLY
Yeah. And then I was like, really?
AARON
Okay, this is not extremely distant history. Sometimes people are like, oh, yeah, the OG days, like '14 or something. I'm like, oh, my God.
HOLLY
Oh, yeah, no, I wish I had been in these circles then, but no, it wasn't until, like, 2014 that I really got inducted. Yeah, which now feels old because everybody's so young. But yeah, in 2020, I finished my PhD, and I got this awesome remote job at Rethink Priorities during the pandemic, which was great, but I was working on wild animal welfare, which I'd always had some reservations about. So wild animal welfare, just for anyone who's not familiar, is looking at the state of the natural world and seeing if there's a way that the welfare of animals, usually the hedonic welfare, so, like, feeling pleasure, not pain, can be maximized. So that's in contrast to a lot of other ways of looking at the natural world, like conservation, which are more about preserving a state of the world: preserving, maybe, ecosystem balance, something like that, or preserving species diversity. The priority with wild animal welfare is the affective welfare, like how it feels to be the animals. So it is very understudied, but I had a lot of reservations about it because I'm nervous about maximizing our values too hard onto animals or imposing them on other species.
AARON
Okay, that's interesting, just because we're so far away from the margin of I'm like a very pro wild animal animal welfare pilled person.
HOLLY
I'm definitely pro in theory.
AARON
How many other people it's like you and formerly you and six other people or whatever seems like we're quite far away from the margin at which we're over optimizing in terms of giving heroin to all the sheep or I don't know, the bugs and stuff.
HOLLY
But it's true the field is moving more in my direction, and I think it's just because they're hiring more biologists and we tend to think this way or have more of this perspective. But I'm a big fan of Brian Tomasik's work. But stuff like finding out which species have the most capacity for welfare, I think, is already sort of the wrong scale. I think a lot will just depend on the conditions for that species.
AARON
Yeah, no, there's like seven from the.
HOLLY
The coarseness and the abstraction, but also there's a lot of, you don't want anybody to actually do stuff like that, and it would be more possible to do the more simple sounding stuff. My work there just consisted of being a huge downer. I respect that. I did do some work that I'm proud of. I have a whole sequence on the EA Forum about how we could reduce the use of rodenticide, which I think was the single most promising intervention that we came up with in the time that I was there. I mean, I didn't come up with it, but we narrowed it down. And even that just doesn't affect that many animals directly. The impact is really more about what you think you'll get from moral circle expansion, or setting precedents for the treatment of non-human animals or wild animals, or semi-wild animals, maybe, being able to be expanded into wild animals. And so it all felt not quite up to EA standards of impact. And I felt kind of uncomfortable trying to make this thing happen in EA when I wasn't sure about it. My tentative conclusion on wild animal welfare, after working on it and thinking about it a lot for three years, was that we're sort of waiting for transformative technology that's not here yet in order to be able to do the kinds of interventions that we want. And there are going to be other issues with the transformative technology that we have to deal with first.
AARON
Yeah, no, I've been thinking, not that seriously or in any formal way, but once in a while I just have a thought like, oh, I wonder how the field of, sorry, not wild animal welfare, just animal welfare in general, including wild animal welfare, might make use of AI. I feel like there's a simple take which is probably mostly true, which is, I mean, the phrase that everybody loves to say is make AI go well or whatever, and that's basically it. You make aligned AI, I know that's a big oversimplification, and then you have a bunch of wealth or whatever to do whatever you want. I feel like that's kind of the standard line, but do you have any takes on, I don't know, maybe in the next couple of years, or anything more specific beyond just general purpose AI alignment, for lack of a better term, how animal welfare might put transformative AI to use?
HOLLY
My last work at Rethink Priorities was a sort of zoomed-out look at the field and where it should go. And apparently we're going to do a public version, but I don't know if that's going to happen. It's been a while now since I was expecting to get a call about it. But yeah, I'm trying to think: what can I scrape from that?
AARON
As much as you can, don't reveal any classified information. But what was the general thing that this was about?
HOLLY
There are things that I think, so I sort of broke it down into a couple of categories. There's things that we could do in a world where we don't get AGI for a long time, but we get transformative AI short of that; it's just able to do a lot of parallel tasks. And I think we could get a lot of what we want for wild animals by doing a ton of surveillance and having the ability to make incredibly precise changes to the ecosystem. Having surveillance so we know when something is happening, and the capacity to do really intense simulation of the ecosystem and know what's going to happen as a result of little things. We could do that all without AGI. You could do that with just a lot of computational power. I think our ability to simulate the environment right now is not the best, but it's not because it's impossible. We just need a lot more observations and a lot more ability to simulate. A comparison is meteorology. Meteorology used to be much more of an art, but it became more of a science once they started just literally taking every block of air, and they're getting smaller and smaller, the blocks, doing Bernoulli's law on it, and figuring out what's going to happen in that block. And then you just sort of add it all together and you get actually pretty good predictions.
AARON
Do you know how big the blocks are?
HOLLY
They get smaller all the time; that's the resolution increase, but I don't know how big the blocks are right now. And shockingly, that just works. That gives you a lot of the picture of what's going to happen with the weather. And I think that modeling ecosystem dynamics is very similar to weather, though you could say there are more players in ecosystems. And I think we could, with enough surveillance, get a lot better at monitoring the ecosystem and then actually have more of a chance of implementing the kinds of sweeping interventions we want. But the price would be just never-ending surveillance, and having to be the stewards of the environment if we weren't automating, depending on how much you want to automate and how much you can automate without AGI, or without handing it over to another intelligence.
AARON
Yeah, I've heard this. Maybe I haven't thought about it enough, and for some reason I'm just intuitively more skeptical of this. There's a lot of things that I feel like a person might be skeptical about with superhuman AI, and I'm less skeptical of those than of things that sound as weird as this. One thing I'm concerned about is that there's a larger scale choice I can imagine: how much ecosystem is available for wild animals is a pretty macro-level choice that might be not at all deterministic. So you could imagine spreading or terraforming other planets and things like that, or basically continuing to reduce the amount of available ecosystem. And also, at a much more practical level, clean meat development. I have no idea what the technical bottlenecks on that are right now, but it seems kind of possible that, I don't know, AI can help it in some capacity.
HOLLY
Oh, I thought you were going to say that it would increase the amount of space available for wild animals. This is like a big controversy within, I don't know, this part of the EA animal movement: if you advocate diet change and you get people to be vegetarians, does that just free up more land for wild animals to suffer on? I thought this was like, guys, we will just never do anything if we don't choose sort of a zone of influence and accomplish something there. It seemed like this could go on forever. Literally, at Rethink, actually, a lot of discussions would end in like, okay, so this seems really good for all of our target populations, but what about wild animals? It could just reverse everything. I don't know. The thoughts I came to on that were that it is worthwhile to try to figure out what all of the actual direct effects are, but I don't think we should let that alone guide our decision making. You have to have some kind of theory of change of what the direct effect is going to lead to. And I just think that it's so illegible what you're trying to do if you're like, you should eat this kind of fish to save animals. It doesn't lead society to understand and adopt your values. It's so predicated on a moment in time that might be convenient. Maybe I'm not looking hard enough at that problem, but the conclusion I ended up coming to was just like, look, I just think we have to have some idea of not just the direct impacts, but something about the indirect impacts and what's likely to facilitate other direct impacts that we want in the future.
AARON
Yeah. I also share your, I don't know, I'm not sure if we share the same view, but I also feel conflicted about this kind of thing. And I don't know, at the very least, I have a very high bar for saying, actually, the worst of factory farming is something we should just be okay with, because some particular model says that at this moment in time it has some net positive effect on animal welfare.
HOLLY
What morality is that really compatible with? I mean, I understand our morality, maybe, but pretty much anyone else who hears that conclusion is going to think that it means the suffering doesn't matter or something.
AARON
Yeah, I don't know. I think maybe more than you, I'm willing to bite the bullet if somebody really could convince me that, yeah, chicken farming is actually just, in fact, good, even though it's counterintuitive, I'll be like, all right, fine.
HOLLY
Surely there are other ways of occupying.
AARON
Yeah.
HOLLY
Same with, sometimes I would get, from very classical wild animal suffering people, comments on my rodenticide work saying, well, what if it's good to have more rats? I don't know. There are surely other vehicles for utility than ones that humans are bent on destroying.
AARON
Yeah, it's kind of neither here nor there, and I don't actually know if this is causally important, but at least psychologically, I remember seeing a mouse in a glue trap had a big impact on me, toward maybe turning me, like, animal welfare-pilled or something. That's neither here nor there; it's a random anecdote, but yeah, seems bad. All right, what came after Rethink for you?
HOLLY
Yeah. Well, after the publication of the FLI letter and Eliezer's article in Time, I was super inspired by pause. A number of emotional changes happened to me about AI safety. Nothing intellectual changed, but I'd always been confused by, and kind of taken it as a sign that people weren't really serious about AI risk, when they would say things like, the only option is alignment, the only option is for us to do cool nerd stuff that we love doing, nothing else would work. I bought the arguments, but I just wasn't there emotionally. And seeing Eliezer advocate political change, because he wants to save everyone's lives and he thinks that's something we can do, just kind of, I'm sure I didn't want to face it before because it was upsetting. Not that I haven't faced a lot of upsetting and depressing things, I worked in wild animal welfare, for God's sake, but there was something that didn't quite add up for me, or that I hadn't quite grokked about AI safety, until seeing Eliezer really show that his concern is about everyone dying. And he's consistent with that; he's not caught on only one way of doing it. And it just kind of got in my head, and I kept wanting to talk about it at work, and it sort of became clear they weren't going to pursue that sort of intervention. But I kept thinking of all these parallels between animal advocacy stuff that I knew and what could be done in AI safety. And these polls kept coming out showing that there was really high support for pause, and I just thought, this is such a huge opportunity, I really would love to help out. Originally I was looking around for who was going to be leading campaigns that I could volunteer in, and then eventually I thought, it just doesn't seem like somebody else is going to do this in the Bay Area. So I just ended up quitting Rethink and being an independent organizer. And that has been really, I mean, honestly, it's a tough subject.
It's a lot to deal with, but honestly, compared to wild animal welfare, it's not that bad. And I'm pretty used to dealing with tough and depressing low-tractability causes, but I actually think this is really tractable. I've been shocked how quickly things have moved. I had this sense that, okay, people are reluctant in EA, and in AI safety in particular they're not used to advocacy. They kind of vaguely think that that's bad, that politics is the mind-killer, and that it's a little bit of a threat to the stuff they really love doing, maybe that's not going to be so ascendant anymore, and it's just stuff they're not familiar with. But I had the feeling that if somebody just kept making this case, people would take to it, that I could push the Overton window within EA. And that's gone really well.
AARON
Yeah.
HOLLY
And then of course, the public is just like pretty down. It's great.
AARON
Yeah. I feel like it's kind of weird because, being in DC... I feel like I actually used to be more into politics, to be clear. I understand, or correct me if I'm wrong, that advocacy doesn't just mean within the political system, or to politicians or whatever, but I assume that's a part of what you're thinking about, or not really?
HOLLY
Yeah. Early on I was considering working on more political-process-type advocacy, and I think that's really important. I totally would have done it. I just thought that it was more neglected in our community to do advocacy to the public, and a lot of people had entanglements that prevented them from doing so. They work with AI labs, or it's important to their work that they not declare against AI labs, or be perceived that way, and so they didn't want to do public advocacy that could threaten what else they're doing. But I didn't have anything like that. I've been around for a long time in EA and I've been keeping up on AI safety, but I've never really worked... that's not true, I did a PIBBSS fellowship, but I've never worked for anybody in the field. I was just more free than a lot of other people to do the public messaging, and so I kind of felt that I should. Yeah, I'm also more willing to get into conflict than other EAs, and so that seems valuable, no?
AARON
Yeah, I respect that. Respect that a lot. So, one thing: I feel like I've seen a lot of people on Twitter, for example, well, not for example, that's really just it, talking about polls that come out saying, oh yeah, the public is super enthusiastic about X, Y, or Z. I feel like these are almost meaningless, and maybe you can convince me otherwise. To be clear, I'm not saying it couldn't be worse, right? All things considered, a poll showing X thing being supported is better than the opposite result, but you can really get people to say anything. Maybe I'm just wondering about the degree to which the public, and I'm doing air quotes, plays into the policies either of, I guess, industry actors or government actors.
HOLLY
Well, this is something actually that I also felt a lot of EAs were unfamiliar with, but it does matter to our representatives what their constituents think. I mean, if you talk to somebody who's ever interned in a congressperson's office, one person calling and writing letters about something can actually, depending on how contested a policy is, have a largish impact. My ex-husband was an intern for Jim Cooper, and they had this whole system for scoring when calls came in versus letters. Was it a handwritten letter, a typed letter? All of those things went into how many points it got, and that was something they really cared about. Politicians do pay attention to opinion polls, and they pay attention to what their vocal constituents want, and they pay attention to not going against what is the norm opinion. Even if nobody in particular is pushing them on it or seems to feel strongly about it, they really are trying to calibrate themselves to what is the norm. Also, sometimes politicians just get directly convinced by arguments about what a policy should be. So yeah, public opinion is, I think, underappreciated by EAs because it doesn't feel mechanistic. They're looking more for, what's this weird policy hack, this super clever policy, that's going to solve things, rather than just, what's acceptable discourse? How far out of his comfort zone does this politician have to go to advocate for this thing? How unpopular is it going to be to say stuff that's against this thing that now has a lot of public support?
AARON
Yeah, I guess mainly, and I definitely could be wrong about this, but I would expect that a lot of the... like, when congresspeople get letters and emails on a particular issue, especially when it's relevant to a particular bill, okay, this bill has already been filtered for the fact that it's going to get some yes votes and some no votes and it's close, or something like that. Hearing from an interested constituency there is really, I don't know, interesting evidence. On the other hand, you can kind of just get Americans to say a lot of different things that I think are basically extremely unlikely to be enacted into laws. You know what I mean? You can just look at opinion... sorry, no great example comes to mind right now. But if you ask the public, should we do more safety research into, I don't know, anything, if it sounds good, then people will say yes. Or am I mistaken about this?
HOLLY
I mean, on these polls, usually they ask the other way around as well. Do you think AI is really promising for its benefits and should be accelerated? And people answer consistently. It's not just, well, now that sounds positive, okay. A well-done poll will correct for these things. Yeah, I've encountered a lot of skepticism about the polls. Most of the polls on this have been done by YouGov, which is pretty reputable. And then the ones that were replicated by Rethink Priorities found very consistent results, and I very much trust Rethink Priorities on polls. I've had people object to the framings and wonder if they're getting at the person's true beliefs. And I kind of think, I don't know, basically this is the kind of advocacy message that I would give, and people are really receptive to it. So to me that's really promising. Whether they would think the same if you educated them a lot more about the topic is, I think, not the question, but that's sometimes an objection that I get. Yeah, I think they're indicative. And then I also think politicians just care directly about these things. If they're able to cite that most of the public agrees with this policy, that gives them a lot of what they want, regardless of whether there's some qualification about whether the public really thinks this or is thinking hard enough about it. And then polls are always newsworthy, weirdly. Just any poll can be a news story, and journalists love them, and so it's a great chance to get exposure for the whatever thing. And politicians do care what's in the news. Actually, I think we just have more influence over the political process than EAs and LessWrongers tend to believe. It's true, I think a lot of people got burned in AI safety in the previous 20 years because it would be dismissed. It just wasn't in the Overton window. But I think we have a lot of power now, weirdly.
People care what effective altruists think. People see us as having real expertise. The AI safety community does know the most about this. It's pretty wild that that's now being recognized publicly, and journalists, the people who influence politicians, not the people directly but the Fourth Estate types, pay attention to this and they influence policy. And there are many levels of this. If people want a more detailed explanation, still high level and accessible, I hope, I wrote a thing on the EA Forum called The Case for AI Safety Advocacy. And that goes over this concept of outside versus inside game. Inside game is working within a system to change it. Outside game is working outside the system to put pressure on that system to change it. And I think there are many small versions of this. I think that it's helpful within EA and AI safety to be pushing the Overton window. I think people have a wrong understanding of how hard it is to communicate this topic and how hard it is to influence governments. I want it to be more acceptable, I want it to feel more possible in EA and AI safety to go this route. And then there's the public-public level of trying to make them more familiar with the issue, and to frame it in the way that I want. With Sam Altman's tour, the issue kind of got framed as, well, AI is going to get built, but how are we going to do it safely? I would like to take that a step back and be like, should AI be built? Should AGI be built? If we tried, we could just not do that, or we could at least reduce the speed. And so, yeah, I want people to be exposed to that frame. I want people to not be taken in by other frames that don't include the full gamut of options. I think that's very possible. And then there's, and this is more of the classic thing that's been going on in AI safety for the last ten years, trying to influence AI development to be more safety conscious.
And that's like another kind of dynamic there, trying to change sort of the general flavor. What's acceptable? Do we have to care about safety? What is safety? That's also kind of a window-pushing exercise.
AARON
Yeah. Cool. Okay, this is not actually directly responding to anything you just said, which is lucky. So I pulled up this post. I should have read that. Luckily, I did read the case for slowing down, some other popular post, as part of the, like, governance fundamentals series. I think this is by somebody... Zach? Wait, what was it called? Wait.
HOLLY
Is it by Zach, or...
AARON
Katja, I think. Yeah, Let's Think About Slowing Down AI, that one. So that is fresh in my mind, but yours is not yet. So what's the plan? Do you have a plan? You don't have to have a plan. I don't have plans very much.
HOLLY
Well, right now I'm hopeful about the UK AI Safety Summit. Pause AI and I have planned a multi-city protest on 21 October to encourage the UK AI Safety Summit to focus on safety first, and to have arranging a pause, or the negotiation of one, as a topic. There's a little bit of upsetting advertising for that thing that's like, we need to keep up capabilities too, and I just think that's really a secondary objective. I want it to be focused on safety. So I'm hopeful about the level of global coordination that we're already seeing. It's going so much faster than we thought. Already the UN Secretary-General has been talking about this, and there have been meetings about it. At the beginning of this year, nobody was thinking we'd be talking about this as a mainstream topic, and actually governments have been very receptive. Anyway, so right now, other than just influencing opinion, the targets I'm focused on are things like encouraging these international... like, I have a protest on Friday, my first protest that I'm leading, and I'm kind of nervous. It's against Meta, at the Meta building in San Francisco, about their sharing of model weights. They call it open source. It's not exactly open source, but I'm probably not going to repeat that message because it's pretty complicated to explain. I really love the pause message because it's just so hard to misinterpret, and it conveys pretty clearly what we want very quickly. And you don't have a lot of bandwidth in advocacy. You write a lot of materials for a protest, but mostly what people see is the title.
AARON
That's interesting, because I sort of have the opposite sense. I agree that in terms of how many informational bits you're conveying in a particular phrase, "pause AI" is simpler, but in some sense it's not nearly as obvious, at least to me, maybe I'm more of a tech-brain person or whatever, why that is good, as opposed to "don't give an extremely powerful thing to the worst people in the world." That's like a longer one, though.
HOLLY
Maybe I'm just weird. The feedback I've gotten from open-source ML people... the number one thing is, it's too late, there are already super powerful models, there's nothing you can do to stop us. Which sounds so villainous, I don't know if that's what they mean. Well, actually the number one message is, you're stupid, you're not an ML engineer. Okay. Number two is, it's too late, there's nothing you can do. There are all of these others, and Meta is not even the most powerful producer of open-source models. I was like, okay, fine. And I don't think that protesting too much is really the best move in these situations; I just mostly let that lie. I could give my theory of change on this and why I'm focusing on Meta. Meta is a large company I'm hoping to have influence on. There is a Meta building in San Francisco, near where... yeah. Meta is the biggest company that is doing this, and I think there should be a norm against model-weight sharing. I was hoping it would be something that employees of other labs would be comfortable attending, since that is a policy that is not shared across the labs. Obviously the biggest labs don't do it. OpenAI is called OpenAI but very quickly decided not to do that. Yeah, I kind of wanted to start in a way that made it more clear than Pause AI does that anybody's welcome, or something. I thought a one-off issue like this, that a lot of people could agree on and form a coalition around, would be good. A lot of the open-source ML people think that what I'm saying is secretly an argument for tyranny, that I just want centralization of power, that I just think there are elites better qualified to run everything. I didn't mention China, but it was even suggested that I was racist because I didn't think that foreign people could make better AIs than Meta.
AARON
I'm grimacing here at the intellectual disagreeableness, if that's an appropriate term or something like that. Good on you for standing up to some pretty bad arguments.
HOLLY
Yeah, it's not that bad, and it's worth it. I'm lucky that I truly am curious about what people think about stuff like that. I just find it really interesting. I spent way too much time understanding the alt-right, for instance. I'm kind of sure I'm on a list somewhere because of the forums I was on, just because I was interested, and it's something that serves me well with my adversaries. I've enjoyed some conversations with people, because my position on all this is that, look, I need to be convinced, and the public needs to be convinced, that this is safe before we go ahead. So I kind of like not having to be the smart person making the arguments. I kind of like being like, can you explain like I'm five? I still don't get it. How does this work?
AARON
Yeah, no, I was thinking not long ago about open source. The phrase has such a positive connotation, and in a lot of contexts it really is good. I'm glad that random tech things from 2004 or whatever, like the Reddit source code, are open source; seems cool. I don't actually know if that's how that went, right? But yeah, I feel like maybe even just breaking down where the positive connotation comes from, and why it's in people's self-interest, this is really what I was thinking about, why is it in people's self-interest to open source things that they made, might break apart the allure, or sort of ethical halo, that it has around it. And I was thinking it probably has something to do with... if you're a tech person who makes some cool product, you could try to put a gate around it by keeping it closed source and maybe trying to get intellectual property or something. But probably you're extremely talented already, or pretty wealthy, and definitely can be hired in the future. And if you're not wealthy yet... I don't mean to put things in just materialist terms, and I think I'll probably take that bit out because I didn't mean to put it in strictly monetary terms, but basically it seems pretty plausibly in an arbitrary tech person's self-interest, broadly construed, to open source their thing, which is totally fine and normal.
HOLLY
I think that's like 99% of it. It's a way of showing magnanimity, but...
AARON
I don't mean to make this sound so... I think 99.9% of human behavior is like this. I'm not saying it's some secret, terrible, self-interested thing, I'm just making it more mechanistic. It's a status thing, it's an advertising thing. You're not really in need of direct economic rewards, or it sort of makes sense to play the long game in some sense, and this is totally normal and fine, but at the end of the day, there are reasons why it makes sense, why it's in people's self-interest to open source.
HOLLY
Literally, the culture of open source has been able to bully people into, like, oh, it's immoral to keep it for yourself. You have to release those. So it's just, like, set the norms in a lot of ways, I'm not the bully. Sounds bad, but I mean, it's just like there is a lot of pressure. It looks bad if something is closed source.
AARON
Yeah, it's kind of weird. Does Meta really think it's in their... I don't know. The most economic take on this would be, oh, they somehow think it's in their shareholders' interest to open source.
HOLLY
There are a lot of speculations on why they're doing this. One is that, yeah, their models aren't as good as the top labs', but if it's "open source," quote unquote, then people will integrate it, Llama Two, into their apps, or people will use it and become... I don't know, it's a little weird, because I don't know why using Llama Two commits you to using Llama Three or something, but it's just ways for their models to get in, in places where, if you had to pay for their models too, people would go for better ones. That's one thing. Another is... yeah, I guess these are too speculative; I don't want to be seen repeating them since I'm about to do this protest. But there's speculation that it's in their best interests in various ways to do this. I think it's possible also that, well, what happened with the release of Llama One is they were going to allow approved people to download the weights, but then within four days somebody had leaked Llama One on 4chan, and then they just were like, well, whatever, we'll just release the weights. And then they released Llama Two with the weights from the beginning. So it's not 100% clear that they intended from the start to do full "open source," or what they call open source. And I keep saying it's not open source, because this is a little bit of a tricky point to make, so I'm not emphasizing it too much. They say that they're open source, but they're not; the algorithms are not open source. There are open-source ML models that have everything open sourced, and I don't think that that's good. I think that's worse. So I don't want to criticize them for that part. But they're saying it's open source because there's all this goodwill associated with open source, when actually what they're doing is releasing the product for free, or even, you could say, things that should be trade secrets, and telling people how to make it themselves.
So it's a little bit of a... they're intentionally using this label that has a lot of positive connotations, but probably, according to the Open Source Initiative, which makes the open-source licenses, it should be called something else, or there should just be a new category for LLMs. But I don't want things to be more open, and making that point could easily sound like a rebuke that it should be more open. I also don't want to call it open source, because I think open-source software probably does deserve a lot of its positive connotation. They're not releasing that part, the software part, because that would cut into their business, and I think releasing it would be much worse. I think they shouldn't do it. But I'm also not clear on this, because the open-source ML critics say that everyone does have access to the same data set as Llama Two. But I don't know, Llama Two had 7 billion tokens, and that's more than GPT-4, and I don't understand all of the details here. It's possible that the tokenization process was different or something, and that's why there were more. But Meta didn't say what was in the Llama Two data set, and usually there's some description given of what's in the data set, and that led some people to speculate that maybe they're using private data. They do have access to a lot of private data that shouldn't be... it's not just the Common Crawl backup of the Internet. Everybody's basing their training on that, and then maybe some works of literature they're not supposed to; there's a data set there that is in question. But Meta's is bigger than... I think, well, sorry, I don't have a list in front of me, and I'm not going to get stuff wrong, but it's bigger than kind of similar models, and I thought that they have access to extra stuff that's not public. And it seems like people are asking if maybe that's part of the training set.
But yeah, the open-source ML people that I've been talking to would have me believe that anybody who's decent can just access all of the training sets that they've all used.
AARON
As an aside, I tried to download it. And I'm guessing, I don't know, it depends how many people listen to this, but in one sense, for a competent ML engineer, I'm sure open source really does mean that. But then there are people like me. I knew a little bit of R, I think. I feel like I caught the very last boat where I could know just barely enough programming to try to learn more, I guess, coming out of college. A couple of months ago, I tried to do the thing where you download Llama Two, and I tried it all, and it didn't work; now I just have a bunch of empty folders, and I forget, I got some error message or whatever. Then I tried to train my own model on my MacBook. It just printed the same token over and over. That's the only thing the language model would do, because that was the most common token in the training set. So anyway, sorry, this is not important whatsoever.
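[Editor's note: the failure mode Aaron describes is a classic one. A minimal sketch (the toy corpus below is my own, not Aaron's actual data): a language model that has learned nothing about context can do no better than always emitting the corpus's most frequent token.]

```python
from collections import Counter

# Toy corpus of short lines: the newline character ends up being
# the modal (most frequent) character.
corpus = "hi\nok\nno\nup\n"

# A maximally underfit "language model" that ignores context entirely
# minimizes its loss by always predicting the most frequent token.
modal_char, freq = Counter(corpus).most_common(1)[0]
print(repr(modal_char), freq)  # → '\n' 4
```

Sampling from such a model just prints that one token forever, which is exactly the behavior Aaron saw.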
HOLLY
Yeah, I feel torn about this, because I used to be a genomicist, I used to do computational biology, and while it was not machine learning, I used a highly parallel GPU cluster. So I know some stuff about it, and part of me wants to mess around with it, but part of me feels like I shouldn't get seduced by this. I am kind of worried that this has happened in the AI safety community. From the beginning, it was people who were interested in the singularity and then realized there was this problem. And so it's always been people really interested in tech and wanting to be close to it. And I think our direction has been really influenced by wanting to be where the action is with AI development. And I don't know that that was right.
AARON
Not personal, but on an individual level, I guess, I'm not super worried about people like you and me losing the plot by learning more about ML on our personal...
HOLLY
You know what I mean? But it does just feel sort of like I guess, yeah, this is maybe more of like a confession than, like a point. But it does feel a little bit like it's hard for me to enjoy in good conscience, like, the cool stuff.
AARON
Okay. Yeah.
HOLLY
I just see people be so attached to this as their identity. They really don't want to go in the direction of not pursuing tech, because this is kind of their whole thing. What would they do if we weren't working toward AI? This is a big fear that people express to me, not in so many words usually, but they say things like, well, I don't want AI to never get built, about a pause. Which, by the way, just to clear up: my assumption is that, unless society ends for some other reason, a pause would eventually be lifted. It couldn't be forever. But some people are worried that if you stop the momentum now, people are just so Luddite in their insides that we would just never pick it up again, or something like that. And yeah, there's some identity stuff that's been expressed, again not in so many words, about who will we be if we're just sort of activists instead of working on...
AARON
Maybe one thing that we might actually disagree on, and it's kind of important, is whether... so I think we both agree that an AI pause is better than the status quo, at least broadly, whatever, I know that can mean different things. But maybe I'm not super convinced, actually, that... what am I trying to say? Maybe, at least right now, if I could just imagine the world where OpenAI and Anthropic had a couple more years to do stuff and nobody else did, that would be better. I kind of think that they are reasonably responsible actors. I know that's not an actual possibility. But maybe we have a different idea about the degree to which the problem is, say, a thousand different actors having increasingly powerful models, versus the actual state of the art right now being plausibly near a dangerous threshold or something. Does this make any sense to you?
HOLLY
Both those things are... yeah. And this is one thing I really like about the pause position: unlike a lot of proposals that try to allow for alignment, it's not really close to a bad choice. It's just more safe. I mean, it might be forgoing some value if there is a way to get an aligned AI faster. But yeah, I like the pause position because it's robust to this. I can't claim to know more about alignment than OpenAI or Anthropic staff; I think they know much more about it. But I have fundamental doubts about the concept of alignment that make me concerned about, even if things go nominally right, what perverse consequences could follow from that. I have, I don't know, a theory of psychology that's not super compatible with alignment. Humans living in society together are aligned with each other, but the society is a big part of that, and so are the people you're closest to. Also, my background in evolutionary biology has a lot to do with genetic conflict.
AARON
What is that?
HOLLY
Genetic conflict is so interesting. Okay, this is like the most fascinating topic in biology, but it's essentially that in a sexual species, you're related to your close family, you're related to your kin, but you're not the same as them. You have different interests. And mothers and fathers of the same children have largely overlapping interests, but they have slightly different interests in what happens with those children. The payoff to mom is different than the payoff to dad, per child. One of the classic genetic conflict arenas, and one that my advisor, David Haig, worked on, was pregnancy. So mom and dad both want an offspring that's healthy, but mom is thinking about all of her offspring into the future. When mom is giving resources to one baby, that is in some sense depleting her ability to have future children. But for dad, unless the species is...
AARON
Perfectly monogamous, there might be another father in the future.
HOLLY
Yeah, it's in his interest to take a little more. And it's really interesting. The placenta is an androgenetic tissue. This is all kind of complicated, and I'm trying to gloss over some details, but it's guided more by genes that are active when they come from the father; there's this thing called genomic imprinting. And then there's this back and forth, this evolutionary exchange: it's going to serve alleles imprinted from dad to ask for more nutrients, even if that's not good for the mother and not what the mother wants, so the mother's alleles are going to respond. And sometimes you can see alleles that are pretty mismatched: mom's alleles want a pretty big baby and a small placenta, so sometimes you'll see that, and dad's alleles want a big placenta and a smaller baby. These are so cool, but they're so hellishly complicated to talk about, because it involves a bunch of genetic concepts that nobody talks about for any other reason.
AARON
I'm happy to talk about that. Maybe part of it dips below, or into, the weeds threshold, which I've kind of lost track of, but I'm super interested in this stuff.
HOLLY
Yeah, anyway, so the basic idea is just that even the people you're closest with and cooperate with the most... clearly this is predicated on our genetic system, and there are others. And even though ML sort of evolves similarly to natural selection through gradient descent, it doesn't have the same... there's no recombination, there are no genes, so there are a lot of disanalogies there. But the idea that being aligned to our psychology would just be one thing... our psychology is pretty conditional. I would agree that it could be one thing if we had a VNM utility function and you could give it to AGI; I would think, yes, that captures it. But even then, that utility function covers when you're in conflict with someone, it covers different scenarios. And so when people say alignment, I think what they're imagining is an omniscient god who knows what would be best, and that is different from what I think could be meant by just aligning values.
AARON
No, I broadly very much agree, although I do think, at least this is my perception, that based on the, like, '95-to-2010 MIRI corpus or whatever, alignment meant something that was kind of not actually possible in the way that you're saying. But now, it seems like humans have actually been able to get ML models to understand basically human language pretty shockingly well. And so, maybe I'm sort of losing my train of thought a little bit, but I guess maybe alignment and misalignment aren't as binary as they were initially foreseen to be, or something. You can still get a language model, for example, that tries to, well, I guess there are different types of misleading, but be deceptive or tamper with its reward function or whatever. Or you can get one that's earnestly trying to do the thing that its user wants. And that's not an incoherent concept anymore.
HOLLY
No, it's not. Yeah, so the point of bringing up the VNM utility function was that there was sort of, in the past, a way that you could do this mathematically. I mean, of course utility functions are still real, but that's not what we're thinking anymore. We're thinking more like training, getting the gist of what we want, and then getting corrections when you're not doing the right thing according to our values. But yeah, sorry, the last piece I should have said originally is that I think with humans, we're already substantially unaligned, but a lot of how we work together is that we have roughly similar capabilities. And the idea of making AGI is to have much greater capabilities than we have; that's the whole point. I just think when you scale up like that, the divisions in your psyche are going to be magnified as well. And this is an informal view that I've been developing for a long time, but it's actually the low, or similar, capabilities that make alignment possible. There are, of course, mathematical structures that could be aligned at different capabilities, so I guess I have more hope if you could find the utility function that would describe this. But if it's just a matter of acting in distribution, when you increase your capabilities, you're going to go out of distribution, or you're going to go into different contexts, and then the magnitude of mismatch is going to be huge. I wish I had a more formal way of describing this, but that's my fundamental skepticism right now that makes me just not want anyone to build it. I think that you could have very sophisticated ideas about alignment, but when you increase capabilities enough, any little chink is going to be magnified, and it could be... yeah.
AARON
Seems largely right, I guess. You clearly have a better mechanistic understanding of ML.
HOLLY
I don't know. My PIBBSS project was to compare natural selection and gradient descent, and then compare gradient hacking to meiotic drive, which is the most analogous biological phenomenon. It's a very cool thing, too, meiotic drive. So meiosis, I'll start with that for everyone.
AARON
That's one of the cell things.
HOLLY
Yes. Right. So mitosis is the one where cells just divide in your body to make more skin. But meiosis is the special one where you go through two divisions to make gametes. We normally have two sets of chromosomes in each cell, but to make gametes, the chromosomes recombine, so you get new combinations, and then the cells divide again to bring it down to one copy each. Those are your gametes. And the gametes, eggs, come together with sperm to make a zygote, and the cycle goes on. But the point of meiosis is to, I mean, I'm going to just assert some things that are not universally accepted, but I think this is by far the best explanation. The point of it is: you have this huge collection of genes that might have individually different interests, and you recombine them so that they don't know which genes they're going to be with in the next generation. Well, they know which genes they're going to be with, but not which alleles of those genes; I'm maybe simplifying some terminology. Because otherwise, what's to stop a bunch of genes from getting together and saying, hey, if we just hack the meiosis system, the division system, we can get into the gametes at a higher rate than 50%? And then it doesn't matter, we don't have to contribute to making this body. We can just work on that.
AARON
What is to stop that?
HOLLY
Yeah, well, meiosis is to stop that. Meiosis is like a government system for the genes. It makes it so that they can't plan to be with a little cabal in the next generation, because they have some chance of getting separated. And so their best chance is to just focus on making a good organism. But you do see lots of examples in nature where that cooperation is breaking down. Some group of genes has found an exploit, and it is fucking up the species. Species do go extinct because of this. It's hard to witness this happening, but there are several species. There's this species of cedar that has a form of this, I think it's maternal genome elimination. When the zygote comes together, the maternal chromosomes are just thrown away, and it's terrible, because that affects the way that the thing works and grows. It's put them in a death spiral, and they're probably going to be extinct. And they're trees, so they live a long time, but they're probably going to be extinct in the next century. There are lots of ways to hack meiosis to get temporary benefit for genes. This, by the way, I just think is the nail in the coffin: obviously the gene-centered view of evolution is the best.
AARON
As opposed to the sort of standard, I guess, high school or college thing, which would just be like organisms.
HOLLY
Yeah, would be individuals. Not that there's not an accurate way to talk in terms of individuals or even in terms of groups, but to me, conceptually.
AARON
They're all legit in some sense. Yeah, you could talk about any of them. Did anybody take, like, a quark level? Probably not. Whatever comes below the level of a gene, like an individual.
HOLLY
Well, there is argument about what is a gene because there's multiple concepts of genes. You could look at what's the part that makes a protein or you can look at what is the unit that tends to stay together in recombination or something like over time.
AARON
I'm sorry, I feel like I cut you off on something interesting. There was meiosis.
HOLLY
Meiotic drive is the process of hacking meiosis so that a handful of genes can be more represented in the next generation. Otherwise, the only way to get more represented in the next generation is to just make a better organism, to be naturally selected. But you can cheat and be like, well, if I'm in 90% of the sperm, I will be overrepresented in the next generation. And essentially meiosis has to work for natural selection to work in large organisms with a large genome. And then, yeah, in gradient descent, we thought the analogy was going to be with gradient hacking, that there would possibly be some analogy. But I think that the recombination thing is really the key in meiotic drive, and then there's really nothing like that in...
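Holly's "if I'm in 90% of the sperm" scenario can be put in a toy model. This is a speculative sketch, not anything from the conversation: all the parameter values are invented for illustration, and the model is the textbook one-locus setup, not her actual PIBBSS work.

```python
def simulate_driver(generations=50, drive=0.9, fitness_cost=0.3, start_freq=0.01):
    """Toy model of meiotic drive. A 'driver' allele ends up in `drive`
    of a heterozygote's gametes instead of the Mendelian 50%, but
    homozygous carriers pay a fitness cost. Tracks allele frequency
    under random mating (Hardy-Weinberg genotype proportions)."""
    freq = start_freq
    history = [freq]
    for _ in range(generations):
        p, q = freq, 1.0 - freq
        dd, dw = p * p, 2 * p * q          # driver homozygotes, heterozygotes
        w_dd, w_dw, w_ww = 1.0 - fitness_cost, 1.0, 1.0
        mean_w = dd * w_dd + dw * w_dw + q * q * w_ww
        # homozygotes always transmit the driver; heterozygotes at rate `drive`
        freq = (dd * w_dd + dw * w_dw * drive) / mean_w
        history.append(freq)
    return history

history = simulate_driver()
# the driver rises from 1% toward fixation despite the fitness cost
```

Setting `drive` back to the fair 0.5 makes the same costly allele decline instead, which is the cooperation that working meiosis normally enforces.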
AARON
There's no selection per se. I don't know, maybe that doesn't make a whole lot of sense.
HOLLY
Well, I mean, in gradient descent, there's no...
AARON
Gene analog, right?
HOLLY
There's no gene analog, yeah. But it is a hill-climbing algorithm, like natural selection. This is especially easy to see, I think, if you're familiar with adaptive landscapes, which look very similar to, I mean, if you look at a schematic or an illustration of gradient descent, it looks very similar to an adaptive landscape. They're both, like, n-dimensional spaces where you're looking at vectors at any given point. So the adaptive landscape concept that's usually taught for evolution is: on one axis you have fitness, and on the other axis you have, well, you can have a lot of things, but something about the population. And the shape of the curve there tells you which direction evolution, or natural selection, is going to push each generation. And with gradient descent, you're finding the gradient to get to the lowest value of the cost function, to get to a local minimum, at every step, and you follow that. So that part is very similar to natural selection. But the meiosis hacking just has a different mechanism than gradient hacking would. Gradient hacking probably has to be more about, I kind of thought that there was a way for this to work if fine-tuning creates a different compartment where there's not full backpropagation, so there's kind of two different compartments in the layers or something. But I don't know if that's right. My collaborator doesn't seem to think that that's very interesting. I don't know if they...
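The shared "follow the local slope" structure Holly describes can be shown in a few lines. This is just an illustrative sketch: the quadratic objective and step size are arbitrary choices, not anything discussed here.

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Repeatedly step against the gradient: the same local rule as the
    adaptive-landscape picture, except fitness is climbed and cost descended."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)  # move downhill along the cost surface
    return x

# minimize f(x) = (x - 3)^2; its gradient is 2 * (x - 3)
x_min = gradient_descent(lambda x: 2.0 * (x - 3.0), x0=0.0)
# x_min converges to the minimum at x = 3
```

The loop only ever consults the slope at the current point, which is why, like natural selection on an adaptive landscape, it finds local optima rather than global ones.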
AARON
I don't even know what backprop is. That's, like, a term I've heard a billion times.
HOLLY
It's updating all the weights in all the layers based on that iteration.
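Holly's one-line definition, every weight in every layer updated from one iteration's error, can be made concrete on the smallest possible "network." This is a hypothetical sketch; the two-weight linear net is invented for illustration.

```python
def backprop_step(w1, w2, x, target, lr=0.1):
    """One backpropagation iteration on the two-layer linear net y = w2*(w1*x).
    The chain rule carries the output error back through both layers, so
    both weights, not just the last one, get updated from this iteration."""
    h = w1 * x                 # forward pass: hidden activation
    y = w2 * h                 # forward pass: output
    err = y - target           # dLoss/dy for loss = 0.5 * (y - target)**2
    grad_w2 = err * h          # gradient at the output layer
    grad_w1 = err * w2 * x     # gradient propagated back to the first layer
    return w1 - lr * grad_w1, w2 - lr * grad_w2

w1, w2 = 0.5, 0.5
for _ in range(200):
    w1, w2 = backprop_step(w1, w2, x=1.0, target=2.0)
# after repeated steps the net's output w2 * w1 * x approaches the target
```

The "compartment" idea Holly floats would correspond to some weights being excluded from this update, which full backpropagation, as sketched here, does not do.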
AARON
All right. I mean, I can hear those words. I'll have to look it up later.
HOLLY
You don't have to do full... I think there are probably things I'm not understanding about the ML process very well, but I had thought that it was something like, yeah, sorry, it's probably too tenuous. But anyway, I've been working on this a little bit for the last year, but I'm not super sharp on my arguments about that.
AARON
Well, I wouldn't notice. You can kind of say whatever, and I'll nod along.
HOLLY
I've got to guard my reputation, so I can't just say whatever off the cuff anymore.
AARON
We'll edit it so you're correct no matter what.
HOLLY
Have you ever edited the oohs and ums out of a podcast and just been like, wow, I sound so smart? Like, even after you heard yourself the first time. You do the editing yourself, but then you listen to it and you're like, who is this person? They sound so smart.
AARON
I haven't, but actually, the 80,000 Hours After Hours podcast, the first episode of theirs, I interviewed Rob and his producer Keiran Harris, and they have actual professional sound editing. And so, yeah, I went from totally incoherent, not totally incoherent, but sarcastically totally incoherent, to sounding like a normal person because of that.
HOLLY
I used to use it to take my laughter out. I did a podcast when I was an organizer at Harvard, the Harvard Effective Altruism podcast, and I laughed a lot more then than I do now. And we even got comments about it. We got very few comments, but they were like, girl host laughs too much. When I took my laughter out, I would do it myself, and I was like, wow, this does suddenly sound, like, so much more serious.
AARON
Yeah, I don't know. Yeah, I definitely say "like" and "um" too much. So maybe I will try to actually...
HOLLY
Realistically, that sounds like so much effort, it's not really worth it, and nobody else really notices. But I go through periods where I say "like" a lot, and when I hear myself back in interviews, that really bugs me.
AARON
Yeah.
HOLLY
God, it sounds so stupid.
AARON
No. Well, I'm definitely worse. Yeah. I'm sure there'll be a way to automate this. Well, not sure, but probably in the not-too-distant future.
HOLLY
People were sending around, like, transcripts of Trump to underscore how incoherent he is. I'm like, I sound like that sometimes.
AARON
Oh, yeah, same. I didn't actually realize that this is especially bad. When I get this transcribed, I don't know how people... this is a good example: the last 10 seconds, if I get it transcribed, will make no sense whatsoever. But there's a free service called AssemblyAI Playground that does free diarized transcription, and that makes sense. But if we just get this transcribed without identifying who's speaking, it'll be even worse than that. Yeah, actually, this is a totally random thought, but I actually spent a not-zero amount of effort trying to figure out how to combine the highest quality transcription, like Whisper, with the slightly less good diarized transcriptions. You could infer who's speaking based on the lower quality one, but then replace incorrect words with correct words. And I never, I don't know, I'm...
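The combination Aaron describes, speaker labels from a rough diarized transcript plus words from a more accurate unlabeled one, can be sketched with a word-level alignment. This is a hypothetical illustration using Python's standard `difflib`, not anything either transcription service actually does, and the sample transcripts are made up.

```python
import difflib

def merge_transcripts(diarized, accurate_words):
    """Align a rough diarized transcript [(speaker, word), ...] against a more
    accurate word list, keeping the accurate words and carrying over the
    speaker label of whichever rough word each one lines up with."""
    rough_words = [w for _, w in diarized]
    speakers = [s for s, _ in diarized]
    matcher = difflib.SequenceMatcher(a=rough_words, b=accurate_words, autojunk=False)
    merged, last_speaker = [], speakers[0] if speakers else None
    for op, a1, a2, b1, b2 in matcher.get_opcodes():
        if op in ("equal", "replace"):
            for i, j in zip(range(a1, a2), range(b1, b2)):
                last_speaker = speakers[i]
                merged.append((last_speaker, accurate_words[j]))
            for j in range(b1 + min(a2 - a1, b2 - b1), b2):
                merged.append((last_speaker, accurate_words[j]))  # leftover accurate words
        elif op == "insert":
            for j in range(b1, b2):
                merged.append((last_speaker, accurate_words[j]))
        # 'delete': a rough word with no accurate counterpart is dropped
    return merged

diarized = [("AARON", "hello"), ("AARON", "their"), ("HOLLY", "high"), ("HOLLY", "there")]
accurate = ["hello", "there", "hi", "there"]
merged = merge_transcripts(diarized, accurate)
```

`difflib`'s opcode alignment is crude next to timestamp-based alignment, but it shows the shape of the idea: every word in the output comes from the accurate transcript, and every label from the diarized one.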
HOLLY
I'm sure somebody will; that'd be nice. I would do transcripts if it were that easy, but I just never have. But it is annoying, because I do like to give people the chance to veto certain segments, and that can get tough, because even if I talk...
AARON
You have podcasts that I don't know about?
HOLLY
Well, I used to have the Harvard one, which is called The Turing Test. And then, yeah, I do have...
AARON
I probably listened to that and didn't know it was you.
HOLLY
Okay, maybe Alish was the other host.
AARON
I mean, it's been a little while since yeah.
HOLLY
And then on mine I, like, publish audio stuff sometimes, but it's called "low effort," to underscore...
AARON
Oh, yeah, I didn't actually. Okay. Great minds think alike. Low effort podcasts are the future. In fact, this is super intelligent.
HOLLY
I just have them as a way to catch up with friends and stuff and talk about their lives. Recorded conversations are just better in a way: you're more "on," and you get to talk about stuff that's interesting but feels too, like, "well, you already know this" if you're not recording it.
AARON
Okay, well, I feel like there are a lot of people that I interact with casually who, I don't actually... they have these rich online profiles and somehow I don't know about it or something. I mean, I could know about it, but I just never clicked their Substack link for some reason. So I will be listening to your casual...
HOLLY
Actually, in the 15 minutes you gave us when we pushed back the podcast, I found something, like a practice talk I had given, and put it on there. So that's audio that I just posted. But that's for paid subscribers. I like to give them a little something.
AARON
No, I saw that. I did two minutes of research or whatever. Cool.
HOLLY
Yeah. It's a little weird. I've always had that blog as very low effort, just whenever I feel like it. And that's why it's lasted so long. But I did start doing paid and I do feel like more responsibility to the paid subscribers now.
AARON
Yeah. Kind of the reason that I started this is because, I don't know, it's very hard for me to write a low effort blog post. Even the lowest effort one still takes, at the end of the day, like, several hours. I'll think, oh, I'm going to bang it out in half an hour, and no matter what, my brain doesn't let me do that.
HOLLY
That usually takes 4 hours. Yeah, I have like a four hour and an eight hour.
AARON
Wow. I feel like some people... apparently Scott Alexander said that, oh yeah, he just writes as fast as he talks and he just clicks send or whatever. It's like, oh, if I could do that...
HOLLY
If I could do that, I would have written all those paragraphs. It's crazy. Yeah, you see that when you see him in person. Well, I've never met him, I've never talked to him, but I've been to meetups where he was, and I'm at this conference, or not there right now, this week, that he's supposed to be at.
AARON
Oh, Manifest.
HOLLY
Yeah.
AARON
Nice. Okay.
HOLLY
Cool. Lighthaven, they're now calling it, the old Rose Garden. It looks amazing.
AARON
I, like, vaguely noticed. I think I've been to Berkeley, I think, twice. Right? Definitely, this is weird, definitely at least once.
HOLLY
Berkeley is awesome. Yeah.
AARON
I feel like I sort of decided consciously not to try to, or maybe not decided forever, but had a period of time where I was like, oh, I should move there. But then I was like, I think being around EAs and rationalists in high concentration activates my status brain or something, and is personally bad for me. And it's kind of sus that I was born in DC and also went to college here and think maybe it's also a good place to live. But I feel like maybe it's actually just true.
HOLLY
I think it's true. I mean, I always liked the DC EAs. I think they're very sane.
AARON
I think both clusters should be more like the other one a little bit.
HOLLY
I think so. I love Berkeley, and I think I'm really enjoying it because I'm older than you. I think if you have your own personality before coming to Berkeley, that's great, but you can easily get swept up. It's like Disneyland: for all the people I knew on the internet, there's a physical version of them here, and it's all in walking distance. That's all pretty cool. Especially since, during the pandemic, I was not around almost any friends, and now I see friends every day and I get to do cool stuff. And the culture, sometimes it's like a really annoying near miss for me, but a lot of the time it's just like, oh wow, how do I know so many people who are so similar to me? This is great.
AARON
Yeah, that's definitely cool. I've definitely had that at EAGs and stuff. Cool. I feel like you have a party, right?
HOLLY
You don't have to answer that. Robin Hanson's talk, I mean, I probably know what he's going to say. That's the thing when you know someone's rich online profile so well: it can be weird to see them in person and just hear them say only a subset of those things. I'm not saying Robin's like that, I don't know, I haven't seen him enough in person. But Steven Pinker was this way for me. I was in the evolutionary biology department, but it was kind of close to the psychology department, and I went to a lab meeting there and talked to Steve a few times. And then he actually was like, why don't we have a meeting and talk about your career? And I was so excited; I had read every word he'd ever written at that point.
AARON
Um, that's cool.
HOLLY
But I just had nothing to say to him. And then I realized pretty much everything I did say, I knew how he was going to answer, because he's not someone who speaks very spontaneously. He pretty much has cached chunks and loads them. The only spontaneous conversation we ever had was about AI, and it was because we...
AARON
I've listened to a lot of 80K. But, I mean, I did talk to Rob for this other podcast episode, and I don't know, I didn't have that experience, totally. I feel like I didn't know everything he was going to say. But who else would be like that?
HOLLY
Rob has a lot of off the cuff content. He doesn't say everything he thinks.
AARON
True. Yeah. Oh, we didn't talk about, we can cut this part, we didn't talk about whether there's a conspiracy to not fund pause stuff. Not pause research, pause stuff. Do you want to have a comment that we can edit out?
HOLLY
I wouldn't call it a conspiracy, but I just think there's like, a reluctance to do it.
AARON
Yeah.
HOLLY
And some of it is, I think, people just being honest about it. They're like, yeah, it would get in the way of what I'm already doing; I'm trying to have a good relationship with AI companies, and I feel like this would piss them off. I do feel like they're giving their reasoning, and it could make sense. I just think that they are wrong that their whole organization shouldn't be able to fund other causes.
AARON
If this is OpenPhil, I feel like that's not... yeah. If you're a multibillion-dollar grant organization, it's very hard to have a single... yeah. It's not like a person with views; it's not necessarily a single agent. I mean, it kind of acts that way.
HOLLY
Yeah. I don't even know, not sure how much I can say. Yeah. I'm not sure that AI companies expect that. I'm not sure if that's actually been communicated to people at OpenPhil and they are acting accordingly, or if they're just afraid of that and acting accordingly. I just feel like there should be some way for OpenPhil or Dustin to fund advocacy interventions. I think part of it is that the people making those decisions aren't convinced of them, aren't convinced that advocacy is good. And I think there are some things like that. I don't know, it's hard for me to ignore that Holden is married to Daniela Amodei, and they all used to live with his brother-in-law, Dario Amodei of Anthropic. And Daniela is also of Anthropic. I'm not trying to say that there's something sinister going on, but it's just like, who wants to believe that their wife is doing something really bad? Who wants to really go there and consider that possibility? I just think that's concerning. Of course he's probably not thinking as clearly about that as somebody else would. That bothers me. I really was bothered when Holden went on that six-month sabbatical and came back with his playbook for AI safety, and it was just more of the same. He didn't even mention public advocacy. Well, never mind, I'm not sure of the reason he went on that sabbatical, but the news that happened during that sabbatical was all about how the public is kind of into this now. It just seemed like he should at least engage with that, and he didn't. And he even suggested starting a new AI company. It just seemed so dated. It just wasn't considering the strategic position we're in now. And I kind of wondered if that was because, I don't know, he's really bought into what Daniela and Dario think.
AARON
I'm kind of more bought in than you seem to be to the perspective that a cutting-edge AI lab that's much better than its replacement is, like, maybe not bad or something. I don't have a super strong view on this; I haven't thought about it nearly as much as either you or any of the people you just mentioned. But I don't know, it doesn't seem crazy.
HOLLY
Yeah, I guess I look at it as... I don't think it's impossible that somebody could just come up with the answer to alignment, and that they're able to use that AI to make sure that nobody else makes unaligned AI or something like that, and it doesn't become a totalitarian dictatorship or something, all of those things. I don't think it's impossible. I don't even know how unlikely it is. If you told me in ten years that that's how it turned out, I would be like, oh, wow, but I wouldn't be like, no. But as far as the best action to take and to advocate for, I think pause is the best. We don't have to help another AI lab get started, and our opportunity now, before we've gone far enough with AGI pursuits, is to implement a pause and have some barrier, so that if someone breaks the pause, they're not, like, one step away. I do just think that that's overall the best action to take. But if I'm just dispassionately mapping what could happen, I could see a lot of things happening. I could see alignment by default being true. I could see that there's just, like, something we don't get; maybe we are just projecting our own instincts onto AI. That would surprise me less than everything going perfectly, or, like, one singleton forming that was good.
AARON
Yeah, maybe. Also, let me know whenever you want to wrap up. I don't think I've made this a public take, not that it's been a secret, but, maybe even more relative to the AI-safety-pilled, like, Twitter sphere or something, it seems pretty possible that OpenAI is, I was going to say, net good. I don't have problems with that phrase, epistemically.
HOLLY
It seems like they've done a really good job with the product so far. I'll definitely say that.
AARON
Yeah, I'm just, a lot, I don't know. I feel like it's easy to, and I don't think they've acted perfectly, or Anthropic, but it's really easy to, I guess, miss it. It seems like in the world where, I don't know, Meta and some random, whatever, pick your other next five labs or whoever would come along in the next five years, in the world where those companies are at the cutting edge, it seems like a lot worse, for maybe not super explicit reasons, or reasons that are meta.
HOLLY
Just seems like... that's, like, all... frankly, take that out, because I want to be very on the up-and-up with what I'm saying about Meta. But, yeah, I mean, just Yann LeCun's way of talking about it, and there was that article recently that alleged that Zuck says things about just wanting to win, and they think that open source is a way to do it, and that Yann LeCun is not just saying his opinion; it's calculated to undermine all the safety stuff.
AARON
It's so weird. Yeah. Also, another just weird thing is that even though all of this is, in some sense, the extreme cutting edge of capitalism, in another sense, okay, the key movers here have so much money that marginal money probably isn't actually directly good for them. Once you have $100 million or whatever, the next million dollars isn't all that great. And it seems like a lot of them are, if not ethically motivated, motivated by things beyond, sorry, not pure status, but at least beyond pure monetary incentives. Sorry, I sort of lost my train of thought.
HOLLY
I frequently think that people underrate the importance of the motive that people just, like, like doing what they're doing. They like their science, they like their work, and they don't want to think that it's bad. As simple as that: they really enjoy doing their work, and they enjoy the kind of status that it brings, even if the rewards aren't necessarily financial. The dynamic between LeCun and Bengio and Hinton is really interesting, because, I'm just paraphrasing interactions I've remembered, but they seem to be saying, just give it up, Yann, we made a mistake, we need to course correct. And Hinton and Bengio both expressed a lot of remorse; even though they didn't do it on purpose, they feel very sad that their life's work might have this legacy. And they seem to think that Yann LeCun is not dealing with that, and that insisting that nothing's wrong and everything's good, and just pushing harder in the other direction, might be a way of getting away from that possibility. I don't know.
AARON
Yeah, it sort of sucks that the psychology of a couple of dudes is quite important. Yeah. I don't know.
HOLLY
This is another area where my history of animal advocacy is interesting, because I was a kid vegetarian, and so I observed over many years how people would react to that, and especially how they would react when they didn't think they had to make good arguments. It was one of the ways I first got interested in rationality, actually, because adults would just give the worst arguments for eating meat. And I'm seeing that a lot with this. People who are, unquestionably, the smartest people I knew are now saying the dumbest shit, now that pause is on the table. And they're getting better about it; I think they were just taken aback at first. But they would say just, like, the dumbest reasons that it wasn't going to work. It just revealed they obviously didn't want it to be a thing, or they didn't want to think about a new paradigm, or they kind of wanted things to be the way they were, where the focus was on technical stuff. I was having a conversation with somebody about the first instance of the Campaign for AI Safety website; that's the Australian AI safety advocacy group. And the first version of that website was a bit amateurish, I will definitely say, but I was in this thread and the people in it were making fun of it and picking on little things about it that didn't even make any sense. There was one line that was like, ML engineers could be retrained to work on AI safety; "retrained" was the word they used. And this is very similar to vegan advocacy, where you hear this all the time: slaughterhouse workers can be retrained in organic farming. It's a little sillier in that case, very silly.
AARON
In the first case. I don't think it's that silly.
HOLLY
Yeah, but the point of that kind of thing is, we care about the jobs of the people who'd be affected by this. And there are jobs in our thing.
AARON
Silicon Valley ML experts really struggling to make ends meet.
HOLLY
But that line was picked on and made fun of. And actually one person, who is a very smart person and knows a lot about the topic, was like, this would be like forced labor camps. And they might not have said camps; they might have just said forced labor program or something like that. And I was just like, what, dude? That's the most uncharitable reaction I've ever heard. The reason that we can't pause or advocate for AI safety in public is that everybody who wants to do it is just too stupid? And so the only thing we can do is what you're doing, I guess, which I won't say what it is because I want to maintain their anonymity. But it really struck me. That happened in April, and I just thought it was very recognizable to me as the kind of terrible argument that only makes sense if you think everybody's on your side and you can do a status move to keep people out, or to keep something else out. That particular incident influenced me strongly to push for this harder, because, I don't know, if you're just present, making the argument more, even if your argument is stupid, people just don't react that dumb.
AARON
No, I'm glad you updated on that. Like, I do think it's very good that AI safety seems, in EA, pretty high status. I don't know, it depends what status hierarchy you're talking about, but in all relevant domains it seems pretty high status. And actually, it's kind of crazy how smart everybody is. This is my personal, I don't know... I feel like technical AI safety people are really fucking smart. And so, yeah, I've seen some people on Twitter say, only once or twice because it's so far from true, but once or twice, yeah, that they're just not smart enough to work in ML. It's like, okay, that's the farthest possible thing from the truth.
HOLLY
Yeah. The ML people, the open source ML people who are trying to hurt my feelings, definitely want to go in on, like, I'm not smart enough, or my degree is in a dumb subject or something. Yeah, it's great to be smart, but there just are more important things, and I just don't think you have to be a genius to see the logic of what I'm saying. Anyway, what I was saying was, there's a status quo, or a relative status quo, that a lot of people were comfortable with. Even, I think, Yann LeCun was comfortable with being the cool ML genius, and he doesn't want there to be some moral or ethical question with it; at least that's the picture I get from his interaction with the other Turing Award winners. And then within AI safety, people don't really want to think about switching gears, or that maybe the landscape has shifted and now the next move is something that's not the skills they've spent all their time developing, and not the skills that kind of got them into this whole thing. Which isn't to say I want anybody working on technical stuff to quit or something.
AARON
Yeah, the Soylent just adds to the ethos.
HOLLY
Yeah, guys, I've been drinking a Soylent the whole time. It's not that I love them, but I do go through these periods where I feel kind of nauseous and don't want to eat, and, like, Soylent is whatever works.
AARON
Yeah, cool. I think I'm, like, slightly running out.
HOLLY
Of steam, which is like, there by four.
AARON
Okay. Yeah. But you are invited back on pigeon hour anytime. Not literally anytime, but virtually anytime.
HOLLY
We can record one for my thing.
AARON
Oh, yeah, totally. Any closing takes? Thoughts? I don't have any. You don't have to either.
HOLLY
Yeah, it was a fun time. Thank you.
AARON
Oh, cool. Yeah, no, maybe at some other point we can just discuss all your Evo biology takes or whatever, because that was quite interesting.
HOLLY
Oh, yeah. There's maybe going to be this chat thing: LessWrong did, like, the MIRI conversations last year, and they're trying to replicate that for more topics. And there might be one on evolution soon that I might be part of.
AARON
I'll keep an eye on that.
HOLLY
So I don't know if accompanying readings are fun for the podcast. Anyway. Yeah, I should probably go because I also need to pee. I've had three different liquids over here this whole time.
AARON
Okay. That's a great reason. Thank you so much.
HOLLY
Okay, bye. Thank you.
Note: the core discussion on ethics begins at 7:58 and moves into philosophy of language at ~1:12:19
Daniel's stuff:
* AI X-risk podcast
* The Filan Cabinet podcast
* Personal website and blog
Blurb and bulleted summary from Clong
This wide-ranging conversation between Daniel and Aaron touches on movies, business drama, philosophy of language, ethics and legal theory. The two debate major ethical concepts like utilitarianism and moral realism. Thought experiments around rational beings choosing to undergo suffering feature prominently. Meandering tangents explore the semantics of names and references.
* Aaron asserts that total utilitarianism does not imply that any amount of suffering can be morally justified by creating more happiness. His argument is that the affirmative case for this offsetting ability has not been clearly made.
* He proposes a thought experiment: if offered the chance to experience the suffering of all factory-farmed animals in exchange for unlimited happiness, even a perfectly rational being would refuse. This indicates there are some levels of suffering that are not offsettable.
* Aaron links this to experiences like hunger, where you realize suffering can be worse than you normally appreciate. This grounds his intuition that some suffering can't be outweighed.
* Daniel disagrees, believing with the right probabilities and magnitudes of suffering versus happiness, rational beings would take that gamble.
* For example, Daniel thinks the atomic bombing of Japan could be offset by reducing more suffering. Aaron is less sure given the pain inflicted.
* Daniel also proposes offsets for animal farming, but Aaron doesn't think factory farming harm is offsettable by any amount of enjoyment of meat.
* They discuss definitions of rationality and whether evolution pressures against suicide impact the rationality of not killing oneself.
* Aaron ties his argument to siding with what a perfectly rational being would choose to experience, not necessarily what they would prefer.
* They debate whether hypothetical aliens pursuing "schmorality" could point to a concept truly analogous to human morality. Aaron believes not.
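The offsetting disagreement in the bullets above is, at bottom, a disagreement about how to aggregate welfare. As a rough illustration (this sketch is not from the episode; the function names and the threshold value are invented), here is how a plain total view differs from a lexical-threshold view, which is one way to formalize Aaron's claim that some suffering can't be outweighed:

```python
# Toy sketch: two ways to aggregate welfare across beings.
# The total view sums everything, so any suffering is offsettable
# by enough pleasure. The lexical view treats welfare below some
# floor as categorically non-offsettable. The floor of -100 is an
# arbitrary, invented number for illustration only.

SUFFERING_FLOOR = -100.0  # hypothetical "non-offsettable" severity


def total_utilitarian_value(welfares):
    """Classic total view: just add up everyone's welfare."""
    return sum(welfares)


def lexical_value(welfares, floor=SUFFERING_FLOOR):
    """If anyone's welfare falls below the floor, no amount of
    pleasure elsewhere can make the outcome acceptable."""
    if any(w < floor for w in welfares):
        return float("-inf")  # categorically rejected
    return sum(welfares)


# Daniel's gamble, reframed as a population: 1 being suffers
# terribly while 99 have great lives.
outcome = [-150.0] + [50.0] * 99

print(total_utilitarian_value(outcome))  # 4800.0 -> offset, permissible
print(lexical_value(outcome))            # -inf   -> never permissible
```

On the total view the 99 good lives swamp the one bad one; on the lexical view the single sub-floor life vetoes the outcome no matter how many good lives are added, which is the structure of the disagreement Aaron and Daniel spend the episode probing.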
Transcript
(Very imperfect)
AARON
How's it going? It's going all right.
DANIEL
Yeah, so yesterday I saw Barbie and today I saw Oppenheimer, so it's good. Oh, cool. That's cultural.
AARON
Nice, nice.
DANIEL
Do you have takes? Yeah, I thought it was all right. It was a decent view of Oppenheimer as a person. It was like a... how do I put it? I feel like the public tends to be taken in by these physicist figures. You get this with quotes, right? Like, the guy was just very good at having fun with journalists, and now we get these amazing nuggets of wisdom from Einstein. I don't know, I think that guy was just having fun. The thing that I'm coming away with is: I only watched Barbie because it was coming out on the same day as Oppenheimer, right? Like, otherwise it wouldn't have occurred to me to watch it. I was like, yeah, whatever, Barbie is along for the ride, and Oppenheimer is going to be amazing. But in the end, maybe Oppenheimer was a bit better than Barbie, and I'm not even sure of that, actually.
AARON
Yeah, I've been seeing people say that on Twitter. I haven't seen either, but several people I'm following have been saying, like, Barbie was exceptional. And that kind of makes sense, because I'm following all these EA people who probably care more about the subject matter of the latter one. So I kind of believe that Barbie is, like, aesthetically better or something. That's my take. Right.
DANIEL
I guess. Well, if you haven't seen them, I don't want to spoil them for you. They're trying to do different things aesthetically, right? I'm not quite sure I'd want to say one is aesthetically better. In some ways, I think Barbie probably has more aesthetic blunders than Oppenheimer does. Okay. But yeah, if you haven't seen it, I feel like I don't want to spoil it for you.
AARON
Okay. No, that's fine. This probably isn't the most important or the most interesting thing we could be talking about. Is that the bar?
DANIEL
Oh, jeez.
AARON
Oh, no, that's a terrible bar. That was an overstatement. That would be a very high bar. It would also be, like, kind of paralyzing. I don't actually know what that would be, honestly. Probably some juicy social gossip thing. Not that we necessarily have any.
DANIEL
Yeah, I think your interestingness... yeah. The closest-to-gossip thing I saw was, like, did you see this bit of Caroline Ellison's diaries and letters to SBF that was leaked to the.
AARON
No, I don't. Was this like today or recently? How recently?
DANIEL
This was like a few days ago.
AARON
I've been seeing her face on Twitter, but I don't actually think I know anything about this. And no, I would not have.
DANIEL
Background of who she is and stuff.
AARON
Yeah, hold on. Let the audience know that I am on a beach family vacation against my will. Just kidding, not against my will. And I have to text my sister back. Okay, there we go. I mean, I broadly know the FTX story. I know that she was... wait, I'm literally blanking on the... Alameda...
DANIEL
Research. That's the name.
AARON
Okay. Yeah. So she was CEO, right? Yeah. Or like some sort of like I think I know the basics.
DANIEL
Yeah, she was one of the OG Stanford EA people and was around.
AARON
Yeah, that's like a generation. Not an actual generation, like an EA generation. Which is what, like six years or.
DANIEL
I don't know, I've noticed there's this gap between pre-COVID people and post-COVID people. No one left their house. Partly people moved away, but also you were inside for a while and never saw anyone in person. So it felt like, oh, there's this crop of new people or something, whereas in previous years there'd be some number of new people per year and they'd get gradually integrated in. Anyway, all that is to say: I think SBF's side of the legal battle leaked some documents to The New York Times, which were honestly just her saying, like, oh, I feel very stressed, and I don't like my job, and I'm sort of glad the thing has blown up now. It honestly wasn't that salacious. But I think that's, like, the way I get in the loop on gossip: some of it comes via The New York Times.
AARON
I love how it's funny that this particular piece of gossip is, like, running through the most famous and prestigious news organization in the world. Or, like, one of them or something. Yeah. Instead of just being like, oh, yeah, these two people are dating, or whatever. Anyway, okay, I will maybe check that out.
DANIEL
Yeah, I mean, honestly, it's not even that interesting.
AARON
The whole thing is pretty... this is maybe bad, but I can't wait to watch the Michael Lewis documentary, pseudo-documentary, or whatever.
DANIEL
Yeah, it'll be good to read the book. Yeah, it's very surreal. I don't know, I was watching Oppenheimer, right? And I have to admit, part of what I'm thinking is: if humanity survives, there's going to be this style of movie about OpenAI, presumably, right? And I'm like, oh man, it'll be amazing to see my friend group depicted on film. But that is going to happen. It's just going to be about FTX and about how they're all criminals. So that's not great.
AARON
Yeah, actually, everybody dunks on crypto now, and it's, like, low status now or whatever. I still think it's really cool. I never had more than maybe $2,000 or whatever, which is not a trivial... I mean, it's not a large amount of my money either, but it's not, like, nothing. But I don't know, if it wasn't for all the cultural baggage, I feel like I would be a crypto bro, or I would be predisposed to being a crypto bro or something.
DANIEL
Yeah. I should say, I was, like, joking about the greedy crypto people who want their money to not be stolen. I currently have a Monero sticker on the back of my... I don't know, I'm a fan of the crypto space. It seems cool. Yeah, especially the bit that is less about running weird scams. The bit that's running weird scams I'm less of a fan of.
AARON
Yeah. Yes. I'm also anti-scam. Right, thank you. Okay, so the thing that we were talking about last time we talked, which is the thing I think we actually both know stuff about, instead of just, like, repeating New York Times articles, is my nuanced ethics takes and what you think about them. We can talk about that and then just branch off from there.
DANIEL
Yeah, we can talk about that.
AARON
Maybe see where that leads. Luckily I have a split screen up, so I can pull things up. Maybe this is kind of egotistical or something, to center my particular view, but you've definitely given me some of the better pushback or whatever. I haven't gotten that much feedback of any kind, I guess, but it's still interesting to hear your take. So basically, my ethical position, or the thing that I think is true, and which I think is not the default view (I think most people think this is wrong), is that total utilitarianism does not imply that for any amount of suffering that could be created, there exists some other extremely large, arbitrarily large amount of happiness that could also be created which would morally justify the former. Basically.
DANIEL
So you think that even under total utilitarianism there can be big amounts of suffering such that there's no way to morally tip the calculus. However much pleasure you can create, it's just not going to outweigh the fact that you inflicted that much suffering on some people.
AARON
Yeah, and I'd highlight the word inflicted if something's already there and you can't do anything about it, that's kind of neither here nor there as it pertains to your actions or something. So it's really about you increasing, you creating suffering that wouldn't have otherwise been created. Yeah. It's also been a couple of months since I've thought about this in extreme detail, although I thought about it quite a bit. Yeah.
DANIEL
Maybe I should say my contrary view, I guess, when you say that, I don't know, does total utilitarianism imply something or not? I'm like, well, presumably it depends on what we mean by total utilitarianism. Right. So setting that aside, I think that thesis is probably false. I think that yeah. You can offset great amounts of suffering with great amounts of pleasure, even for arbitrary amounts of suffering.
AARON
Okay. I do think that position is, like, the much more common and, I'd even say, default view. Do you agree with that? It's sort of the implicit position of self-described total utilitarians who haven't thought a ton about this particular question.
DANIEL
Yeah, I think it's probably the implicit default. I think it's the implicit default in ethical theory or something. I think that in practice, when you're being a utilitarian, I don't know, normally, if you're trying to be a utilitarian and you see yourself inflicting a large amount of suffering, I don't know. I do think there's some instinct to be like, is there any way we can get around this?
AARON
Yeah, for sure. And to be clear, I don't think this would look like a thought experiment. I think what it looks like in practice, and I will throw in caveats as I see necessary, is like spreading either wild animals or humans or even sentient digital life through the universe. In a not-as-risky way, but still, maybe, say, making multiple copies of humanity or something like that. That would be an example. Or, probably, an example of what creating suffering would be: for example, just creating another duplicate of Earth. Okay.
DANIEL
And that would be so much suffering that we shouldn't even... the pleasures of Earth don't outweigh it?
AARON
Not necessarily, which is kind of a cop-out. But my inclination is that if you include wild animals, the answer is yes, for creating another Earth especially. But I'm much more committed to there being some such amount than to this particular time and place in human history being like that, or whatever.
DANIEL
Okay, can I get a feel of some other concrete cases to see?
AARON
Yeah.
DANIEL
So one example that's on my mind is the atomic bombing of Hiroshima and Nagasaki, right? The standard case for this is like, yeah, what, a hundred-odd thousand people died? Quite terrible, quite awful. And a lot of them died... I guess some people were sort of instantly vaporized, but a lot of people died in extremely painful ways. But the counter-case is like, well, the alternative would have been an incredibly grueling land invasion of Japan, where many more people would have died. Or, you know, regardless of what the actual alternatives were: if you think about the atomic bombings, do you think that's the kind of infliction of suffering where there's just not an offsetting amount of pleasure that could make it okay?
AARON
My intuition is no, that it is offsettable, but I would also emphasize that given the actual historical contingencies, the alternative, the implicit case for the bombing includes reducing suffering elsewhere rather than merely creating happiness. There can definitely be two bad choices that you have to make or something. And my claim doesn't really pertain to that, at least not directly.
DANIEL
Right. Sorry. But when you said you thought your answer was no, you think you can't offset that with pleasure?
AARON
My intuition is that you can, but I know very little about how painful those deaths were and how long they lasted.
DANIEL
Yeah. So the non-offsettable stuff is, like, further out than the atomic bombing.
AARON
That's my guess, but I'm like.
DANIEL
Okay, sure, that's your guess. You're not super confident. That's fine. I guess another example would be the animal farming system. So, as you're aware, tons of animals get kept in farms for humans to eat, and by many accounts, many of them live extremely horrible lives. Is there some amount that humans could enjoy meat such that that would be okay?
AARON
No. So the only reason I'm hesitating is because, like, the question is, like, what the actual alternative is here, but, like, if it's like, if it's, like, people enjoy, like, a meat a normal amount and there's no basically the answer is no. Although, like, what I would actually endorse doing depends on what the alternative is.
DANIEL
Okay, but you think that factory farming is so bad that it's not offsettable by pleasure.
AARON
Yeah, that's right. I'm maybe somewhat more confident than in the atomic bombing case, but again, I don't know what it's like to be a factory-farmed pig. I wouldn't say I'm, like, 99% sure. Probably more than 70% or something. Or 70% conditional on me being right about this thesis, I guess, something like that. Yeah, okay, I don't know. Not 99% sure, but more than 60, probably more than 70% sure or something.
DANIEL
All right. Yeah. So maybe can you tell us a little bit about why you believe there's some threshold past which you can no longer compensate by promoting pleasure?
AARON
Yes. Let me run through my argument and sort of a motivation; the motivation is actually a more direct answer to what you just said. The actual argument (I have a blog post about this that I'll link; it was also part of an EA Forum post that we'll link in the show description) is that the affirmative case for the default view doesn't seem to actually be made anywhere. That's not the complete argument, but it's a core piece of it: this seems to be the default received view, which doesn't mean it's wrong, but does mean we should be skeptical. If you accept that I'm right that the affirmative case hasn't been made (we can talk about that), then you should default to some other heuristic. The heuristic that I assert, and sort of argue for but kind of just assert, is a good one is the following thought experiment: if I were a maximally or perfectly rational being, would I personally choose to undergo this amount of suffering in exchange for later, or earlier, undergoing some arbitrarily large amount of happiness? And I personally have the intuition that there are certainly conceivable, and almost certainly possible, states I could be in such that even as a maximally rational being, I would choose to just disappear and not exist rather than undergo both of those things.
DANIEL
Okay.
AARON
Yeah.
DANIEL
Why do you think that?
AARON
Yeah, good question. I think the answer comes at a couple of different levels. There's the question of why I'm saying it, and why I'm saying it is because I'm pretty sure this is the answer I would actually give if credibly offered this option. But that just pushes the question back: okay, why do I feel that.
DANIEL
Even what option are we talking about here? There exists a thing such that for.
AARON
All pleasures, basically, for example, let's just run with the fact, the assumption that a genie God descends. And I think it's credible, and he offers that I can live the life of every factory, farmed animal in exchange for whatever I want for any amount of time or something like that. Literally, I don't have to give the answer now. It can just be like an arbitrarily good state for an arbitrarily long period of time.
DANIEL
Oh, yeah.
AARON
And not only would I say the words "no, I don't want to do that," I think that the words "no, I don't want to do that" are, selfishly, in a non-pejorative sense, correct. And then there's the question of why I have that intuition. Now I'm introspecting, which is maybe not super reliable. I think part of my intuition, which I can kind of maybe sort of access via introspection, comes from... basically, I'm very fortunate to have had a mostly, relatively comfortable life, as a Westerner with access to painkillers, living in the 21st century. Even still, there have definitely been times when I've suffered, not in a relative sense, but just, to me, in an absolute sense, in a pretty bad way. One example was a backpacking trip; this is the example I give in another blog post I can link. I was on a backpacking trip, and we didn't have enough food, and I was basically very hungry for, like, five days. And I actually think, I'm rambling on, but I'll finish up, I think it's illustrative. I think there's some level of suffering where you're still able to do, at least for me, something like reasoning and intentionally storing memories. One of the memories I tried to intentionally codify via language was like: yeah, this is really bad, this really sucks, or something like that.
DANIEL
What sucked about it? You were just, like, really hungry? Yeah.
AARON
For five days.
DANIEL
Okay. And you codified the thought, like, feeling of this hunger I'm feeling, this really sucks.
AARON
Something like that, right. I could probably explicate it more, but that's basically... okay, actually, hold on. Let me add: not just that it really sucks, but that it sucks in a way that I can't normally appreciate. I don't normally have access to how bad it sucks, and I don't want to forget about this later, or something.
DANIEL
Yeah. The fact that there are pains that are really bad, where you don't normally appreciate how bad they are... it's not clear how that implies non-offsettability.
AARON
Right, I agree. It doesn't.
DANIEL
Okay.
AARON
I do think that's causally responsible for my intuition, which I then link to a heuristic that I argue does constitute an argument, in the absence of other arguments for offsettability.
DANIEL
Yeah. Okay. So that causes this intuition, and then you give some arguments, and the argument is: you think that if a genie offered you to live the lives of all factory-farmed animals in exchange for whatever you wanted, you wouldn't go for that.
AARON
Yes. And furthermore, I also wouldn't go for it if I was much more rational.
DANIEL
If you were rational, yeah. Okay. What do I think about this? One thing I think is that the case of "experience this suffering and then experience this pleasure" is kind of the wrong way to go about this. Because the thing about experiencing suffering is that we don't live in a totally dualistic world where suffering affects only your immaterial mind in a way where afterwards you could just be the same. In the real world, suffering actually affects you, right? Perhaps indelibly. So instead, maybe the thing I'd want to say is: suppose you're offered a gamble where there's a 1% chance that you have to undergo excruciating suffering and a 99% chance that you get extremely awesome pleasures or something.
AARON
Yeah.
DANIEL
And this is meant to model a situation in which you do some action in which one person is going to undergo really bad suffering and 99 other people are going to undergo really great pleasure. And to me, I guess my intuition is that for any bad thing, you could make the probability small enough and you can make the rest of the probability mass good enough that I want to do that. I feel like that's worth it for me. And now it feels a little bit unsatisfying that we're just going that we're both drilling down to, like, well, this is the choice I would make, and then maybe you can disagree that it's the choice you would make. But yeah, I guess about the gambling case, what do you think about that? Let's say it's literally a one in a million chance that you would have to undergo, let's say, the life of one factory farmed animal.
AARON
Yeah.
DANIEL
Or is that not enough? Do you want it to be like, more?
AARON
Well, I guess it would have to be one of the worst factory-farmed animal lives, I think, to make that, like...
DANIEL
Yeah, okay, let's say it's like, maybe literally one in a billion chance.
AARON
First of all, I do agree that these are basically isomorphic, or morally equivalent. If anything, the time ordering in my example messes things up a little bit; I'd be happy to reverse them, or to instead compare one person to 1,000 people. So yeah, you can make the probability small enough that my intuition changes. In fact, at 1% I'm very much like, no, definitely not doing that. At one in a million, I don't know, it's kind of 50-50; I don't have a strong intuition either way. At one in 100 trillion, I have the intuition: you know what, that's just not going to happen. That's my first-order intuition. I do think that considering the case where one being lives both lives, or where you have, say, one being undergoing the suffering and then 100 trillion undergoing the pleasure, makes small probabilities, if you agree that they're sort of isomorphic, more complete or something like that. More real in some... tangible is not the right word, but more, right.
DANIEL
You're less tempted to round it to zero.
AARON
Yeah. And so I tend to think that I trust my intuitions more about reasoning like, okay, there's one person undergoing suffering and 100 trillion undergoing happiness, as it pertains to the question of offsettability, than I trust my intuitions about small probabilities.
DANIEL
I guess that strikes me as strange, because I feel like you're regularly in situations where you make choices that have some small probability of causing you quite bad suffering but a large probability of being fun. Like going to the beach: there could be a shark there. I guess that one's maybe against your will. But you can go to a restaurant and maybe get food poisoning. Whereas how often are you like, oh man, if I flip this switch, one person will be poisoned, but 99 people will?
AARON
Well, then you'd have to think that staying home would actually be safer for some reason, which I don't affirmatively think is true. But this actually does come up for the question of whether you should kill yourself. And there (hopefully this doesn't get censored by Apple or whatever, so nobody do that), I just think that my lizard brain... there's enough evolutionary pressure that I don't trust that I would be rational when it comes to the question of whether to avoid a small chance of suffering by unaliving myself, as they say on TikTok.
DANIEL
Hang on. So there's some evolutionary pressure to make sure you really don't want to kill yourself, but you think that's, like, irrational?
AARON
I haven't actually given this a ton of thought. It gets hard when you loop in altruism, and yeah, there's also some chance of sentience after death; it's not literally zero or something like that. I guess those are kind of cop-outs. So I don't know, I feel like it certainly could be. And I agree this is sort of a strike against my argument or something. You can set up a situation where you have no potential to improve the lives of others, and you can be absolutely sure that you're not going to experience any sentience after death. And then I feel like my argument does kind of imply that, yeah, that's the rational thing to do. I wouldn't do it, right? So I agree, this is a strike against me.
DANIEL
Yeah. I just want to make two points. The first point is methodological: if we're asking which you're likely to be more rational about, gambles with small probabilities of risk versus large rewards, as opposed to situations where you do a thing that affects a large number of people one way and a small number of people another way, I think the gambles are more like decisions you make a bunch and should be rational about. And then the second thing: I took you to be making some argument along the lines of, there's evolutionary pressure to not want to kill yourself, therefore that's a debunking explanation. The fact that there was evolutionary pressure to not kill ourselves means our instinct that we shouldn't kill ourselves is irrational. Whereas I would tend to look at it and say: the fact that there was very strong evolutionary pressure to not kill ourselves is an explanation of why I don't want to kill myself. And I see that as affirming the choice to not kill myself, actually.
AARON
Well, I just want to say, I don't think it's an affirmative argument that it is irrational. I think it opens up the question. It just makes it more plausible, not even necessarily for other reasons, that it is irrational.
DANIEL
Yeah, I take exactly the opposite view. Okay. I think that if I'm thinking about, like, oh, what do I really want? If I consider my true preferences, do I really want to kill myself or something? And then I learn that, oh, evolution has shaped me to not kill myself, I think the inference I should make is like, oh, I guess probably the way evolution did that is that it made it such that my true desires are to not kill myself.
AARON
Yeah. So one thing is, I just don't think preferences have any intrinsic value. So I don't know, we might just... I guess I should ask, do you agree with that or disagree with that?
DANIEL
Do I think preferences have intrinsic value? No. But I think, like, the whole game here is: what do I prefer? Or, what would I prefer if I understood things really clearly?
AARON
Yes. And this is something I didn't really highlight, or maybe didn't say at all: I forget if I really argue it or kind of just assert it, but I at least assert that what you should do under hedonic utilitarianism is maybe not identical to, but exactly the same as, what a rational agent would do, or would prefer, if they were to experience everything that this agent would cause, or something like that. These should give you the exact same answers, is something I believe. Sure. Because I do think we're built to understand, or sort of intuit and reason about, our own preferences.
DANIEL
Kind of, yeah. But broadly, I guess the point I'm making at a high level is just like if we're talking about what's ethical or what's good or whatever, I take this to ultimately be a question about what should I understand myself as preferring? Or to the extent that it's not a question of that, then it's like, I don't know, then I'm a bit less interested in the exercise.
AARON
Yeah. It's not ideal that I appeal to this fake, ideally rational being or something. But here's a reason you might think it's worth thinking about this. Maybe you've heard it: I think Tomasik makes an argument that, at least in principle, you can have a pig that's in extreme pain but still really doesn't want to be killed, doesn't want to be taken out of its suffering, as its true ultimate preference or whatever. And I think this is pretty convincing evidence that you can have a being that's just, like, wrong about what would be good for it, you know what I mean?
DANIEL
Yeah, sorry, I'm not talking about preference versus hedonic utilitarianism or anything. I'm talking about what do I want or what do I want for living things or something. That's what I'm talking about.
AARON
Yeah. That language elicits preferences to me, and I guess the analogous...
DANIEL
But the idea is that the answer to "what I want for living things" could be, like, hedonic utilitarianism, if you see what I mean.
AARON
Or... by that, do you mean what hedonic utilitarianism prescribes?
DANIEL
Yeah, it could be that what I want is that just whatever maximizes beings pleasure no matter what they want.
AARON
Yeah. Okay. Yeah, so I agree with that.
DANIEL
Yeah. So anyway, heading back just to the suicide case right. If I learn that evolution has shaped me to not want to kill myself, then that makes me think that I'm being rational in my choice to not kill myself.
AARON
Why?
DANIEL
Because being rational is something like optimally achieving your goals. And I'm a little bit like I sort of roughly know the results of killing myself, right? There might be some question about like, but what are my goals? And if I learned that evolution has shaped my goals such that I would hate killing myself right, then I'm like, oh, I guess killing myself probably ranks really low on the list of states ordered by how much I like them.
AARON
Yeah, I guess then it seems like you have two mutually incompatible goals. Like, one is staying alive and one is hedonic utilitarianism and then you have to choose which of these predominates or whatever.
DANIEL
Yeah, well, I think that to the extent that evolution is shaping me to not want to commit suicide, it looks like the not killing myself one is winning. I think it's evidence. I don't think it's conclusive. Right. Because there could be multiple things going on. But I take evolutionary explanations for why somebody would want X. I think that's evidence that they are rational in pursuing X rather than evidence that they are irrational in pursuing X.
AARON
Sometimes that's true, but not always. In general it is, yeah. But I feel like moral antirealists... we can also get into that. Joe Carlsmith, when he's making fun of moral realists, I don't know, in a tongue-in-cheek way, in one of his posts explicating his stance on antirealism, basically says moral realists want to say that evolution is not sensitive to moral reasons, and therefore evolutionary arguments... actually, I don't want to quote him from memory. I'll just assert that evolution is sensitive to a lot of things, but one of them is not moral reasons, and therefore evolutionary arguments are not good evidence when it comes to purely, or maybe not even purely, philosophical claims, or object-level moral claims. They can be evidence of something, but not that.
DANIEL
Yeah, I think that's wrong. Why do I think it's wrong? I think it's wrong because, what are we talking about when we talk about morality? We're talking about some logical object that's, like, the completion of a bunch of intuitions we have, right? And those intuitions are the product of evolution. The reason we care about morality at all is because of evolution, under the standard theory that evolution is the reason our brains are the way they are.
AARON
Yeah, I think this is a very strange coincidence, and I am kind of weirded out by this, but yes.
DANIEL
I don't think it's a coincidence. Or, like, not a coincidence.
AARON
So it's not a coincidence conditional on our evolutionary history. It is, like, extremely lucky or something that we... of course we'd find that earthlings wound up with morality and stuff. Well, of course you would.
DANIEL
Wait. Have you read the metaethics sequence by Eliezer Yudkowsky?
AARON
I don't think so. And I respect Eliezer a ton, except I think he's really wrong about ethics and metaethics in a lot of ways. I don't even know if... but I have not read it, so I'm not really giving it full consideration.
DANIEL
Okay. I don't know, I basically take this from my understanding of the metaethics sequence, which I recommend people read. But I don't think it's a coincidence. I don't think we got lucky. I think it's like this: there are some species that evolve, right, and they end up caring about schmorality, right?
AARON
Yeah.
DANIEL
And there are some species that evolve, right, and they end up caring about the prime numbers or whatever, and we evolved and we ended up caring about morality. And it's not like a total... so, okay, partly I'm just like, yeah, each one of them is really glad they didn't turn out to be the other things. The ones that care about...
AARON
Two of them are wrong, though. Two of them are wrong.
DANIEL
Well, they're morally wrong. Two of them do morally wrong things all the time. Right?
AARON
I want to say that I hate when people say that. Sorry. So what I am saying is that you can call those by different names, but if I'm understanding this argument right, they all think that they're getting at the same core concept, which is like: no, what should we do? In some... okay, so does schmorality have any sort of normativity?
DANIEL
No, it has schmormativity.
AARON
Okay, well, I don't know what schmormativity is.
DANIEL
You know how normativity is about promoting the good? I feel like that's good. Schmormativity is about promoting the schmood.
AARON
Okay, so it sounds like that's just normativity, except it's normativity about different propositions. That's what it sounds like.
DANIEL
Well, basically... I don't know. Instead of these schmaliens... wait, no, they're aliens, they're not schmaliens. They just do a bunch of schmood things, right? They engage in projects, they try and figure out what the schmood is. They pursue the schmood, and then they look at humans and they're like: oh, these humans are doing morally good things. That's horrible. I'm so glad that we pursue the schmood instead.
AARON
Yeah, I don't know if it's incoherent. I don't think they're being incoherent. Your description of the hypothetical... let's just take for granted that whatever's in the thought experiment is in fact happening. I think your description is not correct. And the reason it's not correct is because there is, like... what's a good analogy? So when it comes to abstract concepts in general, it is very possible for... okay, I feel like it's hard to explain directly, but here's an analogy: you can have two different people who have very different conceptions of justice, but who fundamentally are earnestly trying to get at the same thing. Maybe justice isn't well defined, or isn't... actually, I should probably have come up with a good example here. But you know what, I'm happy to change the word for what I use as morality or whatever, but it has the same core meaning, which is like: okay, really, what should you do at the end of the day?
DANIEL
Yeah.
AARON
What should you do?
DANIEL
Whereas they care about schmorality, which is what they schmould do, which is a different thing. They have strong desires to do what they schmould do.
AARON
I don't think it is coherent to say that there are multiple meanings of the word should or multiple kinds. Yeah.
DANIEL
No, there aren't.
AARON
Sorry. There aren't multiple meanings of the word should. Fine.
DANIEL
There's just a different word, which is schmood, which means something different, and that's what their desires are pegged to.
AARON
I don't think it's coherent, given... the entire picture, I think, is incoherent. Given everything else, besides the word schmood, it is incoherent to assert that there is something broadly... not analogous, maybe isomorphic... to normativity, or the word should. Yeah. There is only... what's... yeah, I feel like I'm not gonna be able to verbalize it super well. Yeah. Can you take something... can you pick...
DANIEL
A sentence that I said that was wrong or that was incoherent?
AARON
Well, it's all wrong because these aliens don't exist.
DANIEL
The aliens existed.
AARON
Okay, well, then we're debating... I actually don't know. It depends. You're asserting something about their culture and psychology, and then the question is: are you right or wrong about that? If we just take for granted that you're right, then you're right. I'm saying, no, you can't be sure. So conditional on being right, you're right. Then there's a question of, okay, what is the probability? So, conditional on aliens with something broadly... are you willing to accept this phrase, "something broadly analogous to morality"? Is that okay?
DANIEL
Yeah, sure.
AARON
Okay. So if we accept that there's aliens with something broadly analogous to morality, then you want to say that they can have not only a different word, but truly a pointer to a different concept. And I think that's false.
DANIEL
So you think that in conceptual space, there's morality and that there's, like, nothing near it for miles.
AARON
Yeah, basically. At least when we're talking about the... at the pre-conclusion stage. So before you get to the point where you're like, oh yeah, I'm certain that the answer is just that we need to make as many tennis balls as possible or whatever... the general thing of: okay, broadly, what is the right thing to do? What should I do? Would it be good for me to do this cluster of things? Yeah, that is, like, miles from everything else.
DANIEL
Okay. I think there's something true to that. I agree with it in some ways, and on others... my other response is, I think it's not a total coincidence that humans ended up caring about morality. If you look at these evolutionary arguments for why humans would be motivated to pursue morality, they rely on very high-level facts. Like: there are a bunch of humans around. There's not one human who's a billion times more powerful than everyone else. We have language. We talk through things. We reason. We need to make decisions. We need to cooperate in certain ways to produce stuff. And it's not about the fact that we're bipedal or something. So in that sense, I think it's not a total coincidence that we ended up caring about morality. And because that's true, you could maybe say you couldn't slightly tweak our species so that it cared about something other than morality, which is kind of like saying that there's nothing that close to morality in concept space.
AARON
But I think I misspoke earlier. What I should have said is that it's very weird that we care... that most people at least partially care about suffering and happiness. I think that's just a true statement. Sorry, that is the weird thing. Why is it weird? The weird thing is that it happens to be correct, even though I only have...
DANIEL
What do you mean, it's correct?
AARON
Now we have to get okay, so this is going into moral realism. I think moral realism is true, at least.
DANIEL
Sorry, what do you mean by moral realism? Wait... people mean different things by moral realism.
AARON
Yes. So I actually hold sort of a weak version of moral realism, which is not that normative statements are true, but that there is, like, an objective... that you can rank hypothetical states of the world in an ordinal way, such that one is objectively better than another.
DANIEL
Yes. Okay. I agree with that, by the way. I think that's true. Okay.
AARON
It sounds like you're a moral realist.
DANIEL
Yeah, I am.
AARON
Okay. Oh, really? Okay. I don't know. I thought you weren't. Okay, cool.
DANIEL
Lots of people in my reference class aren't. I think most Bay Area rationalists are not moral realists, but I am.
AARON
Okay. Maybe I was confused. Okay, that's weird. Okay. Sorry about that. Wait, so what do I mean by it happens to be true? It's like it happens to coincide with yeah, sorry, go ahead.
DANIEL
You said it happens to be correct that we care about morality, or that we care about suffering and pleasure and stuff.
AARON
Maybe that wasn't the ideal terminology. So, like, it's not morally correct... the caring about it isn't the morally correct thing. It seems sort of like the caring is instrumentally useful in promoting what happens to be legitimately good, or something like that.
DANIEL
But I think, like... so the aliens could say a similar thing, right? They could say, like: oh hey, we've noticed that we all care about schmorality. We all really care about promoting schmeasure and avoiding schmuffering, right? And they'd say, like... yeah, what's wrong?
AARON
I feel like it's not... maybe I'm just missing something, but at least to me, it's only adding to the confusion to talk about two different concepts of morality, rather than just, like: okay, this alien thinks that you should tile the universe with paperclips, or something like that. Or even, more plausibly, justice is like that. Yeah, I guess this gets back to whether there's only one concept anywhere near that vicinity in concept space, or something. Maybe we disagree about that. Yeah.
DANIEL
Okay. If I said paperclips instead of schmorality, would you be happy?
AARON
Yes.
DANIEL
I mean, cool. Okay.
AARON
Thank you for doing the morally correct thing and making me happy.
DANIEL
I strive to. But take the paperclipper species, right? What they do is they notice: hey, we really care about making paperclips, right? And hey, the fact that we care about making paperclips, that's instrumentally useful in making sure that we end up making a bunch of paperclips, right? Isn't it an amazing coincidence that our desires were structured in this correct way that ends up with us making a bunch of paperclips? And it's like: no, total coincidence. That's just what you cared about.
AARON
You left out the part where they assert that they're correct about this. That's the weird thing.
DANIEL
What proposition are they correct about?
AARON
Or sorry, I don't think they're correct implicitly.
DANIEL
What proposition do they claim they're correct about?
AARON
They claim that the world in which there are many paperclips is better than the world in which there are fewer paperclips.
DANIEL
Oh, no, they just think it's more paperclipy. They don't think it's better. They don't care about goodness. They care about paperclips.
AARON
So it sounds like we're not talking about anything remotely like morality, then, because I could say, yeah, morality, morality. It's pretty airy. It's a lot of air in here. I don't know, maybe I'm just confused.
DANIEL
No, what I'm saying is... so you're like: oh, it's this total coincidence, we got so lucky, it's so weird that humans ended up caring about morality. And it's like: well, we had to care about something, right? Like, anything we don't care about...
AARON
Oh, wow, sorry, I misspoke earlier. And I think that's generating some confusion. I think it's a weird coincidence that we care about happiness and suffering.
DANIEL
Happiness and suffering, sorry. Yeah, but mutatis mutandis, I think you want to say that's, like, a weird coincidence. And I'm like: well, we had to care about something.
AARON
Yeah, but it could have been like, I don't know, could it have been otherwise, right? At least conceivably it could have been otherwise.
DANIEL
Yeah, the paperclip guys, they're like, conceivably, we could have ended up caring about pleasure and suffering. I'm so glad we avoided that.
AARON
Yeah, but they're wrong and we're right.
DANIEL
Right about what?
AARON
And then maybe I don't agree. Maybe this isn't the point you're making. I'm sort of saying that in a blunt way to emphasize it. I feel like people should be skeptical when I say, like okay, I have good reason to think that even though we're in a very similar epistemic position, I have reason to believe that we're right and not the aliens. Right. That's like a hard case to make, but I do think it's true.
DANIEL
There's no proposition that the aliens and us disagree on.
AARON
Yes, there is: the intrinsic value of pleasure and happiness.
DANIEL
Yeah, no, they don't care about value. They care about schmalue, which is just...
AARON
Like, how many paperclips there are. I don't think that's coherent. I don't think they can care about value.
DANIEL
Okay.
AARON
They can, but only insofar as it's a pointer to the exact same not exact, but like, basically the same concept as our value.
DANIEL
So do you reject the orthogonality thesis?
AARON
No.
DANIEL
Okay. So think about a superintelligence.
AARON
Yeah.
DANIEL
So I take the orthogonality thesis to mean that really smart agents can be motivated by approximately any desires. Does that sound right to you?
AARON
Yeah.
DANIEL
So what if the desire is like, produce a ton of paperclips?
AARON
Yeah, it can do that descriptively. It's not morally good.
DANIEL
Oh, no, it's not morally good at all. They're not trying to be morally good. They're just trying to produce a bunch of paperclips.
AARON
Okay, in that case, we don't disagree. Yeah, I agree. This is like a conceivable state of the world.
DANIEL
Yeah. But what I'm trying to say is, when you say it's weird that we got lucky... the reason you think it's weird is that you're one of the humans who cares about pleasure and suffering. Whereas if you were one of the aliens who cared about paperclips, the analogous Schmaaron, instead of Aaron, would be saying: oh, it's crazy that we care about paperclips, because that actually causes us to make a ton of paperclips.
AARON
Do they intrinsically care about paperclips, or is it a means to an end?
DANIEL
Intrinsically, like, same as in the orthogonality thesis.
AARON
Do they experience happiness because of the paperclips or is it more of a functional intrinsic value?
DANIEL
I think they probably experience happiness when they create paperclips, but they're not motivated by the happiness. They're motivated by like, they're happy because they succeeded at their goal of making tons of paperclips. If they can make tons of paperclips but not be happy about it, they'd be like, yeah, we should do that. Sorry. No, they wouldn't. They'd say, like, we should do that and then they would do it.
AARON
Would your case still work if we just pretended that they're not sentient?
DANIEL
Yeah, sure.
AARON
Okay. I think this makes it cleaner for both sides. Yeah, in that case, yes. So I think the thing that I reject is that there's an analog term that's anything like morality in their universe. They can use a different word, but it's pointing to the same concept.
DANIEL
When you say "anything like morality"... so the shared concept... sorry, the shared property between morality and paperclip promotion is just that you have a species that is dedicated to promoting it?
AARON
I disagree. I think morality is about goodness and badness.
DANIEL
Yes, that's right.
AARON
Okay. And I think it is totally conceivable... not even conceivable. So humans... wait, what's a good example? In some sense I seem to intrinsically value... I don't know if this is a good example, but let's run with it... I intrinsically value regulating my heartbeat. It happens to be true that this is conducive to my happiness and, at least locally, non-suffering. But even if it weren't, my brainstem would still try really hard to keep my heart beating, or something like that. I reject that there's any way in which promoting heart-beating-ness is an intrinsic moral or schmoral value. It could be hypothesized as one, but it is not in fact one, or something like that.
DANIEL
Okay.
AARON
Likewise, these aliens could claim that making paperclips is intrinsically good. They could also just make them and not make that claim. And those are two very different things.
DANIEL
They don't claim it's good. They don't think it's good.
AARON
They claim it's schmood.
DANIEL
Which they prefer. Yeah, they prefer.
AARON
They don't. I think that is also incoherent. I think there is, like, one concept in that space. Because... wait, I feel like, also, this just has to cash out in the real world at some point, right? Unless we're talking about really speculative... not even physics.
DANIEL
What I mean is they just spend all of their time promoting paperclips and then you send them a copy of Jeremy Bentham's collected writings, they read it and they're like all right, cool. And then they just keep on making paperclips because that's what they want to do.
AARON
Yeah. So descriptively.
DANIEL
Sure.
AARON
But they never claim that. It's like we haven't even introduced objectivity to this example. So did they ever claim that it's objectively the right thing to do?
DANIEL
No, they claim that it's objectively the paperclipy thing to do.
AARON
I agree with that. It is the paperclippy thing to do.
DANIEL
Yeah, they're right about stuff. Yeah.
AARON
So they're right about that. They're just not... all right. So I do think this all comes back down to the question of whether there are analogous concepts near-ish to morality that an alien species might point at. Because if there aren't, then paperclippiness is just, like, a totally radically different type of thing.
DANIEL
But why does it... like, when did I say that they were closely analogous? This is what I don't understand.
AARON
It seems to be insinuated by the semantic closeness of the words.
DANIEL
Oh yeah, whatever. When I was making it a similar-sounding word, all I meant to say is that it plays a similar role in their culture as morality plays in our culture. Sorry... in terms of their motivations, I should say. Oh, yeah.
AARON
I think there are plenty of human cultures that are getting at morality. Yeah. Especially historically, plenty of human cultures that are getting at the same core concept of morality but are just wrong about it.
DANIEL
Yeah, I think that's right.
AARON
Fundamentalist religious communities or whatever... you can't just appeal to, like: oh, they have some sort of weird, kind-of-similar-but-very-different thing called schmorality.
DANIEL
Although, I don't know... I actually think that... okay, backing up. All I'm saying is that beings have to care about something, and we ended up caring about morality. And I don't think that's super surprising or coincidental or whatever. A side point I want to make is that I think if you get super into being religious, you might actually start referring to a different concept by "morality." How familiar are you with classical theism?
AARON
That's not a term that I recognize, although I took a couple of theology classes, so maybe more of them if I hadn't done that.
DANIEL
Yeah, so classical theism is a view about the nature of God... I'm going to do a bad job of describing it. I'm not a classical theist, so you shouldn't take classical theist doctrine from me. But it's basically that God is the being whose essence is his existence, or something like that. It's weird. But anyway, there's some school of philosophy where they're like: yeah, there's this transcendent thing called God, and we can know God exists from first principles. And in particular, their account of goodness... so how do you get around the Euthyphro dilemma, right? Instead of something like divine command theory, what they say is that when we talk about things being good, "good" just refers to the nature of God. And if you really internalize that, then I think you might end up referring to something different than actual goodness. Although I think there's probably no such being as God in the classical theist sense.
AARON
Yeah. So they argue that what we mean by good is this other...
DANIEL
Concept, yeah. They would say that when everyone talks about good, what they actually mean is "pertaining to the divine nature," but we just didn't really know that we meant that, the same way that when we talked about water, we always meant H2O, but we didn't used to know that.
AARON
I'm actually not sure... I'm very unconfident, but I kind of want to bite the bullet and say: okay, fine, in that case, yeah, I'm talking about the divine nature, but we just have radically different understandings of what the divine nature is.
DANIEL
You think you're talking about the divine nature.
AARON
Right?
DANIEL
Why do you think that?
AARON
Sorry, I think I very slightly was not quite pedantic enough. Sorry, bad cell phone or whatever. Once again, not very confident at all.
DANIEL
But.
AARON
I think that I'm referring to the divine nature, but what I mean by "the divine nature" is that which these fundamentalist people are referring to. So I want to get around the term and say: okay, whatever these fundamentalists are referring to, I am also referring to that.
DANIEL
Yeah, I should say classical theism is slightly different... when people say fundamentalists, they often mean a different corner of Christian space than classical theists. Classical theists are, like, Ed Feser, esoteric Catholics or something. Yeah, they're super into it.
AARON
Okay, anyway yes, just to put it all together, I think that when I say morality, I am referring to the same thing that these people are referring to by the divine nature. That's what it took me like five minutes to actually say.
DANIEL
Oh yeah, so I don't think you are. When they refer to the divine nature, what they at least think they mean is... they think the divine is sort of defined by the fact that its existence is logically necessary, that it couldn't conceivably not have its various attributes, that it is the primary cause of the world and the sustainer of all things. And I just really doubt that the nature of that thing is what you mean by morality.
AARON
No, those are properties that they assert. But I feel like... tell me if I'm wrong, but my guess is that if one such person were to suddenly come to believe that actually all of that's right, except it's not actually logically necessary that the divine nature exists... it happens to be true, but it's not logically necessary... they would still be sort of pointing at the same concept. And I just think, yeah, it's like that, except all those listed properties are wrong.
DANIEL
I think if that were true, then classical theism would be false.
AARON
Okay.
DANIEL
So maybe in fact you're referring to the same thing that they actually mean by the divine nature, but what they think they mean is this classical theistic thing. Right. And it seems plausible to me that some people get into it enough that what they actually are trying to get at when they say good is different than what normal people are trying to get at when they say good.
AARON
Yeah, I don't think that's true. Okay, let's set aside the word morality, because, especially in circles that we're in, it has a strong connotation with a sort of modern-ish analytic philosophy, maybe some other things in that category.
DANIEL
Your video has worsened, but your sound is back.
AARON
Okay, well... okay, I'll just keep talking. All right, so you have the divine nature and morality, and maybe other things that are like those two things but still apart from them. So there's that class of things, and then there's the question of: okay, maybe anybody who thinks that there are any true statements about something broadly in the vicinity of goodness in idea space is pointing at the meta-level of that, or at whichever one of those is truly correct, or something. This is pretty speculative. I have not thought about this. I'm not super confident.
DANIEL
Yeah, I think I broadly believe this. I think this is right about most people when they talk. But you could imagine, even with utilitarianism, right? Imagine somebody getting super into the weeds of utilitarianism. They live utilitarianism twenty-four seven. And then maybe at some point they just substitute in utilitarianism for morality. Now when they say morality, they actually just mean utilitarianism, and they're just discarding the ladder of broad concepts and intuitions behind it. Such a person might just... I don't know, I think that's the kind of thing that can happen. And then you might just mean a...
AARON
Different thing by the word. I don't know if it's a bad thing, but I feel like I do this when I say, oh, X is moral to do, or morally good to do. It's like: what's the real semantic relationship between that and "it's correct on utilitarianism to do"? I feel like they're not defined as the same, but they happen to be the same, or something. Now we're just talking about how people use words.
DANIEL
Yeah, they're definitely going to happen to be the same in the case that utilitarianism is like the right theory of morality. But you could imagine that. You could imagine even in the case where utilitarianism was the wrong theory, you might still just mean utilitarianism by the word good because you just forgot the intuitions from which you were building theory of morality and you're just like, okay, look, I'm just going to talk about utilitarianism now.
AARON
Yeah, this could happen. I feel like this is a cop-out and, like, a non-answer, but getting into the weeds of the philosophy of language, and what people mean by concepts and words, and the true nature of concepts... it's just not actually that useful. Or maybe it's just not that interesting to me. I'm glad that somebody thought about it, ever.
DANIEL
I think this can happen, though. I think this is actually a practical concern. Right. Okay. Utilitarianism might be wrong, right? Does that strike you as right? Yeah, I think it's possible for you to use language in such a way that if utilitarianism were wrong, what that would mean is that in ordinary language, goodness, the good thing to do is not always the utilitarian thing to do, right? Yes, but I think it's possible to get down an ideological rabbit hole. This is not specific to utilitarianism. Right. I think this can happen to tons of things where when you say goodness, you just mean utilitarianism and you don't have a word for what everyone else meant by goodness, then I think that's really hard to recover from. And I think that's the kind of thing that can conceivably happen and maybe sometimes actually happens.
AARON
Yeah, I guess as an empirical matter... an empirical psychological matter... do people's brains ever operate this way? Yes. I don't really know where that leaves us. Maybe we should move on to a different topic or whatever.
DANIEL
Can I just say one more thing?
AARON
Yeah, totally.
DANIEL
First, I should just give this broad disclaimer that I'm not a philosopher and I don't really know what I'm talking about. But the second thing is that particular final point. I was sort of inspired by a paper I read. I think it's called, like, do Christians and Muslims worship the same god? Which is actually a paper about the philosophy of naming and what it means for proper names to refer to the same thing. And it's pretty interesting, and it has a footnote about why you would want to discourage blasphemy, which is sort of about this. Anyway.
AARON
No, I personally don't find this super interesting. I can sort of see how somebody would and I also think it's potentially important, but I think it's maybe yeah.
DANIEL
Actually it's actually kind of funny. Can I tell you a thing that I'm a little bit confused about?
AARON
Yeah, sure.
DANIEL
So philosophers... there's this branch of philosophy that's the philosophy of language, and in particular the philosophy of reference, right? Like, what does it mean when we say a word refers to something in the real world? And some subsection of this is the philosophy of proper names, right? So when I say "Aaron is going to the...", like, what do I mean by "Aaron"? Who is that? Like, if it turned out that these interactions I'd been having with an online... if all of them were faked, but there was a real human named Bergman, would that count as making that sentence true, or whatever? Anyway, there's some philosophy on this topic, and apparently we didn't need it to build a really smart AI. No AI person has studied this. Essentially, these theories are not really baked into the way we do AI these days.
AARON
What do you think that implies or suggests?
DANIEL
I think it's a bit confusing. Naively, you might have thought that AIs would have to refer to things, and naively, you might have thought that in order for us to make that happen, we would have had to understand the philosophy of reference, or of naming, at least on some basic level. But apparently we just didn't have to. Apparently we could just... not have that.
AARON
In fact, just hearing your description, my initial intuition is like, man, this does not matter for anything.
DANIEL
Okay. Can I try and convince you that it should matter? Yeah, tell me how I fail to convince you.
AARON
Yeah, all right.
DANIEL
Humans are pretty smart, right? We're like the prototypical smart thing. How are humans smart? I think one of the main ingredients of that is that we have language. Right?
AARON
Yes. Oh, and by the way, this gets to the unpublished episode with Nathan Barnard.
DANIEL
Coming out... unpublished? I think I've seen an episode with him.
AARON
Oh, yeah. This is the second one, because he's...
DANIEL
Been very... oh, exciting. All right, well, maybe all this will be superseded by this unpublished episode.
AARON
I don't think so. We'll see.
DANIEL
But okay, we have language, right. Why is language useful? Well, I think it's probably useful in part because it refers to stuff. When I say stuff, I'm talking about the real world, right?
AARON
Yes.
DANIEL
Now, you might think that in order to build a machine that was smart and wielded the language usefully, it would also have to have language. We would have to build it such that its language referred to the real world. Right. And you might further think that in order to build something that use languages that actually succeeds at doing reference, we would have to understand what reference was.
AARON
Yes. I don't think that's right. Because insofar as we can get what we call useful language-in, language-out without any direct interaction, without the AIs directly manipulating the world... or maybe not directly, but without using language-understanders, beings that do have this reference property, for whom language means something... then this would be right. But because we have ChatGPT, the use comes from it giving language to humans, and the humans have reference to the real world. You need some connection to reference, but it doesn't have to be at every level, or something like that.
DANIEL
Okay, so do you think that... suppose we had something that was like ChatGPT, but we gave it access to some robot limbs, and it could pick up... maybe it could pick up apples and throw the apples into the furnace powering its data center. We give it these limbs and these actuators, sort of analogous to how humans interact with the world. Do you think that in order to make a thing like that work, we would need to understand the philosophy of reference?
AARON
No. I'm not sure why.
DANIEL
I also don't know why.
AARON
Okay, well, evolution didn't understand the philosophy of reference. I don't know what that tells us.
DANIEL
I actually think this is, like, my lead answer: we're just making AIs by randomly tweaking them until they work. That's my rough summary of stochastic gradient descent. In some sense, this does not require you to have a strong sense of how to implement your AIs. Maybe that's why we don't need to...
AARON
Understand philosophy. Or the SGD process is doing the philosophy, in some sense. That's kind of how I think about it now. I guess during the SGD process you're tweaking, basically, the algorithm, and at the end of the day, in order to, say, pick up marbles or something, reference to a particular marble or the concept of a marble... probably both the concept and a particular marble are going to be encoded. The concept of marble, if that's how it was trained, will be encoded in the weights themselves, you know what I mean? But then a particular marble, when it's seen through vision, will be encoded in a particular layer's activations.
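The picture Aaron and Daniel are gesturing at, SGD tweaking weights until the useful thing ends up encoded in them, can be sketched in a few lines. This is a toy illustration only, not anything from the episode: the one-weight model, the data, and the learning rate are all invented for the example.

```python
import random

# Stochastic gradient descent in miniature: repeatedly nudge a weight
# in whatever direction reduces the error on one randomly chosen example.
# "Randomly tweaking it until it works," except each tweak follows the
# local gradient rather than being blind.

def sgd_fit(data, lr=0.1, epochs=100):
    w = random.uniform(-1, 1)  # start from an arbitrary weight
    for _ in range(epochs):
        x, y = random.choice(data)  # one (input, target) example
        pred = w * x                # the model: y approximated by w * x
        grad = 2 * (pred - y) * x   # gradient of the squared error
        w -= lr * grad              # tweak the weight downhill
    return w

# Fit y = 3x. Nothing in the loop "knows" the target concept in advance,
# but the weight converges toward 3 anyway.
data = [(x, 3 * x) for x in [1.0, 2.0, 3.0]]
print(sgd_fit(data))
```

In the loose sense discussed above, the "concept" the system needed ends up encoded in the trained weight without anyone having specified it, only the tweak-and-check loop.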
DANIEL
Or something like that, maybe. Yeah, I guess what we're getting at is something like: look, meaning is a thing you need in order to make something work, but if you can just directly have a thing that gradually gets itself to work, that will automatically produce meaning, and therefore we don't have to think about it.
AARON
It will have needed to figure out meaning along the way.
DANIEL
Yeah, but we won't have needed to figure it out. That'll just happen in the training process.
AARON
Yeah. I mean, in the same way that everything happens in the training process. Yeah, that's where all the magic happens.
DANIEL
All right, so do you want to hear my new philosophy of language proposal?
AARON
Yes.
DANIEL
Yeah. So here's the new proposal. I think the theory of reference is not totally solved to everyone's satisfaction. So what we're going to do is we're going to train Chat GPT to manipulate objects in the physical world, right? And then we're going to give the weights to the philosophers. We're also going to give it like, a bunch of the training checkpoints, right?
AARON
And then they're going to look at.
DANIEL
This, and then they're going to figure out the philosophy of meaning.
AARON
What are training checkpoints?
DANIEL
Oh, just like the weights at various points during training.
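Daniel's definition can be made concrete with a minimal hypothetical sketch (plain Python, toy model) of saving checkpoints during training: snapshots of the weights taken every few steps.

```python
import random

# Hypothetical toy sketch: "checkpoints" are saved copies of the model's
# weights at various points during training. The "model" is one weight
# being fit to y = 3x by SGD; we snapshot it every `every` steps.
def train_with_checkpoints(steps=100, every=25, lr=0.1, seed=0):
    rng = random.Random(seed)
    w = 0.0
    checkpoints = {0: w}               # checkpoint before any training
    for step in range(1, steps + 1):
        x = rng.uniform(0.5, 1.5)
        y = 3.0 * x                    # target function
        w -= lr * 2 * (w * x - y) * x  # one SGD step on squared error
        if step % every == 0:
            checkpoints[step] = w      # snapshot the weights
    return checkpoints

ckpts = train_with_checkpoints()
# early checkpoints sit far from 3.0; later ones approach it
```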
AARON
Okay, and your proposal is that the philosophers are going to... well, we haven't solved mech interpretability anyway, right? Yeah. I feel like this is empirically not possible, but conceptually, maybe the outcome won't be, like, solving meaning, but either solving meaning or deciding that it was a confused question or something, that there was no answer, but something like a resolution.
DANIEL
Yeah. I don't know. I brought this up as, like, a reductio ad absurdum, or sort of to troll. But actually, if we get good enough at mechanistic interpretability, maybe this does just shine light on the correct theory of reference.
AARON
I mean, I'm just skeptical that we need a theory of reference. I don't know, it seems kind of like philosopher word games to me or something like that. I mean, I can be convinced otherwise. It's just that I haven't seen that.
DANIEL
I'm not sure that we need it, right. I think we get by fine without an explicit one, but I don't think you can tell.
AARON
Yes. Okay.
DANIEL
Can I tell you my favorite? It's sort of like a joke. It's a sentence. Yeah. All right, so here's the sentence. You know Homer, right? Like, the Greek poet who wrote the Iliad and the Odyssey.
AARON
Oh, is that the...
DANIEL
No, this is the setup, by the way, do you know anything else about Homer?
AARON
Male? I don't know... I think that's it.
DANIEL
Yeah, okay, all right. This is not going to be funny as a joke, but it's meant to be a brain tickler, right? So the Iliad and the Odyssey, they weren't actually written by Homer. They were written by a different Greek man who...
AARON
Was also named Homer. I thought I saw somebody tweet this.
DANIEL
I think she got it from me.
AARON
That's my... okay, cool.
DANIEL
She might have got it from the lecture that I watched.
AARON
Maybe you can explain it to me. Other people are saying, oh yeah... I don't think they were rolling on the ground laughing or whatever, but they were like, oh, ha, this is actually very funny once you explain it. And I did not have that intuition at all. I'm like, okay, so there's two guys named Homer. Where's the brain-tickly part?
DANIEL
Oh, the brain-tickly part is this: how could that sentence possibly be true, when all you knew about Homer was that he was a Greek guy who wrote The Iliad and The Odyssey, and that he was named Homer?
AARON
How could that sentence... okay, so I feel like the sentence on its own doesn't have a truth value, but what it implies... If I just heard that in normal conversation, in fact, when I heard it just now, and if I were to hear it in normal conversation, what I would take it to mean is: the famous guy who all the academics talk about turns out to be Person A, and there was also this other person who is not him. Somebody else has a better, more solid understanding of Homer beyond defining him as the author of The Iliad and The Odyssey, even though that's really all I know about him. I trust there are other people for whom this is not the case. And implicitly, I'm thinking, okay, so there are some philosophy or history dudes or whatever, who know where he was born, they know his middle name or whatever, and so we're just going to call him Person A. And in fact, there was another guy also named Homer, and there's no contradiction there or whatever.
DANIEL
What if nobody alive... what if everything that... So I think this is actually plausible: in terms of what living people know about Homer, I think it's just that he was a guy named Homer, he was Greek, he wrote The Iliad and The Odyssey, or at least is reputed to have. And maybe we know something about the period in which he lived, and maybe you can figure out the part of Greece in which he lived from the language, but I think that's probably all humanity currently knows about him.
AARON
So maybe the statement can be... it feels like it can be false. And the way it could be false is if we took a census... suppose we had a census of everybody who ever lived in that period, and there was only one Homer. Well, then we would know that statement is false.
DANIEL
What do you mean, only one Homer?
AARON
I mean, there were not two individuals in the census, this hypothetical census, named...
DANIEL
Homer. Who were given the name Homer, gotcha. Yeah, that would make it false.
AARON
And so it seems to be carrying substantive information that, in fact, we have historical evidence of two different individuals, and we have reason to believe there were two different individuals who went by the name Homer, and one of them wrote The Iliad and The Odyssey. And given those two facts, then the statement is true.
DANIEL
Okay, if the statement were... so in the past, there were two different people named Homer, and only one of them wrote The Iliad and The Odyssey. But then why would we not say that The Iliad and The Odyssey were written by Homer? Why would we say they weren't written by Homer, if they were written by a different guy who was also named Homer?
AARON
Yeah, so this gets back to the difference between the statement per se and my interpretation. For the statement per se, it sounds like there's no difference there. Or the phrase "some other guy named Homer", where it's, like, redundant, maybe not wrong, but redundant or something, maybe even wrong, I don't know. The information carried in the statement would be equivalent if you just said: we have good reason to believe there was not merely one Homer, but two, and indeed, one of these people wrote The Odyssey. It's the same statement, basically.
DANIEL
All right, so here's the thing I'm going to hit you up with. I think usually people have, like most people have names that other people also have, right?
AARON
Yes.
DANIEL
Like, there's more than one person named Daniel. There's more than one person named Aaron.
AARON
Right.
DANIEL
There was probably more than one person named Homer around the time when Homer was supposed to have lived. All right, so, yeah, Homer didn't write The Iliad and The Odyssey. They were written by some other guy who was also named Homer.
AARON
Yeah, I think that's a true statement.
DANIEL
Oh, I think it's false. Can I try and convince you that you're wrong to say that's a true statement?
AARON
Yeah.
DANIEL
All right, here's one statement: Homer wrote The Iliad and The Odyssey. Right?
AARON
Yes.
DANIEL
Do you think that's true?
AARON
Okay, so I think it is both true and false, depending on the reference of Homer.
DANIEL
Oh, yeah. So what is the reference?
AARON
Something like... yeah, maybe I'm willing to take back the thing that I previously said, because this feels like more normal language or something. When I say I'm talking to Daniel, right, that feels like a true statement. But maybe my sister has a friend named Daniel, and if I told that to her, she would be right to say that it's false, because... you know what? I keep getting back to the fact that, who gives a shit, you know what I mean? I still struggle to see... You can dig down into whether a particular proposition is true or false or indeterminate or something. But in normal language, we have a million psychological, and maybe not psychological... we have a million ways to figure out what is meant by a particular proposition beyond the information contained in its words. Okay. I don't know. This is not an answer or whatever, but it still seems like it's all fine, even if we never figure it out.
DANIEL
I guess... sorry, I'm going to do a little bit of sweeping. Your audience doesn't want to hear that. I'm going to sweep, then.
AARON
No, that's totally cool. We're pro sweeping.
DANIEL
All right. Finish. All right.
AARON
Yeah.
DANIEL
I'm inclined to agree that it's fine. So when you say there's a million opportunities to understand the content of a sentence other than just the information contained in the words, or understand what somebody means beyond just the info contained in the words, you might still want to know what the info contained in the words actually is. I should say, broadly, the way I relate to this is as an interesting puzzle.
AARON
Yeah, no, I kind of agree. Maybe I'm just, like, more... yeah, I can see why somebody would find it interesting.
DANIEL
Yeah. It gets to a thing where, when you try to think of what we mean by something like Homer, or what we mean by something like Daniel Filan, at least when other people say it, often you'll come up with a candidate definition, and then there'll be some example which you hadn't anticipated, which I think is part of what makes this interesting. So, for instance, you might think that Daniel Filan is the person named Daniel Filan, but here's a sentence: Daniel Filan could have been named Sam. Or actually, here's a better one: Daniel Filan could have been named Patrick. My dad actually sort of wanted to for a while. My dad was thinking of calling me Patrick. Right.
AARON
I was almost, yeah.
DANIEL
Yeah. So if you think about the sentence "Daniel Filan could have been named Patrick": if Daniel Filan just means, like, a person named Daniel Filan, then that's...
AARON
I mean yeah, but that shouldn't.
DANIEL
So then you might say, oh, what Daniel Filan means is actually just an abbreviation of a bunch of things you might know about me. Right. Like, Daniel Filan is this guy who is Australian, but now lives in Berkeley and hosts this podcast, and a few other things. And then the trouble is, you could imagine a parallel world, right, where I didn't do any of those things.
AARON
Well, I feel like that's a bad definition. It would be: Daniel Filan is a human being who is both psychologically and genetically continuous with the being who existed before he was named, or something like that.
DANIEL
Okay. But you still have to nail down which being Daniel Filan is supposed to be psychologically and genetically continuous with... wait, what? Sorry. When you say Daniel Filan means just human beings that are psychologically and genetically continuous with the being before they were named. I think that's what you said.
AARON
Yeah, that ugly definition. Well, I'm talking about you. Beyond that, I don't think there's any other verbal mishmash I can say that will point to that. There's, like, a human being... there's, like, a human being where the atoms aren't the same, plus not all the memories are the same, there are personal identity issues, but there's a human being with basically your genetics, whatever your age is, plus a couple months. And that is also Daniel Filan.
DANIEL
Yeah. Can you try and say that without using the word "you"? Imagine it's somebody who you're not talking to, and so you don't get to use it.
AARON
Wait, what? I don't even know... what am I supposed to be... am I trying to gesture towards what I'm trying to say?
DANIEL
Yeah, give a definition of what you mean by Daniel Filan in a way that's valid. Like, I would still be Daniel Filan... imagine a counterfactual world where I'd, like, grown up to hate EA or something. You would want to still call that guy Daniel Filan. But you're not allowed to use the word "you", okay?
AARON
Yeah. Daniel Filan is the human being who is currently... I feel like I kind of mean two different things. Honestly, I don't think there's one definition. One is, like, the current and actual instantiation of a particular human being. And the other definition or meaning I have is, like, all human beings who either were or will be... I don't know about could be, honestly. Or I think could be. I don't know about could have been. Yeah, maybe could have been. Yes, let's go with could have been. So throughout the multiverse, if that's a thing, all those beings who either were, will be, could have been, or could be psychologically and genetically continuous with a human being who was conceived, or... I guess this being started existing when he was a genetic entity, or, like, had his full genome or something, which is hard.
DANIEL
Which beings are...
AARON
The counterfactual alternatives of the current being named Daniel Filan. And this being, in turn, is defined as the current instantiation of an original past self. And that original past self can be delineated in time by the moment that a particular human being had all the genes, or whatever.
DANIEL
So it's things that branch off the current being that is named Daniel Filan, right?
AARON
Or things that branch off the... yeah, branch off, but also retrospectively, I guess. But yeah.
DANIEL
Okay. And the current being... suppose, like... so I haven't actually told you this, but my legal name is actually Steve Schmuckson, not Daniel Filan. Is there anything that the name Daniel Filan refers to?
AARON
Like, there's no fact of the matter.
DANIEL
You think there's no fact of the matter?
AARON
Here's my concern: where is the fact of the matter located, or something like that? Is it in my neurons? Yeah. Is it, like, moral truth? What is it, like, referential truth? Is there any such thing as referential truth?
DANIEL
Oh, I don't know. I guess probably not.
AARON
Okay.
DANIEL
But I guess when you say "the person named Daniel Filan", I think there's still a question of, like, wait, who is... how do you figure out who the person named Daniel Filan is? I think that gets back to...
AARON
The... it's probably multiple people. Wait, hold on. Pause. Okay, I'll cut this part out. Lindsay, I'm in the middle of a... sorry. Sorry. Bye. Okay, I'm back.
DANIEL
Yeah, but when you say "the person named Daniel Filan", and you're using that in your definition of what you mean by Daniel Filan, that strikes me as kind of circular, because how do we know which person is the one who's named Daniel Filan?
AARON
Yeah, I agree. That's a poor definition. I feel like I very weakly think that I could come up with a more rigorous definition that would be, like, really annoying and non-intuitive.
DANIEL
Okay.
AARON
Not super sure about that.
DANIEL
You should try, and then read some phil papers, because it's all... it totally doesn't...
AARON
Matter, and it's, like, a fake question. Oh, yeah, it doesn't matter.
DANIEL
I just think it's a fun puzzle.
AARON
Yeah, but it feels like it's not even... yeah, so there are a lot of things... I feel like there are mathematical questions that don't matter but are more meaningful in some sense than even this. It feels kind of like... maybe not "how many angels can dance on the head of a pin". Yeah, actually, kind of like that. Yeah. How many angels can dance on the head of a pin?
DANIEL
I think that question is meaningful.
AARON
What's the answer?
DANIEL
What's the answer? I guess it depends what you mean by angel. Normally in the Christian tradition, I think angels are supposed to not be material.
AARON
Forget, like, tradition. I'm asking about the actual answer.
DANIEL
Yeah, I mean, the actual answer to how many angels can dance on the head of a pin... yeah, I think when you use the word angel... okay, the tricky thing here is, when you use the word angel, you might be primarily referring to angels in the Jewish tradition, about which...
AARON
I'm referring to real angels.
DANIEL
There aren't any real angels.
AARON
Okay, well, then how many angels can dance on the head of a pin?
DANIEL
Zero. Because there aren't any.
AARON
I'm kind of joking, sort of adopting your stance from when it came to, whatever, the aliens with the weird word.
DANIEL
I gave you an answer. What do you want?
AARON
Yeah, I'm also going to give you, like, a series of answers. I mean, I'm not actually going to go through it, I think it'll be annoying, but I could give you a series of answers like that or whatever, like, "I'm referring to..."
DANIEL
To I'm not sure you could give me another question. That's my answer.
AARON
Oh, okay.
DANIEL
As for how many actual angels could...
AARON
I feel like I might be trapped here, because I thought that was going to trip you up, and it's just like... yeah, it sounds like the right answer, honestly.
DANIEL
Well, I guess you might think that. Suppose all dogs suddenly died, right. And then later I asked you how many dogs could fit in this room, there would still be an answer to that question that was like greater than zero. Yeah. I think the word angels just, like it just depends on what the word angels refers to. And I'm like, well, if it has to refer to actual angels, then there aren't any actual angels. If we're referring to angels as conceived of in the Christian tradition, then I think infinitely many. If we're referring to angels as conceived of in other traditions, then I think that I don't know the answer.
AARON
Yes, that sounds right. I'm glad you find this... sorry, that was an annoying way of putting it.
DANIEL
I liked it. That was a fine thing to say.
AARON
At the meta level. I find it interesting that some people find this interesting.
DANIEL
Yeah. Okay, before you go away and try and figure out theory of naming, can I add some side constraints? Some constraints that you might not have thought of?
AARON
Sure.
DANIEL
Okay, so here's a sentence: Harry Potter is a wizard. Right?
AARON
There are no wizards.
DANIEL
You think it's false that Harry Potter is a wizard?
AARON
Yes.
DANIEL
All right, but let's just take the... okay, you kind of know what that means, right?
AARON
Yes.
DANIEL
Let's take another sentence: Thor is the god of lightning, right?
AARON
Yes.
DANIEL
Now, I take it you don't believe in the literal existence of Thor or of Harry Potter. Right?
AARON
Yeah. Right.
DANIEL
But when I talk about I'm, I'm wielding the name Harry Potter, and I'm doing a sort of similar thing as when I wield the name Aaron Bergman. Right.
AARON
Similar. Not the same, but similar.
DANIEL
Yeah. Okay, cool. So Harry Potter the thing about Harry Potter is it's like an empty name, right? It's a name that doesn't refer to anything that actually exists. Right.
AARON
Doesn't refer to any configuration of actually existing molecules. It refers to some abstractions, and it refers to a common set... a grouping of properties in various people's minds.
DANIEL
Oh, you think it refers to the grouping of properties, rather than... so if I said "Thor actually exists", that would be true, according to you?
AARON
No, I'm trying to figure out why. I think I figured out why.
DANIEL
I totally think this is a solvable problem, by the way.
AARON
Okay.
DANIEL
I'm not trying to say this is some sort of deepity, like, "you will never know". I think this is solvable. Anyway, the point is, Harry Potter and Thor are examples of names that don't refer to actual humans or gods or whatever, but they're different, right?
AARON
Yes. So that's interesting.
DANIEL
You might have thought that names were nailed down by the sets of things they referred to.
AARON
Hold on. I think something can refer to something without... or, sorry, there are things... maybe I don't have a good word, but there are thing-like things, for lack of a better term, that exist in some meaningful sense of "exist", that are not configurations of quarks, or identifiable configurations, or, like... yeah, let's go with configurations.
DANIEL
Quarks and leptons, sure. And you don't just mean, like, the EM field. You mean, like, things can refer to non-physical stuff.
AARON
I don't think "physical" is a useful category. This is also a hot take in some sense.
DANIEL
Like, wait, do you think that Harry Potter is like, this non physical being that flies around on a broomstick, or do you think that Harry Potter is like, the concept?
AARON
So I think there are multiple things that that term means, and the way it's actually used depends.
DANIEL
Do you think "Aaron Bergman" means multiple things?
AARON
No.
DANIEL
What's the difference?
AARON
Well, I can... in fact, Harry Potter might only refer to exactly two things.
DANIEL
What are the two things that Harry Potter refers to?
AARON
Sorry, wait, maybe I'm wrong about that. Okay, hold on. So, like... I don't know, because what I want to say is: Harry Potter refers to what you think it refers to in two different contexts. And one context is where we pretend that he exists, and the other context is when we recognize, or pretend, that he doesn't. And now you're going to say, oh, who's "you" referring to? Am I right?
DANIEL
Yeah.
AARON
Okay, that sounds like what I'm going to say. Okay. No, I feel like there's, like, an ur-Harry Potter, which is like a cluster of traits, a cluster of things. There's no hard, well-defined thing, in the same way there's no well-defined notion of what is a bottle of wine. You can keep adding weird tidbits to...
DANIEL
The bottle of wine. But the ur-Harry Potter is like a bundle of traits?
AARON
Characteristics. Traits. Okay.
DANIEL
Is Rishi Sunak a bundle of traits?
AARON
I think there are, like, two levels. There's, like, the meta Rishi Sunak, and the thing that people normally refer to when they refer to Rishi Sunak, which is not a bundle of traits. It is distinguished from others... it is a physical, or, like, a biological, mind-like thing that is individuated or pointed out in person-space by the bundle of traits, or something like that.
DANIEL
Yeah, he is that. But I think that when people say Rishi Sunak, I don't think they ever mean the bundle of traits. I think they mean, like, the guy. The guy has the bundle of traits, but they don't mean the traits, they mean the guy.
AARON
Yeah, I think that's right. I think the way that they, with their mind-brain, land on that actual meaning is, in some sense, recognizing those letters as pointing to characteristics, as pointing to maybe things or characteristics, such as the Prime Minister of Britain or the UK or whatever. Like, things...
DANIEL
That embody the... they don't refer to the characteristics themselves. They refer to the things that embody the characteristics. Right.
AARON
I think, as an empirical matter, this is true. I can imagine a world in which it's sometimes the former, the bundle of characteristics.
DANIEL
Yeah, I guess I think that would be people speaking a different language. Right. Like, there are all sorts of different languages. Some of them might have the words "Rishi Sunak" happen to mean, like, the property of being the Prime Minister of Great Britain and Northern Ireland.
AARON
Well, like, okay, so let's say in a thousand years or whatever, and there's still humans or whatever, there's, like, a mythology about some being. And in the same way that there's mythology about Thor, there's mythology about this being who in various myths plays the role of, not plays the role, but is, in the myths, the Prime Minister of the UK, which is, like, some ancient society, and has these various traits. Then it would be kind of like that. But yeah, this is, like, a conceivable thing, in which case there is a reference. I wouldn't say that means that the language those people speak is English anymore, because they use Rishi Sunak in that way.
DANIEL
But when they said Rishi Sunak, they were actually referring to the traits, not, like, some sort of being.
AARON
Well, maybe there were historians in that society who were referring to the being, but most normal people weren't or something.
DANIEL
I guess I think they would be referring to, like... I guess, to them, what would I call Rishi Sunak... like, sorry, not what kinds of things do these people believe about Rishi Sunak, but how are they using sentences involving Rishi Sunak?
AARON
So somebody might say, oh, you know, Rishi Sunak isn't actually alive. That would be a true statement. It would also be a true... sorry. Sorry, or, like... wait, yeah.
DANIEL
Sorry, is the idea that these people have myths about him? Right, all right, sorry. That's the question I was asking. Okay, all right, cool. I guess this would be sort of similar to the case of Santa Claus. The phrase Santa Claus comes from St. Nicholas, who was probably a real guy from Turkey named...
AARON
Okay, I, like, vaguely knew that, I think.
DANIEL
Yeah, but I guess this gets us back to where we started with when we say Santa Claus, do we mean like, the bundle of ideas around Santa Claus or do we mean like a guy who dispenses a bunch of presents.
AARON
On... I mean, I want to step back.
DANIEL
Anyway.
AARON
Yeah, I feel like maybe... insofar as... I feel like maybe it does matter. Or, like, yeah, the question of meaning... sorry, it can matter, but it just has a different answer in different particular cases. And so the right way to go about it is to just discuss reference in the case of morality, for example, and in the case of Santa Claus in another. And there's no general answer. Or maybe there is a general answer, but it's so abstract that it's not...
DANIEL
Useful in any way. That might be... well, I think even abstract answers can be pretty... yeah, I think you might have some hope that there's a general answer for the case of proper names. To be even more concrete, I think you might think that there's some theory that's sort of specific that unifies the names Aaron Bergman, Santa Claus, and Zeus.
AARON
Yeah. And I guess I think, oh, it'll be a lot easier and quicker just to actually disambiguate case by case. Maybe I'm wrong. So if some tenured philosophers at whatever university want to work on this...
DANIEL
People can do that. I should say, I've read papers that purport to explain all three of these naming practices that I found somewhat convincing. When I say papers, I mean one paper. It's actually the paper I cited earlier.
AARON
Okay, you can send it to me or, like, send me a link or whatever, if you want.
DANIEL
Yeah, really, what's happening in this conversation is I read one paper and now I'm trolling you about it. I hope it's a good kind of trolling.
AARON
Yeah, it feels like benevolent trolling. But I actually do think this is kind of meaningful in the context of morality. Or at least it's actually kind of non-obvious in that case, whereas it generally is obvious what a particular person in real life is referring to in the case of Santa Claus, just depending on context. And morality happens to be important. Right. So maybe there are other cases like that. Or I could see legal battles over, like, what does a law refer to? There's, like, two different people... there's the guy, the state, there's the name itself. Yes, sure. I don't know.
DANIEL
Yeah. This reminds me of various formulations of originalism... you've heard of originalism, I guess? Constitutional originalism.
AARON
Yeah.
DANIEL
So originalism, it's this theory that when you're interpreting laws, you should interpret the original thing going on there, rather than what we currently want it to be, or whatever. And there's this question of, like, wait, what thing that was originally going on should we interpret? And sometimes you occasionally hear people say that it's about the original intent. I think this is definitely false, but more often people will say, oh, they mean the original public meaning. But sometimes people say, oh, no, it's the original meaning. In a legal context, people try to get at what exactly they mean by originalism, and it comes in different flavors.
AARON
Yeah, I could talk at the object level, or at the level we've been talking at. I don't think there's, like, a fact of the matter. But at the object level, if you convinced me that originalism was true... maybe you couldn't. What I want to say is: because those people weren't playing by the rules or whatever, we've just got to norm it out or something. Sorry, people writing the Constitution weren't doing it under the pretext of originalism. I don't know. I could be wrong about this.
DANIEL
Okay. Why do you think it.
AARON
It maybe looks pretty plausible that I'm wrong. I vaguely feel like this is a thing that was, like, developed in, like, the 20th century by, like, legal scholars.
DANIEL
I think that's sort of right. They had this notion of strict constructionism in the 19th century that I think is kind of analogous to originalism. I think when people talk about originalism, they mean simple enough concepts that it seems plausible to me that people back then could have had them in mind. I don't know, maybe this is my bias, but it seems very intuitive to me that when people were writing the Constitution, maybe they were thinking, hey, I want this law to mean what it means right now.
AARON
Yeah. There's a question of, okay, what is meant by "mean". Yeah.
DANIEL
I guess everybody thinks... yeah, all right. There's one game, which is: what did the framers think they were doing when they wrote the Constitution? There's a potentially different question, which is: what were they actually doing? They could have been wrong about legal theory. Right. That's conceivable. And then there's a third game, which I think is maybe the best game, which is: what's the best way to sort of found a system of laws? Should we hope that all the courts do originalism, or should we hope that all the courts do... I'm not exactly sure what the alternative is supposed to be, but, like, yeah, but what...
AARON
Should we ask from an alternative?
DANIEL
Is like, sorry.
AARON
Yeah, I agree. I assume you mean, like, what actually in 2023 should be the answer, or how should judges interpret the Constitution?
DANIEL
That's the game, where by "should" I here mean something like: what would cause the most clarity about the laws, or something like that.
AARON
I don't mean that exact same thing. I think I mean something more, in some sense, ultimately moral. Clarity is not... I don't know. There are other values besides clarity.
DANIEL
Yeah, sure. We might want to limit scope a little bit to make it easier to think about. Right.
AARON
Yeah.
DANIEL
When I'm building a house, if I'm building a house, I probably want to think, like, how will this house not fall down?
AARON
I don't know.
DANIEL
I'm going to have a bunch of concrete requirements, and it's probably going to be better to think about that rather than, like, what should I build? Because I don't want to solve philosophy before building my house.
AARON
Yeah, it's not as obvious what those requirements are for... It's possible that... you can have just, like, two statements issued by the federal court, or you can imagine that the last two judgments by the Supreme Court include unambiguous propositions that are just opposites of one another. And I don't think this would mean that the United States of America has fallen. You know? Okay, like, nobody knows what we should do. I don't...
DANIEL
Mean... yeah. I would tend to take that as saying that legal judgments don't follow the inference rules of classical logic. Seems fine to me.
AARON
Sure. Also, I think I'm going to have to wrap this up pretty soon. Sorry.
DANIEL
Yeah, we can go for ages.
AARON
Do this again. Yeah, this will be the longest one yet.
DANIEL
I feel a bit guilty for just trolling. I don't even properly understand.
AARON
Especially I do think the morality thing is interesting because I think there's definitely, like, a strain of rationalist thought that it's directionally like you were at least in terms of vibes, like where you were coming from. That's pretty influential, at least in some circles.
DANIEL
Yeah, I guess I'm not sure if I did a good job of articulating it. And also, I've sort of changed my mind a little bit about it, I don't know. I feel like when I talk about morality, I tend to get caught in the weird weeds of the semantics. I think an important fact about morality is that it's not a weird contingent fact that humans evolved to care about it. I don't know. To me, it's really interesting that evolutionary accounts of why we care about morality don't rely on really fine-grained features. They rely on very broad ones: people talk to each other, and we have common projects, and there's not one guy who's stronger than every other human. I don't know. Yeah, I feel like that's somehow more real and more important than just the weird semantics of it. Anyway, before we close up, can I plug some of my stuff?
AARON
Yes, plug everything that you want.
DANIEL
All right. I have two podcasts. One of my podcasts is called AXRP, the AI X-risk Research Podcast, and you can listen to me interview AI x-risk researchers about their work and why they do it. I have another podcast called The Filan Cabinet, where I just talk to whoever about whatever I want. I guess the audience of this podcast is mostly EAs, like young atheist kind of EA types. If you want to hear some people who are kind of not like that, I have a few episodes on religion and one three and a half hour conversation with my local Presbyterian pastor about what he thinks about God. And I have another episode with an objectivist about, I don't know, I guess everything Ayn Rand thinks.
AARON
Oh, no, you cut out at the word objectivist. Sorry, wait, you cut out at the word objectivist.
DANIEL
Oh, yeah, I'll try to say it again. I have one episode where I talk to this objectivist just about a bunch of objectivist thought. So I think we cover objectivist ethics, metaphysics, and a bit of objectivist aesthetics as well. And I don't know, the thing objectivists are most famous for is they're really against altruism. And I ended up thinking that the body of thought was more persuasive than I expected it to be. So maybe I'd recommend those two episodes.
AARON
I have been, sort of. Actually, I haven't listened to it in, like, a week, but I was listening to your one with Oliver Habryka. But after I finish that, I will look at the objectivist one. Yeah. Everybody should follow those podcasts. Like me.
DANIEL
Everyone. Even if you don't speak English.
AARON
Everyone. In fact, even if you're not a human, like Santa Claus, including yeah. Okay. So anything else to plug?
DANIEL
If you're considering building AGI don't.
AARON
Hear that? I know. Sam, you're listening, okay? I know you're listening to Pigeon Hour.
DANIEL
Okay, yeah, I guess that's not very persuasive of me to just say, but I think AI could kill everyone, and that would be really bad.
AARON
Yeah, I actually agree with this. All right, well, yeah, there's more; we can cover this in more nuance next time you come on Pigeon Hour. Okay, cool.
DANIEL
I'm glad we have a harmonious ending.
AARON
Yeah. Of conflict. Disagreement is good. I'm pro discourse. Cool. All right, take care. See ya. Bye.
Get full access to Aaron's Blog at www.aaronbergman.net/subscribe -
Very imperfect transcript: bit.ly/3QhFgEJ
Summary from Clong:
The discussion centers on the concept of a unitary general intelligence or cognitive ability, and whether this exists as a real and distinct thing.
Nathan argues against it, citing evidence from cognitive science about highly specialized and localized brain functions that can be damaged independently. Losing linguistic ability does not harm spatial reasoning ability.
He also cites evidence from AI, like systems excelling at specific tasks without general competency, and tasks easy for AI but hard for humans. This suggests human cognition isn't defined by some unitary general ability.
Aaron is more open to the idea, appealing to an intuitive sense of a qualitative difference between human and animal cognition - using symbolic reasoning in new domains. But he acknowledges the concept is fuzzy.
They discuss whether language necessitates this general ability in humans, or is just associated with it. Nathan leans toward specialized language modules in the brain.
They debate whether strong future AI systems could learn complex motor skills just from textual descriptions, without analogous motor control data. Nathan is highly skeptical.
Aaron makes an analogy to the universe arising from simple physical laws. Nathan finds this irrelevant to the debate.
Overall, Nathan seems to push Aaron towards a more skeptical view of a unitary general cognitive ability as a scientifically coherent concept. But Aaron retains some sympathy for related intuitions about human vs animal cognition.
-
Summary (by Claude.ai)
This informal podcast covers a wide-ranging conversation between two speakers in the effective altruism (EA) community. They share a similar background, coming to EA from interests in philosophy, rationality, and reducing suffering. The main topic explored is reducing s-risks, or risks of extreme suffering in the future.
Winston works for the Center for Reducing Suffering (CRS), focused on spreading concern for suffering, prioritizing interventions, and specifically reducing s-risks. He outlines CRS's focus on research and writing to build a moral philosophy foundation for reducing suffering. Aaron is skeptical s-risk reduction is tractable currently, seeing the research as abstract without a clear theory of change.
They discuss how CRS and a similar group CLR are trying to influence AI alignment and digital sentience to reduce potential future s-risks. But Aaron worries about identifying and affecting the "digital neural correlates of suffering." Winston responds these efforts aim to have a positive impact even if unlikely to succeed, and there are potential lock-in scenarios that could be influenced.
Aaron explains his hesitancy to donate based on tractability concerns. He outlines his EA independent research, which includes an archive project around nuclear war. More broadly, the two find they largely ethically agree, including on a suffering-focused ethics and "lexical negative utilitarianism within total utilitarianism."
Some disagreements arise around the nature of consciousness, with Aaron arguing rejecting qualia implies nihilism while Winston disagrees. They also diverge on moral realism, with Aaron defending it and Winston leaning anti-realist.
As they wrap up the wide-ranging conversation, they joke about convincing each other and make predictions on podcast listens. They thank each other for the thought-provoking discussion, aligned in ethics but with some disagreements on consciousness and metaethics. The conversation provides an insider perspective on efforts to reduce s-risks through research and outreach.
EA Archive: https://forum.effectivealtruism.org/posts/DndmvDGStD3gTfhXk
-
Summary/specific topics:
- Stress Tests and AI Regulation: Nathan elaborates on the concept of stress tests conducted by central banks. These tests assess the resilience of banks to severe economic downturns and the potential for a domino effect if one bank fails. They believe that lessons from this process can be applied to AI regulation. Aaron agrees, but also highlights the need for a proactive approach to AI regulation, as opposed to the reactive measures often seen in banking regulation.
- The Role of Central Banks in AI Regulation: Nathan suggests that institutions structured like central banks, staffed with technical experts and independent from government, could be beneficial for AI regulation. They believe such institutions could respond quickly and effectively to crises. However, they acknowledge that this approach may not be effective if AI development leads to rapid, uncontrollable self-improvement.
- Compute Governance: The conversation then shifts to compute governance, which Nathan sees as a promising area for AI regulation because it is relatively obvious when someone is using large amounts of compute. They believe that this could provide governments with a control lever over cutting-edge AI labs, similar to how central banks control banking loans and affairs.
- AI Regulation and the Role of Public Actors: Nathan acknowledges that the leaders of major AI labs seem sensible and aligned with AI safety principles. However, they argue that regulation and public actors can play a crucial role in creating common knowledge between labs and preventing a race to the bottom. They also discuss the potential benefits and drawbacks of different regulatory approaches.
- Financial Regulation as a Model for AI Regulation: Nathan believes that post-crisis financial regulation, such as the Dodd-Frank Act, has generally been effective. They suggest that AI regulation could follow a similar path, especially if AI becomes a significant part of the economy. However, Aaron expresses skepticism about the ability of political processes to produce effective AI regulation.
- Regulation Before and After Crises: The speakers agree that pre-crisis regulation has generally been less effective than post-crisis regulation. They discuss the potential for AI regulation to follow a similar pattern, with effective regulation emerging in response to a crisis.
- Regulatory Arbitrage: The conversation concludes with a discussion on regulatory arbitrage, where banks shift activities to where it's cheapest to do business. Despite evidence of this behavior, Nathan notes that there was no race to the bottom in terms of regulation during the financial crisis.
-
- Follow Arjun on Twitter: https://twitter.com/panickssery
- Read and subscribe to his blog: https://arjunpanickssery.substack.com
- A mediocre transcription can be found at https://www.assemblyai.com/playground/transcript/6x5h1mcemt-bff1-40fc-a676-9b59c66985f0
-
A transcript can be found at assemblyai.com/playground/transcript/6y7e7wz28c-30aa-4e83-ba4f-1bddf2e23dad
-
For All Good's inaugural episode, we talked to Rob Wiblin and Keiran Harris of 80,000 Hours about how and why they produce their show. This episode first appeared on their new feed "80,000 Hours: After Hours" here: https://80000hours.org/podcast/episodes/80k-after-hours-philosophy-of-the-80000-hours-podcast/
Hope you find it interesting!
-
Hear EA Georgetown member Aaron Bergman's recent interview as a guest on the Narratives Podcast!
During the show, host Will Jarvis talks to Aaron about a key way he thinks people go wrong when choosing a career, how society treats children, how bureaucracy works, whether the FDA should have to approve medications, his interest in psychopharmacology, and a whole lot more.
Check out Narratives at narrativespodcast.com or search for it wherever you're listening to this episode!
-
Welcome to All Good, a show by Georgetown Effective Altruism.
-
Update Feb 12, 2023: Automated audio experiment with the Automator app no one uses
Intro
Back in 2016, I got my first job as a summer camp counselor. It was an outdoor adventure day camp, to which the six figure-making lawyers and consultants of Washington, D.C. sent their straight-A getting, lacrosse-playing children for a taste of the great outdoors.
The campers ranged from four years of age to 15, with those nine and up able to choose which among the activities (kayaking, rock climbing, horseback riding, and more) to pursue. I, as a typical sheltered suburbanite (though a competitive rock climber rather than a lacrosse player), had attended the camp myself throughout my childhood.
We Junior Counselors, or "JCs," could request which age-activity combinations to supervise. Though I had to do my fair share of toddler-wrangling, I was granted a few weeks in the Big Leagues: the rock climbing program for kids aged 9-15. Each cohort of about 12 campers had three staff members: a JC, a head counselor or "HC" at least 18 years old, and an "instructor" to provide technical expertise.
As a 16 year old mere months older than some of my campers, I found it a little bizarre. I was an Adult, getting paid (barely) to ensure that the youth in my care remained happy and healthy in the rugged Maryland wilderness. By and large, it was a good camp. What I'm about to say doesn't negate the overall-positive experience of most campers.
Lying
Staff lied to campers constantly. We were expected to, and generally obliged. Not about important things, either. In fact, the lies generally concerned utterly trivial matters.
Child protection is good
Now, I'm not entirely naive. I understand that placing children of all genders in the care of three young adults literally in the woods, including on one overnight campout per week, is a situation tailor-made for abuse. And the camp, to its credit, took this very seriously. We had training after training and rules and guidelines galore designed to ensure that no staff member would ever be tempted to misuse his or her power and status.
This was a good thing. Child abuse is bad, and I'm glad we erred on the side of "careful." I understood that my relationship with the campers was not to be intimate. Staff shouldn't, and didn't, reveal the skeletons in their closet to middle schoolers at rock climbing camp. No playing "truth or dare" around the campfire, either. Understood.
Casual dishonesty
But did we really have to lie so much? Take my age. Though I don't recall being explicitly told "you may not reveal your age to campers," keeping it secret was an obvious norm among staff.
Like any normal human being, the campers wanted to know some basic biographical information about the people they were spending their summers with. This was particularly true for me, a 16 year old who looked about 14 on a good day, whose size-small "counselor" tank top was held up with safety pins to prevent the neck hole from drooping down to my nipples.
My fellow counselors, when inevitably asked this question, would say something like "you have to guess" or "six hundred" or "maybe if we win the song competition I'll tell you at the end of the week" (they wouldn't). Nobody said "17" or "31" or "sorry, I can't tell you that."
Though human memory is fallible and it's been five years, I'm pretty sure I got tired of telling stupid, pointless lies. When campers asked, I made the completely banal decision to answer their question. Our conversations generally went like this:
Camper: How old are you?
Me: sixteen.
Camper: You look like you're [insert number between 11 and 15].
Me: I know.
Camper: Ok.
And that was that. My authority remained intact. No one stopped listening to me. Maybe I'm indulging myself, but I think the campers might have been pleasantly surprised at, you know, not being lied to for once.
I didn't only do this for the sake of my campers' dignity. I really did look younger than some of the campers, and it was in my own best interest that they be aware I was not. My fellow counselors didn't get mad at me. They didn't bring it up. They just kept on lying to the kids, and I did not. I can't say for sure I was the only one who told the truth, but I sure never heard anyone else do the same.
It wasn't just my age. They'd ask which school I went to, and (if my memory serves correctly) I told them. I'm sure they asked some things I really couldn't or shouldn't have answered, and then I wouldn't tell them.
Ageism
Back in my gung-ho veganism days, I remember thinking the term speciesism was really dumb. Damn right, we should treat different species differently. Unlike with, say, racial identity, there really are important differences, moral and otherwise, between dandelions, lobsters, chickens, and humans.
I still think the term is ripe for misinterpretation, but I now see the underlying concept as sound. Speciesism is arbitrary differential treatment, rather than differential treatment justified by real differences in underlying traits.
Ageism is the same way. We really should treat young kids differently than adults, but only because they have different underlying traits, not because of their age per se. If we don't think that kids have the cognitive capacity or confidence or something to make big life decisions or competently assess risk, fine.
I'm not saying we should let infants sign contracts, and I'm not even saying we should never lie to children. Whatever Kant would have you think, dishonesty is sometimes the lesser of two evils and therefore the right thing to do, but the chronological age of the person you'd be lying to isn't a good enough justification.
The socially acceptable prejudice
I guess great minds think alike, because after writing this subheading I came across the AARP's article "Workplace Age Discrimination Still Flourishes in America." Under its own subheading, "Ageism: An accepted bias," it argues that
ageism in the workplace occurs every day across America, and it is tolerated or, even worse, unrecognized for what it truly is: discrimination, plain and simple.
"Age discrimination is so pervasive that people don't even recognize it's illegal," asserts Kristin Alden, an attorney specializing in employee rights at the Alden Law Group in Washington, D.C.
Ageism is most commonly discussed in the context of employment discrimination, but its social acceptability rings just as true in the context of other, non-workplace incarnations.
Reasons why
Anyway, I think there are two fundamental reasons why ageism is so common and acceptable:
* Age really does correlate with differences in underlying characteristics like mental aptitude.
* All adults were once children, and most adults expect to eventually get old.
The first reason seems pretty intuitive; it really can be hard to tell what is justified differential treatment (say, not letting toddlers sign legally-binding agreements) and what is straight-up, arbitrary bias.
The second point is more subtle. It's the "I can't be racist; I have a black friend!" of ageism. "I can't be biased against children," we think, "I was a child myself once!" I think there's a bit of retributive justice going on, too: if I had to endure a childhood of casual, well-meaning disrespect, why shouldn't I get to enjoy the status-privileges of adulthood?
These narratives are rarely made explicit. I'm sure I've been disrespectful to children before without saying such things out loud, or even really thinking about them directly. Nor are they malicious. When my fellow counselors and I were lying to the campers, we thought of it as a big joke. We were teasing them, playing a game the campers never agreed to join.
This is why ageism is so pernicious. It's easy to moralize racism and sexism; there are the oppressors and the oppressed. What happens when we're all on both sides of the power structure, albeit at different times in our lives? This bias, well-intentioned and hidden in plain sight, doesn't fit well into a compelling moral narrative.
Candy-coated contempt
If you only read one link in this post, make it the Atlantic's "There's a Better Way to Parent: Less Yelling, Less Praise" (warning: uses up one of your monthly free articles), an interview with the author of Hunt, Gather, Parent: What Ancient Cultures Can Teach Us About the Lost Art of Raising Happy, Helpful Little Humans. Part of it reads:
Joe Pinsker: Many American parenting strategies, you estimate, are only about 100 years old, and some of them arose more recently than that. What about American parenting sticks out to you as distinctive and particularly strange?
Michaeleen Doucleff: One of the craziest things we do is praise children constantly. When I was first working on the book, I recorded myself to see how frequently I praised my little girl, Rosy, and I noticed that I would exaggeratedly react to even her smallest accomplishments, like drawing a flower or writing a letter, with a comment like "Good job!" or "Wow! What a beautiful flower!"
This is insane if you look around the world and throughout human history. Everywhere I went, I don't know if I ever heard a parent praise a child. Yet these kids are incredibly self-sufficient, confident, and respectful; everything we want praise to do, these kids already have it, without the praise.
It's hard to cut back on praise, because it's so baked in, but later on, I decided to try. It's not that there's no feedback, but it's much gentler feedback; parents will smile or nod if a child is doing something they want. I started doing that, and Rosy's behavior really improved. A lot of the attention-seeking behavior went away.
Doucleff emphasizes how excessive praise induces attention-seeking behavior, but I'd like to draw attention to its direct, first-order effect: children feel disrespected.
Baseball
I started playing baseball in first grade, and didn't stop for nearly a decade. During elementary school, I distinctly remember loathing the hollow, meaningless compliments that coach after coach fed to us players (which my father just confirmed as a regular complaint of mine, so I'm not making this up). Practices and games were a constant stream of "Way to go down swinging, Jackson!" and "Great effort, Mike!"
If you had asked me at 8 why I hated these "compliments," I'm not sure what I would have said. In retrospect, though, I think I (correctly) understood them as a subtle reminder that we elementary schoolers were unworthy of being treated like normal human beings. After all, I knew the coaches were being disingenuous most of the time; it just wasn't plausible that everything we did was so commendable.
In other words, I knew that adults were lying to me. Perhaps the word "lying" is a little strong, but I'll stand by it. Saying "great job" when you know damn well that the job wasn't great, with intent to deceive, seems like a lie to me, however well-intentioned.
(Literally) feels bad, man
There's another subtle objection that I'd like to preempt. In the annals of culture-war adjacent academia, there is plenty of discussion of things like "dehumanization," "stereotype threat," and structural racism or sexism, which some folks are skeptical of. Whatever you think about these things, I'd like to distinguish them from what I've been calling "casual disrespect" of kids.
Some social theorists hold these phenomena to be important because of their subtle, nefarious, perhaps even subconscious impact on the structure of society. Even if nobody is overtly offended by bits of casual sexism thrown around, the thinking goes, such behavior helps to perpetuate a structural imbalance between the power and autonomy of men and women.
I'm not disputing this, but once again let me emphasize the direct, first-order effects of "casual disrespect." It is not merely that kids are disempowered or whatever. It literally makes them feel bad! Ok, I can't say for sure what's going on in other people's heads, but it sure as hell made me feel bad, and it sure seemed like my campers felt bad when they were being lied to.
We can speculate about the second-order, structural effects of ageism all we want, but let's not forget that people feel bad when they sense they're being disrespected, and kids are people too.
Not just dishonesty
So far, I've focused on dishonesty because it's the thing I have salient personal experience with (as a camp counselor and baseball player). That said, there are countless other ways that we adults treat children inappropriately without giving it a second thought.
Mannerism and tone of voice
You know how everyone talks to dogs and babies in weird, high-pitched voices? Yeah, we do a slightly toned-down version to kids as old as, say, 13, and I don't think it's cool. This video (starts at 0:52, watch for about 30 seconds) is a fascinating example. The speaker is explicitly demonstrating how to treat children respectfully and yet still talks in a distinctly "baby talk" tone of voice.
It goes like this:
The first thing to remember is, children are humans, just like you.
If you're not used to being around kids, I know it can be tempting to speak down to them or talk to them in baby talk, but honestly, you really don't have to do that. You can speak to a child the same way you speak to anyone else. In fact, that's the best way to speak to them...
For example, it's better to say, even to a five month old infant, "Would you like your bottle?" versus "Baby want a baba?"...
For instance, you could say, "Can you point to that crayon?"...
The bolded lines are spoken in "baby talk-lite," and I don't blame her. When I imagine speaking to an eight year old exactly the way I would speak to a peer (in tone and manner, not content), it feels weird. Instead, I instinctively make the pitch of my voice a bit higher, add more inflection to my voice, and otherwise subtly adjust my mannerisms. It's not full-on "baby-talking," but it is a step in that (wrong) direction.
Content censorship
I'm perfectly willing to accept that children are sometimes genuinely not mature enough to handle certain content. I, for example, had nightmares for weeks after watching the movie Contact when I was about seven. I'm pretty sure that wouldn't happen to me now. But I think we sometimes deem things "inappropriate" without even considering whether the content could plausibly cause anything bad to happen.
If you think there's a reasonable chance your kid might learn that violence is ok or have nightmares after watching a violent movie, fine, don't show it to him. But if you're inclined not to let your kid watch Zero Dark Thirty or something because of some vague notion that it's "inappropriate," consider interrogating this intuition.
Compared to violence, it's even harder to justify our sex prohibition. I'm far from the first person to note America's prudish tendencies, but even liberal, sex-positive people often concede that kids shouldn't see naked bodies or watch people have sex on screen.
Again, there are plausible justifications for this. If a sex scene is borderline (or not so borderline) abusive, you might worry that a child would learn the wrong lessons about power and consent and such. Likewise, I think everyone should be wary of regularly consuming superstimuli like porn, children included. That said, literally what bad thing is going to happen if your kid sees Rose naked in Titanic? I managed to survive the movie as a kid without becoming a sex addict.
Curse words are very similar to content censorship. I get it, we don't want third graders running around yelling "FUCK" just to get everyone's attention at Walmart. But what, exactly, is so problematic with children using the same words that adults use in the appropriate setting?
Path dependence and coordination
There is one very real barrier to treating children more like normal human beings: coordination. If your child is the first of her peers who knows how babies are made, other parents might get upset if little Julia relates this information to her friends. Likewise, even if you're ok with Johnny cursing after he stubs his toe, others might look down on you both. Breaking social norms ain't easy.
Even still, there are some things for which being the first mover isn't too difficult. If you're the only baseball coach who doesn't ceaselessly offer empty praise to the players and all the kids like you just as much, you're really not risking your reputation. Likewise, other adults might be a little surprised when you speak to a toddler in a completely normal tone of voice, but they're not going to exile you from polite society. Perhaps, gradually, the norms can creep in the right direction.
Conclusion
According to my parents, my second grade teacher told them that I "acted like a 40 year old man." I still don't know what she meant by this; I cried all the time, so it definitely wasn't emotional stability. The point, though, is that perhaps my childhood experience was not representative of others'.
If so, maybe kids really aren't harmed or offended by being told untruths and being prevented from watching violent movies. Maybe my campers really didn't mind when we refused to reveal our ages, and maybe other children genuinely prefer when adults speak in a modified tone of voice.
But, to be frank, I don't think so. I may have been more interested in physics than the median first grader (an interest I have since lost), but I highly doubt that my entire experience was sui generis. After all, I was playing baseball and basketball and doing Cub Scout stuff and playing trumpet and going to summer camp just like so many of my peers. And, just like them, I endured the same needless censorship and well-intentioned disrespect so deeply ingrained in our culture.