DarkHorse Podcast with Daniel Schmachtenberger & Bret Weinstein
Synopsis:
The first DarkHorse podcast with Daniel Schmachtenberger.
Video Transcript:
[0:00:00] [Intro music]
[0:00:03] Bret: Hey folks. Welcome to the DarkHorse Podcast. I am very pleased today to be sitting with my good friend Daniel Schmachtenberger, who is the founder of The Consilience Project. There’s a lot more that I could say about you Daniel, but I think we will leave it at that for now. People can look up your bio if they want to do so.
[0:00:22] I should probably start by telling people that when I say that you are my good friend, I really mean that. You and I are good friends, although to be honest we haven’t spent all that much time together. This is one of those cases where you meet somebody, somebody who has started from a very different place, and you discover that you have all kinds of thought processes that have reached similar conclusions, and every discussion is fascinating. The more I learn about what you think, the more I realize I’ve got a lot to learn from you and that there is essentially infinite ground to be covered. So welcome Daniel.
[0:00:57] Daniel: Thank you, Bret. It’s really good to be here. Yeah, I feel the same way. We were introduced by our friend Jordan Hall, and we’ve never had a conversation where I didn’t learn something and where I didn’t appreciate the good faith way that you showed up when we had a disagreement to talk through, which is always fun.
[0:01:17] Bret: It is always fun, and I must say I did a little bit of poking around, seeing recent interviews you’ve done, but I deliberately did not overly study for this. My sense is that the audience will get a great deal out of hearing you and me go back and forth, and finding out what we agree on, where we disagree. And maybe most tellingly, there’s a phenomenon in which anybody who has learned to think more or less independently tends to have their own language for things, their own set of examples that they use to explain things that recur over and over again.
[0:01:53] And so in order to have high quality conversations, there is this period in which you’re effectively teaching the other person how you phrase things and seeing those things line up is great. And in the rare case where they don’t line up, it’s even better because you know there’s something to be learned one way or the other. So I’m hoping that will emerge here.
[0:02:15] Daniel: I’m looking forward to it.
[0:02:16] Bret: All right, good. So let’s start here: when I say that you and I come from different starting points, I mean to imply something in particular. You, as I understand it, were homeschooled and as you have described it, that was actually closer to what most people would probably refer to as unschooling. Somehow this did not mess up your motivational structures. Your parents were alert about what they were doing. And so you ended up pursuing what you wanted to pursue, and you did not get in your own way. And lo and behold, you end up a wickedly smart, independent, first-principles thinker.
[0:03:04] Now that is not my story at all. My story is I went to school and it didn’t work. I had something that most people would call a learning disability and it got in the way of school functioning for me, and more often than not I got dumb-tracked. And basically that complete failure of school to reach me accidentally worked like some kind of unschooling I would say. And so there are maybe many paths, I don’t know, but I’d be curious: for people who have traveled some road to the land of high-quality, independent thought, what can they expect the experience to be like when they arrive there?
[0:03:50] Daniel: The experience of high-quality, independent thought?
[0:03:54] Bret: Yes. If you imagine that, well it would be lovely to think independently and to do so well, and the world is going to be a paradise if you start doing that because of course that’s a very desirable thing to do and people will appreciate it, you’re going to be surprised. That’s not necessarily what happens when you arrive there and certain experiences show up all over the place. And without telling you what my experiences might have been, I’m curious as to what you might have encountered and whether or not those things will be similar.
[0:04:24] Daniel: Yeah, it’s an interesting question. I think most people will experience a higher degree of uncertainty than people that are part of any camp that has figured most things out and they can cognitively offload to whoever the authorities or the general group consensus is. And certainty is certainly comfortable in a particular way. And if you’re really thinking well about what is my basis for what I believe? what is the epistemic grounding? what’s the basis of my confidence margin? And you really think about your confidence basis clearly, as you try to find more information to deepen the topic, the known-unknowns expand faster, at least at a second order, to what the known-knowns do. And so you keep being aware of more stuff that you don’t know, that you know is relevant and pertains to the situation.
[0:05:27] So there’s a complexification of thought, there’s an increase in uncertainty, hopefully there’s an emotional development, and kind of a psychological development where there’s an increased comfort with uncertainty so that you can actually be honest and not defect on your own sensemaking into premature certainty that creates confirmation bias. And I think that’s why one of the reasons you used the term independent—there is a certain aloneness of not having a whole camp of people that think very similarly. I don’t find that to be a painful thing, but it’s a thing.
[0:06:12] Bret: It’s actually, in its own way it’s freeing because the fact is if you follow logic to natural conclusions you’ll end up saying a lot of things that are alarming or discordant with the conventional wisdom. And the world neatly divides into people who will be so enraged or thrown by what you’re saying that they disappear or maybe they become antagonists at a distance, and the people who have a similar experience and therefore aren’t thrown by the fact that you’re saying things that are out of phase. And so editing the world down to those who are comfortable with what they don’t know, who are interested in following things where they lead, irrespective of who that elevates and who it hobbles, those people are interesting people to hang out with. And so yes, the alienation may be a blessing in disguise in some ways.
[0:07:20] Daniel: There’s two other thoughts that came up as you were just talking. One is, I wouldn’t call myself an independent thinker. I’m being particular about the semantic of the word independent. I wouldn’t call anyone an independent thinker because I think in words invented by other people, I think in concepts invented and discovered by other people. I don’t necessarily have a specific school of orthodoxy from which I take an entire worldview, but almost every idea that I believe in, I did not discover.
[0:07:54] And so I think that’s a very important concept because I think the ideas we’re going to discuss today regarding democracy and open society have to do with the relationship between an individual and a collective. And I think the idea of an individual is fundamentally a misnomer. Without everybody else I wouldn’t be who I am, and I wouldn’t think the way I think, I wouldn’t think in the language I do, I wouldn’t have access to the knowledge that came from the Hubble Telescope and the Large Hadron Collider and so many things like that. So I can say that there is a certain ultimate authority of what I choose to feel that I believe in and trust that has an internal locus of control. But the information coming from without and my own internal processing of it are part of the same system.
[0:08:51] Bret: So this is a perfect example of what I was suggesting up front, where two people who do whatever we’re going to call this will have their own separate glossaries. So if I can translate what you’ve just said: I, Daniel Schmachtenberger, am not an independent thinker because such a thing is inconceivable in human form. And I totally agree with that. The fact is, not only are you interwoven with all sorts of humans who are responsible for conveying to you in one way or another conclusions that you couldn’t possibly check, these are thoughts that would be familiar to Descartes for example, but you are also building from the entirety of cumulative human culture. All of the tools with which you can think are, almost all of them are too ancient to even know where their rudiments originated.
[0:09:47] So anyway, I don’t disagree with any of that. So to me, I would say there is such a thing as an independent thinker and in your schema it has to do with whether or not they are thinking a la carte. That is to say, using that set of tools that is most effective irrespective of the fact that those tools don’t all come from one tradition. And you would say there is no independent thought because a la carte is the most you can do, or something like that.
[0:10:15] Daniel: I might say that I like a term like interdependent better because it doesn’t mean that there isn’t an individual involved, but it means that the individual without everyone else is also not a thing. And so the recognition that sovereignty and individuality is a thing, and it is conditioned by, affected by, and affecting other groups of people, are both necessary in the Hegelian synthesis to understand what is the nature of a human? Are they fundamentally individual and then groups emerge? Are they fundamentally tribal and they’re formed by the tribe? And it’s very much both, and the kind of recursive process between them.
[0:10:54] Bret: Hundred percent. In fact I once wrote something I called the Declaration of Interdependence. It was a sort of proto-Game B attempt to define what the rules of a functional society would be like. And I also frequently say that the individual is more an illusion than it is real. And what I describe is that an individual is a level of understanding that evolution has focused us on because historically it has been sort of the limit of that where we might have some useful control. Evolution might ultimately care about whether or not you are successful, your genes are still around a hundred generations from now. But your focus on a hundred generations from now is unlikely to have any useful impact whatsoever. Whereas your focus on your life and your children is likely to be useful.
[0:11:56] So we have been delivered a kind of temporal myopia in order to keep us focused on that which could be productive for an ancestor, but of course we are now living in an era in which we can have dramatic impacts on the distant future. In fact you and I are both quite focused on the strong possibility that our foolishness in the present will result in the end of our lineage. And that is something that evolution, were it capable of upgrading, would certainly have us concerned about because the hazard is very real and our tendency not to think about it is a big part of the danger.
[0:12:41] Daniel: I think temporal myopia and the collective action, collective coordination problem is a good way to describe all of the problems we face, or one of the generator functions of all the problems we face: that you have a bunch of game-theoretic situations where each agent within their own agency, pursuing the choice that makes most rational sense to them, pursues local optimums where the collective body of that drives global minimums. But if anyone tries to orient towards the longer-term global maximum, they just lose in the short term. That’s an arms race, that’s a tragedy of the commons.
[0:13:17] And so how do we reorient the game theory outside of those multi-polar traps, I would say is one of our underlying questions. That when the biggest harm we could cause was mediated by stone tools or bronze tools or iron tools or even industrial tools, we didn’t have to cause it immediately because the extent of harm was limited in scope. When it is mediated by fully globalized exponential tech running up against planetary boundaries with many different kinds of catastrophe weapons held by many different agents, we actually have to solve the problem.
[0:13:52] Bret: Yeah, I of course agree completely with this as well. That effectively, maybe it really is every single important problem is a collective action problem of one kind or another. We’ve got races to the bottom, we’ve got tragedies of the commons, we’ve got these things intermingled. But once you start to see that, on the one hand you could take from it a kind of reason for despair because these are not easy problems to solve.
[0:14:23] On the other hand, the discovery that effectively it’s not a thousand distinct problems, it’s a thousand variations on one theme, and that that theme is solvable. In fact we have for example Elinor Ostrom’s work, which points to the fact that evolution itself has solved these problems many times, that that is hopeful. So I don’t know where you are in terms of how hopeful you find yourself about humanity’s future, but I’m quite certain that you and I will align on the idea that, yes, if we could focus on the problem as it was, it’s more tractable than many people think it is.
[0:15:09] Daniel: Yeah, I mean you mention hopefulness. You mention a bunch of good things there, that rather than a bunch of separate problems you have a few problems with lots of expressions. This was a big chunk of the kind of work I engaged in, with a number of people you were part of, looking at: when we inventory across all of the catastrophic and existential risks, ones involving AI and problems with biotech and other kinds of exponential tech and environmental-mediated issues and things that escalate to large scale war, is there anything in terms of the patterns of human behavior that they have in common?
[00:15:50] And so this kind of race to the bottom, collective coordination thing is one way of looking at that. But there’s a few ways of looking at what we’d call the generator functions of catastrophic risk. And it really is simplifying if you can say, are there categorical solutions to those underlying generator functions? They’re hard, right? They’re hard.
[0:16:12] Now when you talk about hopefulness, I notice that the way that I relate to the optimism/pessimism thing is there’s an optimism which is almost like a choice. To say, I’m going to have optimism that there is a solution space worth investigating, even if I don’t know what it is, and if I’m wrong, it’s the right side to be wrong on. As opposed to, there was a solution and I didn’t look for it. And then I’m going to have pessimism about my own solutions. So I’m going to try to red team my solutions so that I can find out how they’re going to fail before finding out how they fail the hard way in the world. But then not be devastated by the fact that that solution wasn’t it and keep trying. And I think that’s kind of how the committedly inventive, innovative principle works.
[0:17:01] Bret: So again, we could do almost a one-to-one mapping of your schema onto mine. I do this in terms of prototyping rather than red teaming and discovering it’s wrong. It amounts to the same thing. When you say actually it’s hard, you and I would have to define two different kinds of hard probably. There is hard to make function, to stabilize, and there’s hard to figure out what the solution is. And those are distinct.
[0:17:31] We might find elements of both of them here, but let me just give a, maybe it’s the canonical example of a solution to a game theory problem that everyone will recognize: I divide, you choose. It’s the perfect solution to an obvious problem of choice and selfishness. If there is cake and I know that you’re going to get your choice and that you are incentivized to pick the larger piece, then I am incentivized to get the cut as even as possible. And the point is, it neutralizes the concern. So we are looking for solutions of that nature.
[0:18:15] Now I don’t think they are all that hard to understand in broad terms in general. There may be a lot of work on the discovery end but when you see them they end up being surprisingly simple. My biggest fear is that it is very rare for people to understand how much danger we’re in and why, and therefore what solution we are inherently looking for and how urgently we should be seeking it. In other words, as long as things function pretty well in the present and people get fed and housed, it is very easy for them to ignore the evidence that we are in grave danger even if we are fat and happy and enjoying a period of good luck.
[0:19:14] Daniel: One of the interesting things in the study of previous civilizations is that none of the great civilizations of the past still exist. They all failed even if they had been around for hundreds or thousands of years. And so to understand that civilizations failing is the only thing that’s ever happened, and then recognize that since World War II we have for the first time a fully globalized civilization where none of the countries can make their own stuff, that the supply chains that are necessary to make the industrial and productive base are globalized, and that we’re running up against the failure points of a globalized civilization. That’s an important thing.
[0:19:53] And what’s so interesting is that all the previous civilizations that failed had so much hubris before their fall because there had been so many generations where they had succeeded that they had forgotten that failing was a thing. It was just some ancient myth, it didn’t feel real. So we don’t have an intuition for things not working or for catastrophe because we haven’t experienced it and our parents didn’t experience it and it’s only myth. And as a result we just make bad choices. And I mean, this is where studying history and studying civilizational collapse is really helpful. And you can see that even as the system starts to fail in partial ways, to me it seems very clear that when we look at the George Floyd protests turning into riots over the summer that happened, they were following the COVID shutdowns and specifically all the unemployment from it.
[0:20:48] Whenever the unemployment goes up, whenever the homelessness goes up, when the society makes it to where people who are trying can’t meet their basic needs, then it gets a lot easier to recognize there’s something wrong with the system as a whole and go against it. But we also never had a point in human history where it was like, no matter how outraged I am all I have to do is start scrolling for a second and I’ve forgotten everything. Not to mention the fact that I’m probably on opioids and benzos. And so that makes it to where the frog can keep boiling in hot water longer.
[0:21:21] Bret: Yeah, so I often say that people are too comforted by the idea that people are always predicting the end of the world and it hasn’t happened yet. Because in fact it happens all the time, the ends of these civilizations. But it’s even worse than the analysis that you and I appear to agree on here, because many of those civilizations that have ended, in fact most of them, the civilization—the organizational structure—ended, but the people didn’t. So the Romans continued on as other things. The Maya are still with us, they are not with us as the Maya.
[0:22:00] And the point is actually in this case, the jeopardy that we are creating is to our very capacity to continue as a species, not just to our ability to continue with the structures that we have built. So not only are we all in it together this time, but we’re all in it in a way that we never have been before, or at least very rarely have been before. And that really ought to have people’s attention. But you’re right, the capacity to distract ourselves from it has never been better either.
[0:22:31] Daniel: I think something that I find particularly important when thinking about catastrophic risk now, relative to previous examples of civilization collapse, is that until World War II we couldn’t destroy everything. We just didn’t have enough technological power for catastrophe weapons. And so you could fight the biggest, bloodiest war, violate all of the rules of war, and it would still be a local phenomenon. And with the invention of the bomb, we had now the new technological power to actually destroy habitability of the planet kind of writ large, or at least enough of it that it was a major catastrophic risk. And on the timescales that you think about as an evolutionary biologist, of how long humans have been here and the proto-humans, since World War II is no time at all to have really adapted to understanding what the significance of that is.
[0:23:27] And the only reason we made it through was because we created an entire global order to make sure we never used that new piece of tech, and then all of history we always used the arms that we developed. And so we made this whole Bretton Woods world and mutually assured destruction that said, ok well let’s have so much economic growth so rapidly that nobody has to fight each other and they can all have more because the pie keeps getting bigger. But that starts running up against planetary boundaries. And interconnecting the world so much it gets so fragile that a virus in Wuhan shuts the whole world down because of interconnected supply chain issues. So that thing can’t run forever, and the mutually assured destruction was one catastrophe weapon and two superpowers, so mutually assured destruction works. The game theory of it works.
[0:24:10] Well, as soon as you start to add to that the bioweapons and the chemical weapons, the fact that bioweapons can be made very very cheaply now with CRISPR gene drives and things like that—grown weapons—we have dozens of catastrophe weapons held by many dozens of actors, including non-state actors, and that just keeps getting easier. Mutually assured destruction can’t be put on that situation. It doesn’t actually have a stable equilibria. So now we have to say, how do we deal with many many actors having many types of catastrophe weapons that can’t have a forced game-theoretic solution with a history where we always used our power destructively at a certain point? How do we deal with that? It’s novel, we have no precedent of that.
[0:24:56] Bret: Yeah, it’s absolutely novel. I mean, when I became cognizant, let’s say 1975 is where I first started having coherent thoughts about the world, that was only 25 years after the end of World War II and it seemed like World War II was a very long time ago, but of course we’ve covered that distance twice since then. So the ability for the tools with which for us to self-destruct as a result of aggression are brand new. And you’re absolutely right, the thing that prevented us from using them, that force disappeared, it no longer exists. There is no stable equilibrium here, so what’s protecting us is not well understood at best. And then add to that all of the various industrial technologies that we are now using at a scale where they imperil us.
[0:25:55] I don’t know about you, but I keep having the experience of a catastrophe happens and that’s the point that I get alerted to some process that is very dangerous to humanity that I didn’t know about until the catastrophe. This has happened with the financial collapse of 2008, it happened with the triple meltdown at Fukushima, it happened at Aliso Canyon, I believe it has now happened with COVID-19 and gain of function research. And the point is, it paints a very clear picture. We do things because we can’t see why we shouldn’t. Or this is also a game theory problem: those who can see why we shouldn’t, don’t, and a certain number of people don’t see why we shouldn’t and they do, and we all suffer the consequence of their myopia.
[0:26:42] And so on multiple fronts we are rolling the dice year after year, and the people who can think independently looking at that picture, looking at the series of accidents, looking at the hazard of something like a large-scale nuclear exchange without an equilibrium to prevent it, those people wake up. But the problem is, the mechanism to actually begin to steer the ship in a reasonable direction in light of these things doesn’t seem to exist for reasons I’ve heard you explore many places. So, what does it mean as far as you can tell?
[0:27:25] Daniel: There’s one thing that you said that I think is worth us addressing first, is that some of the things that caused the catastrophe either were unknown, or those who knew them were game-theoretically less advantaged than those who were oriented on the opportunity rather than the risk. Because those who orient towards opportunity usually end up amassing more resources that equals more ability to keep moving stuff through. There is an article in a conversation in the LessWrong community about, regarding catastrophic risk, mistake theory versus conflict theory.
[0:27:55] What percentage of the issues come from known stuff that we knew would cause a problem or at least could cause a problem and game theoretically we went ahead with it anyways, versus stuff where we just couldn’t have anticipated or really didn’t anticipate? And I think it’s fair to say these are both issues. There’s true mistake theory stuff, like we just couldn’t calculate, and then there’s true conflict theory stuff. We knew that escalating this military capacity would drive an arms race, or the other people would, that if we calculated it, there’s an exponentiation on all arms races that takes us to a very bad long-term global situation.
[0:28:33] One of the insights that I think is really interesting is that the fact that the mistake theory is a thing and everyone acknowledges it ends up being a cover, a source of plausible deniability for what’s really conflict theory. So we know there is an issue, we pretend not to know, we do a bullshit job of due diligence and risk analysis and then afterwards say it was a failure of imagination and we couldn’t have possibly known. I have actually been asked by companies and organizations to do risk analyses for them where they did not want me to actually show them the real risk. They wanted me to check a box so they could say they did risk analysis so they could pursue the opportunity, and when I started to show them the real risk, they’re like, fuck we don’t want to know about that.
[0:29:19] And so when it comes to the “could we have possibly factored that?” a classic example I like to give because it’s so obvious in retrospect is could we have known in the development of the internal combustion engine that making street cars, which seemed like a very useful upgrade of having the horse internalized to the carriage, would end up causing climate change and the petro dollar and wars over oil and oil spills en masse and ocean oil spills and whatever. It seems like that would have been hard to know a hundred years in the future that it would do all that stuff.
[0:29:58] And this is a classic example of also where we solve a problem and end up causing a worse problem down the road in the nature of how we do it, which you can’t keep doing forever. The story is oh we caused a worse problem and that’s the new market opportunity to solve that problem in the ongoing story of human innovation. But when you start running up against that the problems are actually hitting catastrophic points, you don’t get to keep doing that ongoingly. You don’t get to regulate after the fact—the way that we always have once a problem hits the world—things that are catastrophic. Could we have known? Well, yeah.
[0:30:31] In London before that, one, there were already electric motors, and two, people were already getting sick of burning coal, from lung disease from the burning of the hydrocarbons. If we had tried to do good risk analysis could we have? Yeah. But there’s so much more incentive on who pursues the opportunity first, and then there’s this multi-polar trap of, well let’s say we don’t, the other guy’s going to so it’s going to get there anyway so we might as well be the ones to do it first. And that thing ends up getting us all the time, which is why collective action again comes in.
[0:31:08] Bret: Well, it’s really interesting how much of this is again parallel. Heather and I use the example of somebody driving the first internal combustion engine and somebody chasing them down the street saying “don’t do that you’ll screw up the atmosphere!” How crazy is that person running down the street saying that, because you would have to scale it up to such a degree before that’s even a concern that that person seems like a nervous Nellie, but of course they would also have been prophetic. But the other thing I want to ask you about is, you say that we have these two categories where sometimes we could have known and we knew in fact and we went ahead anyway, and then in other cases we didn’t know and something snuck up on us, and I want to clarify what you just said.
[0:31:59] Because my understanding here is that if you dig deep enough, somebody always knew.In general there’s some mechanism whereby the person who correctly predicted what was going to happen has been silenced. Often they lose their jobs, they disappear from polite society, at the point they turn out to be right their reputations are never resurrected as far as I can tell. So am I wrong that even in the cases where people who made the decisions may plausibly have not known that the reason they didn’t know is because there’s some sort of active process that, when there’s a profit to be made, shuts down anything that could be the basis of a logical argument that we shouldn’t do it?
[0:32:47] Daniel: I don’t know that I’ll say always. I’ll certainly say most of the time. Let’s say there was a case where really nobody knew. My guess is we probably could have, had we tried harder. And then let’s say there’s going to be some unpredictable stuff. We know in complex situations there’s going to be unpredictable stuff. So you do the best job you can to forecast the risks and harms ahead of time but then you also have to be ongoingly monitoring, well what would the early indicators that there’s a problem be, and when we find there’s something we hadn’t anticipated, how do we factor that into a change of design? Well once the profit stream is going and the change of design fucks up the profit stream, how does the recognition of that there is a problem actually get implemented when those who have the choice implementation power are not the people who are paying attention to those indices?
[0:33:48] So, yes, I would say—and it’s easy to just say, hey yeah there was some wackadoodle who was saying that there was going to be some risk, but there’s always some wackadoodle saying there’s going to be risk about every new tech and if we really listened to all of them we’d have no progress. That’s the story, right? It’s a bullshit story. But there’s a collective coordination issue, because it is fair to say, so let’s take AI weapons right now. Specifically automated drone weapons. There is an arms race happening on automated drone weapons, and I think every general and military strategist knows that all of our chance of dying from AI weapons goes up—their kids, everybody’s—as we progress in that technology. It’s a bad technology. It shouldn’t exist. We should create an international moratorium that says nobody builds AI drone weapons, that we don’t want automated weapons with high intelligence out there.
[0:34:53] But we can’t make that moratorium because if one country doesn’t agree, if one non-country, some non-state actor doesn’t agree that has the technology—or let’s say everybody agrees, how do we know they’re not lying and developing it as an underground black project? So either we don’t even make the agreement, or we make the agreement knowing we’re going to lie, defect in a black project, spy on their black project and try to lie to their spies who are spying on us. And so it’s like how do you get around that thing, where if anyone does the bad thing in the near term it confers so much game-theoretic advantage that anyone who doesn’t do it loses in the long term. Why was it that the peaceful cultures all got slaughtered by the warring cultures and so what ends up making it through is those who end up being effective enough at war. That’s an underlying thing we have to shift because that has as its eventual attractor space, self-destruction in a finite space.
[0:35:47] Bret: Yeah, I totally agree, and the I think fascinating thing when you interact with the incarnate aspect of the process you just described is that the people who are telling the lies that explain why we’re doing something that we know is reckless, often don’t know that that’s what they’re doing. They actually believe their own press and instead of saying, well yes this is terrible but we don’t really have a choice, or somehow indicating that they know that, what you encounter is a true believer who thinks that this is safe. And that’s very frightening because it means that the mechanism at the point something begins to go awry to do anything about it doesn’t exist, or at least it’s not connected to the part that you can talk to.
[0:36:36] So again, not too surprised to find overlap in our map. I would say the process that you describe of by the time you discover what the hazard is that there’s a profit that has accelerated the process, I call this the senescence of civilization because it’s actually exactly a mirror for the process that causes a body to senesce. The evolution of senescence involves processes that have two aspects, one which benefits you when you’re young and another which harms you when you’re old. And because many individuals don’t live long enough to experience the harms in old age, they get away with it from evolution’s point of view and evolution favors the trait in spite of the late-life harm.
[0:37:21] So those late-life harms accumulate and that’s the reason that we grow feeble with age and die. And that’s an exact mirror for the way we’ve set up our economic and political system, where any process that is profitable in the short term at the consequence of having some dire implication for civilization later on, those processes are so ingrained by the time we discover what the harm looks like in its full form there’s nothing we can do to stop them.
[0:37:51] Daniel: Ok, so let’s use two really important current examples. So let’s take Facebook and social media, and the way they’ve affected the information commons and the epistemic commons writ large. So we know that the nature of the algorithms optimizing for time on site while being able to factor what I pay attention to, the whole Tristan Harris story, makes almost—very few people wake up and say, I’d like to spend six hours on Facebook. And so I’m going to spend more time on Facebook if I kind of leave executive function, rational brain, and get into some kind of limbically hijacked state where I forget that I don’t want to spend my whole day on Facebook. And so time on site maximization appeals to existing bias and appeals to limbic hijacks.
[0:38:42] So if I piss off and scare and elicit sexual desire and whatever of the whole population while doubling down on their bias and creating stronger ingroup identities associated with outgroup identities, the algorithms optimize. Well, it is an AI of the power that beats Kasparov at chess beating us at the nature of the control of our attention. So we can see that the right got more right, the left got more left, the conspiracy theories got wackier, the anti-conspiracy theory people became more upset at the idea that a conspiracy could ever exist. Basically everybody’s bias doubles down and they all move apart from each other faster. Well society doesn’t get to keep working, that is a democracy killer, that’s an open society killer.
[0:39:29] There’s a reason China controls its internet is if you don’t want your society to die you have to be able to have some shared basis of thought. And the story is, oh we didn’t know that was going to happen. Well you go back and look and guys like Jaron Lanier were at the very beginning of Facebook and Google whatever saying hey guys, this ad model is going to fuck everything up, you can’t do the ad model thing, you’ve got to have pay for subscription or some other kind of thing. And it was like, ugh shut up dude. Or just don’t even engage in the conversation and then get to say afterwards “failure of imagination”.
[0:40:04] But now how do you regulate it when those corporations are more powerful than countries? Because the regulation is going to be happening in a court where the lobbyists have to be paid for by somebody. So who are the lobbyists paid for by? And it has to be supported by popular attention, and those who can control everybody’s attention can also affect what is in popular attention. So this is a very real example where we know the harms were known and it actually got large enough that it killed the regulatory apparatus’ capacity.
[0:40:36] Bret: Absolutely. In fact, again, this is going to be another alignment of our maps. So what I’ve been playing with is the idea that we are incorrect in imagining that people necessarily want their expectations flattered. That people may actually like to be challenged, but that’s it’s inconsistent with the wellbeing of advertisers. That the very fact is, because advertising is only a tiny fraction informative and is mostly manipulative, you have to be in your unconscious autopilot phase in order for it to cause you to buy a car you wouldn’t have otherwise bought or buy different deodorant than you would otherwise buy.
[0:41:20] And so the point is the thing gets paid for by advertising. In order to be useful to advertisers we have to be unconscious, and the only way to keep us unconscious is not to challenge us, basically to tell us what we think we already know rather than what we need to know. And so they’re lulling us into this even though we would still be interested in the platforms if we weren’t being advertised to, but we would be interested in having more important conversations there. Which is really in some sense what the growth of heterodox podcast space is about.
[0:41:55] Daniel: Oh my goodness, ok there’s two directions I want to go at the same time. I’ll just pick one. There’s a reward circuit on exercise and there’s a reward circuit on junk food. And they both have a dopaminergic element and reward circuit but of a very different kind. And the reward circuit on exercise is that it actually feels like shit at first and is hard, but your baseline of happiness measured in whatever—dopamine, opioid access, whatever—gets better over time. And then you start actually feeling over time but not quickly. This is another place where temporal myopia ends up mattering because there’s a delayed causation on the healthy one and no delayed causation on the unhealthy one. So I start getting the reward circuit on exercise when I start seeing results, and then I want to push hard and then I’m willing to actually go against entropy and put energy into the system so the energy grows.
[0:42:51] Whereas the chocolate cake I get a reward instantly and I don’t have to apply any energy, but as I do it my baseline gets worse. And this is the addiction versus health reward circuit direction. And the same is true for scrolling Facebook compared to reading an educational book. At the end of a month of reading the educational books my life feels better, I feel more proud of myself. At the end of a month of scrolling Facebook I’m like what the fuck am I doing with my life. And yet that one will keep winning for the same reason that 70% of our population is overweight and over a third of them obese. And so, but my only hope is that not everyone who has access to too many calories is obese. There are some people who figured out, hey that’s a reward circuit I don’t want to do, and I’m going to exercise and I’m going to not eat all of the fat, sugar, salt that evolution programmed me to have a dopamine hit for because it’s a shitty direction.
[0:43:49] Now we need to get that number of people who actually have taken some sovereignty over their fitness and wellbeing in the presence of the cheaper reward circuit, we need to get that number up to everybody because right now obviously overweight is one of the main causes of death in the developed world. But we have to then apply that to the even more pernicious, hyper-normal stimuli because salt, fat, sugar are hyper-normal stimuli in the gustatory system. We have to apply that to the sentry system that’s coming in through things like social media and that means less social media, less entertainment, more study. And it doesn’t have as fast a reward circuit, it just doesn’t. But it has a much better longer-term reward circuit where your baseline goes up.
[0:44:30] And this is where enough mindfulness and enough discipline have to come in because otherwise the orientation of the system is that it’s more profitable for corporations for me to be addicted because you maximize lifetime value of a customer through addiction. And it’s an asymmetric war because they’re a billion or trillion dollar company and I’m me. So how do I win in that asymmetric war where it’s in their profit incentive whether it’s McDonald’s or Facebook or Fox for me to be maximally addicted? I have to recognize holy fuck, I actually have no sovereignty—even if I claim to live in a democracy—against these autocracies who want to control and manipulate my behavior in a way that is net negative for me holistically while having the plausible deniability that I’m choosing it because they’re coercing my choice.
[0:45:17] So I have to get afraid of that enough that I mount a rebellion, a revolutionary war in myself against those who want to drive my hyper-normal stimulus reward circuit. So the whole, “how can everybody become more immune to the shitty reward circuits and notice them and become immune to them, and how can they become more oriented to the healthy reward circuits”, that’s another way of talking about what we have to do writ large.
[0:45:42] Bret: Yeah. That’s beautiful, completely agree. In fact it dovetails with another thought that first time I thought it, I thought it was original and then having said it I discovered lots of people had said it before me, that there’s a very close relationship between wisdom and delayed gratification. That it’s the ability to bypass the short-term reward circuit in order to get to something deeper and better. That is what wisdom is about. But you didn’t include on your list what I consider to be maybe one of the most important instances of the failure that you’re talking about, which is sex. There’s a very direct comparison, at least for either males who are wired in a normal fashion for a straight guy, or women who are toying with that same programming, which I believe there are many.
[0:46:46] But the comparison between casual sex which is certainly—we are as males wired to find that a very appealing notion because it’s such a bargain if you can produce a baby where you’re not expected to contribute to its raising, that’s a huge evolutionary win. And then you have to compare that to the rewards of a deep, romantic, lasting relationship with commitment. And the problem is that the deep, lasting relationship stuff has a hard time winning out over the instant gratification thing if the instant gratification thing is at all common. And so that’s really screwing up people’s circuitry with respect to interpersonal relationships and bonding.
[0:47:35] And I have a sense that it is also, in a way that’s much harder to demonstrate, contributing to the derangement of civilization, that many fewer people have a relationship. It’s not like marriage is easy. It’s not, it’s super complex. But having somebody who you can fully trust, somebody who you’ve effectively fused your identity with to the level that they share your interests and they may be the only person who will tell you what you need to know at some points, and the fact that many people are missing that I think is deeply unhealthy.
[0:48:15] Daniel: Yeah, so I would say that market-type dynamics benefit from exploiting the shitty reward circuits across every evolved reward circuit axis. And so from an evolutionary point of view, survive and mate are the things that make your genes get through primarily. So we’ve mentioned the survive, the calorie one, earlier. So in an evolutionary environment I could get plenty of green leafy things. In many environments it was very hard to get enough fat, enough sugar and enough salt. Those were evolutionarily rare things, so more dopaminergic hits on those. So fast food ended up figuring out how to just combine fat, salt, sugar with no other nutrients with maximized ease of palatability and textures.
[0:49:09] And there’s a scientific optimization of all of the dopamine hit with none of the nutrients, so you can actually be obese and dying of starvation. And what that is to nutrition, where you should have a natural dopaminergic hit on something that has nutrients built in for adaptive purposes, is what porn and online dating is to intimate relationship, is what Facebook and Instagram is to tribal bonding, is how do we take the hyper-normal stimuli part of it out, separate it from the nutrients and make a maximally fast, addictive hit that actually has none of what requires energetic process?
[0:49:54] Bret: Yeah, I’ve called this the junkification of everything and it is directly an allusion to junk food where we can most easily see this. But the idea is you will be given a distilled—so if I can rephrase what you said in terms that are more native to me. When you are wired to seek the umami taste that tends to be very tightly correlated with meat, you will tend to get a lot of nutrients along with it in the ancestral context. In the modern context, we can figure out how to just trip the circuit that tells you to be rewarded and it’s no longer a proxy for the things that it was supposed to lead you to.
[0:50:38] And as you just said, you can now look at that across every domain where you have these dopamine rewards and understand why people are living in the world that Pinker correctly identifies we are living in, where we have just a huge abundance and yet are so darn unhealthy, certainly unsatisfied. It explains that paradox of being better off in many ways than any ancestor could have hoped to be and yet being effectively ill across every domain.
[0:51:15] Daniel: Yeah, I will say something about this that’s important. I mean, briefly, the fact that life expectancy started going down in the last five years in the U.S. and certain parts of the developed world is really important to pay attention to. But the deeper point I want to make is the Hobbesian view on the past I think is one of those mistake theory/conflict theory things. I think the dialectic of progress is such a compelling idea, and we’re oriented to the opportunity and not the risk.
[0:51:50] In the same way we don’t want to look at the risk moving forward that would have us avoid an opportunity, we don’t want to look at good things in the past. And we don’t want to look at good things of cultures that we want to obliterate. So we want to call the Native Americans savages so that we can of course emancipate them historically, and we want this Hobbesian view that people had brutish, nasty, mean, short-lived lives in the past so that we don’t have to face the fact that advanced civilizations failed and that is what our own future most likely portends. I think that is a convenient wrong belief system in a similar way.
[0:52:29] Bret: Well I hope you don’t hear me doing that, I certainly—
[0:52:32] Daniel: No, you don’t. I just had to say it.
[0:52:34] Bret: You had to say it so it’s clear to our listeners. Well I appreciate you doing that. I did want to go back to a couple things you said, and of course this happens every time you and I talk where every thread takes on multiple possible directions we could go and there’s no way to cover them all. But in any case, you pointed to survival and mating being the primary mechanisms to get your genes into the future. And I want to point out that this is one of these places where our wiring, which is biased in the direction of those places where our ancestors had agency that was meaningful, upends us. And in fact this is something I think you and I are struggling against as we try to compel people of the kind of danger we’re in and the necessity to upgrade our system before we run into a catastrophe too big to come back from.
[0:53:30] And so in any case, within your population survival and mating makes sense as an obsession. But probably the biggest factor in whether or not your genes are around a hundred generations from now is whether the population that you were a part of persists. And so my field has done a terrible job with this. We have gotten pretty good at thinking about individual level adaptation and fitness and when I say lineage people still don’t know what I’m talking about and they’re confused about why I’m focused on it. And my sense is, it’s like two components to an equation and you’re either aware of the lineage thing but you misunderstand it as group selection, or you’re not aware of the lineage thing and you think group selection is a fiction and it’s all about individuals. And both of those are ways to misunderstand the point.
[0:54:30] Daniel: I’m so happy to hear you saying this because I’m sure this is a conversation I would love to go deeper and understand the distinctions between lineage and group selection the way that you see them. But if I just even take the concept of group selection as opposed to just individual selection and take a species like sapiens, and say there was no such thing as an individual that got selected for that was not an effective part of a group of people. And the tribe, the band, was the thing that was being selected for, so there was fundamentally kinds of prosocial behavior that were requisite.
[0:55:08] But then we get bigger than the Dunbar number only like yesterday evolutionarily and the whole evolutionary dynamics break because that prosocial behavior only worked up to that scale when everybody could see everybody and knew everything. When we start looking at how do we solve collective action problems, you start realizing well, if we make some agreement field as to how nobody does the near-term-game-theoretically-advantageous-relative-to-each-other-long-term-bad-thing, there has to be transparency mechanisms to notice it. So the beginning of defecting on the whole, defecting on the law, the agreement field of morals, is the ability to hide stuff and get away with it. Well you can’t hide stuff in a tiny tribe very well.
[0:55:56] Bret: Even if you could do it once, twice, ten times, sooner or later if hide is your instinct you’ll be revealed and the cost will exceed what you’ve built up by pulling it off however many times you’ve done it.
[0:56:09] Daniel: And so there’s a forced transparency in that smallness of evolutionary scale. And when you start to get up to a large scale, and now there have to be systems where everybody isn’t seeing everyone and I’m smart enough I can figure out how to play it and fuck the whole while pretending that I’m not hiding the accounting of it and getting ahead, that’s the evolutionary niche for corruption, for parasitic behavior. So one way I would describe, and as you’ve described on here before, if there’s a niche with some energy in it, it’s going to get exploited.
[0:56:43] We have to rigorously close the evolutionary niches for human parasitic behavior, humans parasitizing other humans. And the first part of that is a kind of forced transparency that if someone were to engage in that, it has to be known. And now the question is that all the versions of that we’ve explored at scale look like dreadful surveillance states, so how do you make something that doesn’t look like a dreadful surveillance state that also doesn’t leave evolutionary niches for parasitic behavior that ends up rewarding and incenting sociopathy?
[0:57:18] Bret: Absolutely. So bunch of different threads. One, the Elinor Ostrom work is important because it does point to the fact that you can scale these mechanisms up. In fact selection has scaled up these enforcement mechanisms beyond a tiny number of people who know each other intimately. Now it hasn’t scaled them way up, but it’s proof of concept in terms of the ability to get there and it’s a model of what these systems might look like. The other thing though, your focus on corruption I think is absolutely right. And one way to just detect how stark the difference is, is the recognition of how many times in an average day you encounter bullshit. In other words, how many advertisements do you encounter in an average pre-COVID day let’s say.
[0:58:18] These are all cases where somebody you don’t know, or almost all of them are cases where somebody you don’t know is attempting to manipulate you into spending resources differently than you would otherwise spend them. So this is an overwhelmingly dishonest interaction with your world, and there would have been some dishonesty for an ancient ancestor, obviously there are creatures that attempt to look like what they are not. But in general, one could see the world as it was and the deception was the exception not the rule.
[0:58:54] And in some sense we live in a sea of bullshit, and we’re so used to it that we don’t even recognize that that’s abnormal, that it is the result of a gigantic niche that has opened up as a simple matter of scale, as you point out. And that restoring a state where you can actually trust your senses, you can by and large trust the people who you’re interacting with to inform you rather than lie to you, would be a huge step towards reasonability.
[0:59:33] Daniel: Oh I really hope that we follow all the threads here, because this is getting so close to the heart of what we have to do. As scale increases the potential for asymmetry increases, and as the asymmetry increases the asymmetric warfare gets more difficult to deal with. So let’s think about this in terms of market theory. Let’s think about an early hypothetical idealized market, like literally people just took their shoes and their cows and their sheep and their service offerings to a market and they looked at exchanging them. And then because trading cows for chickens is hard we have some kind of currency to mediate a barter of goods and services, but we’re talking about real goods and services.
[1:00:23] Maybe there’s two people, maybe there’s five people that sell shoes. There’s not five thousand of them and I can go touch the shoes myself, I can talk with them, I can see what the price is, and there is no hyper-normal stimuli of advertising, it’s like somebody yelling from his thing. So there’s a symmetry between the supply and the demand side. The supply side is a guy or a few guys selling something, and the demand side is a person or a family trying to buy something, and they can kind of tell each other’s bullshit to some degree of symmetry. Buyer beware becomes an important idea. But now when this becomes Nike and then still one person, there’s still symmetry between supply and demand in aggregate, meaning the total amount of money flowing into supply equals the total amount flowing out of demand, but this side [gesturing] is coordinated and this side [gesturing] isn’t.
[1:01:14] You don’t have something like a labor union on all purchasers. Or it’s like all Facebook users are part of some union that puts money together to counter Facebook in lobbying and regulation. You have Facebook as a close to a trillion dollar organization against me as a person, and I’m still the same size person I was in those early market examples but there wasn’t a trillion dollar organization. And now when that happens, manufactured demand kills classical market theory—which is the idea of why a market is like evolution, it’s like some evolutionary process—is that the demand is based on real people wanting things that will actually enhance the quality of their life. And so that creates an evolutionary niche for people to provide supply and then the rational actor will purchase the good or service at the best price and at the best value.
[1:02:10] But of course, as soon as we get to a situation where, and you look at Dan Ariely and all the behavioral economics saying the rat homo economicus, the rational actor, doesn’t exist. We end up making choices based on status that’s conferred with a brand based on the compellingness of the marketing, based on all kinds of things that are not the best product or service at the best price. But you also get that I want stuff that will not increase the quality of my life. I desperately want shit because the demand was manufactured into me, so it’s not an emergent authentic demand that represents collective intelligence. It’s a supply side saying I want to get them to want more of my shit, and I actually have the power to do that using applied psychology.
[1:02:57] And as soon as you get to split testing and the ability to AI split test a gazillion things, we’re talking about radically scientifically optimized psychological manipulation for the supply side to create artificial demand and then be able to fulfill it. And most of that ends up being of the type that is actually bad for the quality of the life of the people but you have the plausible deniability “they’re choosing it, hey I don’t want to be patriarchal and control what they’re doing. The people are choosing it, I’m just offering the source of supply that they’re wanting.” Bullshit. That’s like offering crack to kids and then when they come back for more of it saying hey [shaking head]. So that was one of the threads I wanted to address.
[1:03:41] Bret: Well, I love it. Back in, must have been 2013 when Game B was actually a group of people who met in a room and talked about things, one of the points that I was making in that context was this inherent asymmetry around unionization, and that the problem is unions have gotten a bad rap because of the tight association cognitively that we have with labor unions. We think of unions and labor unions as synonymous but union is actually a category. It’s potentially a very large category and effectively, management always has the benefit of it. The question is will workers have a symmetrical entity, that’s the labor case, but you can make the same case with respect to banking.
[1:04:31] Credit unions don’t work, they’re very bank-like, but if they were structured in such a way to actually unionize people who utilize the bank it could be highly effective. It could be a complete replacement for the insurance industry which doesn’t even make sense in a market context. But as a risk pool you could do a very effective job. So anyway, yes, the question is how do you scale up the collective force and especially how do you do it in light of the fact that the entities that are already effectively unionized see it coming and they disrupt it with all of their very powerful tools. And so, well anyway—
[1:05:16] Daniel: Can I—
[1:05:17] Bret: Go ahead.
[1:05:18] Daniel: I want to say the beginning of an answer to that because I think it brings us to what you’ve been largely exploring in this show of late, of the breakdown of democracy in open society and what do we do about that and how that relates to breakdown in culture and breakdown in market. We can look at those, the relationship between those three types of entities. So a way of thinking about what the architectural idea of a liberal democracy is and why the founders of this country set it up not as a pure laissez-faire market but as a state that had regulatory power and the market together, the idea is that a market will provision lots of goods and services better than a centralized government will.
[1:06:17] So let’s leave the market to do the kind of provisioning of resources and innovation that it does well, but the market will also do a couple really bad things. It will lead to increasing asymmetries of wealth inexorably. This is what Piketty’s data showed, but it’s just obvious having more money increases your capacity to have access to financial services and you make interest on debt and on compounding interest on wealth. And so you end up getting a power law distribution of wealth. So then a few people in just the market dynamic would be able to have way outsized control over everyone else against everyone else’s interests.
[1:06:55] And the market creates opportunities for things that are really bad. We all know that we want there to be a thing called crime where even though there’s a market incentive for child sex trafficking and whatever else, we say no we’re going to create some rule of law that binds that thing and not just have market drive it. So the idea is that we create a state that we actually give a monopoly of violence to, so it has even more power at the bottom of the stack of what power is than the top of the economic power law distribution. So the wealthiest people and the wealthiest corporations will still be bound by this rule of law. And the rule of law is an encoding of the collective ethics of the people. The ethics are the basis of jurisprudence, and there is some kind of democratic process of getting to say, what is it that we consider the good life and important that we want enshrined in rule of law?
[1:07:45] We give that a monopoly of violence, and really then the goal of the state is to bind the predatory aspects of market incentive while leaving the market to do the things that it does well. But pretty much every law is where someone has an incentive to do something, which is a market-type dynamic, that is bad for the whole enough that we make a law to bind it. Ok, so the purpose of a state is to bind the predatory aspects of a market. That only works as long as the people bind the state. And the people bind the state if you have a government of, for, and by the people of an educated populace who had a quality of education that were capable of understanding all the issues upon which we are governing and making law, and a Fourth Estate where the news that they are getting is of adequate quality and unbiased enough that they’re informed about what’s currently happening.
[1:08:35] If you think about that, that’s what a republic would require, and you realize that both public education and the Fourth Estate have eroded so badly for so long, it’s not that we’re close to losing our democracy, it’s dead. We don’t have a republic, we have a permanent political class and a permanent economic lobbying class, and the people who aren’t really actively engaged in government in any way at all, beyond maybe jury duty now and again if they can’t get out of it. And if the people to be engaged in government in any meaningful way had to tell the DOE what they think should be done about grid security and energy policy, or tell the DOD what should be done about nuclear first strike policy or tell the Fed and Treasury what they think about interest rates, they have no fucking idea how to have a governance of, for and by the people.
[1:09:21] They don’t have that education, they don’t have the media basis. So if the people can’t check the state, then the state will end up getting captured by the market, and so you’ll end up having the head of the FDA be someone who ran a big drug or a big Ag company and the head of the DOD being somebody who ran Lockheed or some military-industrial complex manufacturer, you’ll have just lobbying, just straightforward lobbying gets paid for by somebody. Who’s it get paid for? Those who have the money to pay for lots of lobbyists. And so then you end up getting a crony capitalist structure which is worse than just an evil market because now it has the regulatory apparatus of rule of law and monopoly of violence backing up the market-type dynamics. So then we say, well ok what do we do here?
[1:10:09] And we see that civilizations fail towards either oppression or chaos, those are the two fail states. They fail towards oppression if trying to create some coherence happens through a top-down forcing function. They fail towards chaos if not having enough top-down forcing function, everybody kind of believes whatever they want but they have no unifying basis for belief and so then they will end up going into—they’ll balkanize, they’ll tribalize, and then the tribal groups will fight against each other. Either we keep failing towards chaos, which we can see is happening in the west and in the U.S. in particular right now, and then China, which is happy to do the oppression thing—and oppression beats chaos in war because it has more ability to execute effectively, which is why China has built high speed trains all around the world when we haven’t built a single one in our country.
[1:11:05] So either we lose to China in the 21st century and oppression runs the 21st century, or we beat China at being China, meaning beat it at oppression or it’s like fuck, those are both failure modes. What is there other than oppression or chaos is order that is emergent, not imposed, which requires a culture of people who can all make sense of the world on their own and communicate effectively to have shared sensemaking as a basis for shared choicemaking. The idea of an open society is that some huge number of people can all make choices together, a huge number of people who see the world differently and are anonymous to each other, not a tribe.
[1:11:41] That was an enlightenment-era idea born out of the idea that we could all make sense of the world together, born out of the philosophy of science and the Hegelian dialectic, that we could make sense of base reality and that we could make sense of each other’s perspective, dialectic, find the synthesis and then be able to have that be the basis of governance. So what I think is, this is not an adequate long-term structure because we can talk about why tech has made nation-state democracies obsolete and it’s just not obvious yet, but it has. But as an intermediate structure, the reboot of the thing that was intended has to start at the level of the people, at culture, and that collective sensemaking and collective good faith dialogue, because without that you can’t bind state, without that you can’t bind market incentive.
[1:12:32] Bret: Ok, I love this riff of yours. I think there’s a tremendous amount that’s really important and the synthesis is super tight. I know people will have a little bit of trouble following it, but I actually would advise them to maybe go back through it and listen to it again because it’s right on the money as far as I’m concerned. There’s one place where I wonder if it doesn’t have two things inverted.
[1:12:55] Daniel: Ok.
[1:12:56] Bret: So you talk about the two characteristics that are necessary in order for—what did you call it, liberal democracy or whatever it was that you used as a moniker—to function. One of them had to do with the idea that the state was big enough to bind the most powerful and well-resourced actors. And the second was that the people have to be capable of binding the state. Now I understood you to say that what failed first was the people’s ability to bind the state, is that correct?
[1:13:32] Daniel: I’m saying that’s at the foundation of the stack that we have to address, but the failure was recursive.
[1:13:38] Bret: So as I see it, what happened was the power—the fact that there is always corruption, it’s impossible to drive it out completely—the corruption self-enlarges the loopholes and becomes subtle enough that it’s hard to see directly. The most powerful actors suddenly got an infusion of power, and we could trace down the cause of it, but let’s just say somewhere in recent history the most powerful actors became more powerful than the state. And what they did with that power was they unhooked the ability of the state to regulate the market. I believe the reason for this was that each individual industry had an interest in having its regulations removed in order to create a bigger slice of the pie for it, and so effectively what you had was each industry agreeing to unregulate every other industry.
[1:14:39] If I’m a pharmaceutical company and you’re an oil company, and you want to make money but you have to be able to fuck up the atmosphere to do it, and I want to make money giving people drugs that they shouldn’t have and corrupting the FDA, then we’ll partner. And so what you got was many industries partnering to unhook the ability of the state to bind the market. But one of the things that they had to do in order to make that work was they had to eliminate the ability of the people to veto. And so this is where we get this incredibly toxic duopoly that pretends to do our bidding and pretends to be fiercely opposed—the two sides of it—but in fact the thing they’re united about is not allowing something else to compete with them for power. So it’s the wolf in sheep’s clothing is in charge of the thing that is supposed to be protecting us from wolves. In any case, we don’t have to go too deep there, but—
[1:15:40] Daniel: No, this is actually super important.
[1:15:43] Bret: Go for it.
[1:15:45] Daniel: This is related to the thing we said about as the market as a whole gets bigger, then the individual consumer stays an individual consumer but the supply side, the company gets much larger. As that happens, the asymmetry of the war between them, of the game theory between them, gets larger and so manufactured demand becomes a more intense thing. Well the same thing is true in terms of the market capacity to influence the government, and the market-government complex’s capacity to keep the population from getting in the way of the extraction. And so there’s a heap of mechanisms that happen, and there’s not like five guys at the top who are coordinating all of this, it’s a shared attractor or incentive landscape that orients it.
[1:16:34] Bret: Yeah, largely emergent.
[1:16:36] Daniel: Yeah, and where there are people conspiring it’s because there’s shared incentive and capacity to do so. So the conspiracy is itself an emergent property of the incentive dynamics, which then in turn doubles down on the types of incentive dynamics that make things like that succeed. So, ok let’s take a couple examples. If people haven’t read it, they should all read at least the Wikipedia page on public choice theory, a school of libertarian thought that critiques why representative democracy will always break down, that the founders of the U.S. basically said this, which is—all right, we’ll come back to symmetry for a moment. At the time that we were creating the structure of a liberal democracy, the size of choices and the speed of them was smaller and slower such that the town hall was a real thing. And when the town hall is a real thing, the coupling between the representative and the people is way higher because the people are actually picking representatives in real time that are really representing their interest and they get to have a say in it.
[1:17:46] There was a statement by one of the founders of the country that voting is the death of democracy, because the idea is we should just be able to have a conversation that is good enough that we come up with a solution and everyone’s like that’s a good idea. If we can’t then we vote, but that means that some big percentage, close to half the population, feels unhappy with the thing that happened. And so it’s a sublimated type of warfare, it’s a sublimation of violence but that leads to a polarization of the population. And so the goal is not voting. Voting is the last step of when we couldn’t just succeed at a better conversation in speccing out what is the problem? what are the adjacent problems? what are the design constraints of a good solution? can we come up with a solution that meets everybody’s design constraints as best as possible?
[1:18:29] Bret: Ok so I disagree with this at one level, as I’m sure you will as well, or I’m not sure but I suspect. But I love something about the formulation that voting is itself a kind of failure mode. That ideally speaking, if you had a well-oiled machine, if you had a, military is the wrong analogy here, but let’s say you had a ship of people fighting impossible odds to make it back to safe harbor. The point is you really shouldn’t want a system in which you’re voting between two different approaches to the problem. You should want a discussion in which everybody by the end is on board, and if you try to do that in civilization we’d never accomplish anything. You effectively have to give the majority the ability to exert a kind of tyranny over the minority in order to accomplish the most basic stuff. But that’s because the system is incapable of doing what a better system would do, which is to say, this is the compelling answer and you’re going to know why by the time we decide to do it.
[1:19:41] Daniel: Wait, there’s a symmetry here between the conversation that we had about the market incenting people who focus on the opportunity and not the risks such that it actually suppresses those who look at the risk. Once you say, “hey there’s always going to be somebody talking about a risk that isn’t going to happen, we’ll innovate our way out”, and that becomes the story, now you have plausible deniability to always do that. Once you say “there’s no way to get everybody on the same page, we can’t do that, it’d be too slow”, now I don’t even have any basis to try. And so I don’t ever even try to say what is it that everyone cares about relative to this so I even know what a good solution would look like to craft a proposal. No, we’re going to vote on the proposition having never done any sensemaking about what a good proposition would be. And that’s just mind-blowingly stupid. And so then who’s going to craft the proposition? A lawyer. A lawyer’s paid for by who? Some special interest group.
[1:20:35] So most of the time what happens is you have some situation where one thing that matters to some people has a proposition put forward that benefits it simply in the short term but it externalizes a harm to something that matters to other people. But ultimately all of it matters to everybody, just differentially weighted. And the “how do we put all those things together”, so ok we’re going to do something that’s going to benefit the economy but harm the environment. Well, everybody cares about the economy and everybody cares about the environment, but if I put forward a proposition that says in order to solve climate change we have to agree to these carbon emission controls that China won’t agree to and therefore China will run the world in the 21st century and we all have to learn Mandarin or be like the Uyghur or something. Ok well now I have bunch of people who, because they hate the solution space because it harms something else they care about, don’t believe in climate change. It has nothing to do with not believing in climate change, you’re not caring about the environment, it’s that they care about that other risk so much as well. But if I said ok well let’s look at—
[1:21:37] Bret: It’s a negotiation tactic, is what you’re saying. That at the point that you want X prioritized over Y, you’ll descend into a state in which you’ll make any argument that results in that happening, including “Y doesn’t exist.”
[1:21:54] Daniel: Exactly, because I’m so motivated by this other thing and the solution has a theory of tradeoffs built in that is not necessary. Sometimes the theory of tradeoff is necessary but oftentimes a synergistic satisfier could be found but we didn’t try. In the same way that a way to move forward with the opportunity without the risk could have happened, we could have found a better way to do the tech that internalized that externality. We just need to try a little bit more, but there isn’t the incentive to do it. So let’s say we said no we don’t care about climate change by itself, we care about the climate and we care about the economy and we care about energy independence and we care about geopolitics. And we’re going to look at the adjacent things where making a choice in one of the areas necessarily affects the other area. And we’re going to bring those design constraints together and we say what is the best choice that affects these things together? Then we could start to think about a proposition intelligently.
[1:22:53] We don’t do this in medicine either. We make a medicine to solve a very narrow definition of one molecular target of a disease that externalizes side effects in other areas without addressing upstream what was actually causing the disease. And then the side effects of that med end up being another med and then old people die on twenty meds, of iatrogenic disease. So in complex systems you can’t separate the problems that way, you have to think about the whole complex thing better. And so one part of fixing democracy that we have to think about is we have to define the problem spaces better, more complexly. And we have to be able to actually have a process for coming up with propositions that are not stupid and intrinsically polarizing, because almost no proposition ever voted on gets 90% of the vote, it gets 51 fucking percent of the vote which means half of the people think it’s terrible.
[1:23:46] And so what that means is you care about the environment, I care about the economy on proposition A. Well you petition to get the thing to go through because you care about the owls there but I think that you’re making my kids poor. You’re my fucking enemy now and I’ll fight against you. Now all the energy goes into internal friction and fighting against each other, and any other country that’s willing to be autocratic and force all their people onto one side will just win. And we will increasingly polarize against each other over something where we could have found a more unifying solution.
[1:24:19] Bret: Now this is fascinating. For one thing, you blazed by it there, but I think, so there’s a place where Jim Rutt tells me that someplace that you and he overwhelmingly agree also, but there’s a place in which you and he have hung up where he says that you believe that a properly architected system can do away with the tradeoffs.
[1:24:42] Daniel: No.
[1:24:44] Bret: Right. I think I just heard you give the answer that he must have understood to be that, but wasn’t it. Am I right that the answer—
[1:24:51] Daniel: I don’t think you can—
[1:24:53] Bret: That there are lots of times when you don’t see a tradeoff because you have two characteristics, both of which are suboptimal and you could improve them simultaneously and so it looks like there’s no tradeoff between them. If you push it far enough you’ll eventually reach the efficient frontier where you do have to choose, but if you’re not near the efficient frontier there’s no reason to treat it as a tradeoff, is that—?
[1:25:14] Daniel: Yes, I’m not saying that we get out of having constraints, I’m saying we can do design by constraints much better than we currently do. And so I’m saying that there’s a lot of things that we take as inexorable tradeoffs that aren’t.
[1:25:31] Bret: Well, so you and I will have to chase this down at some point. My argument will be any two desirable characteristics have an inherent tradeoff between them, even if you never see it. There are reasons you wouldn’t see it but that if you push these things far enough you’ll find that there are no desirable things that can be components of the same mechanism that will not exhibit a tradeoff relationship.
[1:25:54] Daniel: Interesting. Initially I don’t agree with that at all, but I’m sure you’ve thought about it a lot so I’m curious why you say it.
[1:26:02] Bret: Well, let me give you the example I used to battle my friend Scott Pikeur [spelling?] over with this, which is, he said why can’t you make a car that’s the fastest and the bluest? And first time I heard that I was like, well ok maybe blue is trivial enough, but it’s not. In fact if you wanted to make a car that was the fastest, and by fastest let’s say fastest accelerating, well you’re going to have to decide how to paint it. If you also decide there’s some color of blue that is bluest and you want the car to be that color, well then it has done a lot of the choosing of what paint you’re going to put on it at the point you decide to paint it that color. That paint will have components that will weigh something. The chances that the bluest, whatever you define that to be, is also the lightest and has the best laminar flow characteristics are essentially zero, because there are an infinite diversity of colors, they will be made out of a wide variety of materials, and the chances that the blue just happens to be the one that is lightest and has the best slipperiness relative to the wind are going to be vanishingly small.
[1:27:12] And that means that if you want to make truly the fastest car, its color will be chosen by whatever paint has the best characteristics. And if you want to make it the bluest as well, you’ll make some tiny compromise that will probably not matter to you but it’s there. So the tradeoff is there even if we don’t see it. But here’s the thing, Daniel. I discovered many years after my argument with Scott was long since put to bed, that I was right about this. And the way I found out was that there is a case where the Navy wanted to set the time to climb record for an aircraft, and they took an F-15 and they souped it up a little bit and in order to set basically the vertical climb rate of this aircraft, they stripped the paint off it. And so if you look at pictures of this aircraft in its record-setting run, it isn’t any particular color. It’s many different colors because effectively you’ve got the bare metal underneath with the paint stripped off it to save however many pounds of paint they were able to remove.
[1:28:17] Daniel: Ok there are three points that come up to address my initial thoughts on this here. So one is, with this particular case of a car, the difference between the blue and the optimal color might be at the boundary of measurement itself.
[1:28:42] Bret: Yep.
[1:28:43] Daniel: And so while it’s true that it might not be a perfect optimum of both at the level of like a nanoscale optimization, it is irrelevant to the scale of choicemaking for the most part, and when we look at something like—
[1:28:59] Bret: Hundred percent.
[1:29:01] Daniel: And when we look at something like Tesla cars, they became faster off the line than Ferraris and safer than Volvos and greener than Priuses at the same time. You could see that ground-up design, just doing a better job of ground-up design was able to optimize for many things simultaneously so much better. Now had they made it less comfortable, could it be faster still? Sure, of course. So it’s optimizing for a product of a bunch of things together, but still in a whole different league than things had been previously. Now that’s functional—
[1:29:35] Bret: But wait wait wait. First of all, this is beautiful because this is exactly what I was hoping for. This is a question of us tripping over each other’s language. Jim misunderstood what you were saying, and he asked me about it and I said yeah, Daniel can’t be right about that if he’s saying what you think he’s saying, but of course it wouldn’t make sense that you would think that you could. So your point about this being trivial, you’re in complete agreement with me and I suspect it would take nothing to get Jim to agree to that formulation as well.
[1:30:08] Daniel: But wait, there’s one more thing I have to say here.
[1:30:11] Bret: Ok.
[1:30:13] Daniel: Of course I’m not pretending that thermodynamics don’t exist. And once you get down to the quantum-scale arrangement of a thing that orientation in one direction doesn’t have effects on other things, duh.
[1:30:26] Bret: Yup.
[1:30:29] Daniel: There’s a difference also between—blue and fast are two different preferences that are arbitrary, that both want to be associated with a car, that don’t have some intrinsic unifying function. And we can say blue is a thing that’s reasonable to be preferential about color. Whereas I would say that there are some characteristics that have a synergistic effect that increasing one increases the other one because of the way they are part of an overall increase in system integrity. And so synergy is the key concept I’m trying to bring about here. Which is behavior of whole systems more than the sum of and unpredicted by the parts taken separately.
[1:31:19] So when I say I’m looking for synergistic satisfiers, the idea that I have X amount of input and that input has to be divided between these various types of output and it’s linear, is nonsense. I can have X amount of input and have something where the total amount of output has increased synergy based on the intelligence of the design. The question of how do we design in a way that is optimizing synergy between all the things that matter becomes the central question.
[1:31:50] Bret: Yes, which is of course the central question that selection must be dealing with in generating complex life. Again, I don’t think we have a hair’s breadth of difference on what we turn out to believe about this tradeoff space. But what I would say is, and I don’t want to drag the audience too far down this road, it’s probably not worth it for what we need to do here. But the benefit of being able to say, so let’s take your example of there are certain characteristics that will co-maximize. Not really, because of the following thing.
[1:32:25] Let’s say that we figure out what color is best for making the fastest car, and then we say, well I want to maximize Gray 37 and speed. Now I can do it, I can maximize Gray 37 and speed because it just so happens that Gray 37 is the color that has the best characteristics for speed. But then the point is, you can’t separate these two things. Whatever characteristic it is you’re actually maximizing, you’ve just found two aspects of it. So your point about synergy is that perfectly aligned characteristics—
[1:32:58] Daniel: Yes.
[1:32:59] Bret: —we could describe that joint, that fusion of those two things as one thing, and we could maximize it. But then if we take the next one over, the next characteristic that we want to add to the list of things, then again we’re back in tradeoff space. So my only point here is that there is a value—in order to be able to get the maximum power out of a tradeoff theory, what we want to do is make it minimally complex. And the ability to say, every two desirable characteristics have a tradeoff between them, the real question is the slope, or the shape of the curve. And that many of these slopes and shapes mean we will see no meaningful variation on it because one side is a bargain and we will always see that manifest. That’s the reason we don’t see tradeoffs everywhere is that in some cases a tradeoff is so dumb that we don’t see anybody exercising variation. Everything is made the same decision.
[1:34:00] Daniel: Yes, and I think for all practical purposes we agree that being able to make a Tesla that is safer than a Volvo and faster than a Ferrari and greener than a Prius is a possibility and that if we applied that to all the problems in the world, we could do a fuck ton better job.
[1:34:18] Bret: Yeah.
[1:34:19] Daniel: I think we also agree, and I love the last point that you made, to the degree that two things can be simultaneously optimized they can be thought of as facets of a deeper integrated thing.
[1:34:30] Bret: Yup.
[1:34:32] Daniel: Ok so now to answer the way that I actually think about it, though this is irrelevant, if people disagree it doesn’t matter at all to the earlier point, I have to wax mystical a moment. When Einstein said it’s an optical delusion of consciousness to believe there are separate things, there is in reality one thing we call universe and everything is a facet of it. If I look at the real things that we have theory of tradeoffs between and the space in the social sphere and the associated biosphere that we’re a part of—so let’s say we talk about in the very beginning of our conversation individual, what would optimize my individual wellbeing and what would optimize the wellbeing of all humans? I believe that I only find that those are differently optimized if I again take a very short-term focus.
[1:35:29] If I take a long-term focus I find that they are one thing, because the idea that I’m an individual and the idea that humanity is a separate thing is actually a wrong idea. They are facets of an integrated reality, and that if I factor all of the things that are in the unknown-unknown set over a long enough period of time, they will simultaneously optimize. And this is the essence of dialectical thinking, is looking for the thesis and the anti-thesis, and not voting between thesis and anti-thesis but seeking synthesis that’s at a higher order of integration and complexity.
[1:36:00] Bret: Totally agree and so I don’t know how many people will be tracking it, but effectively saying on an indefinitely long time scale these things converge is an acknowledgement that we are not talking about design space when we make this recognition. It’s more like trajectory, and that is perfectly consistent. And frankly I think if everybody understood at some level the kind of picture we’re painting, people would be really comfortable with the degree to which it doesn’t do exactly the thing they most hope it will. In other words, the level of compromise is small.
[1:36:44] Daniel: Which is why the compromise in a healthy democracy even was tolerable. Even though that was nowhere near as optimal a system as we could develop.
[1:36:52] Bret: Ok, there’s a point a number of minutes back that I want to return to, and I want to drop an idea on you. It’s actually a place where something you said caused me to complete a thought that I’ve been working on for some time. So the thought as it existed is that markets are excellent at figuring out how to do things, and they are atrocious at telling us what to do. In other words, they will find every defect in human character and figure out how to exploit it if you allow them to do that, but when you have a problem that you really want solved—how can we make a phone that doesn’t require me to be plugged into the wall, allows me to get a message across a distance to report an emergency, whatever—markets do a better job than we could otherwise do of figuring out what the best solution is.
[1:37:43] And so in some sense the question is, how can we structure the incentives around the market so that markets only solve problems that we want them to solve but they can be free to solve them well? And what I think I realized in this conversation here is that in some sense the role of the citizenry in a democracy is to discuss the values that we want government to deploy incentives around. In other words, the people, by deciding what their priorities are, what their concerns are, which problems are top of the list to be solved and which ones could take a backseat, that that’s the proper thing that we are to be discussing.
[1:38:32] That the role of government freed from corruption would be to figure out what incentives will result in the best return on our investment, structuring the incentives of the market, and then the market can be freed to solve the narrowest problems on that list. And I think we fail at every level here, but from the point of view of what we’re actually shooting for, I would say it’s somewhere in that neighborhood: that division of labor between the citizens, the apparatus of governance, and the market.
[1:39:07] Daniel: I’m suffering a little bit here because there’s like ten simultaneous threads that I really wish we could address that are important and I know we’re going to open up more in starting. It would be really fun to go through the transcript of this and come back to the most important threads.
[1:39:20] Bret: Might be worth doing actually.
[1:39:22] Daniel: So first I want to say something against heterodox market theory, is I don’t think the market is the best system for innovation of a known what. And I think World War II and the Manhattan Project is a very clear example, and the Apollo Project.
[1:39:45] Bret: And our failure at fusion, I would say. That the point you’re about to make—to me, fusion would be our top priority because it’s the only plug-and-play solution to a large piece of our problem and the fact that we decade after decade are awaiting a proper fusion solution says despite the fact that the market could potentially solve it, the problem is the investments are too large on the front end and the reward is too delayed for the market to actually even recognize the problem correctly.
[1:40:18] Daniel: Venture capital is not going to put up the amount of money that a nation-state can, for the amount of time that’s necessary. And when you look at the very largest jumps in innovative capacity, a lot of them happen by nation-state funding, not market funding, and then a market emerging in association with kind of government contracting. And so if we look at why the Nazis were so technologically farther ahead than everyone else going into World War II with the Enigma Machine and the beginning of computing, with the V2 rocket, it was not a market dynamic.
[1:40:54] It was a state dynamic where they invested in science and technology development for a long time, which is why this tiny little country with limited industrial supply capacity had more technological advancement than the Soviets or the U.S. and it was our ability to steal their shit and rip it off and then be bigger than them that was a big part of how we were able to succeed in the war effort. And so that’s a clear example that computers were developed by a state, not the market.
[1:41:21] Bret: Whoa whoa whoa, hold on a second. I want to be careful because I don’t want to falsify something that isn’t false. I again think this is a place where our mappings, or at least the language surrounding them is going to upend us. Because this sounds like a place where a government is capable of generating a massive incentive to cause a problem to be solved that the market won’t even find on its own. So that does not strike me as inconsistent with what I was just saying. The state recognizes there’s a problem, creates an incentive big enough to find the solution, and that incentive can be big enough to cause people to get different degrees than they would otherwise seek, and so anyway—
[1:42:03] Daniel: But in these cases it wasn’t like—so let’s take the Manhattan Project. It wasn’t private contractors that solved it because the government had made the incentive. It was actually government that solved it, it was government employees. And so this is an important distinction. NASA was not a private space contracting thing that did the Apollo Project, it was a government project. So I would say the largest jumps we ever made in tech did not happen in the market, for the most part.
[1:42:34] Bret: Well, so then I guess the test of your falsification here is the following question: if the Manhattan Project had consisted of a state yanking people out of their beds and standing over them with rifles, would it have worked? And maybe the Russian version is closer to that, but I think the point is you still have a system of incentives correctly solving a problem that the market would not have found on its own and no entity in the market would have been big enough to solve. So I still see it as consistent, but you might convince me otherwise especially if it turns out that a negative incentive would be just as effective at creating the solution.
[1:43:31] Daniel: There’s a story that people don’t innovate well under duress, that innovation requires executive function and prefrontal function and if they’re too limbically oriented they won’t innovate well, which is one of the reasons why we need an open society. And I think there’s probably some truth to this, but less truth than we would hope. I believe it was called the sharashka system, which was a Russian, basically prisoner of war type camps that had scientists that were doing real innovation up to early Sputnik-like work. So we know that people under rifle duress can innovate. We know that people conscripted by draft into an army can actually innovate on behalf of the military.
[1:44:23] Now I think that it’s true that something more like a market will explore more edge cases that are not known whats, and come up with interesting things, whereas the centralized thing can do a better job sometimes of existing whats that require very high coordination. Because if you look at the Manhattan Project, the scale of the budget and the scale of coordination, no company has that, and a bunch of companies competing for intellectual property and whatever wouldn’t have worked, right?
[1:44:56] Bret: Right.
[1:44:57] Daniel: One of the reasons I bring this up is because there’s a whole bunch, you mentioned fusion, whether it’s fusion or whether it’s thorium or whether it’s closer to room temperature superconduction or any of the things it could possibly generate, whether it’s 65% efficient, photovoltaic through nanotech—there’s a bunch of things where we kind of know the science that could lead to the breakthrough but the level of investment just isn’t there. And I think there’s a heap of examples like this where the percentage of the national budget that used to go to R&D has went down a lot, and it shouldn’t.
[1:45:32] And the Apollo Project was kind of the last thing of its type. And then the government starting to shift to government contractors started to be a source of massive bloat, where the government contractors had an incentive to just charge whatever the fuck they wanted. Which is why then Elon could beat Lockheed and Boeing at rockets so much cost-wise because in that situation he didn’t have to do the fundamental innovation on rocketry, he could just outcompete them with market incentive. And then that could create enough money for iterative innovation. I think fundamental innovation of certain scales does require larger coordination than markets make easy.
[1:46:10] Bret: Ok, so then I want to modify what I said, because you’ve convinced me I didn’t have it right in the initial one. So the point then is you have to extend the governmental structure so that it can deal with two types of market failure, one surrounding the natural system of incentives, which will cause you to innovate things that do net harm for example, and the other is a failure where the scale of the market is not sufficient to solve certain problems that are in our collective interest to solve.
[1:46:42] Daniel: Yes, and we don’t want to give the government that much power because we don’t trust that kind of authority, but that’s because the people aren’t checking the government. Which comes back to the thing that we talked about earlier. And now this becomes one of the central questions of the time. It’s what is the basis of legitimate authority and how do we know, and what is the basis of warranted trust? Because we all know what it means to have trust that isn’t warranted. Everyone who disagrees with us, we think that their trust isn’t warranted.
[1:47:09] If we’re on the left we think people who trust Trump it’s unwarranted, and they think that the people who trust the FDA or vaccine scientists or the CDC have trust that’s unwarranted. We also know that legitimate authority, the idea of legitimate authority is so powerful to be able to be the arbiters of what is true and what is real, that anyone who is playing the game of power has a maximum incentive, however successful they are, to be able to capture and influence that for their good. We also know that it’s possible to mislead with exclusively true facts that are cherry-picked or framed.
[1:47:48] Bret: Absolutely.
[1:47:49] Daniel: So I can cherry-pick facts on one side or the other side of a Gaussian distribution and tell any story I want that will make it through a fact checker. So fact checking is valuable but not even close to sufficient. So I can lie through something like The Atlantic as well as I can lie through something like Breitbart through different mechanisms for different populations.
[1:48:09] Bret: Yeah, this is a super excellent point as well, that a fact checker errs in one direction and if you can build a falsehood out of true objects that have been edited then the fact checker won’t spot it, so, love that point.
[1:48:27] Daniel: And so I can do a safety analysis on a drug, and I’m not looking at every metric that matters. I’m looking at some subset of the metrics and it might be that it’s safe on those metrics but all cause mortality increases, life expectancy decreases. But I only did the safety study for two years, so I wouldn’t notice that. So I can say, no, methodologically this was perfect and sound. It just also doesn’t matter because I wasn’t measuring the right things.
[1:48:55] Bret: Right, and so this also basically what you have just said means that the replication crisis can be understood as a mechanism for generating data which can be cherry-picked to reach any conclusion you want about the effects of this intervention or that intervention, because effectively what you have is the ability to choose between experiments where sampling error will result in both outcomes being evident somewhere.
[1:49:26] Daniel: This is another one of those, is it conflict theory or mistake theory things, is I can intentionally manipulate an outcome that looks methodologically sound and then say, oh we just didn’t know those factors. I’m not saying whether that’s happening or not, it certainly can happen. Ok so now we get back to, how do you have a legitimate authority that has the power of being the arbiter of what is true and real and all the power that’s associated and have it not get captured by the power interests, is a very very important question. How in the name of the Bible and Christendom and Jesus saying let he who has no sins cast the first stone, did we do the Inquisition? Like weird mental gymnastics by which the authority of that thing was able to be used for the power purposes of the time.
[1:50:17] And so now when you start to have increasing polarization between the left and the right, and historically more academics being left-leaning and the social sciences being so complex that you can cherry-pick whatever the fuck you want and do methodologically sound and yet still misrepresentative stuff, then you say is that actually a trustworthy source? And then we say, well ok do we want a bunch of wacky theories going out over Facebook and Twitter and whatever, or do we want to censor it? Well if we want to censor it, who is the arbiter of truth that we trust? If we don’t censor it, we’re appealing to the worst aspects of everyone and making them all worse in all directions. Those both suck so bad, and that’s the oppression or chaos.
[1:50:58] And the only answer out of the oppression or chaos is the comprehensive education of everyone in the capacity to understand at least three things. They have to increase their first person, second person, and third person epistemics. Their third person epistemics is the easiest, philosophy of science, formal logic, their ability to actually make sense of base reality through appropriate methodology and find appropriate confidence margin. Second person is my ability to make sense of your perspective. Can I steel man where you’re coming from? Can I inhabit your position well? And if I’m not oriented to do that, then I’m not going to find the synthesis of a dialectic. I’m going to be arguing for one side of partiality, harming something that will actually harm the thing I care about in the long run.
[1:51:48] And then first person, can I notice my own biases and my own susceptibilities and my own group identity issues and whatever well enough that those aren’t the things that run me? When I look at the ancient Greek enlightenment, first person was the Stoic tradition, the second person was the Socratic tradition, the third person was the Aristotelian tradition. There’s a mirror of all those in modernity. We need a new cultural enlightenment now where everyone values good sensemaking about themselves, about others, about base reality. And good quality dialogue with other people that are also sensemaking to emerge to a collective consciousness and collective intelligence that is more than our individual intelligence. And so that we have some basis of something that isn’t chaos but that also isn’t oppression because it’s emergent more than imposed. So it’s cultural enlightenment or bust as far as I’m concerned.
[1:52:44] Bret: All right, so I don’t disagree with you fundamentally, I believe. This is a place where, when I say my version of this, which is much less sophisticated in some ways and focused elsewhere, but when I say my version of it I lose people. Because my version of it is something like what we need to do is doable, we can see the trajectory from here, you can’t see the objective, but you can see the direction to head, and it will take three generations to get there.
[1:53:23] Daniel: I agree.
[1:53:24] Bret: What you’re describing, you couldn’t just simply take that curriculum and infuse it into any system we’ve got and have any hope of people learning it or giving a shit about it or whatever, it wouldn’t work. So you have to build the scaffolding that would allow a population to be enlightened in this way such that the governance structure you’re imagining might arise out of it could flourish. But let’s put it this way, it’s at least three generations out before you had gotten there, even if you started doing things right now. And so what I try to say to people in order that they don’t completely lose interest in the possibility of a solution because it’s too far out is, things can start getting better right away.
[1:54:08] We are not going to live to be in that world that is the objective, and even if we did we would never be native there. Our developmental trajectory will have been completed in a world that doesn’t function like that and so you can be happy as an expat but we would be expats in the world we were trying to create, and that’s fine. If our grandchildren or our great-grandchildren were native there and we could be expats there, that would be a perfectly acceptable solution. But I think in general people have the sense that a solution sounds like something that we could have in the next few years, and I just don’t see the possibility of it.
[1:54:53] Daniel: No, you’re going to see anything that can be implemented quickly, you want to red team and say either how does it fail or where does it externalize harm? And also what arms race does it drive of whoever doesn’t like it? And if you factor what arms race it drives, where it externalizes harm, and where it fails, you’ll get much less optimistic about most of those things. And if you don’t go into despair you’ll start thinking long-term, and things that converge in long-term direction. And when you start to think about that the thesis and the anti-thesis are both not true, they have partial truth but they are not actually true.
[1:55:30] Synthesis is in the direction of more integration of truth, and still not true, but in the direction of. If I optimize for one of these, it will externalize harm in a way that messes the whole thing up. And that’s why there’s a forcing function of the failure modes on both sides. That’s why it’s important to look at oppression and chaos and say these both create failure modes, so what is it that doesn’t orient in either of those directions? It’s not more power to authorities, it’s not more pure libertarianism, it’s something that’s outside of that axis.
[1:56:04] Bret: Or it is going to involve the equivalent of negative feedback. In other words, a thermostat works by virtue of not embracing it being hot or cold but by pushing it in the right direction as it diverts one way or the other. So I very much like your point about synthesis here. Just to make it clearer, synthesis is two things, even linguistically speaking. We can talk of a synthesis, which is an object. You could write it in a book. A synthesis between several different concepts could exist in a book. Incidentally that’s sort of what I see myself doing in biology is synthesis. But your point is, the most important aspect of synthesis is it is a process.
[1:56:51] Daniel: A verb, yes.
[1:56:52] Bret: Right, and so that process is the thing that takes these competing failure modes and rescues from them something that suffers neither consequence and heads towards optimality. So I agree we have to get good at it.
[1:57:10] Daniel: Yes so synthesis is an ongoing process, and let’s say I have some bits of true information in a thesis and some bits in the anti-thesis. So the synthesis will have more bits than either of them, higher-order complexity. But it will still have radically less bits of information than all of reality about that thing. The model is never the thing. It’s just, it’s the best epistemically we can do at that moment. So now I want to go back to the earlier topic around theory of tradeoffs that you said, because I let it go but as soon as you mention optimization I have to bring it back, because it comes back exactly to here. And it also brings back this question you had that markets can do a good job with the how but not the what, which is the is/ought distinction that comes up in science, right?
[1:57:55] Bret: Yes, it is.
[1:57:57] Daniel: Science can do a good job of what is, but not what ought, which means applied science, i.e. technology, i.e. markets, can do a good job with changing is, but not in the direction of ought. And so that is ethics, which is to be the basis of jurisprudence and law, that’s exactly why you bring those things together. And it’s because is is measurable, third person, measurable and verifiable, repeatable.
[1:58:28] Bret: It’s objective.
[1:58:29] Daniel: It’s objective, right. Whereas ought is not measurable in a—you can do something like Sam Harris does in Moral Landscape and say it relates to measureable things, but it doesn’t relate to a finite number of measurable things. There is a Gödel proof that whatever finite number, there are some other things that we end up finding later that are also relevant to the thing that weren’t part of the model that we were looking at. And so the thing that is worth optimizing for, you talked about that the blue and the fast would be part of the same thing. The thing that is worth optimizing for is not measurable. It includes measurables, but it is not limited to a finite set of measurables that you can run optimization theory and have an AI optimize everything for us.
[1:59:16] Bret: Yeah, I agree. You will have a long list of characteristics that you can measure and as you go from the most important to the least important you’ll eventually drop below some threshold of noise where you’re not noticing things that contribute. So yes, you’ve got a potentially infinite set of things that matter less and less, and you will inherently concentrate on the biggest, most important contributors up top. And that’s natural, it’s an issue of precision at some level, but one that we shouldn’t convince ourselves that we’re solving the puzzle completely at a mathematical level. An engineering solution is not a complete mathematical solution.
[2:00:00] Daniel: Right. Ok so now I’m coming back to the waxing mystical thing, and I don’t think it has to be thought of that way. I think the way Einstein was doing it, he says Spinoza’s god is my god, I’m happy to do it that way. So the first verse of the Tao Te Ching is the Tao that is speakable is not the eternal Tao. The optimization function that is optimizable with a narrow AI is not the thing to optimize for, is a corollary statement. And the Jewish commandment about no false idols is that the model of reality is never reality, so take the model as this is useful, it’s not an absolute truth.
[2:00:35] The moment I take it as it’s an absolute truth, I become some weird fundamentalist who stops learning, who stops being open to new input. And in optimizing the model where the model is different than reality I can harm reality and then defend the model. So I always want to hold the model with this is the best we currently have and in the future we’ll see that it’s wrong. And we want to see that it’s wrong, we don’t want to defend it against its own evolution. And so what we’re optimizing for can’t be fully explicated, and that’s what wisdom is. Wisdom is the difference between the optimization function and the right choice.
[2:01:12] Bret: Oh I love this, this is great. Obviously it dovetails with the basic sense of what metaphorical truth is and the recognition that actually metaphorical truth isn’t something that applies to religious-style beliefs, it’s actually the way we do science also. We have approximations and things get ugly when people forget that that’s what they’re dealing with, and they start treating it as the object itself. A very important example in my field is the instantiation of the term fitness, which in most cases has so much to do with reproductive success that we actually just synonymize them most of the time and we speak as if they’re interchangeable.
[2:01:59] Which is great except for all those cases where they go in opposite directions, which we are perennially confused by. And so anyway, sooner or later I will deliver some work that will take the cases that we can’t sort out because we’ve misdefined fitness and forgotten that it was a model in the first place and shows how you would solve it differently if you defined fitness in a tighter way. But, story for another day. All right, so where shall we go? You were on a roll.
[2:02:38] Daniel: So you’ll see conversations from really smart people like Nick Bostrom and Max Tegmark and whatever of—because of the collective action problem, and the multi-polar trap race to the bottom, and yet because of the complexity of the issues that we face that are beyond what the smartest person could manage by a lot, is the only answer to build the benevolent AI overlord that can run a one-world government because it can process the information to make good choices? So as you can guess, my answer is vigorously no.
[2:03:19] Bret: Yup.
[2:03:21] Daniel: Not just because I think the optimization function that it would run, no matter how many variables, would end up becoming a paperclip maximizer, but I think its own existential risks are bound up in that process. These guys know this but it’s easy to pick solutions like that compared to the other ones that seem maybe even more likely to go terrible. So then we say ok, we don’t want a one-world government run by any of the people we currently have, and we also don’t want separate nations where any of them that defect lead everybody into a race to the bottom, so that means that they have to have rule of law over each other because they affect common spaces. So how do you have rule of law over each other without it being one-world government and then capture?
[2:04:10] Oppression or chaos at various scales, and the only answer is the comprehensive education and enlightenment of the people that can check those systems. Now obviously the founding of this country was fraught with all the problems we know of now in particular, and it was still a step forward in terms of a movement towards the possibility of some freedoms from the feudalism it came from. And so I find the study of the theoretical foundation of it meaningful to what we’re doing right now. And famously there’s this quote from George Washington where he says something to the effect of, I’m going to paraphrase it, the comprehensive education of every single citizen in the science of government should be the main aim of the federal government.
[2:04:56] And I think it is fascinating. So science of government was his term of art, and science of government meant everything that you would need to have a government of, for, and by the people. Which is the history, the social philosophy, the game theory and political science and economics as well as the science to understand the infrastructural tech stack and whatever, the Hegelian dialectic, the enlightenment ideas of the time. But the number one goal of the federal government is not rule of law, and it’s not currency creation, and it’s not protection of its borders, because if it’s any of those things it will become an oppressive tyranny soon. It has to be the comprehensive education of the people if it is to be a government of, for, and by the people.
[2:05:41] Now this is the interesting thing, now I remember where I wanted to go. Comprehensive education of the people is something that makes more symmetry of power possible. Increasing people’s information access and processing is a symmetry-increasing function. So everyone who has a vested interest in the increasing asymmetries has an interest in decreasing people’s comprehensive education in the science of government. And so now let’s look at the education changes that happened following World War II in the U.S. There’s a story that I buy that the U.S. started focusing on STEM education—science, technology, engineering, math—super heavily partly because it was an existential risk because look what happened with the STEM that the Germans did and now we know that a lot of the German scientists that we didn’t get in Operation Paperclip the Russians got in Sputnik and so it’s an existential risk to not dominate the tech space.
[2:06:49] So we need to really double down on STEM and we need all the smartest guys, we need to find every von Neumann and Turing and Feynman there is, so the smarter you are the more we want to push you into STEM so you can be an effective part of the system. That’s part of the story, but also the thing that Washington said, the education, the science of government—we start cutting civics radically and I think it was because social philosophers of the time like Marx were actually problematic to the dominant system.
[2:07:16] And I’m not saying that Marx got the right ideas, I’m saying the idea of ok we have a system where let’s have the only people who really think about social philosophy be the children of elites who go to private schools who learn the classics, and otherwise let’s have people not fuck the system up as a whole but be very useful to the system by becoming good at STEM. I think this is a way of being able to simultaneously advance education and retard the kind of education that would be necessary to have a self-governing system.
[2:07:49] Bret: That’s fascinating. Because of course if you have the elites effectively in charge of governance they can do exactly what you would imagine the elites would hope for, which is to govern well enough that the system continues on no matter what but to continue to look out for the distribution of wealth and power and make sure nothing upends it. They won’t even realize necessarily that that’s what they’re doing. I also love the fact, you know George Washington is one of these characters who it’s very easy to misunderstand how good he was because he wasn’t the most articulate founder or in classical terms the smartest founder by far. On the other hand, an awful lot of wisdom buried in George Washington, and this idea of ultimately he was looking very deeply into the future potentially to understand why the education of the populace would be effectively synonymous with the job of government.
[2:09:01] And it’s not because the purpose is the education, but it’s because that’s the only hope that democratic system will spit out the kind of solution that you want it to generate. Which is, I don’t know, it’s a very interesting analysis. So it raises something else here which is on my list of notes arising, which is I notice this pattern all over the place. There is a state which is awesome, very powerful in terms of what it can do, but it’s fragile and so it falls apart. In other words, we will never have a better system as far as I can tell than science for figuring out what’s true and what is possible. So it’s the most capable state. There are measures by which it is the strongest state, but it is also terrifically susceptible to market forces.
[2:09:58] In fact it can’t be in the same room with them. So we could look for many examples of this where something marvelous requires very careful arrangement of conditions in order for it to survive. And I’m wondering what you make of that in light of this discussion. I guess it’s not hard to make an argument for why those two things go together, capacity and fragility, but what are we to do about it going forward, because surely we’re trying to build these states but do so in a robust form.
[2:10:35] Daniel: They go together because of synergy. Which is you have properties that none of your cells on their own have, you as a whole. There is a synergy of those cells coming together that creates emergent properties at the level of you as the whole thing. But if I run all the combinatorial possibilities of a way of putting those fifty to a hundred trillion cells together, very few of them produce the same synergy of you. Most of them are just piles of goo, right?
[2:10:59] Bret: Yup.
[2:11:00] Daniel: And so it’s a very narrow set of things that actually has the very high synergies, and it’s lots of things that are pretty entropic. And entropy is also obviously easier. I can take this house down in five minutes with a wrecker ball but it took a year to build.
[2:11:17] Bret: Yup.
[2:11:18] Daniel: And I can kill an adult in a second but it takes twenty years to grow one. So this is why the first ethic of Hippocrates and of so many ethic systems is first do no harm. Then try to make shit better, but first do no harm. If you can succeed at the maintenance function, then you can actually maintain your progress functions. And—come back to where you were going with that?
[2:11:51] Bret: Well, so here’s what I’m after. I agree with your basic entropic analysis, that it is easier to destroy than to build. The number of states that work is vastly exceeded by the organization of the same pieces that don’t. But what I’m wondering about is, in effect, one has to be able to build a system that is resistant to that. And life does this, living creatures manage to fend off entropy beautifully. And the fact we need a governmental structure that has that same trick, and we haven’t seen it yet. And the question is, unfortunately I fear that it is almost a prerequisite that if you build the capable structure and you haven’t built the thing that protects it first, then it will be captured before the wisdom develops to preserve it against that force.
[2:12:59] Daniel: Ok now I remember why I used the analogy of the body. What I’m going to say here is wrong, so let’s just take it as a loose metaphor. Let’s take in the body that the closest thing to top-down organization is the neuro-endocrine system. But that there’s a bunch of bottom-up that is at the level of genetics and epigenetics and cellular dynamics and whatever, and that there is a relationship between the bottom-up and top-down dynamics. Well obviously I can take a cell out of a body and put it in a dish and it has its own internal homeodynamic processes. It’s dealing with entropy on its own, they don’t need a top-down neuro-endocrine signal for how they do that.
[2:13:35] So let’s say we tried to make a perfect top-down neuro-endocrine system and the cells had no cellular immune systems or redox signaling homeodynamics or anything else. You would die so quickly. There is no way to have a healthy body at the level of the organization of all the cells if the cells are all unhealthy. And that’s the comprehensive education of the individual thing we’re talking about. Can you make a healthy system of government as a system, can you just get the cybernetics right, that is separate than that which develops all of the individuals and the relationships between them? And the answer is definitely not.
[2:14:15] Bret: Ok, agreed but then here’s the problem that I’m trying to articulate. Ok so we agree that the cells have to be coherent in and of themselves, that there has to be a fractal aspect to this organization of things across many scales from the individual up to the body politic. But if it is true that the key to making that work is that individuals, which are analogous to cells here, have to be educated in the nature of governance, the theory of governance, in order for this to work, how would they end up that way? Well they would end up that way because governance will have created the conditions that would cause that education. So are we not now saying that what is necessary in order for the system to function is that the system is already functional in order that it can generate the conditions necessary?
[2:15:15] Daniel: No, there’s no hole in the bucket situation. There is a recursive situation between bottom-up and top-down dynamics. And so let’s take the classic dialectic that relates to right and left, that’s not the only one, of individual and collective for a moment. And say ok, fundamentally the right is more libertarian, individual, pull yourself up by your bootstraps, we want to have advantage conferred to those that are actually doing—they’re conferring their own advantage, and doing well. And then the left model, the more socialist model is, yeah but people who are born into wealthy areas statistically do better than people who are born into shitty areas in terms of crime and education and access to early healthcare and nutrition and all those things. And you can’t libertarian-ly pull yourself up by your bootstraps as an infant or a fetus and so let’s make a system that tends to that well.
[2:16:20] But then the right would say, but we don’t want something like a welfare state that makes shitty people that just meets their needs for them and orients them to lay on the couch all day and do TV and crack. Ok, I think it’s mind-bogglingly silly that we take these as if they are in a fundamental theory of tradeoffs as opposed to a recursive relationship that can be on a virtuous cycle. What we want to optimize for is the virtuous cycle between the individuals and the society. Do we want to create social systems that take care of individuals but make shittier people? No. Do we want to create social systems that condition people that have more effectiveness and sovereignty and economy? Yes. And do we want to condition ones that in turn add to the quality of society? Yes. So we don’t want to make dumb social systems.
[2:17:23] A social system that is more welfare-like is much dumber than a social system that provides much better healthcare and education and orientation towards opportunity for advancement rather than opportunity towards addiction cul-de-sacs. And so we already have some people, all the listeners of your show I think, we already have some people who are trying to educate themselves independent of not having a government that is doing that. And this is why I say it has to start at culture before state or market. It has to boot in that direction. So those people can start to work together to say how do we influence the state, and two, start to then influence better education for more people, better media and news for more people. And how do we influence it to affect market dynamics where the market dynamics are more bound to the society wellbeing as a whole rather than extractive?
[2:18:21] Bret: Oh I like this, because we actually do see this dynamic. We see people actually seeking out nuance even though we’re told that they won’t do it. And so the other thing we’re seeing is, for various reasons including COVID, the absurdity of the educational system that we have is being revealed in a way that it never has been before. So many more people are recognizing that school will flat out waste your time if you give it that opportunity, and therefore they have more license than ever to seek out high quality insight and exercises or whatever, and to discount the value that we are assured comes along with a standard degree, et cetera. So yeah, I’m favorable to this idea. Go ahead.
[2:19:09] Daniel: Now, there’s something also that you just said that’s interesting is, ok so George Washington’s quote, comprehensive education of every citizen in science of government, well how can you afford that when most of them are going to be laborers? Because them having a strong background in history and in political science and social science and infrastructural tech stack, does that help them be better farmers? Not really, it helps them be better citizens in government but not better farmers. And so how do we afford to pay for all of that additional education and how do they maintain that knowledge when they’re just engaged in a labor-type dynamic? And so this is why the children of the elite who are actually going to become lobbyists and senators and whatever go to that private school and get that education.
[2:19:52] Well now we have this AI and robotic technological unemployment issue coming up, and it’s definitely coming up. Well, the things that it will be obsoleting first are the things that take the least unique human capabilities, because those are the easiest to automate, so labor-type things. So either this is an apocalypse that just increases wealth inequality and everybody’s homeless and fucked or on the absolute minimum amount of basic income so the elites can keep running the robots as serfs rather than the people as serfs. And just hook the people up to Oculus with a basic income so they don’t get in the way. Or this actually makes possible a much higher education of everyone so they can be engaged in higher-level types of activities.
[2:20:45] Bret: Yeah, I agree with that completely and I also agree we should make sure people understand, and I think it was very clear the way you said it, but we are headed for a circumstance in which a shift in the way the market functions and what it requires is going to cause an awful lot of people to be surplus to it all at the same time, and that can only play out in a few ways. None of them are good if we don’t see it coming and plan for it. It’s coming, it’s not the fault of the people who will be obsoleted, and so in any case, yes this makes sense.
[2:21:22] Daniel: Now when you look at COVID, you were mentioning, when you look at COVID and you look at how many small businesses shut down, and how much unemployment happened. And then how much the market rallied because six companies made all of the money of the market, and if you take those companies out, the entire stock market is down, but it’s cap-weighted. And you basically have network dynamics, Metcalfe law dynamics creating winner-take-all economies where you have one winner per vertical. The wealth consolidation, the wealth inequality, has progressed so rapidly that the measurements of GDP and market success and the measurements of quality of life are totally decoupled.
[2:22:05] They’re moving in opposite directions in really important ways. When you combine how intense that is and that of course the forces with the most money are the hardest to regulate because they have the best lawyers and the ability for offshore accounts and for lobbying and whatever else, so how do you do anything about this? Combined with the fact that the debt-to-GDP ratio is unfixable, you realize that a reset of our systems will happen, because this system cannot continue, and we can either do a proactive one, or we get the reactive one, and the reactive one is worse.
[2:22:40] Bret: Yeah. The reactive one is going to inherently be arbitrary and therefore much more violent in every sense of that term. And so yes, you are programming some kind of a—unfortunately none of the terms that one would like are still available to us because Great Reset has obviously been branded in somebody’s interest. But yes, we need some sort of a reboot that takes heed of this dynamic and sets us on a path where it doesn’t turn into a catastrophe, or it doesn’t turn into a spectacular win at everybody else’s expense for some party or other.
[2:23:26] And unfortunately, of course, if we circle back to an early part of this discussion, convincing people of the hazard of this, the certainty that something of this sort will happen if we do nothing, that we must do something, that that something must be coordinated, that you can’t pass it through your inherited lens of is this left-leaning? is this right-leaning? is this for my team? is this against my team? Convincing people of that is extremely difficult in this environment because for one thing, everything we would do to convince passes through these platforms that if they haven’t flexed their muscle yet, as soon as we start talking about what would need to be done to save civilization in ways that they can recognize it, they will find ways to oppose it.
[2:24:22] Daniel: And you’ve had this conversation on here before that, let’s say we look at a particular group and we can predict how they’re going to respond to something we’re going to say with quite high accuracy. So we can take a particular woke SJW group and if we have a conversation of a certain type, we can predict that they’ll say oh that thing you’re calling dialectic is giving platform to racists when you should be canceling them, therefore you’re racist by association, or whatever. You can take a QAnon group and predict that they are going to say that because we talk to someone that was four steps away from Epstein in a network that we are probably part of the deep state cabal of pedophiles or whatever it is. And to the degree that people have responses that can be predicted better than a GPT-3 algorithm, they can’t really be considered a general intelligence.
[2:25:24] They are just a memetic propagator, they are taking in memes, rejecting the ones that don’t fit with the meme complex, taking in the ones that do fit, and then propagating them. And I think if people think about that, they should feel badly about not being someone who’s actually thinking on their own and being a highly predictable memetic propagator. And be like, I would like to have thoughts that are not more predictable than a GPT-3 algorithm, I would like to know what my own thoughts about this are. And in order to know what my own thoughts about it are, can I even understand and inhabit how other people think all the things that they think?
[2:26:04] So that’s one thing, because it’s not only going through the filters like Facebook, it’s going through the filters of the fact that people have these memetic complexes that keep them from thinking. And so the cultural value of trying to understand other people so that we can compromise, because politics is a way to sublimate warfare. And if you don’t understand each other and compromise you get war, and the people who are saying yes let’s bring on the war, they’re just fucking dumb. They just don’t understand what war is actually like, they haven’t been in it.
[2:26:42] Bret: Well, I think you have brought us to the perfect last topic here. Now of course I’d like this conversation to go on and we should pick it up at another date, but the point you make about if we can demonstrate that we know what you’re going to say, then it isn’t a thought worthy of a human. If we can predict you and it’s not by virtue of us having modeled some beautiful thought process of yours, it’s because your thought process looks like that of an indefinitely large number of other people who are totally predictable, and that’s nothing you should be comfortable with. This goes back to the question I asked you at first, which is when you engage in what I would call independent first principles thinking, you immediately run into challenges that somebody who’s not deeply involved in such a thing doesn’t intuit.
[2:27:41] And so I’m imagining a person, somebody who is decent, who has compassion, has all of the basic capacities you would hope they would have, who has fallen into one of these automatic thought patterns. And I’m imagining you manage to sit down with them and show them that their thought pattern is automatic and totally predictable and therefore nothing that they should be comfortable with. And let’s say that they walk out of the room and they start behaving differently, and they start thinking for themselves, they stay awake. They’re going to run into some stuff, because they’re of course going to end up landing on some formulations that as soon as they say them out loud are going to get them punished. That is inevitable. Now those of us who live out here learn how to say things in ways that sometimes the punishments don’t stick.
[2:28:36] We learn where they are best stated, we learn what we shouldn’t say yet. But all of this speaks to what I think is, we don’t live in an authoritarian state, but we live in a state in which thought is policed as if we did. Not perfectly, but enough that one who wishes to escape from the accepted, the sanitized narrative has to be ready for what happens next. And that’s something that is, it’s very hard to generate that. In other words it’s a developmental process that causes you to learn how to navigate that space so somebody who just simply recognizes I don’t want to be an automaton and I’m going to start thinking for myself, if their next move is to start thinking for themselves and speaking openly about it, what comes back next is something for which we don’t have a good response.
[2:29:38] Daniel: Earlier you said, when you were defining at the beginning of our conversation what you meant by independent thinker is someone who wants to go wherever the facts and information that are well-verifiable actually lead them. I would say that there’s something like the spirit of science, which is a reverence and respect for reality, where I want to know what is real and be with what’s real more than I want to hold a particular belief no matter how cherished or whatever ingroup I’m a part of.
[2:30:14] And the uncomfort of not belonging with the ingroup—if I want to belong with anything I actually want to have a belonging with reality first. And a belonging with my own integrity. And then with those who also share that. And that the other belongings that I give up, I don’t stop caring about those people. I care about them still, but I don’t necessarily care about their opinion of me enough that I’m willing to distort my own relationship with reality.
[2:30:43] Bret: All right, so here is the question I want to ask about this, and I’m basically trying to surface some part of my own process in order to figure out what it is, can it be improved, can I teach it to others to the extent that it works. So I was on Bill Maher with Heather last Friday, and I said something that got an awful lot of pushback online, which I knew was coming. He asked if I thought the probability that COVID-19 was the result of a lab leak was at least 50% and I said something quite honest and shouldn’t have been new to anybody who had been paying attention to my channel. Which was that I had said back in June that I thought the chances were at least 90%.
[2:31:33] Now I can imagine that that number would be shocking to many people, but I also know that were I in their shoes, I would process it this way. I would say, all right, this person seems intelligent. I don’t know of a conflict of interest. That number is way off of what I would calculate. Therefore, I need to file this as a flag. Do I not know something? Maybe the person has a conflict of interest and that explains it. But if it’s not that, how have they arrived at a number that is so far off of what I would calculate, and what does it tell me? In other words, I would become agnostic at that moment rather than go on the attack.
[2:32:18] Daniel: People don’t give enough benefit of the doubt to people who think differently and they give too much trust to those who think the same.
[2:32:25] Bret: Right, but then here’s the place that the thought goes. So is it true that if somebody intelligent says something that is completely inconsistent with my model of the universe that I will inherently give it enough credence to look at it? It’s a tough question, because if I try some test cases: if you told me that you believe that there was a strong chance that the Earth was flat, that would throw a huge error for me. Because I know, a) that I’ve checked. In fact I have years ago and several times said what are the chances there is anything that these flat-earthers, that they’re not just a joke? And that it’s a trivial matter to find out what you need to know from your own experience that is inconsistent with that possibility. And so the answer is ok I’m not going to spend too much time checking with it.
[2:33:26] Then we get to, is the moon landing fake? This one is tougher. It’s tougher because when you look at the actual evidence that people are motivated to hypothesize that the moon landing is fake, there are some things in it that are hard to know. I don’t offhand know what the explanation for that is. So anyway my point is there are some ideas I wouldn’t be shocked at all to find you believe. There’s some ideas I would be so shocked that I’d image you’re kidding or you’ve lost your mind or I don’t know what. And so we all draw that line somewhere, and I guess my point is I think almost everybody, even very very smart people who don’t happen to be experienced in first principles independent thinking, draw that line somewhere that creates a fatal error when independence is experimented with.
[2:34:20] That the number of things—you know, it is the Matrix in some sense—once you start experimenting with what would I conclude if I was independent of all incentives and I just went based on the evidence and I gave everybody a chance to articulate their position, what comes back is so jarring that most people are driven back into conventional automatic thinking because the frightening aspects of what they get in response are enough to drive them off the instinct.
[2:34:53] Daniel: Yes, ok, god there’s so much in here that’s really good. The thing about the flat Earth is that the hypothesis is formally falsifiable. And the alternate hypothesis—
[2:35:10] Bret: Even by an individual.
[2:35:12] Daniel: Yes. And the alternative hypothesis is formally verifiable with the best methods that we have, with the highest confidence we can have. And now one thing I would still say is interesting is, I know many people who refer to flat-earthers as the moniker of maximum stupidity who cannot do the Copernican proof. So they take as an article of faith that the Earth is round, but they actually don’t know how to derive it, have never tried, and so then they also move to taking as an article of faith similar things that don’t have the same basis. So does someone even understand what falsifiable and verifiable mean? Does someone have a basis for calibrating their confidence margin? Because if I start to talk about the moon landing, or then I go a little bit further and talk about long-term autoimmune effects or epigenetic drift or whatever that come from a vaccine schedule of 72 vaccines together, is the standard narrative verifiable? Is the alternate narrative falsifiable in the way flat Earth is? No.
[2:36:24] So the fact that we put flat Earth and anti-vax in the same category is an intellectually dishonest bad thing to do. But the fact is that most people don’t even know how to do verify or falsify. And so with the lab hypothesis, when you come to 90% I’m guessing you have a process for that. What I would say is, I haven’t studied it enough to put a percentage because I don’t have enough Bayesian priors to actually come up with a mathematical number. What I would say is I consider the idea of it coming from a lab in some kind of dual purpose gain of function research to be very plausible, and I have seen nothing that falsifies that. And the few attempts that I saw early to falsify it were theoretically invalid to me. Now to be able to go from plausible to a probability number I would need to apply different epistemic tools than I have already applied.
[2:37:22] Bret: Well wait a second. I’m not sure that that’s the case because to me, as a theoretician, there is a hypothesis. There are multiple hypotheses. One is the virus escaped from a lab unmodified. Another is that it was enhanced with gain of function research and then it escaped. Another is that it was weaponized and deliberately released, all of these things. Each of them is a hypothesis, each of them makes predictions, and they are all testable. Now I am not required to have any guess as to which one will turn out to be correct, nor an assessment of how probable it is.
[3:38:09] It is natural to have a guess, but the two things function independently. As a scientist I am obligated to treat a hypothesis by the formal rules of science. I know what they are, I know how they work, and therefore I know at what point it’s going to be falsified, any one of them, and what would be necessary for one of them to become a theory, that is to say, for all of its challengers to fall. Now I can also say, look, if I had to bet here’s where I’d put my money. I happen to be a scientist who would be placing a bet, but my bet is not a scientific bet.
[2:38:47] Daniel: Yeah, we’re aligned. Clarification agreed.
[2:38:50] Bret: Yeah. Ok, good so then—
[2:38:51] Daniel: I could place a bet that is my hunch that I didn’t come to that number through an actual Bayesian or other kind of mathematical process, but if I was actually trying to formally give my percentage basis, I would go through some epistemic process.
[2:39:09] Bret: Yeah.
[2:39:10] Daniel: Now if I had to make a consequential choice based on it, the more consequential the choice is, the more process I would want to go through to calibrate my confidence of it, because the more problematic it would be for me to be wrong.
[2:39:23] Bret: Right. Ok so that all makes sense, but the ultimate question here is, given that we can see we want people not to behave in an automatic way, in a way that is below the nature of human cognition’s capacity to think and to react. But we also know that when people experiment with that under the current regime, it is not that they will produce conclusions that are different than they would otherwise produce, say them to their friends, and their friends will say oh that’s interesting I didn’t realize you think that. Their friends will say oh my god I can’t believe you’re one of them! And that that thing is so powerful that it is artificially depressing the degree of independent thought because anybody who has experimented with it is likely to have effectively touched some third rail and retreated as a response. So we don’t know—
[2:40:21] Daniel: Wait, there’s a failure mode on both sides. There’s a failure mode of creating artificial constraints where we don’t explore the search base widely enough, which is the one you’re mentioning. There’s another one of exploring the search base without appropriate bedding and jumping from hypothesis to theory too fast.
[2:40:40] Bret: Yes.
[2:40:41] Daniel: And those two are reacting against each other. There are people who say because it’s plausible it is. They jump from hypothesis straight to theory without proof, and then they believe wacky-ass shit.
[2:40:56] Bret: Yes.
[2:40:57] Daniel: And they insist that it’s true, and then people over here are like, wow that’s really dangerous and dreadful and anything that looks like that I’m going to reject offhand. And similarly the people over here believe standard models that end up getting either captured or at least limited, and people over here react against that. So this is another place that I would say the poles are driving each other both to nonsensical positions.
[2:41:17] Bret: Well yes, and the way that works in practice is there is a team that in principle knows that it is in favor of doing the analysis, but it does not believe itself capable of doing the analysis. So effectively it signs up for the authority of those who claim to have done the analysis and in principle have the right degrees or whatever. But then we run into this thing which goes back to something you said in several places in this discussion, which has to do with the bias amongst those involved in certain behavior. In other words, if you’re an epidemiologist at the moment, or a virologist, there’s a very strong chance that you believe the lab leak hypothesis stands a very low chance of being true, but you also very likely have a conflict of interest.
[2:42:10] You may be directly involved in the research program that would have generated COVID-19, or you may simply be involved in social circles in which there is a desire not to have virologists responsible for this pandemic and therefore there’s a circling of the wagons that has nothing to do with analysis. But either way the tendency to converge on a consensus is completely unnatural, and those who earnestly are trying to follow science end up following consenses delivered by people who claim the mantle of science while not doing the method. And that is a terrible hazard.
[2:42:56] Daniel: Yeah, I agree. And there’s one step worse which is the thing that we mentioned earlier, which is you can do the method, have all of the data coming out of the method be right and still have the answer be misrepresentative of the whole because you either studied the wrong thing or you studied something too partial. And so this question of what is worth trusting comes up again and is, ok, I don’t want to defect on my own sensemaking to just join the consensus so that I am not rejected.
[2:43:33] At the same time if everyone is sure that I’m wrong and I’m sure that I’m right, I should pay attention to that, because very possibly I have a blind spot and I’m a confused narcissist. Every once in a while, they are all in an echo chamber and I’m actually seeing something, and both can be the case sometimes. So you’re like, ok do I always stick to my guns or do I always take whatever the peer review says? Neither, this is again, the optimization function isn’t it. Wisdom ends up being a “I don’t know the answer to this trolley problem before I get there.”
[2:44:09] Bret: Right.
[2:44:10] Daniel: So what I have to say is, is the basis by which the other people all agree that you were wrong deliberative and methodological and earnest and free of motivated reasoning? Does it have a group-motivated reasoning that’s associated with it? Are there clear blind spots in the thing you’re thinking? So I don’t think there’s an answer to the what actually is right. There is no methodology, it’s the Tao that’s speakable is not the eternal Tao, the methodology that’s formalizable is not the thing that reveals the Tao. Ultimately you have to end up adding placebo at a certain point and then double-blinding and then randomization. The methods have to keep getting better because there’s always something in the letter of the law that doesn’t get the spirit of the law and in the letter of the methodology that doesn’t get actual science right.
[2:45:07] Bret: Right, and in fact, so couple things here. One, there is a part of the scientific method which is a black box. There’ s a part that actually I believe literally cannot be taught. It is the part where you formulate a hypothesis. That is a personal process. If I taught somebody to do it my way, I don’t think they’d do it very well. So the point is that’s something that you learn to do through some process that is mostly not conscious, hard to teach, and hard to discover, but everybody who does it well does it in some different way. And so at that level even just saying “do the method” is incomplete because not everybody can do the method. Let me see, there was something else, oh yeah, there was a missing thing on your list. I realize you weren’t trying to be exhaustive, but there was a missing thing on the list of possible reasons that you could come up against a consensus and still be right even if you’re the only person who disagrees.
[2:46:15] And it has to do with the non-independence of the data points on the other side based on, let’s say, either a perverse incentive or a school of thought having won out and killed off all of the diversity of thought over some issue that turns out to matter. And these things can very easily—so I would say yes, if you always think you’re right and when everybody’s against you they’re wrong, then yeah, narcissism is a strongly likely reason. On the other hand, is as you point out with Tesla and their competitors, sometimes you find that a field or an industry is easy to beat, that there’s something about them that is maybe economically very robust but with respect to their capacity has become feeble.
[2:47:06] And this is true again and again in scientific fields, that scientific fields go through a process where a school of thought delivers handsomely on some insight, it wins the power to own the entire field, that insight runs its course, diminishing returns sets in, it stops delivering anything new, it doesn’t give up the reins and hand them over to somebody else because there’s no mechanism to do that. So the people who have the school of thought that’s already burned out its value stick to their power, and that means that the field is wide open to be beaten by an outsider who just simply isn’t required to subscribe to whatever the assumptions of the school of thought are. And that happens so frequently that there is this, it’s artificially common that you have the experience if you think independently and you know what you’re doing, that you’ll disagree with just about everybody and they’ll actually turn out to be wrong because they’re proceeding from a bad set of assumptions.
[2:48:13] Daniel: So, I think this is actually one of the most interesting applications of blockchain or decentralized ledger technology is this idea of an open science platform. So imagine every time someone did a measurement, the fundamental measurement, it had to be entered into a blockchain, and then the other places that independently did it was entered into a blockchain so it was uncorruptible. And then the axioms and the kind of logical propositions get entered in, and then the logical processes of whether I’m using an inductive or deductive or abductive process gets put in. And then we get to kind of look at the progression of knowledge. Then at any point that we come to realize that a previous thing in there was wrong, some data was misentered or a hypothesis is proved wrong, now we can come back to that point and look at everything downstream from it and re-analyze it.
[2:49:05] Of course you still have the oracle problem of the entry in the first place. So if I’m doing motivated science and I get some answers I don’t like and I can hide them and not enter them, then that’ll happen. So you still have to have then the proper entry into the system, but this addresses something with the integrity of science and also the integrity of government. Government spending and the capture of market forces of the regulators rather than the regulators being able to regulate the market, is we only know when the fucked up thing happens if we can see it. Which means that everyone who wants to do something asymmetric or predatory has a maximum incentive for non-transparency. So certain kinds of uncorruptibility and transparency are very interesting in what they can do towards that.
[2:49:50] Bret: Interesting. Now this actually comes back to something I wanted to raise earlier, but didn’t get to it, which is, I started out very focused on sustainability. I believe sustainability is something that you can’t measure too finely. If you measure too finely then sustainability becomes an absurd block to progress because you can’t dig a hole in your own backyard because you couldn’t dig a million such holes. But if you relax the system so that you’re measuring processes that actually potentially matter, sustainability has to be a feature of this system long-term. It doesn’t have to be a feature of the system in any given time period, but overall it has to net to a sustainable mode.
[2:50:38] Daniel: I wouldn’t say a system has to be sustainable, I would say the meta-system or increasing orderly complexity has to be sustainable. But that might mean a system intentionally obsoleting itself for a new, better system.
[2:50:50] Bret: Ok, I accept that, but what I’ve realized down this road is that the system actually, or the set of systems or the meta-system, however you want to describe it, needs a failsafe which I call reversibility. So the point is, if you set the goal of sustainability and you say well we have to measure things that matter, sooner or later you’re going to fail to measure something that matters, and you’re going to deal with it unsustainably, and at the point that you figure it out it’s going to be too late. So my point would be, and this is a tough one, people don’t like the implications of this if they understand it, but any process that you set in motion has to be something you could undo if it turns out to be harmful in a way that you didn’t see coming.
[2:51:37] So that is to say, you can alter the atmosphere, carbon dioxide is not poisonous. The changes in concentration that change the degree of heat trapping are not terribly meaningful to the wellbeing of living creatures. But at the point you discover that the heat trapping is going to massively change the way the atmosphere functions and the oceans, et cetera, et cetera, you have to be able to undo it. Now, undo it means you could change the concentration back to what it was. What this would mean in practice was that you would have to slow the process of change down such that you scaled up the process that would reverse the change in proportion.
[2:52:27] Now if you imagine all of the disasters that we have faced, all the ones I named up top and all of the other ones that look like it from Fukushima to Aliso Canyon to the financial collapse of 2008, and you imagine that in proportion to the process that went awry we had scaled up the reversal process so that it was there if we needed it, we would have been in a very different situation. Because a) the process would have run away much much slower, and b) the tools to undo it would have been present and ready.
[2:53:01] Oh, before you respond to that, I do want to say that the only way that that would work is if it was over the entire system. In other words, if one nation for example were to decide that it had to adhere to a standard of reversibility while other nations weren’t restricted in the same way, you’d get a tragedy of the commons where the atmosphere or whatever other resource would ultimately be destroyed by the nations that didn’t participate in that system, and the nation that was most responsible would pay the cost of building a reversibility system that wouldn’t work in the end. But other than that I think the principle makes sense. What do you think?
[2:53:41] Daniel: So something like sustainability having a consideration like reversibility as one of the factors to informed choicemaking is a valuable consideration. And that it doesn’t matter at all if we don’t have collective coordination capacity to be able to make the right choices, period. So yes, agreed. Now, regarding reversibility, I think reversibility is a valuable consideration that is impossible in important ways but is still an important consideration. So can I decrease the amount of CO2 in the atmosphere if we realize we need to? Kind of, yes. But the CO2 in the atmosphere went up along with a lot of mountaintop removal mining for coal, and a lot of species extinction in the process, a lot of people who died over wars for oil. Can I reverse and get those dead people back and those extinct species back and those pristine ecosystems back? Nope, they’re gone.
[2:54:51] And then also reversibility over what time frame? Will new old-growth forests come back thousands of years from now? Sure. Does that timescale matter? If I ever extinct a species, is that reversible? Does every species matter? What about killing an individual element within it? So it’s like I can only think about reversibility on very narrowly defined metrics, but the thing that harms that one metric has lots of other effects simultaneously. And so we have to understand that reversibility by itself is an oversimplification because we’ll always be thinking upon metrics that are subsets of all that’s affected.
[2:55:33] Bret: Yep, I agree. It is an oversimplification as is sustainability, but my sense is that you have to instantiate it in some way in order for the system to be safe. And I would say if it prevents you from removing mountaintops, as long as it prevents everybody else from removing mountaintops it’s the right idea. In other words, if we are allowed to degrade the Earth a little bit at a time by removing mountaintops now and drying up rivers next time, then eventually you have a world that isn’t very worth living in, and I do believe that we have a moral obligation not to degrade the planet. That our highest moral obligation has to be to deliver the capacity to live a complete, fulfilling human life to as many people as we can, and that means not liquidating the planet.
[2:56:30] It means a renewal process which is the very definition of sustainability, and it’s inconsistent with removing mountaintops. Now, lots of species don’t matter. There are lots of little offshoots of species and they can go extinct, and they do go extinct and nobody is harmed by their doing so, which is not the same thing as losing orcas or elephants or eagles or whatever. So obviously you need to have a rational threshold in which to protect against degradation and allow degradation that doesn’t have an important implication. But the question is really, is it so compromised by those considerations that it’s not worth considering, or is it rescuable if one figures out how to apply a threshold?
[2:57:26] Daniel: So we said that one of the dialectics that defines the left and right in its most abstract form generally has to do with a focus on the individual versus a focus on society or the collective or the tribe or some kind of group. Another one is in orientation towards conservation, or conservativeness, traditionalness, with an orientation on the other side towards progress or progressiveness. And again these are confused all over the place and even what we call left and right have shifted in the last few decades in a number of ways. But it’s interesting here because when you talk about reversibility and sustainability, another synonym is conservation, what is it that we want to be able to conserve? And so the conservative principle is focused upon, what has made it through evolution that is valuable enough that we should conserve it and not fuck it up?
[2:58:19] And yet, so interestingly, the people who are often called conservatives are not focused on critical aspects of conservation. But you’re talking about biosphere conservation right now, oftentimes they’re talking about sociosphere conservation, conservation of social systems. And you were saying that underneath it is the capacity for humans to thrive and have meaningful lives and relationships and we would say that that is a function of the biosphere, the sociosphere and the technosphere and the relationship between them. And we can say very clearly it’s the technosphere ruining the biosphere most of the time, and yet if it ruins the biosphere enough, the technosphere goes because the technosphere depends upon the biosphere.
[2:58:57] So we have to learn how do we make a technological-built environment that is replenishing, regenerative with the biosphere? And the sociosphere is another really critical one, and I think you’ll probably actually have something to add to this that I haven’t thought of. When I think of what the fundamental intuition of a conservative is, even if they don’t articulate it like this, and the traditionalist kind of impulse, which is, let’s go back to the Constitution, let’s go back to Christianity or European ideas or the free market or whatever it is. Or rigorous monogamy, whatever social structure lasted for a long time. There is an intuition even if they don’t formally think of it this way logically that almost everything didn’t make it through evolution in terms of social systems and the few things that did weren’t the things that people thought would.
[2:59:53] So there’s a lot of embedded wisdom that wasn’t understood that was very hard-earned and we want to preserve that and not break it, because we think we understand it well enough and we might not. And that fundamentally the progressive intuition is that we’re dealing with fundamentally novel situations that evolution didn’t already figure out and we need innovation. And of course the synthesis of that dialectic is we need new innovation that is commensurate with that which should be conserved, and not everything should be conserved. Because some things made it through because they won short-term battles while fucking up the long-term whole. And so what things are worth being conserved? What things are not worth being conserved? Did we understand it well enough that we didn’t say this isn’t worth being conserved out of hubris? And then what progress is commensurate with that, I think is a good way of thinking about that dialectic.
[3:00:42] Bret: Yeah, I like it and I think there’s the flip side of it as well, which is that captured inside Biblical traditions are some bits of basically responses to game theoretic hazards that are consistent with things we’ve talked about. So for example the Christian sense that not only is the world here for humans to make use of but that we are in effect obligated to do it, that belief fits perfectly in a world where if your population doesn’t capture a resource, somebody else is going to. So in other words, that belief structure travels along with a tendency to capture the resources that are available.
[3:01:28] And to the extent that what that does is it causes the exploitation of a resource, the tools with which those resources could have been exploited in Biblical times almost always left a system that would return itself to equilibrium given an opportunity, which isn’t true in the modern circumstance. So what we have is a place where there’s lots of stuff that is conservative that there’s a very good and often hidden reason that we should preserve, and then there are some places where we would actually have to upgrade the wisdom because it doesn’t fit modern circumstances. And the conservation of the natural world is I think a clear case.
[3:02:13] Daniel: Just because you mentioned this case, when people realize that Christendom spread largely by holy war, not exclusively but largely, you need a religion that makes a lot of people that are willing to die in holy war because of a good afterlife, and who you can spare? Large population of people that can die in war. And Islam and Christianity both had this. They both had “be fruitful and multiply and proselytize”, because they both had war as a strategy for propagation of the memes, so you needed numbers. Whereas Judaism didn’t have it, and Quakerism and some other ones didn’t. Judaism had to actually make it hard for people to join the religion because you’re not going to lose a lot of people as soldiers, you’re going to embed yourself as a diaspora within dominant cultures and end up affecting the apparatus of those cultures.
[3:03:07] So it’s interesting to think about how those different memeplexes had different evolutionary adaptations, but it’s important for the reason you mentioned is that those traditions were influenced by politics and economics and war and philosophy and culture and a lot of things. So you can’t wholesale throw them out or keep them, you have to actually understand what allowed those memes to propagate and what their memetic propagation dynamics were. And so that conservative impulse that says the things that made it through made it through for a reason, yes, but some of the things that made it through for a reason won’t keep making it through.
[3:03:41] Dinosaurs were around for a long time, and then they weren’t. And as we’ve mentioned, evolution can be blind and run very effectively into cul-de-sacs and yet the other side is all too often we will criticize a tradition for being dumb, when we don’t understand what made it work well enough, and we throw something out that was actually worth not throwing out. So how do you do a deep enough historical understanding to be able to decide what should be conserved and not, is also a really good question.
[3:04:13] Bret: It’s a really important question because it’s Chesterton’s fence factory effectively. Nobody knows what actually was functional and what had no function but traveled along with it because they were paired very closely in a Biblical text, and what functioned in ways that we don’t want it to function now. These things are all invisible because the whole thing is encoded in myth, so it’s not in there. So yeah, that’s a huge hazard and it’s a tough one. For those of us who want to build reasonably and recognize that there’s an awful lot that we have to do that’s novel because it hasn’t been accomplished before, we have to grapple with the fact that it’s not like these traditions are simply backward. Some of them are very insightful and non-literal and we need to exercise great caution in approaching them.
[3:05:12] Daniel: Ok, so I want to come back to your three generations at least problem. It’s easy to look at the nature of the problems and just assume that we are fucked and usually to tie that to some conversation about human nature. And to say ok well we were able to figure out technology that was extraordinarily powerful, to speak mytho-poetically, the power of gods. The nuke was clearly the power of gods. And then lots of tech since then. We can genetically engineer new species, gain of function, whatever. Without the love and wisdom of gods, that goes in a self-terminate direction. Is it within the capacity of our nature to move towards the love and wisdom of gods to bind that power, or are we inexorably inadequate vessels for the amount of power we have?
[3:06:05] So then I do a positive deviant analysis to look at what are the best stories of human nature to see if they converge in the right direction. And then also where there are conditioning factors that we take for granted because they’ve become ubiquitous and think that they’re nature? So if we go back to the Bible for a moment and we look at Jews, and we look at was there a population of people that were able to educate all of their people at a higher level than most other people around them for a pretty long time in lots of different circumstances? Yes. You look at the Buddhists. Were there a population of people that across millennia and different environments were able to make everybody peaceful enough to not hurt bugs? Yes.
[3:06:52] Across all the genetic variants and across all of the economic factors and the whatever else, do we have examples of very high level of cognitive development and very high level of ethical development of different populations based on cultures? We do. And then we say oh well but look at how well the founding fathers’ ideas failed here. Well the comprehensive education of everyone is not in the interests of the elite that have the most power as we’ve mentioned, and so making it seem like that that’s an impossible thing is actually really good to support the idea that there should be some kind of nobility or aristocracy or something like that. There should be elites who control because they’re more qualified.
[3:07:33] I would say that we have not in modern times ever tried to educate our population in a way that could lead to self-governance because there was no incentive to do so. Or those who had the most capacity had incentive to do something else even when they said they were doing that. So do I think that it’s possible? Do I think that we have examples historically of people who developed themselves cognitively and ethically enough that if we did those together, Buddhist-Jews, however we want to talk about it, do I think that’s possible within human nature and basically untried? Yes.
[3:08:11] Bret: Yeah, I love that and I agree with you. It’s dependent on something which we might as well spell out here, which is that the difference in capacity between human populations is overwhelmingly, if not entirely, at the software level. Which I firmly believe. I’m speaking as a biologist. I’ve looked at this, I will have to defend it at length elsewhere, but the degree to which it’s software that distinguishes us and therefore we can innovate tools, we can democratize tools, all of that is at our disposal, and I agree with you it hasn’t been tried and it might be our only hope, but at least we’ve got prototypes.
[3:08:51] Daniel: Now I will say why I am grateful for what happened at Evergreen, is that you wouldn’t be here doing this otherwise, and on Bill Maher. And you and Heather are both exceptional educators, and so the fact that your tiny little niche for education got blown up so that you took this quality of education to all the people who were interested, this larger scale, I’m really happy about.
[3:09:14] Bret: Thank you my friend!
[3:09:15] Daniel: Because I think that this exact thing is the thing that has a chance, is a strange attractor of those who are called to a cultural enlightenment starting to come together in a way that can then lead them to coordinate, to build systems that can then propagate those possibilities for other people.
[3:09:33] Bret: Well I really appreciate that and I must say I feel it as a calling, as I’m certain you do. And so, yes, and I also love the point that you made earlier about the fact that the audience for this really is people seeking a kind of enlightenment and community and so yes, as much as you and I both focus on existential risks, there is hope in that.
[3:10:04] Daniel: Yup.
[3:10:05] Bret: Ok Daniel, well this is, I think we’ve gone more than three hours. It’s certainly been a great conversation and there are so many threads that are worth revisiting, which we should do sooner rather than later.
[3:10:17] Daniel: This was super fun, I really enjoyed it.
[3:10:20] Bret: Yeah, it was. So Daniel Schmachtenberger, where can people find you?
[3:10:27] Daniel: Well you mentioned in the beginning we have something called The Consilience Project that’ll be launching soon via a newsletter in March and then a website in a few months. So tune back in on that. It is a project in this space, a nonprofit project that is seeking to do a better job of news with education built in. So we look at very complex issues that are polarized and we make the epistemics that we’re applying explicit, so we’re actually teaching people how do you sense-make complex situations in situ.
[3:11:00] And then if anyone ever thinks we missed any of the data or got something wrong, they can let us know and we’ll publicly correct it and credit them if [inaudible] right and et cetera. And the goal there is helping to catalyze cultural enlightenment of this type, recognizing that both education and Fourth Estate are requisite structures for open society. An open society being rebooted has to be rebooted at the cultural level first. Right now you can find me on Facebook, one of those platforms, or I have a blog, an old blog, everything’s out of date on it— civilizationemerging.com.
[3:11:40] Bret: Civilizationemerging.com, and are you not on Twitter? And does that explain how you’re so clear-headed?
[3:11:48] Daniel: I’m not on Twitter, and I’m on Facebook because of Metcalfe law, because everyone is, so it ends up being a useful introduction and messaging tool, but yeah I am not part of the Twitter crew.
[3:12:04] Bret: Well more power to you. All right, Daniel this has been a pleasure and I look forward to our next one.
[3:12:12] Daniel: Me too.
[3:12:13] Bret: Be well. And everybody else, thanks for tuning in.
[3:12:15] Daniel: Thanks.
[3:12:27] [End of video]