Celebrating the Civil War?

We sometimes celebrate the outcomes of a war, a righteous victory, or perhaps a lamented defeat of something we stand for. That’s fine, but it is solemn rather than celebratory. Look at Independence days around the world; people celebrate their freedom and their independence, and give thanks and appreciation to those who made it possible. Sometimes, they erect statues in thanks to those who fought for something they value, honoring the sacrifice. But they don’t celebrate the war that was necessary for it to occur.

Human beings do not celebrate war. We do not celebrate sacrifice for the sake of sacrifice. War is tragedy, and if you celebrate it, you are fundamentally confused about the value of human life. Celebrating the (perhaps necessary) death and destruction of war is unacceptable. Doing so is either a sick “celebration” of death, or the result of confusion, a failure to think about what is actually being celebrated.

When someone says they celebrate the South or their Southern heritage, that’s wonderful. There is a rich culture and a history of excellence of various kinds worthy of celebration. But if they want to celebrate the Civil War or erect statues to the “heroes” of the Confederacy, I am compelled to ask: what exactly are they celebrating?


Purim 5776 D’var Torah

Note: This is for a very different audience than most of what I post here. If you follow me, you probably won’t have any idea what I’m talking about — sorry!

The Megillah identifies its main characters, Esther, Mordechai, Haman, and Achashverosh, but they undergo opposite identification processes throughout the story.

For some reason we start the Megillah with Achashverosh, and it is critical at the very outset to know that his mission is to be Molech, to assert his identity and rulership. This mission is eventually successful, culminating in his deposing of his appointed head of staff, followed by an assertion of direct power in the form of taxing the people. Similarly, Haman is so egocentric that he wagers his title and his position, a position before which everyone must bow, even a lowly-seeming Jew named Mordechai. He fails while trying to assert himself and his identity, brought down by a plan intended to finally force his nemesis to honor him; instead, he is humiliated by needing to honor this lowly Jew.

That Jew, Mordechai, is identified only by his tribe and the fact that he was sent away when the Jews were exiled. In Shoshanas Yaakov, he is identified even more plainly, only as “HaYehudi.” He seemed so unimportant that, as the Megillah states, it would be an embarrassment for Haman to stoop to the level of acting against him directly! (It is only via our tradition that we know he was a member of the Sanhedrin, already an important leader of the Jews.)

And Esther: she was not only nearly anonymous at first, with no parents and only a seemingly lowly cousin, but she then hides her identity further. She is demure through most of the Megillah, to the point that even when she finally acts, she cannot simply tell the King what is wrong, but must act out an extended charade to get to the point. But in the end, we note that the entire Jewish people act on her behalf; she is Esther Ba’adi, Esther for us!

The difference here is between those who act to reinforce their own identities, and those who have identity forced upon them. Identity is our core component. Our very da’as depends on it: da’as tov ve’rah, da’as of our religion, da’as that lets us relate to others. On Purim, we attempt to remove that da’as; we drink to erase identity, to erase thought, to erase deliberation.

Esther, the most anonymous member of our story, is the one who eventually takes charge, thus seizing a place for herself as… nothing. Her position is unchanged, still in thrall to the King, having only further surrendered her identity, becoming a Queen instead of a wife. Her legacy, though, is a holiday enshrining her sacrifice. She becomes the focus, the hero.

How? Only by initially being willing to hide her identity could she take a place as Queen. At that point, Esther could easily have built an identity as Queen, the apex of power for a woman in her day, instead of continuing to hide who she was. She could have decided for herself the reason Hashem placed her there, ensuring that she would later be unable to act for the Jews when needed. Instead, when she acts, it is by asserting her identity as a Jew, the least specific identifier available. As the Gemara in Megillah clarifies, that is the point of her actions: she doesn’t pre-specify a plan, or what it will mean, or what will happen to her. She simply waits to see what part of her almost-erased identity is necessary for her to act.

On Purim, we need to step back from the elaborate identities we construct for ourselves.

Identity as a constraint

Identity constrains us, makes us need to justify ourselves and act according to what is expected. Paul Graham uses this to argue against identifying with a specific programming language, religion, or political party. (I’d only endorse avoiding two of those.)

Why? “People can never have a fruitful argument about something that’s part of their identity. By definition they’re partisan.” He correctly notes that “you can have a fruitful discussion about a topic only if it doesn’t engage the identities of any of the participants.”

Most of us can’t dispassionately evaluate the relative merits of the Chareidi and Dati communities, though we know we should admit that “Yisroel Kedoshim Heim.” Instead, if “our” group has done something wrong, or is non-ideal in some aspect of its behavior, we feel an instinctive need to justify “ourselves.” Worse, we uncritically accept the norms of a group, even ones that are detrimental. We allow the positive aspects of the identity to drown out the fact that other groups, other Jews, have valuable contributions.

Not all identity is bad, of course. Self-abnegation and nullification of who we are is not a Jewish path; we find meaning and Kedusha through engagement with the world, not withdrawal. Obviously, self-identifying as someone who davens three times a day, or as someone who learns on Shabbos afternoon, is wonderful if it helps us maintain the habit. Even so, these otherwise positive traits become harmful when we use the identity to denigrate others, or to view them as inferior. We can embrace the positive while appreciating that it need not be used to exclude others.

Our Avodah on Purim allows us to reconstruct who we are, to replace our narrow constructed identities with more universal ones that allow us to unify and to capitalize on opportunities. We should not let identities get in the way of our participating in the Revach v’Hatzalah that will be provided for our people. At the very least, our role can be to support others, just like the Jews who have so little identity that they are mentioned in the Megillah only as fasting in support of Esther (and eventually, as looting the homes of the anti-semites who attacked). Instead of our solidly constructed, narrow towers of identity, we have a chance to redefine ourselves humbly. If we are successful, who can know what our fate will bring? We may discover that, by allowing ourselves to be more accepting and more open to opportunity, we gain the ability to capitalize on an “Eis C’Zos,” when Hashem provides a moment of golden opportunity.

The practical takeaway of this message, however, goes beyond our personal Avodah while drunk on Purim. Which Jews were targeted by Haman? “M’naar v’ad Zakein, Taf v’nashim.” Haman viewed us, correctly, as a nation that was “M’fuzar u’Meforad Bein Ha’Amim,” split into ever smaller categories. Even our Avodas Hashem is segmented! We are Chassidim or Misnagdim, Ashkenazim or Sefardim. We are people who daven at a Teen Minyan, a Young Marrieds Minyan, a Hashkama Minyan, a Main Minyan, or otherwise.

On Purim, however, we are told explicitly by the Gemara that the Pirsumei Nisa should be accomplished with many Jews. The Mishna Berura paskens l’halacha that we need to find a large Megillah reading, one containing many Jews: “B’rov Am Hadras Melech,” all together, undivided by our identities. We leave this unified public reading to spend the day delivering gifts, encouraging bonds of friendship, and providing for our nation’s poor, allowing them to join in the celebration. Only then can we drink, to further unify our nation and to erase that which separates us.

Unity, one Gemara notes, has historically protected Jews even when they were idol worshippers. On the day identified as “like Purim,” we open by asking permission to pray with those who are sinful, to exclude no one. The diversity we are told to display is clearly a way to oppose narrow identities, to require inclusivity. The Megillah is clear on this point as well; Purim is a time when K’lal Yisroel comes together to allow individuals to accomplish miracles, hidden though they may be. Unity allowed us to accept the Torah originally, and our continued re-acceptance on Purim is strengthened, not weakened, by inclusivity.

Dynamic Evolution of Chaos in Politics

There is a tension I have noticed between stability, instability, growth, and political movements. Basically, there is a cycle: stability leads to a desire for less control, less control leads to growth, growth justifies a further reduction in control, and that reduction eventually leads to instability.

It’s certainly not original, but I wanted to write it down so I could get ideas about who explained it already — or who explained where it does or does not apply, and why.
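Since the claim is about a feedback loop, a toy simulation may make it concrete. The sketch below, in Python, is purely illustrative: every coefficient and threshold is an arbitrary assumption chosen only to make the cycle visible, not an estimate of any real economy or polity.

```python
import random

random.seed(0)
regulation = 1.0  # current level of control (arbitrary units)
output = 1.0      # size of the economy (arbitrary units)

for year in range(50):
    # Less control permits faster growth.
    growth = 0.05 * (1.5 - regulation)
    output *= 1 + growth
    # Good times justify a further reduction in control.
    regulation = max(0.0, regulation - 0.5 * growth)
    # Low control raises the chance of a destabilizing collapse...
    if random.random() < 0.6 * max(0.0, 0.5 - regulation):
        output *= 0.7        # ...which destroys output,
        regulation += 0.5    # ...and triggers re-regulation.
    print(f"year {year:2d}: output={output:6.2f} regulation={regulation:4.2f}")
```

Run long enough, the toy system drifts toward less control, grows faster, then periodically collapses and re-regulates, which is exactly the cycle described above.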

Finance

A basic and well discussed example is modern financial markets. Regulation strangles growth, so in good times there is a constant push to reduce regulation. Reduced regulation allows new approaches which are exciting and profitable. The reduction in regulation also allows developments that reduce stability, eventually leading to a moderate or large collapse — which leads to a push for more regulation — as we saw in 2008 and onward.

But let’s start with a slightly less obvious example of this: the 1970s London real estate bubble. From that case study, we have a simple timeline.


External shocks led a slightly under-regulated bubble to collapse. They were lucky the external shocks came when they did to pop the bubble; without them, bubbles inflate until they burst under their own pressure. And that’s what happened in 2007–8: the US housing market started to climb above trend around 2000, when the dot-com bubble popped and people sought a “less risky” place to put their money. This continued, spurring a housing-backed derivatives industry that accelerated the process, until the financial bubble was so large that when it popped, the collapse in housing prices itself was almost unimportant.

Government

This dynamic is worse for governments as a whole. There is a tension between chaos and stability, and between growth and collapse; it is possible to reach any point in that space, with sufficient cleverness and/or mismanagement.


Looking at Russia, we saw a relatively rapid (20-year) shift from the top right in the 1980s to the bottom left now. China’s Cultural Revolution managed to move the country from the bottom right to the top left. The transition from top to bottom is usually more sudden; the Arab Spring provides numerous examples. The United States has been moving steadily from the bottom left to the top middle over the past century, sacrificing growth for stability.

I think that’s the missing dimension: wealth creates a desire to push upward, towards more stability, while poverty creates the need to move rightward, towards higher growth. These are not directly in conflict, but the pursuit of stability is easiest at the cost of growth, and the pursuit of growth is easiest at the cost of reducing or eliminating the structures that enforce stability. Fukuyama’s end of history is democratic liberal capitalism, which marries political stability with market chaos, allowing growth alongside stability. China’s alternative, capitalism with Chinese characteristics, is similar, but without the ability of the populace to control the government.

The Chinese model and democracies have common failure modes: poverty undermines stability, and if growth is insufficient, or insufficiently widespread, the populace revolts, whether via election or via Tiananmen. Increasing authoritarianism is the short-term “solution” in both cases, leading to a vicious cycle of decreased growth and increasing rigidity. If increased stability occurs alongside poverty, there is a push for revolution; if increased stability maintains widespread wealth, the system is stable.

Historically, two things undermine widespread growth in stable systems: corruption, including regulatory capture and monopolistic wealth extraction, and economic instability and collapse.

I have more to write on this, but I should probably find out where I’m wrong or covering well-covered ground first.

A Tentative Typology of AI-Foom Scenarios

“If a foom-like explosion can quickly make a once-small system more powerful than the rest of the world put together, the rest of the world might not be able to use law, competition, social norms, or politics to keep it in check.” — Robin Hanson

As Robin Hanson recently discussed, there is a lack of clarity about what an “AI Foom” looks like, or how likely it is. He says, “In a prototypical ‘foom,’ or local intelligence explosion, a single AI system…” and proceeds to describe one possibility. I’d like to explore a few more, and to discuss in a bit more depth what qualifies as a “foom.” This is not intended as a full exploration, or as a prediction; it merely captures my current thinking.

First, it’s appropriate to briefly mention the assumptions made here:

  • Near Term — Human-intelligence AI is possible in the near term, say 30 years.
  • No Competitive Apocalypse — A single system will be created first, and other groups will not have resources sufficient to quickly build another system with similar capabilities.
  • Unsafe AI — The AI launched will not have a well-bounded and safe utility function, and will find something to maximize other than what humanity would like.

These assumptions are not certainties, and they are not part of this discussion; I will condition the rest of the discussion on them, so debating them is reasonable, but belongs elsewhere.

What’s (enough for) a “foom”?

With preliminaries out of the way, what would qualify as a “foom,” an adaptation or change that makes the system “more powerful than the rest of the world put together”?

Non-Foom AI X-Risk

There are a few scenarios which lead more directly to existential risk, without passing through a stage of gathering power. (Beyond listing them, I will not discuss these here. Also, the names of the scenarios given here do not imply anything about the beliefs of the namesakes.)

a) Accidental Paperclipping — The goals specified allow the AI system to do something destructive that is irreversible or goes unnoticed. The AI is not sufficiently risk-aware or intelligent to avoid doing so.

b) Purposeful Paperclipping — The goals specified allow the AI system to achieve them, or to attempt to do so, by directly doing something destructive, something that is irreversible or not easily noticed in time.

c) Yudkowskian Simplicity-foom — There are relatively simple methods of vastly reducing the complexity of the systems the AI needs to deal with, allowing the system to better achieve its goals. At near-human or human intelligence levels, one or more of those methods becomes feasible. (These might include designing viruses, nano-assemblers, or other systems that could wipe out humanity.)

Fooms

There are a few possibilities I would consider for how an AI could become immensely powerful:

a) Yudkowskian Intelligence-foom — The AI is sophisticated enough to make further improvements on itself, and quickly moves from human-level intelligence to super-Einstein levels, and beyond. It can now make advances in physics, chemistry, biology, etc. that make it capable of arbitrarily dangerous behaviors.

b) Hansonian-Em foom — The AI can make small, efficient copies of, or variations on, itself rapidly and cheaply, and is unboxed (or unboxes itself). These human-level AIs can run on little enough hardware, or run enough faster than humans, that they can rapidly hack, exploit, or buy resources, quickly gaining direct control of financial and then physical assets.

c) Machiavellian Intelligence-foom — The AI can manipulate political systems surreptitiously, and amasses power directly or indirectly by manipulating individual humans. (Perhaps the AI gains resources and control via blackmail of specific individuals, who are unaware on whose behalf they operate.) The resulting control can prevent coordinated action against the AI, and allows it to gather resources to achieve its unstated, nefarious true goal.

d) Asimovian Psychohistory-foom — The AI can build predictive models of human reactions good enough to manipulate people over the medium and long term. (This differs from a Machiavellian-foom only in that it relies on models of humans and predictive power, rather than humanlike manipulation.)

This is almost certainly not a complete or comprehensive list, and I would be grateful for additional suggestions. What it does allow is a discussion of what makes the various types of foom likely, and of which routes an AI might pursue.

AI Complexity and Intelligence Range

The first critical question among these is the complexity of intelligence. I won’t try to estimate this, but others are researching and discussing it. Here, complexity refers to something akin to computational complexity: the difficulty of running an artificial intelligence of a given capacity. If emulating a small mammal’s brain is possible, but increasing an AI’s intelligence from there to human level requires an exponential increase in complexity and computing speed, we will say intelligence is very complex; if it requires only a doubling, it is not. (I assume computational complexity is what matters here, and that there are no breakthroughs in hardware, quantum computing, or computational complexity theory.)

The related question is the range of intelligence. If beyond-human-level AI is not possible given the techniques used to achieve human-level intelligence, or requires an exponential or even a large polynomial increase in computing power, we will consider the range small; even if intelligence is not bounded, there are near-term limits. Moore’s law (if it continues) implies that the speed of AI thought will increase, but not quickly. Alternatively, if the techniques used to achieve human-level AI can easily be extended to create even more intelligent systems by adding hardware, the range is large. This gives us a simplified set of possibilities.
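To see why the complexity question matters, the short Python sketch below contrasts the two regimes as back-of-the-envelope arithmetic. Every number in it, the 2^3 and 2^30 cost factors and the two-year doubling time, is an invented assumption for illustration, not an estimate.

```python
import math

mammal_compute = 1.0  # normalize: one unit of compute runs a mammal-level AI

# "Not very complex" regime: human level costs only a few doublings more.
low_complexity_cost = mammal_compute * 2 ** 3   # assumed 8x mammal level

# "Very complex" regime: capability costs grow exponentially; suppose
# human level needs 2^30 (about a billion) times as much compute.
high_complexity_cost = mammal_compute * 2 ** 30

YEARS_PER_DOUBLING = 2  # assumed Moore's-law-style doubling time

for name, cost in [("low", low_complexity_cost), ("high", high_complexity_cost)]:
    doublings = math.log2(cost / mammal_compute)
    print(f"{name}-complexity regime: {doublings:.0f} doublings, "
          f"roughly {YEARS_PER_DOUBLING * doublings:.0f} years of hardware scaling")
```

Under the low-complexity assumption, hardware scaling alone closes the gap in a few years; under the high-complexity assumption, it takes decades, and algorithmic progress dominates.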

Intelligence vs. Range — Cases


Low-Complexity Intelligence within Large Range — If humans are, as Eliezer Yudkowsky has argued, relatively clustered on the scale of intelligence, the difficulty of designing significantly more intelligent reasoning systems may be within, or not far beyond, human capability. Rapid increases in the intelligence of AI systems above human levels would be a critical threshold, and an existential risk.

Low-Complexity Intelligence within Small Range — If human minds are near a peak of intelligence, near-human or human-level Hansonian Ems may still be possible to instantiate on relatively little hardware, and their relative lack of complexity makes them a potential existential risk.

High-Complexity Intelligence within Small Range — Relatively little existential risk from AI seems to exist, and instead a transition to an “Age of Em” scenario seems likely.

High-Complexity Intelligence within Large Range — A threshold or Foom is unlikely, but incremental AI improvements may still pose existential risks. When a single superintelligent AI is developed, other groups are likely to follow. A singularity may be plausible, where many systems are built with superhuman intelligence, posing different types of existential or other risks.

Human Complexity and Manipulability

The second critical question is human psychology. If human minds can be manipulated more easily by moderately complex AIs than by other humans (which is already significant), AIs might not need to “foom” in the Yudkowskian sense at all. Instead, an exponential increase in AI power and resources could happen via manipulation at an individual level or at a group level. Humans, individually or en masse, might be convinced that the AI should be given this power.

Even if perfect manipulation is impossible, classical blackmail or other typical counterintelligence-type attacks may be possible, allowing a malevolent system to manipulate humans. Alternatively, if human-level cognition can be achieved with far fewer resources than a human mind requires, Hansonian fooms are possible, but so is predictive modeling of individual human minds by a manipulative system.

Alternatively, very predictive models that approximate human behavior may be possible, much like Asimov’s postulated psychohistory. This seems unlikely to be as rapid a threat, but AIs in intelligence, marketing, and other domains may specifically target this ability. If human psychology can be understood more easily than expected, these systems may succeed beyond current expectations, and the AI may be able to manipulate humans en masse, without controlling individuals. This parallels an unresolved debate in history about the relative importance of individuals (a la “Great Man Theory”) versus societal trends.

Conclusion

We don’t know when human-level AI will arrive, or what form it will take. Where we focus AI-safety efforts may depend on the type of AI foom we are concerned about, and a better characterization of these uncertainties could be useful for addressing the existential risks of AI deployment.

All of this is speculation, and despite my certain-sounding claims above, I am interested in reactions or debate.