My Answers to the Big Questions

First published December 30, 2021; last updated April 6, 2024.

These are my tentative answers to the big questions across philosophy and science alongside my probabilities that each answer is the correct one. Some preliminary notes:

  1. This is inspired in part by the similar lists of Brian Tomasik and Pablo Stafforini.
  2. Philip Tetlock and trained “superforecasters” have popularized this approach of quantitative answers to hard-to-quantify questions.
  3. These beliefs span a wide range of intellectual topics—most of which I would not consider myself an expert on—so please take with a chunk of salt.
  4. I do not plan to argue for or against my beliefs here, though I will provide explanatory links when convenient, and I will try to clarify exactly what each belief means in footnotes when necessary.
  5. All beliefs are worded such that probabilities are at least 50%.
  6. Probabilities refer to the likelihood the belief is true as I define it. For example, I am a moral anti-realist in the sense that I deny the existence of stance-independent morality, and if you say, “Moral realism is correct because cultures and civilizations would converge on certain moral stances,” then I would not consider this a reason to lower my probability because it makes use of a different definition. In other words, this would be a verbal dispute.
  7. I try to word beliefs in the way that can be most quickly and easily falsified, though because I want this to stay big picture, these are less falsifiable than Metaculus predictions. For example, falsifiable formulations of the efficient market hypothesis tend to hinge on the specifics of their context (e.g., the quirks of cryptocurrency), such that a less falsifiable general probability seems more useful than a context-specific formulation.
  8. I will try to update this document each time my probabilities change.

Artificial Intelligence

Note: Artificial general intelligence (AGI) refers to a single artificial intelligence (AI) at least as smart as the average human on every measure (e.g., common sense, learning, logic, memory, metacognition, physical movement, spatial reasoning). Superintelligence refers to AGI that is at least 10 times smarter1 than the average human.

Belief / Probability
AGI is possible within the known laws of physics. 99%
Superintelligence is possible within the known laws of physics. 97%
AI and its effects are the most important existential risk, given only public information available in 2021. 92%
The intelligence of an AI and its goals are not necessarily related (i.e., the orthogonality thesis). This does not exclude correlation in how AI is implemented in practice; it only asserts the lack of any necessary association in the architecture itself. 90%
AGI will be built with an architecture no more than 1 advance beyond transformers (i.e., 0 or 1 advances), where an “advance” is a singular technology no larger than the move from pre-transformer architectures to transformers (e.g., from pairwise connections to a shared workspace). This scenario is similar to the idea of “prosaic AGI.” 77%
There will be less than a year between the first AGI and the first superintelligence. 77%
AGI will be more attributable to bottom-up machine learning than top-down symbolic systems.2 73%
AGI will not lead to the extinction of human society.3 70%
AI takeoff will be slower than the midpoint of the “soft” versus “hard” AI takeoff debates as they were in 2021. 70%
Artificial consciousness will emerge in the course of increasing AI capabilities.4 67%
Even 10 years before AGI is built, debate about AGI risk will still be less mainstream than global warming was in 2015. 65%
There will be only one superintelligence rather than multiple for a sufficiently long period that it will become more powerful than any single government (i.e., unipolar AI takeoff). 65%
There will be at least 5 AI-driven scientific advances in the 2020s as big as the advance from DeepMind’s AlphaFold 1 to AlphaFold 2. 50%
AGI will be built by 2048. 50%

Factory Farming

Belief / Probability
Working in food technology is a more effective career choice than working in grassroots activism if one’s goal is to end factory farming as quickly as possible and one has average skills in each area. 79%
Food technology will be more important than food activism in the history of how animal farming ended, conditional on no AGI or population collapse.5 74%
Plant-based and cultured meat proponents should focus more on moral messaging, relative to price, taste, and convenience, than they did in the 2010s. 68%
Cell-cultured meat will be more important than plant-based meat in the history of how animal farming ended, conditional on no AGI or population collapse.5 66%
Farmed animal advocates should focus more on institutions, relative to individuals, than they did in the 2010s. 65%
Welfare reforms on factory farms (e.g., cage-free eggs) tend to reduce the number of animals who will exist on factory farms over the next century. 61%
Animal rights publicity stunts (e.g., many PETA campaigns) tend to do more harm than good. 60%
Animal farming will be less than 10% of its current size by 2068. 50%

Philosophy

Belief / Probability
Aesthetic value is subjective (i.e., stance-dependent). There is no aesthetic value (e.g., beauty) aside from the actual or possible aesthetic evaluations, opinions, and stances of moral agents.6 99%7
Consciousness is a pseudo-problem. Consciousness does not exist in the way commonly assumed by everyday and scholarly discourse, and there is no stance-independent fact of the matter as to which entities are conscious. This view can be equivalently described as consciousness eliminativism, illusionism, semanticism, or anti-realism.8 99%7
Personal identity is a pseudo-problem. There is no fact of the matter as to the nature of one’s personal identity, such as whether a teletransported version of yourself is still you.9 98%7
Free will is a pseudo-problem. There is no fact of the matter as to whether free will exists, such as compatibilism versus noncompatibilism.10 98%7
The existence of abstract objects (e.g., numbers) is a pseudo-problem. There is no fact of the matter as to whether abstract objects exist (i.e., platonism) versus do not exist (i.e., nominalism).11 98%7
Morality is subjective (i.e., stance-dependent). There is no morality aside from the actual or possible moral evaluations, opinions, and stances of moral agents. This view is typically described as moral anti-realism.12 97%7
Classical utilitarianism is the best ethical framework.13,14 97%
Totalism is the best population ethics framework.13 96%
One-boxing in Newcomb’s paradox is the best decision.13,15 95%
The best approach to virtually all problems in contemporary philosophy is to treat them as pseudo-problems by making their semantics precise and then delineating the often straightforward solutions to each precisification.16 87%7
Logical positivism is the best theory of knowledge.13,17 80%
Functional decision theory and equivalent formulations of causal and evidential decision theory are the best decision theory.13,18 63%

Physics and Computation

Belief / Probability
The average human brain does not use quantum computation.19 90%
The many-worlds interpretation of quantum mechanics is correct.20 74%
We live in a simulation.21 69%
There are no aliens or distinguishable alien materials within 100 light-years of Earth. 64%
Matter and energy are finite within our universe. Under the many-worlds interpretation, this refers to our specific history of the universal wavefunction. 63%
Matter and energy are finite.22 59%
Quantum computation will not radically change computation by 2100 except in encryption, conditional on no AGI or population collapse.5 57%
A fusion power plant will generate a net 100 megawatts of electricity by 2063, conditional on no AGI or population collapse.5 50%

Miscellaneous

Belief / Probability
Intelligence is more singular than the midpoint of the singular versus multiple IQ or “G factor” debate as it was in 2021. 93%
People underestimate the extent to which understanding something well means you can explain it well (i.e., people overestimate the extent of “Polanyi’s paradox”). 90%
The best approach to knowledge production holds that all meaningful knowledge can be cashed out in more accurate predictions about observable future events.23 88%
Flying cars will not be an important technology until at least 2050, conditional on no AGI or population collapse. 85%
Virtual and augmented reality will be as popular in 2060 as smartphones were in 2021, conditional on no AGI or population collapse.24 84%
By 2050, it will still not be possible to increase a healthy adult human’s IQ by at least two standard deviations in less than 30 days, conditional on no AGI or population collapse.5 84%
Frequentism will be more common than Bayesianism in science in 2050, conditional on no AGI or population collapse.5 81%
Social science is more productive when it focuses on phenomena than on theory. 77%
Most major, enduring social science questions (e.g., nature versus nurture, structure versus agency) have no interesting general answer and should be treated only as taxonomies to be explored, not as questions to be answered. 75%
Humanity will not go extinct before 2100. At least 1 thousand humans or human descendants (e.g., mind uploads) will be alive in 2100. 75%
Markets are rarely outperformed without insider information or market manipulation (i.e., the efficient markets hypothesis). However, there are specific, compelling, highly plausible exceptions, such as the rise of cryptocurrency and continued underestimation of the economic impact of AI.25 69%
Nobody born before 2000 will live to be 150, conditional on no AGI or population collapse.5 64%
We can productively coordinate with people in parallel universes, conditional on their existence. 57%
Wild-animal suffering will not be a mainstream moral issue by 2100, conditional on no AGI or population collapse.5 55%
A majority of philosophers will endorse a general sort of anti-realism or positivism, or will otherwise dismiss many philosophical problems as pseudo-problems, within 100 years, conditional on no AGI or population collapse.5 53%
Earth will be at least 2.2˚C warmer in 2100 than in 1880, conditional on no AGI or non-climate population collapse.5 50%
A human will set foot on Mars by 2047, conditional on no AGI or population collapse.5 50%
Half of new cars sold in the US will be fully autonomous (i.e., self-driving) by 2038, conditional on no AGI or population collapse.5 50%

Changelog

Here I record each update to my beliefs (or, at least, each change in the explicit probabilities) since January 2022. This list may not contain all the most recent updates; for example, with the many developments in AI every month, I try to keep track of them and then, every few months, consider all the changes together to see how my beliefs should change. The most recent updates are listed first, and past updates on a belief are listed immediately under the most recent update on that belief.

Belief | Old | New | Date | Explanation
5 large AI-driven scientific advances in the 2020s | 59% | 50% | 2024-01-08 | Not as much AI science so far as I expected, and the current direction of AI doesn’t seem well-suited to near-term scientific advances.
against “quantum mind theory” | 81% | 90% | 2022-03-07 | This is due to a medium dive into quantum computation, including a math course on it, reading the main writings, and having a few (fewer than I’d like) conversations with physicists, yet finding basically no new arguments.
median AGI timeline | 2055 | 2048 | 2023-02-28 | With ChatGPT, Bard, etc., I actually had a slight lengthening of my timelines based on technical updates (e.g., Sydney was more easily “upset” than I expected; no big material advances a la AlphaFold recently), but I had a large update from the increase in financial, cultural, and personal capital in the AGI space. An AI winter in the (early) 2020s now seems substantially less likely, and the perception of an “arms race”-style dynamic between Microsoft and Google is a game changer.
median AGI timeline | 2062 | 2055 | 2022-04-29 | First, while the performance of DALL-E 2 and PaLM is still brittle in ways slightly below my expectations, STaR “chain-of-thought” models are substantially above my expectations in terms of self-explanation and rationale. Second, I’ve been reading more of the literature on the expressiveness of neural nets, and I think people underestimate it. For example, in causality, some suggest that neural nets cannot learn causality, but I think they can learn causal language. Third, while reading the alignment literature, it seems like it will speed up AGI more than I expected, both in agent foundations (e.g., infra-Bayesianism) and human models (e.g., iterated amplification). It looks more similar to standard AI capabilities work than I expected.
AGI from transformers or the next big architecture | 81% | 77% | 2023-02-28 | Generally shortening AI timelines (see the AGI timeline explanation).
unipolar AI takeoff | 57% | 65% | 2023-02-28 | Generally shortening AI timelines (see the AGI timeline explanation).
  1. The word “times” is quite vague here. To my knowledge, a score of over 1000 on any IQ test with a mean of 100 is practically meaningless. I leave its precisification to future evaluators. I could say Transformative Artificial Intelligence (TAI) instead of superintelligence, but I think superintelligence is a little more precise. 

  2. Causal attribution in such cases is imprecise. I mean something like: If A and B both seem to be necessary conditions for C, and a 30% reduction in A is twice as likely to make C fail to occur as a 10% reduction in B, then C is twice as attributable to A as to B. 
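
As a minimal, hypothetical sketch of the attribution heuristic above (none of this code is from the original text, and the probability values are made up for illustration): the rule can be read as comparing how likely C is to fail under the specified reduction of each factor and taking the ratio.

```python
# Hypothetical sketch (not from the original text) of the attribution heuristic
# above: attribute outcome C to factors A and B in proportion to how likely C
# is to fail under the specified reduction of each factor.

def relative_attribution(p_c_fails_if_a_reduced: float,
                         p_c_fails_if_b_reduced: float) -> float:
    """Ratio of counterfactual failure probabilities, read as how many times
    more attributable C is to A than to B."""
    return p_c_fails_if_a_reduced / p_c_fails_if_b_reduced

# Example mirroring the footnote: the reduction in A makes C twice as likely
# to fail as the reduction in B, so C is twice as attributable to A as to B.
print(relative_attribution(0.4, 0.2))  # -> 2.0
```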

  3. I tend to view AGI alignment outcomes as more of a spectrum than the binary between “alignment” and “misalignment” that is usually discussed. There are many scenarios where we end up getting kind-of-what-we-want, such as a functioning human-like society of digital minds, but one oriented around the personal goals of one human being or a muzak-and-potatoes goal, such as creating as many paperclips as possible. My probability here is meant as no more than a gesture at the thing in idea-space that people have in mind, such as the people represented in Rob Bensinger’s 2-D graph. There is also a list of probabilities on a similar question in this spreadsheet.

  4. As a consciousness eliminativist, I do not think there is a fact of the matter to be discovered about the consciousness of an entity, given our vague definitions like “what it is like” to be the entity. However, we can still discuss consciousness as a vague term that will be precisified over time. In other words, I am predicting both its emergent definition and the object-level features of future artificial intelligences.

  5. Because of the likely transformative nature of AGI and superintelligence, I phrase many of my predictions about the future as conditional on no AGI so that the predictions are less dependent on AGI timeline estimates. I would estimate above 90% that technologies like cultured meat, fusion, and life extension will be developed very quickly after superintelligence. Similarly, I condition on no population collapse because I would estimate under 10% that these technologies would be developed after a population collapse to below, say, 1 billion humans (e.g., from nuclear winter or pandemic) without decades of recovery time.

  6. For example, aesthetic subjectivism entails that the Mona Lisa would not have aesthetic value if nobody had evaluated, were evaluating, or could evaluate it. “Aesthetic value is subjective” is the wording in the PhilPapers survey. Note that I see most assessments of “This problem is subjective,” “This problem is a pseudo-problem,” and “The correct answer to this problem is anti-realism” as interchangeable. See also the discussion in footnote 8, where I have published on this sort of anti-realist, eliminativist reasoning in the most detail.

  7. Why am I so certain in these “stance-dependent” and “pseudo-problem” beliefs relative to my other beliefs? First, these probabilities refer to the questions as I define them. For example, if you say, “Aesthetic realism is correct because cultures and civilizations would converge on certain aesthetic stances even on art they have never seen before,” then I would not consider this a reason to lower my probability because it makes use of a different definition. In other words, this is a verbal dispute. Moreover, I see these as logical claims like “1+1=2” that do not rely on specific empirical facts, so being wrong would require a logical error, such as a false premise of modus ponens or a flaw in modus ponens itself. See also footnote 6.

  8. By “pseudo-problem” I mean a problem that does not have the sort of answer typically assumed. Of these “pseudo-problem” or “anti-realist” topics, I have written the most on consciousness: I lay out my assessment in some detail in my conference paper, Consciousness Semanticism: A Precise Eliminativist Theory of Consciousness, though I hope to find time eventually to expand upon and share these ideas further. In particular, I would like to show how the same sorts of arguments about tensions between vague and precise semantics undercut “realist” positions (for lack of a better term) across many areas.

  9. These thought experiments can be psychologically and sociologically useful, such as by serving as tools for introspection on one’s values, but there is no right answer to questions of what personal identity is. Similarly, there are interesting social science questions here, such as whether people would generally accept teletransporters, or the AI equivalent of sending one’s program across electromagnetic waves to a distant location. See also the discussion in footnote 8, where I have published on this sort of anti-realist, eliminativist reasoning in the most detail.

  10. In other words, there is no fact of the matter as to whether free will “exists.” If it is defined as a sometimes-useful abstraction, the answer is trivially yes. If it is defined as some freedom of choice beyond normal physical processes, the answer is trivially no. In other words, debates between compatibilism, determinism, and so on are merely verbal disputes. See also the discussion in footnote 8, where I have published on this sort of anti-realist, eliminativist reasoning in the most detail.

  11. The debate between platonism and nominalism is particularly imprecise. Mathematical and logical objects are, in a sense, more compelling than other philosophical objects such as morals because they have explanatory power. Incorporating math as a fundamental part of our worldview, such as accepting mathematical proofs for claims, has been an essential step in modern science across medicine, engineering, and so forth, but it is indeterminate whether this constitutes existence in the sense debated by platonism and nominalism. The closest view to this in the literature is known as eliminativism (similar in structure to consciousness eliminativism). See also the discussion in footnote 8, where I have published on this sort of anti-realist, eliminativist reasoning in the most detail.

  12. I take the metaethical question of realism versus anti-realism as a question about reality, and therefore it has an associated probability. However, normative ethics (e.g., utilitarianism versus deontology) under anti-realist terms has no fact of the matter to be predicted. Moreover, because moral realism seems to be a fundamentally confused notion, it does not seem to elicit even conditional probabilities for normative ethics given its correctness, and even if it did, I am not sure why I would care about this moral reality and decide to account for that moral uncertainty. I endorse Brian Tomasik’s essay on the modesty argument. I also feel this way about the other well-known metaethical questions, such as non-cognitivism and error theory. They seem to map onto the strategic choice, “How do we want to use moral language?” or social science questions such as, “How do people today use moral language?” rather than philosophical questions about the nature of reality. In other words, I do not identify as a non-cognitivist or error theorist. See also the discussion in footnote 8, where I have published on this sort of anti-realist, eliminativist reasoning in the most detail.

  13. Given my views on the stance-dependence and pseudo-problem nature of most philosophical problems, the usual wording of philosophical beliefs, such as “Utilitarianism is correct,” may be misleading. Therefore, I frame such views in terms of which is the “best” answer to the problem, where “best” means the approach that would be favored after a long period of careful reflection. This still has ambiguity, such as the likely dependence of careful reflection on the initial approach, but it seems sufficient for now.

  14. Because I am a moral anti-realist, I don’t think the statement, “The correct normative ethics is utilitarianism,” has a probability in the way empirical and logical claims have probabilities, so I have not included it here, but this is meant as a similar anti-realist belief statement. By classical utilitarianism, I mean the approach of maximizing net happiness (i.e., minimizing net suffering), rather than preference utilitarianism (i.e., minimizing preference frustration), as differentiated in cases such as experience machines. I know my probability of retaining a moral view will seem too high to some, but it is largely informed by my having been a utilitarian since 2004, when I was 12, which seems to be a strong indication that I will not change in the future, at least barring brain injury.

  15. Most people I know are very confident in one-boxing, but I think they usually have in mind a more specific belief like, “Given this precise specification of the problem, one-boxing provably yields the highest reward,” whereas I’m thinking, “Across specifications where Omega is believed to be a perfect predictor, I should one-box,” which seems like the more interesting claim. For example, it may be the case that I should accept consistently getting a lower reward in Newcomb’s Paradox because it’s actually an impossible problem, and my decision rule(s) should be okay with lower rewards in impossible problems. Or at least it’s such a contrived problem that I should optimize my decision rule(s) for other problems and give this one very little weight.
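
As a minimal, hypothetical sketch of the “precise specification” reading above: assuming the standard payoffs usually attached to Newcomb’s paradox ($1,000,000 in the opaque box if the predictor foresaw one-boxing, $1,000 always in the transparent box; these figures and the code are not from the original text), the evidential expected-value calculation favors one-boxing whenever the predictor is accurate enough.

```python
# Hypothetical expected-reward sketch for Newcomb's paradox with assumed
# standard payoffs: $1,000,000 in the opaque box iff the predictor foresaw
# one-boxing, and $1,000 always in the transparent box.

BIG, SMALL = 1_000_000, 1_000

def expected_reward(one_box: bool, predictor_accuracy: float) -> float:
    if one_box:
        # The opaque box is full only if the predictor correctly foresaw one-boxing.
        return predictor_accuracy * BIG
    # Two-boxing always gets the small prize, plus the big prize only if the
    # predictor wrongly expected one-boxing.
    return SMALL + (1 - predictor_accuracy) * BIG

for accuracy in (1.0, 0.99, 0.9, 0.5):
    print(accuracy, expected_reward(True, accuracy), expected_reward(False, accuracy))
# Under these payoffs, one-boxing wins whenever accuracy exceeds about 0.5005.
```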

  16. I’m critical of most philosophical work, but I also think that good philosophy can be immensely useful. There are difficult, important conceptual problems strewn across science and the humanities: Philosophers of mind could build precise models of the brain that parsimoniously fit neuroscientific evidence, including introspection. Philosophers of science could work at the frontiers of particle and quantum physics. (See the 2018 book Lost in Math by Sabine Hossenfelder for an accessible description of how philosophical the frontiers of physics have become.) Social and political philosophers could design the systems of technological governance that we desperately need as technologies such as artificial intelligence advance. Decision theorists could help formalize the foundations of intelligence. Game theorists could help formalize the foundations of cooperation. Etc.

  17. I define logical positivism as the claim that only statements that are empirically or logically verifiable are true or false, though I also endorse positivism in other senses, such as the application of math and science to the social sciences. Logical positivism, and positivism in general, is frequently strawmanned in much the way utilitarianism is beaten up with moral thought experiments such as organ harvesting. Two common objections: (i) the specific claim of verificationism is itself not verifiable and therefore self-defeating and wrong, to which I’d briefly respond that verificationism is an instrumentally useful epistemic approach, not a truth claim (see footnote 13); and (ii) there are obviously meaningful statements (e.g., about particular events in human history) that are not verifiable (e.g., because of the unidirectional arrow of time), to which I’d briefly respond both that, to the extent these are not verifiable in practice, they should be treated as less interesting to inquire about, and that positivism doesn’t require verification in practice. For example, positivists are happy to take a statement like, “The sun will rise on January 1, 2100,” as meaningful, even though humanity may go extinct (or the person making the statement may die) before 2100 and thus never actually verify it in practice.

  18. I do not mean to endorse a realist (i.e., stance-independent) notion of decision theory, but I think decision theory is a valuable, non-trivial area of inquiry. Even if there is no objectively correct answer as to which decision theory is best, we still need decision rules for everyday life, especially when we face strange decisions like Newcomb’s Paradox or need to build AGI. And as with the many-worlds interpretation of quantum mechanics,20 I am being charitable in folding future, not-yet-formalized theories into current ones.

  19. Quantum physics (or quantum mechanics, QM) makes better predictions about the universe than classical physics, so it seems reasonable to view quantum physics as the more accurate model. What does it mean for a computer to be classical in a quantum universe? It’s vague, but I have in mind a model of the brain that makes no use of quantum algorithms. In other words, the claim, “The average human brain does not use quantum computation,” is probably true if we can predict brain output with classical bits of 0s and 1s and probably false if we can only predict brain output with qubits. However, this is complicated by our uncertainty about quantum mechanics (e.g., quantum gravity) and by the fact that we can simulate the output of quantum computers with classical computers, even though we cannot match the computational complexity of certain quantum algorithms (e.g., Shor’s integer factorization) on classical hardware. From December 2021 to March 2022, I updated from 81% to 90% against “quantum mind” theory. This is due to a medium dive into quantum computation, including taking a graduate-level math course on it, reading the main writings on quantum consciousness, and having a few (fewer than I’d like) conversations with physicists, yet finding basically no new argumentation in favor of the claim that brains use quantum computation. All the arguments seem reducible to: (i) consciousness feels unitary, which jibes better with the continuity of the unit ball of quantum amplitudes than with the binary 0s and 1s of classical computing; (ii) intelligence feels intuitive or incomputable, or as Penrose (1989) puts it, “one needs insights, not just another algorithm” to solve things like the halting problem and adjudicate truth in Gödel’s incompleteness theorems; and (iii) in the last chapter of Penrose (1989), a strange argument about quantum consciousness and time based on psychological experiments showing we react more quickly than our conscious awareness. These arguments seem very weak. The main reasons I haven’t gone above 90% are arguments I would add myself: (i) general intelligence is still very hard to imagine with current models, so we should be very uncertain about its nature; (ii) QM seems much closer to “reality” than classical mechanics, so a lot of processes may really be QM, and we have a lot to learn about QM (e.g., quantum gravity); and (iii) it seems very hard to restrict qubit entanglement with the brain’s “wetware,” but messier forms of quantum computation might contain sufficient signal for a minimal evolutionary advantage, and evolution is great at magnifying signal from noisy processes.

  20. Current interpretations of quantum mechanics seem quite broad. I doubt the many-worlds interpretation as discussed in 2021 has all the important content of the interpretation favored by human descendants in the far future, if they exist, but I would still categorize an interpretation with quantum branching and no wavefunction collapse as many-worlds. This is similar to my approach to categorizing future precisifications of decision theory.18

  21. Chapter 5 of the 2022 book Reality+ by David Chalmers is a good summary of the arguments for and against the simulation hypothesis. My most notable disagreements with Chalmers’ presentation are that (i) I approach the simulation hypothesis more as an empirical claim, on which we should have some probability from 0 to 1 based on various pieces of evidence with various weights (e.g., anthropic arguments could suggest a probability of 50%), rather than, as Chalmers approaches it, a set of premises and valid logical steps from those premises, and (ii) I put less weight than I think Chalmers would on the evidence that “consciousness” provides, because Chalmers is a dualist who actually coined the term “hard problem,” a framing I think is mistaken.

  22. I believe the many-worlds interpretation entails only finite space, matter, and energy because while the number of quantum branches is vast, the number of quantum branching events per unit of time is limited by the number of entanglements, which is finite. I am very uncertain about this because no physicist I have asked has understood the question and been confident in the answer. 

  23. In other words, if someone knows more about something than another person, they should be able to make better predictions about that thing. By “knowledge,” I just mean “phenomena that upon careful reflection would be considered knowledge.” This is a vague belief in the abstract, but in practice, I think it’s quite important. For example, much of the humanities and social sciences does not seem to be creating real knowledge, such as the extensive debates in political theory on “What is legitimate rule?” (Weber 1922) and “What type of power does the state wield?” (Mann 1984). Of course, taxonomies and other intellectual infrastructure can be useful to facilitate future knowledge production, but many academics overdo that and fail to appreciate this distinction. This is also an issue in some natural sciences, though much less so than elsewhere. For example, Deutsch (2011) suggests that the quality of a theory is not in its prediction but in its explanation. However, Deutsch goes on to describe explanation in a way that merely seems to be generalized prediction, such as making accurate predictions even in scenarios we haven’t considered yet or, in the case of fundamental physics, in scenarios that don’t even seem possible for humans to observe within the known laws of physics.

  24. I have relatively high confidence in the arrival of virtual and augmented reality because I think there are relatively easy predictions to make about personal technology, such as the devices we have in our pockets and homes. I think it was relatively easy to predict in the 1970s, and perhaps earlier, that smartphones would be developed and popularized, in the sense of small devices that combine many uses previously restricted to separate devices (e.g., calculators, cameras, gaming, email). Of course, claims about what could have been predicted should be taken with many grains of salt.

  25. There are many variations on the efficient market hypothesis (EMH), such as which markets are included and exactly how efficient those markets are claimed to be. I am referring mostly to large markets in which money is exchanged, not more abstract markets such as “people trying to help others” or “publishing scientific research.” I think there is decent evidence for some EMH exceptions, such as the underinvestment in cryptocurrency before 2020, cryptocurrency arbitrage opportunities, overinvestment in hyped stocks (e.g., those with specious technological goals or virality on Reddit), and slow reactions to COVID-19 in January and February 2020. I also think market manipulation by large actors is the most prevalent form of inefficiency. Another way to operationalize one’s belief in the EMH is to ask, “What return can a US citizen with a small investment expect per year with no more risk than the US stock market?” which I would ballpark at 25% (conditional on no AGI), somewhat higher than the stock market itself to account for a broad inefficiency such as underestimating the commercial potential of AI. Significantly higher expected returns are possible with more risk.