The good, the bad, and the appropriately under-powered

Many quantitative studies are good — they employ appropriate methodology, register properly specified, empirically testable hypotheses before data collection, and then collect sufficient data transparently and analyze it appropriately. Others fail at one or more of these hurdles. But a third category also exists: the appropriately under-powered. Despite doing everything else right, many properly posed questions cannot be answered with the potentially available data.

Two examples will illustrate this point. It is difficult to ensure the safety and efficacy of treatments for sufficiently rare diseases in the typical manner, because the total number of cases can be insufficient for a properly powered clinical trial. Similarly, it is difficult to answer a variety of well-posed, empirical questions in political science, because the number of countries to be used as samples is limited.
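To make the rare-disease problem concrete, here is a rough sketch of the standard sample-size arithmetic for comparing two proportions; the recovery rates below are invented for illustration:

```python
from statistics import NormalDist

# Hypothetical rates: 30% recovery under standard care, 45% hoped for
# under the new treatment.
p1, p2 = 0.30, 0.45
alpha, power = 0.05, 0.80

z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
z_beta = NormalDist().inv_cdf(power)

# Normal-approximation sample size per arm for a two-proportion test.
n_per_arm = ((z_alpha + z_beta) ** 2
             * (p1 * (1 - p1) + p2 * (1 - p2))
             / (p1 - p2) ** 2)

print(round(n_per_arm))  # roughly 160 patients per arm, ~320 in total
```

If the disease in question has only a few hundred diagnosed cases worldwide, a properly powered trial of this kind is simply out of reach, which is the sense in which the question is appropriately under-powered.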

What are the options for dealing with this phenomenon? (Excepting the old, unacceptable standby of p-hacking, running multiple comparisons, and so on, then hoping the journal publishes the study anyway.) I think there are three main ones, none of which is particularly satisfactory.

  1. Don’t try to answer these questions empirically; use other approaches.
    If data cannot resolve the problem to the customary “standard” of p<0.05, then use qualitative approaches or theory-driven methods instead.
  2. Estimate the effect and show that it is statistically non-significant.
    This will presumably be interpreted as the effect being practically small or insignificant, despite the fact that that isn’t how p-values work.
  3. Do a Bayesian analysis with comparisons of different prior beliefs to show how the posterior changes.
    This will not alter the fact that there is too little data to convincingly show an answer, and is difficult to explain. Properly uncertain prior beliefs will show that the answer is still uncertain after accounting for the new data, but will perhaps shift the estimated posterior slightly to the right, and narrow the distribution.
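A minimal sketch of option 3, with invented numbers, shows the problem directly: with only a handful of observations, the posterior remains largely a reflection of the prior.

```python
# Conjugate Beta-Binomial update: a Beta(a, b) prior plus k successes
# in n trials yields a Beta(a + k, b + n - k) posterior.
def posterior_mean(a, b, k, n):
    return (a + k) / (a + b + n)

k, n = 7, 10  # a hypothetical small study: 7 successes in 10 trials

priors = {
    "flat Beta(1,1)":        (1, 1),
    "skeptical Beta(2,2)":   (2, 2),
    "pessimistic Beta(2,8)": (2, 8),
}
for name, (a, b) in priors.items():
    print(f"{name}: posterior mean {posterior_mean(a, b, k, n):.3f}")
```

The three posterior means (about 0.67, 0.64, and 0.45) disagree substantially, because ten observations cannot wash out the differences between the priors; that is exactly the unsatisfying result described above.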

At the end of the day, we are left with the unsatisfying conclusion that some questions are not well suited to this approach, and when being honest we should not claim that the scientific or empirical evidence should shift people’s opinions much. That’s OK.

Unless, perhaps, someone out there has clearer answers for me?


A cruciverbalist’s introduction to Bayesian reasoning

Mathematical methods inspired by an eighteenth century minister (8)

“Bayesian” is a word that has gained a lot of attention recently, though my experience tells me most people aren’t exactly sure what it means. I’m fairly confident that there are many more crossword-puzzle enthusiasts than Bayesian statisticians — but I would also note that the overlap is larger than most would imagine. In fact, anyone who has ever worked on a crossword puzzle has employed Bayesian reasoning. They just aren’t (yet) aware of it. So I’m going to explain both how intuitive Bayesian thinking is, and why it’s useful, even outside of crosswords and statistics.

But first, who was Bayes, what is his “law” about, and what does that mean?

Sound of a Conditional Reverend’s Dog (5)

“Bayes” of statistical fame is the Reverend Thomas Bayes. He was a theologian and mathematician, and the two works he published during his lifetime dealt with the theological problem of happiness, and with a defense of Newton’s calculus — neither of which concerns us. His single posthumous work, however, was what made him a famous statistician. The original title, “A Method of Calculating the Exact Probability of All Conclusions founded on Induction,” clearly indicates that it’s meant to be a very inclusive, widely applicable theorem. It was also, supposedly, a response to a theological challenge posed by Hume — the claim that miracles didn’t happen.

Wonders at distance travelled without vehicle upset (8)

“Miracles,” Hume’s probabilistic argument said, are improbable, but incorrect reports are likely — so, the argument goes, it is more likely that the reports are incorrect than that the miracle occurred. This way of comparing probabilities isn’t quite right, statistically, as we will suggest later. But Bayes didn’t address this directly at all.

Taking a risk bringing showy jewelry to school (8)

“Gambling” was a hot topic in eighteenth-century mathematics, and Bayes tried to answer an interesting question: when you see something happen several times, how can you figure out, in general, the probability of it occurring? His example was about throwing balls onto a table — you aren’t looking, and a friend throws the first ball. After this, he throws more, each time telling you whether the ball landed to the left or right of the first ball. After doing this a few times, you still haven’t seen the table, but want to know how likely it is that the next ball will land to the left of that original ball.

To answer this, he pointed out that you get a bit more information about the answer every time a ball is thrown. After the first ball, for all you know the odds are 50/50 that the next one will be on either side. But as more balls are thrown, you get a better and better sense of what the answer is. After you hear the next five balls all land to the left, you’ve become convinced that the next ball is more likely to land to the left than to the right. That’s because the probabilities are not independent — each answer gives you a little bit more information about the odds.
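Under Bayes’s setup (the first ball lands uniformly at random), the answer has a famously simple form: after hearing that k of n balls landed to the left, the probability that the next one lands left is (k + 1)/(n + 2), the rule of succession. A short sketch, with a Monte Carlo check:

```python
import random
from statistics import mean

random.seed(0)

k = n = 5  # you hear that five balls in a row landed to the left
exact = (k + 1) / (n + 2)  # rule of succession: 6/7, about 0.857

# Monte Carlo check: with the first ball's position p uniform on [0, 1],
# P(next left | k of n left) = E[p^(k+1)] / E[p^k], here k = n = 5.
ps = [random.random() for _ in range(200_000)]
estimate = mean(p ** 6 for p in ps) / mean(p ** 5 for p in ps)

print(f"exact {exact:.3f}, simulated {estimate:.3f}")  # both near 0.857
```

After five lefts in a row, the odds of another left are about 6 to 1, not 50/50; each report really did carry information.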

But enough math — I’m ready to look at a crossword.

Could wine be drunk by new arrival? (6)

“Newbie” is how I’d prefer to describe my ability with crossword puzzles. But as soon as I started, I noticed a clear connection. The method of reasoning I practice and endorse as a decision theorist is nearly identical to the method used by people in this everyday amusement. So I’ll get started on filling in (only one part of) the crossword I did yesterday, and we’ll see how my Bayesian reasoning works. I start by filling in a few easy answers, and I’m pretty confident in all of these: 6 Down — Taxing mo. for many, 31 Across — Data unit, 44 Across — “Scream” actress Campbell.

The way I’ve filled these in so far is simple — I picked answers I thought were very likely to be correct. But how can I know that they are correct? Maybe I’m fooling myself. The answer is that I’ve done a couple crosswords before, and I’ve found that I’m usually right when I’m confident, and these answers seem really obvious. But can I apply probabilistic reasoning here?

Distance into which vehicle reverses ___ that’s a wonder (7)

“Miracles,” or anything else, according to Reverend Bayes, should follow the same law as thrown balls. If someone is confident, that is evidence, of a sort. Stephen Stigler, a historian of statistics, argues that Bayes was implying an important caveat to Hume’s claim — the probability of hearing about a miracle increases each time you hear another report of it. That is, these two facts are, in a technical sense, not independent — and the more independent accounts you hear, the more convinced you should be.

But that certainly doesn’t mean that every time a bunch of people claim something outlandish, it’s true. And in modern Bayesian terms, this is where your prior belief matters. If someone you don’t know well at work tells you that they golfed seven under par on Sunday, you have every reason to be skeptical. If they tell you they golfed seven over par, you’re a bit less likely to be skeptical. How skeptical, in each case?

We can roughly assess your degree of belief — if a friend of yours attested to the second story, you’d likely be convinced, but it would take several people independently verifying the story for you to have a similar level of belief in the first. That’s because you’re more skeptical in the first place. We could try to quantify this and introduce Bayes’ law formally, but there’s no need to bring algebra into this essay. Instead, I want to think a bit more informally — because I can assess something as more or less likely without knowing the answer, without doing any math, and without assigning it a number.

When you hear something outlandish, your prior belief is that it is unlikely. Evidence, however, can shift that belief — and enough evidence, even circumstantial or tentative, might convince you that the claim is plausible, probable, or even very likely. And in a way it doesn’t matter what your prior is, if you can accumulate enough different pieces of trustworthy evidence. And that leads us to how I can use the answers I filled in as evidence to help me make further plausible guesses.
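For readers who do want a number or two: in the odds form of Bayes’ law, each independent piece of evidence multiplies your prior odds by a likelihood ratio. A toy version of the golf story, with every number invented for illustration:

```python
# Posterior odds = prior odds x the product of likelihood ratios.
def update(prior_odds, likelihood_ratios):
    for lr in likelihood_ratios:
        prior_odds *= lr
    return prior_odds

def odds_to_prob(odds):
    return odds / (1 + odds)

# An outlandish claim (seven under par): prior odds of 1 to 999.
# Suppose each independent witness is 20x likelier to report it if true.
posterior_odds = update(1 / 999, [20, 20, 20])
print(round(odds_to_prob(posterior_odds), 3))  # about 0.889
```

Three independent witnesses at 20:1 each overwhelm even 999:1 prior skepticism, which is the sense in which enough trustworthy evidence makes the prior stop mattering.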

I look at some of the clues I didn’t immediately figure out. I wasn’t sure what 6 Across — Completely blows away, would be; there are lots of 4-letter words that might fit the clue. Once I get the A, however, I’m fairly confident in my guess, conditional on this (fairly certain) new information. I look at 31 Down — Military Commission (6), but I can’t think of any that start with a B. I see 54 Across — Place for a race horse, and I’m unsure — there are a few words that fit — it could be “first,” “third,” “fifth,” “sixth,” or “ninth,” and I have no reason to think any one more likely than another. So I look for more information, and notice 54 Down — It might grow to be a mushroom (5, offscreen). “Spore” seems likely, and I can see that this means “sixth” works — so I fill in both.

At this point, I can start filling in a lot more of the puzzle, and the pieces are falling into place — each word I figure out that fits is a bit more evidence that the others are correct, making me more confident, but there are a few areas where I seem stuck.

Being stuck is evidence of a different sort — it probably means at least one of two things — either I have something incorrect, or I’m really bad at figuring out crosswords. Or, of course, both.

At this point I start revisiting some of my earlier answers, ones I was pretty confident about until I got stuck. I’m still pretty confident in 39 Down — Was at one time, but ___ now. “Isn’t” is too obvious an answer to be wrong, I think. On the other hand, 38 Down — A miscellany or collection, has me stumped, and two Is in a row also seem strange. 37 Down — Small, fruity candy, is also frustrating me; I’m not such an expert in candy, but I’m also not coming up with anything plausible. So I look at 50 Across — A tiny part of this?, again, and re-affirm that “Bit” seems like a good fit. I’m now looking for something that can give me more information, so I rack my brains, and 36 Across — Ho Chi Minh’s capital, comes to me: Hanoi. I’m happy that 39 Down is confirmed, but getting nervous about the rest.

I decided to wait, and look elsewhere, filling in a bit more where I could. My progress elsewhere is starting to help me out.

Now, I need to re-evaluate some earlier decisions and update my beliefs again. It has become a bit more complex than evaluating single answers — I need to consider the joint probability of several different things at once. I’ll unpack how this relates to Bayesian reasoning afterwards, but first, I think I made a mistake.

I was marginally confident in 50 Across — A tiny part of this? as “bit”, but now I have new evidence. I’m pretty sure Nerb isn’t a type of candy, but “Nerd” seems to fit. I’m not sure if they are fruity, so I’m not confident, and I’m still completely at a loss on 38 Down — A miscellany or collection. That means I need to come up with an alternative for 50 Across; “Dot” seems like an unlikely option, but it fits really well. And then it occurs to me: a dot is a little bit of the question mark. That’s an annoying answer, but it seems a lot more likely than that “Nerb” is a type of candy. And I’m not sure what Olio is, but there’s really nothing else that I can imagine fitting. And there are plenty of words I don’t know. (As I found out later, this is one of them.)

At first, I had a high confidence that “Bit” was the best answer for 50 Across — I had a fairly strong prior belief, but I wasn’t certain. As evidence mounted, I started to re-assess. Weak evidence, like the strange two Is in a row, made me start to question the assumption that I was right. More weak evidence — remembering that there is a candy of some sort called Nerds, and realizing that “Dot” was a potential answer, made me revise my opinion. I wasn’t strongly convinced that I had everything right, but I revised my belief. And that’s exactly the way a Bayesian approach should work; you’re trying to figure out which possibility is worth betting on.

That’s because all of probability theory started with a simple question that a certain gambler asked Blaise Pascal: how do we split the pot when a game gets interrupted? And historians who don’t think Bayes was trying to formulate a theological rebuttal to Hume suggest that he was really responding to a question posed by de Moivre — from whose book he may have learned probability theory. This betting view of probability is what we need in order to figure out why I’d pick “Dot” over “Bit” — even though I think it’s a stupid answer. But before I get there, I’ve made a bit more progress — I’m finished, except for one little thing.

31 Down — Military Commission. That’s definitely a problem — I’m absolutely sure Brevei isn’t the right answer, and 49 Down, offscreen, is giving me trouble too. The problem is, I listed all the possible answers for 54 Across — Place for a race horse, and the only one that started with an “S” was sixth.

Conviction … or what’s almost required for a conviction (9)

“Certainty” can be dangerous, because if something is certain, almost by definition, it means nothing can convince me otherwise. It’s easy to be overconfident, but as a Bayesian, it’s dangerous to be so confident that I don’t consider other possibilities — because I can’t update my beliefs! That’s why Bayesians, in general, are skeptical of certainty. If I’m certain that my kid is smart and doing well in school, no number of bad grades or notes from the teacher can convince me to get them a tutor. In the same way, if I’m certain that I know how to get where I’m going, no amount of confused turns, circling, or patient wifely requests will convince me to ask for directions. And if I’m certain that “Place for a race horse” is limited to a numeric answer, no number of meaningless words like “Brevei” can change my mind.

High payout wagers (9)

“Perfectas” are bets placed on a horse race, predicting the winner and second-place finisher together. If you get them right, the payoff can be really significant — much more than bets on horses to win or to place. In fact, there are lots of weird betting terms in horse racing, and by excluding them from consideration, I may have been hasty in filling out “sixth.” My assumption of having compiled an exhaustive list of terms was premature. Instead, I need to reconsider once again — and that brings us to why, in a probabilistic sense, crosswords are hard.

Disreputable place for a smoke? (5)

“Joint” probabilities are those that relate to multiple variables. And when solving the crossword, I’m not just looking to answer each clue; I’m looking to fill in the puzzle — it needs to solve all of the clues together. Just as figuring out a Perfecta is harder than picking the right horse, putting multiple uncertain questions together is where joint probabilities show up. But it’s not hopeless; as you figure out more of the puzzle, you reduce the remaining uncertainty. It’s like getting to place a Perfecta bet after seeing 90% of the race; you have some pretty good ideas about what can and can’t happen.
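A toy calculation (with invented confidences) shows why the joint problem is harder than any single clue. If ten answers were each independently 95% likely to be right:

```python
from math import prod

confidences = [0.95] * 10  # ten answers, each 95% likely on its own

# Under an (unrealistic) independence assumption, the probability that
# the whole grid is right is the product of the individual confidences.
joint = prod(confidences)
print(round(joint, 3))  # about 0.599, i.e. under 60%
```

Crossword answers aren’t actually independent, of course: every crossing letter that fits is evidence for both words at once, which is what makes the accumulating fills so valuable.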

Similarly, Bayesians, in general, collect evidence to constrain what they think is and isn’t probable. Once enough balls have been thrown to the left of that first one, you get pretty sure the odds aren’t 50–50. The prerequisite for getting the right answer, however, is being willing to reconsider your beliefs — because reality doesn’t care what you believe.

And the reality is that 31 Down is Brevet, so I need an answer to 54 Across — Place for a race horse that starts “St”. And that’s when it hit me — sometimes, I need to simply be less certain I know what’s going on. The race horse isn’t running, and there are no bets. It’s in a stall, waiting patiently for me to realize I was confused.

A Final Note

I’d note three key lessons that Bayesians can learn from crosswords, since I’ve already spent pages explaining how Crossworders already understand Bayesian thinking. And they are lessons for life, ones that I’d hope crossword enthusiasts can apply more generally as well.

  1. The process of explicitly thinking about what you are uncertain of, and noticing when something is off, or you are confused, is useful to apply even (especially!) when you’re not doing crossword puzzles.
  2. Evaluating how sure you are, and wondering if you are overconfident in your model or assumptions, would have come in handy to those predicting the 2016 election.
  3. Being willing to actually change your mind when presented with evidence is hard, but I hope you’d rather have a messy crossword than an incorrectly solved one.

A Postscript for Pedants

Scrupulously within the rules, but not totally restrictive

“Strict” Bayesians are probably annoyed about some of this — at no point in the process did I get any new evidence. No one told me about any new balls thrown; I only revised my belief based on thinking. A “Real Bayesian” starts with all the evidence already available, and only updates when new evidence comes in. For a non-technical response, it’s sufficient to note that computation and thought take time, and although the brain roughly approximates Bayesian reasoning, the process of updating is iterative. And for a technical version of the same argument, I’ll let someone else explain that there are no real Bayesians. (And thanks to Noah Smith for that link!)

The crossword clues were a combination of info from, and my own inventions.
The crossword is an excerpt from Washington Post Express’s Daily Crossword for January 11th, 2017, available in full on Page 20, here:

“Bearish” on Z-Cash

I recently made my 2017 predictions, and was asked why I was “bearish” on Z-Cash. I predicted a 25% chance that the price would rise, and a 75% chance that the market cap would do so, over the course of the year.

I’m not sure this is really bearish. First, after 2 months, there are currently about 375,000 ZEC minted, of which 300,000 are in circulation. (Block 40,000.) I’m not sure of the exact schedule — the block reward should only halve after 840,000 blocks, well over a year away — but in 12 months, there should be about 7 times as many coins. That means that, by the end of the year, at current prices, the market cap would move from $20m to closer to $140m. So the market cap would need to increase significantly just for the price to stay stable.
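The arithmetic behind that claim, as a sketch using the approximate figures above:

```python
coins_now = 375_000        # ZEC minted after ~2 months (block 40,000)
market_cap_now = 20e6      # roughly $20m at the time

price_now = market_cap_now / coins_now  # about $53 per ZEC
coins_in_a_year = 7 * coins_now         # ~7x as many coins in 12 months
implied_cap = coins_in_a_year * price_now

# Price held constant, the cap must grow ~7x just to stand still.
print(round(price_now), round(implied_cap / 1e6))  # ~53 ($/ZEC), ~140 ($m)
```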

Is this implausible? No. But it would probably involve cannibalizing much of the dark-web market share from Monero, (and darkweb markets won’t necessarily switch to new coins quickly,) or a speculative price bubble that extends through the end of the year. I am bullish on Z-Cash over the longer term, but it’s riding on speculation now, and I’d be a little bit surprised if it managed to attract that large a market cap within the year. Because at some point, as more coins are generated and speculators stop pouring in money, the fundamentals take over from the speculators. Perhaps only 25% was overconfident — but I’m definitely not certain of an increase.

Predictions for 2017

(and onwards, plus a bonus review!)

Now’s the time of year where people make specific-sounding, untestable predictions for the year ahead, right?

Well, here’s a departure from that norm.

These were made 1/3/2017. I may add predictions, but I’ll note that, and the date they were added. And I’ll commit to not altering anything, though I may update to include revised predictions, with notes. (Added my predictions on SSC’s predictions on 1/9.)


Annualized GDP Growth < 4% — 90%

Annualized GDP Growth < 2% — 40%

Annualized GDP Growth < 0% — 5%

US GDP growth lower than in 2015: 60%

Dow Jones will not fall > 10% this year: 80%

Domestic US Politics:

Repeal of Obamacare, including the individual mandate OR minimum coverage rules — 80%

Conditional on above, Republicans have replacement policies already passed — 5%

International Relations:

These are basically just Slatestarcodex 2016 Predictions, 2017 redux: (Numbers to show which were skipped.)

1. US will not get involved in any new major war with death toll of > 100 US soldiers: 90%
2. North Korea’s government will survive the year without large civil war/revolt: 98%
4. No terrorist attack in the USA will kill > 100 people: 95%
5. …in any First World country: 80%
7. Israel will not get in a large-scale war (ie >100 Israeli deaths) with any Arab state: 95%
9. No interesting progress with Gaza or peace negotiations in general this year: 95%
13. ISIS will control less territory than it does right now: 90%
15. No major civil war in Middle Eastern country not currently experiencing one: 65%
18. No country currently in Euro or EU announces plan to leave: 95%
35. No major war in Asia (with >100 Chinese, Japanese, South Korean, and American deaths combined) over tiny stupid islands: 99%
45. SpaceX successfully launches a reused rocket: 80%
48. No major earthquake (>100 deaths) in US: 99%
49. No major earthquake (>10000 deaths) in the world: 80%


Bitcoin will end the year higher than $1000: 40%
(But last year I was surprised — maybe I’m not well calibrated here. Or maybe something about markets, irrationality, and solvency.)

Lightning Networks / Segwit will be deployed: 65%

There will be a major bug found and exploited in LN/Segwit: 60%

Ethereum will be above $10: 70%

Ethereum will be above $20: 30%

Z-Cash will be above $50 (Price on 1/3): 25%

Z-Cash Market Cap will be above $18m (Value on 1/3): 75%

Longer Term Predictions:

There will be no war involving more than one of the US, Russia, China, or any member of the EU on different sides started in 2017 (Measure: Total eventual casualties above 100) — 98%

There will be no war involving more than one of the US, Russia, China, or any member of the EU on different sides started before 2020 (Measure: Total eventual casualties above 100) — 95%

The Republicans will maintain control of the House of Representatives in 2018 elections: 40%

There will be a Republican primary challenger getting >10% of the primary vote in 2020 (conditional on Trump running) — 70%

The stock market will go down under President Trump (Conditional on him having a 4 year term, Inauguration-Inauguration) — 60%

Slatestarcodex 2017 Predictions:

(Added 1/9/2017. I have left out ones I already forecast above, from last year. His estimate in Parentheses)

5. Assad will remain President of Syria: 90% (80%)
7. No major intifada in Israel this year (ie > 250 Israeli deaths, but not in Cast Lead style war): 70% (80%)
9. No Cast Lead style bombing/invasion of Gaza this year: 75% (90%)
10. Situation in Israel looks more worse than better: 85% (70%)
11. Syria’s civil war will not end this year: 80% (60%)
13. ISIS will not continue to exist as a state entity in Iraq/Syria: 65% (50%)
15. Libya to remain a mess: Unclearly defined — 50% to 100% (80%)
16. Ukraine will neither break into all-out war or get neatly resolved: [No guess — unsure if a cemented status quo with recognition is “resolved” — which seems likely.] (80%)
17. No major revolt (greater than or equal to Tiananmen Square) against Chinese Communist Party: 99% [ China doesn’t let things get to that point any more. They intervene before it can get that large; it’s either toppled gov’t, or nothing like this.] (95%)
19. No exchange of fire over tiny stupid islands: 80% (90%)
20. No announcement of genetically engineered human baby or credible plan for such: 90% [I don’t follow this, so I’m mostly relying on Scott’s estimate] (90%)
21. EMDrive is launched into space and testing is successfully begun: 50% [No date announced, and there is a launch backlog for cubesats in the US. These things don’t happen quickly.] (70%)
22. A significant number of skeptics will not become convinced EMDrive works: 95% — if it launches, this will become clear quickly. (80%)
23. A significant number of believers will not become convinced EMDrive doesn’t work. [If it launches, it’s game over.] Conditional: 10% if launched before November, 90% if not. (60%)
26. Keith Ellison chosen as new DNC chair: 80% (70%)

27. No country currently in Euro or EU announces new plan to leave: [Excluding England, which has no actual plan, 90%] (80%)
28. France does not declare plan to leave EU: 97% (95%)
29. Germany does not declare plan to leave EU: 99% (99%)
30. No agreement reached on “two-speed EU”: 90% (80%)
31. The UK triggers Article 50: 80% (90%)
32. Marine Le Pen is not elected President of France: [Conditional on her running] 50% (60%)
33. Angela Merkel is re-elected Chancellor of Germany: 70% (60%)
34. Theresa May remains PM of Britain: 80% (80%)
35. Fewer refugees admitted 2017 than 2016: [To Europe? Unsure. But the flow is slowing, it seems.] 95% (95%)

37. Oil will end the year higher than $50 a barrel: [Brent? I guess.] 70% (60%)
38. …but lower than $60 a barrel: 50% (60%)
39. Dow Jones will not fall > 10% this year: 60% (50%)
40. Shanghai index will not fall > 10% this year: 65% (50%)

41. Donald Trump remains President at the end of 2017: 95% (90%)
42. No serious impeachment proceedings are active against Trump: 90% (80%)
43. Construction on Mexican border wall (beyond existing barriers) begins: 50% (80%)
44. Trump administration does not initiate extra prosecution of Hillary Clinton: 95% (90%)
45. US GDP growth lower than in 2016: 50% (60%)
46. US unemployment to be higher at end of year than beginning: 40% (60%)
47. US does not withdraw from large trade org like WTO or NAFTA: 80% (90%)
48. US does not publicly and explicitly disavow One China policy: 80% (95%)
49. No race riot killing > 5 people: 80% (95%)
50. US lifts at least half of existing sanctions on Russia: [How do you measure half? Also, congress does this, not president. So…] 60%[?] (70%)
51. Donald Trump’s approval rating at the end of 2017 is lower than fifty percent: 90% (80%)
52. …lower than forty percent: [It’s only 41% now, and they typically drop — and the big exception is G.W.B., after 9/11.] 75% (60%)

And I’m not predicting his blog traffic or his life. But…
 60. Less Wrong renaissance attempt will seem less (rather than more) successful by end of this year: [I’m a bit hopeful — but I’ll follow his judgement on what happened…] 80% (90%)

Bonus: Scott’s accuracy / calibration will be about as good as it was last year, not materially worse [I’m gonna eyeball this one, I can’t think of a good metric.]: 90%

Review of Last Year’s predictions:

From my predictions on Predictionbook — I was underconfident, overall, but I’m glad I didn’t revise most of the political predictions in the weeks before the election; that would have fixed me good.

Of [50–60%) predictions, I got 1 right and 1 wrong, for a score of 50%
Of [60–70%) predictions, I got 6 right and 2 wrong, for a score of 75%
Of [70–80%) predictions, I got 5 right and 2 wrong, for a score of 71%
Of [80–90%) predictions, I made no predictions. (Hmmm…)
Of [90–95%) predictions, I got 6 right and 0 wrong, for a score of 100%
Of [95–99%) predictions, I got 9 right and 0 wrong, for a score of 100%
Of [99+%] predictions, I got 4 right and 0 wrong, for a score of 100%
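The scores above are just per-bucket hit rates; computed directly from the right/wrong counts:

```python
# (right, wrong) counts per confidence bucket, from the list above.
buckets = {
    "[50-60%)": (1, 1),
    "[60-70%)": (6, 2),
    "[70-80%)": (5, 2),
    "[90-95%)": (6, 0),
    "[95-99%)": (9, 0),
    "[99+%]":   (4, 0),
}

scores = {label: 100 * right / (right + wrong)
          for label, (right, wrong) in buckets.items()}
for label, score in scores.items():
    print(f"{label}: {score:.0f}%")
```

A bucket beats its own label (e.g. 75% right on [60–70%) predictions) when the forecaster is underconfident, which matches the self-assessment above.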

Assessed probability is noted after each prediction. A prediction is marked (Correct) if the stated outcome occurred when I assigned it more than 50%, or failed to occur when I assigned it less than 50%. The base rates are being ignored, so these marks are very unfairly biased in my favor, since I said many very rare events wouldn’t happen.

Donald Trump to be the Republican nominee for President in 2016. 60% (Correct)

Democrats win undivided control of government in 2016. 12% (Correct)

The GOP will keep the senate in 2017. 70% (Correct)

The Bitcoin reward halvening will increase its value by over 25% versus USD within a month of the event. 5% (Correct)

Zika will be found not to be the (sole, proximate) cause of microcephaly for the recent cases in Brazil. 25% (Calling this still unclear.)

No single GOP candidate will get >50% of delegates, and a brokered convention will ensue. 10% (Correct)

The S&P 500 Index will close below 1600 at the end of 2016. 10% (Correct)

The S&P 500 will close below 1750 at the end of 2016. 40% (Correct)

The S&P 500 Index will close below 2000 at the end of 2016. 75% (Incorrect)

Federal Reserve will cut the Federal Funds rate to below zero levels (i.e. a range that includes below zero levels) at or before the March 2016 FOMC meeting. 1% (Correct)

Federal Reserve will cut the Federal Funds rate to the [0, 0.25%] range at or before the March 2016 FOMC meeting. 5% (Correct)

There will be at least one more gravitational wave detection announced in 2016. 70% (Correct)

There will be at least two more gravitational wave detections announced in 2016 69% (Incorrect)

Obama will successfully appoint Scalia’s appointment. 45% (Correct)

North Korea’s government will survive the year 2016 without large civil war/revolt. 95% (Correct)

The 2016 Iowa GOP Caucus result does not get overturned/revised such that Trump takes first place. 99% (Correct)

A US state (other than Florida) will declare at least a county-wide state of emergency due to the Zika virus before June of 2016. 50% (Incorrect)

Jeb Bush will be the next President-elect. 1% (Correct)

AlphaGo wins 5–0 in march. 40% (Correct)

AlphaGo defeats Lee Sedol in 3/5 or more games in March. 70% (Correct)

Bernie Sanders will win the New Hampshire Primary. 90% (Correct)

No 2016 Presidential candidate receives a majority of Electoral votes. 4% (Correct)

Lee Sedol will win match against AlphaGo but lose at least one game 27% (Correct)

No major earthquake (>100 deaths) in US in 2016. 99% (Correct)

No major earthquake (>10000 deaths) in the world in 2016. 75% (Correct)

SpaceX successfully launches a reused rocket in 2016. (Payload reaches outer space, i.e. 100km) 60% (Incorrect)

US unemployment to be lower at end of 2016 than beginning. 75% (We’ll see — probably.)

US GDP growth in 2016 lower than in 2015. 60% (We’ll see.)

Dow Jones Industrial Average will not fall > 10% over the course of 2016. (It will end above 15,434) 60% (Correct)

Oil will end 2016 lower than $40 a barrel. 40% (Correct)

Bitcoin will end 2016 higher than $500. 25% (Incorrect.)

ISIS will control less territory at the end of 2016 than it does right now. 60% (Correct)

Israel will not get in a large-scale war (ie >100 Israeli deaths) with any Arab state (Excluding the PA/Hamas, not excluding IS or Hezbollah) in 2016. 97% (Correct)

Greece will not announce it’s leaving the Euro in 2016. 90% (Correct)

North Korea’s government will survive 2016 without large civil war/revolt. 90% (Correct)

No terrorist attack in the in the US will kill > 100 people in 2016. 95% (Correct)

US will not get involved in any new major war with death toll of > 100 US soldiers. 75% (Correct)

Rubio mops up on Super Tuesday, taking more than 60% of the primaries. 4% (Correct)

Brokered convention. 4% (Correct)

2016 Election is Clinton vs Rubio. 10% (Correct)

Democrats win House of Representatives in 2016. 5% (Correct)