Predictions for 2017

(and onwards, plus a bonus review!)

Now’s the time of year where people make specific-sounding, untestable predictions for the year ahead, right?

Well, here’s a departure from that norm.

These were made 1/3/2017. I may add predictions, but I’ll note that, and the date they were added. And I’ll commit to not altering anything, though I may update to include revised predictions, with notes. (Added my predictions on SSC’s predictions on 1/9.)

Economics:

Annualized GDP Growth < 4% — 90%

Annualized GDP Growth < 2% — 40%

Annualized GDP Growth < 0% — 5%

US GDP growth lower than in 2015: 60%

Dow Jones will not fall > 10% this year: 80%

Domestic US Politics:

Repeal of Obamacare, including the individual mandate OR minimum coverage rules — 80%

Conditional on above, Republicans have replacement policies already passed — 5%

International Relations:

These are basically just Slatestarcodex 2016 Predictions, 2017 redux. (The numbering is preserved to show which were skipped.)

1. US will not get involved in any new major war with death toll of > 100 US soldiers: 90%
2. North Korea’s government will survive the year without large civil war/revolt: 98%
4. No terrorist attack in the USA will kill > 100 people: 95%
5. …in any First World country: 80%
7. Israel will not get in a large-scale war (ie >100 Israeli deaths) with any Arab state: 95%
9. No interesting progress with Gaza or peace negotiations in general this year: 95%
13. ISIS will control less territory than it does right now: 90%
15. No major civil war in Middle Eastern country not currently experiencing one: 65%
18. No country currently in Euro or EU announces plan to leave: 95%
35. No major war in Asia (with >100 Chinese, Japanese, South Korean, and American deaths combined) over tiny stupid islands: 99%
45. SpaceX successfully launches a reused rocket: 80%
48. No major earthquake (>100 deaths) in US: 99%
49. No major earthquake (>10000 deaths) in the world: 80%

Cryptocurrency:

Bitcoin will end the year higher than $1000: 40%
(But last year I was surprised — maybe I’m not well calibrated here. Or maybe something about markets, irrationality, and solvency.)

Lightning Networks / Segwit will be deployed: 65%

There will be a major bug found and exploited in LN/Segwit: 60%

Ethereum will be above $10: 70%

Ethereum will be above $20: 30%

Z-Cash will be above $50 (Price on 1/3): 25%

Z-Cash Market Cap will be above $18m (Value on 1/3): 75%

Longer Term Predictions:

There will be no war involving more than one of the US, Russia, China, or any member of the EU on different sides started in 2017 (Measure: Total eventual casualties above 100) — 98%

There will be no war involving more than one of the US, Russia, China, or any member of the EU on different sides started before 2020 (Measure: Total eventual casualties above 100) — 95%

The Republicans will maintain control of the House of Representatives in 2018 elections: 40%

There will be a Republican primary challenger getting >10% of the primary vote in 2020 (conditional on Trump running) — 70%

The stock market will go down under President Trump (Conditional on him having a 4 year term, Inauguration-Inauguration) — 60%

Slatestarcodex 2017 Predictions:

(Added 1/9/2017. I have left out ones I already forecast above, from last year. His estimates are in parentheses.)

WORLD EVENTS
5. Assad will remain President of Syria: 90% (80%)
7. No major intifada in Israel this year (ie > 250 Israeli deaths, but not in Cast Lead style war): 70% (80%)
9. No Cast Lead style bombing/invasion of Gaza this year: 75% (90%)
10. Situation in Israel looks more worse than better: 85% (70%)
11. Syria’s civil war will not end this year: 80% (60%)
13. ISIS will not continue to exist as a state entity in Iraq/Syria: 65% (50%)
15. Libya to remain a mess: Unclearly defined — 50% to 100% (80%)
16. Ukraine will neither break into all-out war nor get neatly resolved: [No guess — unsure if a cemented status quo with recognition is “resolved” — which seems likely.] (80%)
17. No major revolt (greater than or equal to Tiananmen Square) against Chinese Communist Party: 99% [China doesn’t let things get to that point any more. They intervene before anything can get that large; it’s either a toppled government, or nothing like this.] (95%)
19. No exchange of fire over tiny stupid islands: 80% (90%)
20. No announcement of genetically engineered human baby or credible plan for such: 90% [I don’t follow this, so I’m mostly relying on Scott’s estimate] (90%)
21. EMDrive is launched into space and testing is successfully begun: 50% [No date announced, and there is a launch backlog for cubesats in the US. These things don’t happen quickly.] (70%)
22. A significant number of skeptics will not become convinced EMDrive works: 95% — if it launches, this will become clear quickly. (80%)
23. A significant number of believers will not become convinced EMDrive doesn’t work. [If it launches, it’s game over.] Conditional: 10% if launched before November, 90% if not. (60%)
26. Keith Ellison chosen as new DNC chair: 80% (70%)

EUROPE
27. No country currently in Euro or EU announces new plan to leave: [Excluding England, which has no actual plan, 90%] (80%)
28. France does not declare plan to leave EU: 97% (95%)
29. Germany does not declare plan to leave EU: 99% (99%)
30. No agreement reached on “two-speed EU”: 90% (80%)
31. The UK triggers Article 50: 80% (90%)
32. Marine Le Pen is not elected President of France: [Conditional on her running] 50% (60%)
33. Angela Merkel is re-elected Chancellor of Germany: 70% (60%)
34. Theresa May remains PM of Britain: 80% (80%)
35. Fewer refugees admitted 2017 than 2016: [To Europe? Unsure. But the flow is slowing, it seems.] 95% (95%)

ECONOMICS
37. Oil will end the year higher than $50 a barrel: [Brent? I guess.] 70% (60%)
38. …but lower than $60 a barrel: 50% (60%)
39. Dow Jones will not fall > 10% this year: 60% (50%)
40. Shanghai index will not fall > 10% this year: 65% (50%)

TRUMP ADMINISTRATION
41. Donald Trump remains President at the end of 2017: 95% (90%)
42. No serious impeachment proceedings are active against Trump: 90% (80%)
43. Construction on Mexican border wall (beyond existing barriers) begins: 50% (80%)
44. Trump administration does not initiate extra prosecution of Hillary Clinton: 95% (90%)
45. US GDP growth lower than in 2016: 50% (60%)
46. US unemployment to be higher at end of year than beginning: 40% (60%)
47. US does not withdraw from large trade org like WTO or NAFTA: 80% (90%)
48. US does not publicly and explicitly disavow One China policy: 80% (95%)
49. No race riot killing > 5 people: 80% (95%)
50. US lifts at least half of existing sanctions on Russia: [How do you measure half? Also, Congress does this, not the president. So…] 60% [?] (70%)
51. Donald Trump’s approval rating at the end of 2017 is lower than fifty percent: 90% (80%)
52. …lower than forty percent: [It’s only 41% now, and they typically drop — and the big exception is G.W.B., after 9/11.] 75% (60%)

And I’m not predicting his blog traffic or his life. But…
60. Less Wrong renaissance attempt will seem less (rather than more) successful by end of this year: [I’m a bit hopeful — but I’ll follow his judgement on what happened…] 80% (90%)

Bonus: Scott’s accuracy / calibration will be about as good as it was last year, not materially worse [I’m gonna eyeball this one, I can’t think of a good metric.]: 90%

Review of Last Year’s predictions:

From my predictions on Predictionbook — I was underconfident, overall, but I’m glad I didn’t revise most of the political predictions in the weeks before the election; that would have fixed me good.

Of [50–60%) predictions, I got 1 right and 1 wrong, for a score of 50%
Of [60–70%) predictions, I got 6 right and 2 wrong, for a score of 75%
Of [70–80%) predictions, I got 5 right and 2 wrong, for a score of 71%
Of [80–90%) predictions, I made no predictions. (Hmmm…)
Of [90–95%) predictions, I got 6 right and 0 wrong, for a score of 100%
Of [95–99%) predictions, I got 9 right and 0 wrong, for a score of 100%
Of [99+%] predictions, I got 4 right and 0 wrong, for a score of 100%
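
For anyone who wants to score a list the same way, here’s a minimal sketch in R, using made-up illustrative data rather than my actual predictions:

# Flip sub-50% predictions to their complement, then compute the hit rate
# within each confidence bucket.
preds <- data.frame(p        = c(0.12, 0.6, 0.7, 0.9, 0.95, 0.99),
                    happened = c(FALSE, TRUE, FALSE, TRUE, TRUE, TRUE))

flip <- preds$p < 0.5                     # "12% it happens" = "88% it doesn't"
preds$happened[flip] <- !preds$happened[flip]
preds$p[flip] <- 1 - preds$p[flip]

breaks <- c(0.5, 0.6, 0.7, 0.8, 0.9, 0.95, 0.99, 1)
preds$bucket <- cut(preds$p, breaks, right = FALSE, include.lowest = TRUE)
aggregate(happened ~ bucket, data = preds, FUN = mean)  # per-bucket accuracy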

Assessed probability is noted after each prediction. A prediction is marked (Correct) if the outcome landed on the side to which I gave more than 50% probability, so low-probability events that didn’t happen also count as correct. Base rates are being ignored, so these marks are very unfairly biased in my favor, since I said many very rare events wouldn’t happen.

Donald Trump to be the Republican nominee for President in 2016. 60% (Correct)

Democrats win undivided control of government in 2016. 12% (Correct)

The GOP will keep the senate in 2017. 70% (Correct)

The Bitcoin reward halvening will increase its value by over 25% versus USD within a month of the event. 5% (Correct)

Zika will be found not to be the (sole, proximate) cause of microcephaly for the recent cases in Brazil. 25% (Calling this still unclear.)

No single GOP candidate will get >50% of delegates, and a brokered convention will ensue. 10% (Correct)

The S&P 500 Index will close below 1600 at the end of 2016. 10% (Correct)

The S&P 500 will close below 1750 at the end of 2016. 40% (Correct)

The S&P 500 Index will close below 2000 at the end of 2016. 75% (Incorrect)

Federal Reserve will cut the Federal Funds rate to below zero levels (i.e. a range that includes below zero levels) at or before the March 2016 FOMC meeting. 1% (Correct)

Federal Reserve will cut the Federal Funds rate to the [0, 0.25%] range at or before the March 2016 FOMC meeting. 5% (Correct)

There will be at least one more gravitational wave detection announced in 2016. 70% (Correct)

There will be at least two more gravitational wave detections announced in 2016. 69% (Incorrect)

Obama will successfully appoint Scalia’s replacement. 45% (Correct)

North Korea’s government will survive the year 2016 without large civil war/revolt. 95% (Correct)

The 2016 Iowa GOP Caucus result does not get overturned/revised such that Trump takes first place. 99% (Correct)

A US state (other than Florida) will declare at least a county-wide state of emergency due to the Zika virus before June of 2016. 50% (Incorrect)

Jeb Bush will be the next President-elect. 1% (Correct)

AlphaGo wins 5–0 in March. 40% (Correct)

AlphaGo defeats Lee Sedol in 3/5 or more games in March. 70% (Correct)

Bernie Sanders will win the New Hampshire Primary. 90% (Correct)

No 2016 Presidential candidate receives a majority of Electoral votes. 4% (Correct)

Lee Sedol will win match against AlphaGo but lose at least one game 27% (Correct)

No major earthquake (>100 deaths) in US in 2016. 99% (Correct)

No major earthquake (>10000 deaths) in the world in 2016. 75% (Correct)

SpaceX successfully launches a reused rocket in 2016. (Payload reaches outer space, i.e. 100km) 60% (Incorrect)

US unemployment to be lower at end of 2016 than beginning. 75% (We’ll see — probably.)

US GDP growth in 2016 lower than in 2015. 60% (We’ll see.)

Dow Jones Industrial Average will not fall > 10% over the course of 2016. (It will end above 15,434) 60% (Correct)

Oil will end 2016 lower than $40 a barrel. 40% (Correct)

Bitcoin will end 2016 higher than $500. 25% (Incorrect)

ISIS will control less territory at the end of 2016 than it does right now. 60% (Correct)

Israel will not get in a large-scale war (ie >100 Israeli deaths) with any Arab state (Excluding the PA/Hamas, not excluding IS or Hezbollah) in 2016. 97% (Correct)

Greece will not announce it’s leaving the Euro in 2016. 90% (Correct)

North Korea’s government will survive 2016 without large civil war/revolt. 90% (Correct)

No terrorist attack in the US will kill > 100 people in 2016. 95% (Correct)

US will not get involved in any new major war with death toll of > 100 US soldiers. 75% (Correct)

Rubio mops up on Super Tuesday, taking more than 60% of the primaries. 4% (Correct)

Brokered convention. 4% (Correct)

2016 Election is Clinton vs Rubio. 10% (Correct)

Democrats win House of Representatives in 2016. 5% (Correct)

It’s time for some game theory, about game theory, about game theory.

“Guys,” he writes. “It’s time for some game theory.” Game theory, for the uninitiated, is a branch of mathematics that uses computational models to predict the behavior of human beings in potentially conflictual situations. It’s complex, involves a lot of formal logic and algebra, and is mostly useless. Game theory models human actions on the presumption that everyone is constantly trying to maximize their potential gain against everyone around them; this is why its most famous example concerns prisoners — isolated people, cut off from all the noncompetitive ties that constitute society.

I agree with Sam Kriss about a few points he made in his article on Garland’s now famous/infamous thread. I take issue with his unjustified attack on John Nash, but I don’t blame him for his ignorance. Not many of us know game theorists — though I happen to have a nice one sitting down the hall from me. But the attack on game theory itself seems silly; it’s the basis for a ton of microeconomics and decision theory that was written in the past 50 years, and may have prevented nuclear war. Given what he said, then, Sam Kriss is a bit of an idiot — but unlike Garland, at least he’s a game theoretically optimal one.

Game theory is about describing and understanding the interaction of multiple parties that act at least somewhat rationally — and while Kriss’s straw man isn’t entirely wrong, it’s certainly not right. It’s not always complex, doesn’t always require algebra, and has essentially nothing to do with formal logic — another field I assume Kriss knows nothing about.

An Example

A journalist wants to sound intelligent to their audience, but knows little about most subjects. They have several choices, matching the columns of the table below: learn enough to actually be educated on every subject, learn just enough to sound educated to a layperson, fake it by using technical terms they don’t understand (wrongly, in a way transparent to anyone knowledgeable), or not use them at all and sound dumb.

This has different effects on different people. Journalists face a cost for doing the work to get it right, and a cost for sounding dumb. The important difference is that in some cases — being only partly educated, or faking it — there is a further consequence for getting caught or called out, and a benefit for the knowledgeable reader who does the catching.

A game theorist would represent this notionally, as below. (We don’t know the exact values of each option, but a sketch is helpful for thinking about it.) Each person gets a payout for each outcome. In the second and third columns, the result depends on whether the journalist is called out by someone knowledgeable: the first entry is the payoff if they are not called out, the second if they are. Obviously, these numbers are not exact, but they are useful in understanding the dynamic, without resorting to “formal logic” or “complex” “algebra.”

Payoffs to:   | Learn | Learn a bit | Fake it   | Sound Dumb
--------------|-------|-------------|-----------|-----------
Journalist    |  -5   |  -1 / -3    |   0 / -20 |  -10
Knowledgeable |  +1   |  -2 /  0    |  -2 / +50 |   -1
Lay Public    |  +5   |  +1         |  -1 / +1  |   -1

(Of course, the exact values differ by area — the cost for a business journalist to be ignorant might be higher, since their audience is mostly knowledgeable people. Similarly, the choice isn’t discrete — journalists can pick how much to learn, anywhere from a bare minimum to a PhD, so there is a continuum of options. But this is sufficient for our purposes.)

If you think about the above table for a while, you can see that journalists would love to be able to fake it — but the cost if they are called out is high. Being moderately informed, however, has little downside. Even if someone knowledgeable calls them out, they only look a little silly, and the benefit for someone knowledgeable to do so is small, or nonexistent. So they know enough about game theory to describe it loosely, but not enough to appreciate why it’s useful.
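
To check that reading of the table, here’s a quick sketch in R using the notional payoffs above; p, the journalist’s chance of being called out, is a free parameter I’ve added for illustration:

# Expected journalist payoff for each strategy, using the made-up numbers
# from the table, as a function of the probability p of being called out.
journalist_payoff <- function(p) c(
  learn       = -5,
  learn_a_bit = (1 - p) * -1 + p * -3,
  fake_it     = (1 - p) *  0 + p * -20,
  sound_dumb  = -10
)
journalist_payoff(0.05)  # rarely called out: faking it narrowly wins
journalist_payoff(0.30)  # called out sometimes: "learn a bit" clearly wins

Even a modest chance of being caught makes “learn a bit” the best response, which is exactly the dynamic described next.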

This is the first insight from a game theoretic explanation — Journalists become moderately educated on most subjects, and rarely fake knowledge completely. But they also aren’t usually interested in becoming really educated, because it has too little benefit for them.

The second insight, though, gets to the famous “prisoner’s dilemma” he mocked. Hundreds or thousands of readers would benefit from knowledgeable journalists, but no individual journalist has enough incentive to become one. This means that the public gains little, but not nothing, from reading their semi-informed thoughts. It has little to do with prisoners, except in the sense that we are all held captive by the idiocy promoted by lazy journalism. In economics this is referred to as a market failure — there is insufficient incentive for smart journalism, and little incentive for educated people to call out journalists on their semi-educated status.

Ideally, journalists should learn more, and then stick to their strengths — everyone would win, at the expense of journalists working a bit harder to develop those strengths. Instead, we have a news culture that rewards writers for moderate ignorance. And that’s why it’s optimal for Kriss to stay ignorant of game theory — while still obeying its dictates.

Evaluating Ben Franklin’s Alternative to Regression Models for Decision Making

Recently, Gwern pointed me to a blog post by Chris Stucchio that makes the impressive-sounding claim that “a pro/con list is 75% as good as [linear regression],” which he goes on to show based on a simulation. I was intrigued, as this seemed counterintuitive. I thought making choices would be a bit harder than that, especially when you have lots of choices — and it is, kind of. But first, let’s set up the problem motivation, before I show you pretty graphs of how it performs.

Motivation

Let’s posit a decision maker with a set of options, each of which has some number of characteristics that they have preferences about. How should they choose? It’s not easy to figure out exactly which option they would like the most — especially if you want to get the perfect answer! Decision theory has a panoply of tools, like Multi-Attribute Decision Theory, each with whole books written about them. But you don’t want to spend $20,000 on consultants and model building to choose what ice cream to order; those methods are complicated, and you have a relatively simple decision.

For example, someone is choosing a car. They know that they want fuel efficiency of more than 30 miles per gallon, they want at least 5 seats for their whole family to fit, they prefer a sedan to an SUV or small car, and they would like it to cost under $15,000. Specifying how much they care about each, however, is hard; do they care about price twice as much as the number of seats? Do they care about fuel efficiency more or less than speed?

Instead of asking people to specify their utility function, as many decision theory methods would require, most people just look at the options and pick the one they like most. That works OK, but given cognitive biases and sales pitches that convince them to do something they’ll regret later, a person might be better off with something a bit more structured. That’s where Chris brings in Ben Franklin’s advice.

…my Way is, to divide half a Sheet of Paper by a Line into two Columns, writing over the one Pro, and over the other Con. Then…I put down under the different Heads short Hints of the different Motives…I find at length where the Ballance lies…I come to a Determination accordingly.

Chris interprets “where the Ballance lies” as which list, Pro or Con, has more entries.

The question he asks is how much worse this fairly basic method, which uses a statistical technique referred to as “Unit-Weighted Regression,” is than a more complex regression model with exact preference weights.

Where did “75% as Good” come from?

Chris set up a simulation that showed that, given two random choices and random rankings, with a high number of attributes to consider, 75% of the time the choice given by Ben Franklin’s method is the same as that given by a method that uses the (usually unknown) exact preference weights. This is helpful, since we frequently don’t have enough data to arrive at a good approximation of those weights when considering a decision. (For example, we may want to assist senior management with a decision, but we don’t want to pester them with lots of questions in order to elicit their preferences.)
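
Here’s roughly what that simulation looks like, as I understand it (my reconstruction, not Chris’s actual code). Note that the exact agreement rate depends on the distribution the weights are drawn from; with exponentially distributed weights, the long-run rate works out to about 75%:

# Two options, n binary features: does the unweighted (pro/con) count pick
# the same option as the sum using the "true" preference weights?
set.seed(42)
agreement_rate <- function(n_features, n_trials = 10000) {
  mean(replicate(n_trials, {
    a <- rbinom(n_features, 1, 0.5)   # which features option A has
    b <- rbinom(n_features, 1, 0.5)   # which features option B has
    w <- rexp(n_features)             # the usually-unknown importance weights
    franklin <- sign(sum(a) - sum(b))           # the longer pro list wins
    truth    <- sign(sum(w * a) - sum(w * b))   # exact preference weights
    franklin == truth
  }))
}
sapply(c(2, 5, 10, 20, 50), agreement_rate)  # approaches roughly 0.75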

Following the simulation, he proves that, given certain assumptions, this bound is exact. I’m not going to get into those assumptions, but I will note that they probably overstate the actual error rate in the given case; most of the time, there are not many features, and when there are, features that have very low weights wouldn’t be included, which will help the classification, as I’ll show below.

But first, there’s a different problem; he only talks about 2 options. So let’s get to my question, and then back to our car buyer.

Multiple Options

It should be fairly intuitive that picking the best option is harder given more choices. If we picked randomly between two options, we’d get the right choice 50% of the time, without even a pro-con list. (And coin-flipping might be a good idea if you’re not sure what to do — Steven Levitt tried it, and according to the NBER working paper he wrote, it’s surprisingly effective. Despite this, most people don’t like the idea.)

But most choices have more than two options, and that makes the problem harder. First, I don’t have any fair three-sided coins. And second, our random guess now gets it right only a third of the time. But how does Ben Franklin’s method do?

First, this shows the case Chris analyzed, with only two options, compared to 3:

[Figure: agreement with the exact-weights choice for 2 vs. 3 options, by number of feature dimensions]
The method does slightly worse, but it’s almost as good as long as there aren’t lots of dimensions. Intuitively, that makes sense; when there are only a couple of things you care about, one of the options probably has more of them than the other — so unless one of those things is much more important than the rest, it’s unlikely that the weights make a big difference. We can check this intuition by looking at our performance with many more options:

[Figure: agreement with the exact-weights choice with many options, by number of feature dimensions]
With only a few things that we care about, pro/con lists still perform incredibly well, even when there are tons of choices. In fact, with few enough features, it performs even better. This makes sense; if there is a choice that is clearly best, we can pick it, since it has everything we want. This points to a problem with how the question was set up; we are looking only at whether each item has or doesn’t have the thing we want — not the value.

If we have a lot of cars to choose from, and we only care about the 4 things we listed (30 MPG, 5 seats, sedan, cost < $15,000), picking one that satisfies all of our preferences is easy. But that doesn’t mean we pick the best one! Given a choice between a five-seater sedan that gets 40 MPG and costs $14,000 or one that gets 32 MPG and costs $14,995, our method calls it a tie. (It’s “correct” because we assumed each feature is binary.) There are plenty of algorithmic ways to get around this that are a bit more complex, but any manual pro/con list would make this difference apparent without adding complexity.

Interestingly, however, with many choices, the method starts performing much worse as the number of feature dimensions grows. Why? In a sense, it’s actually because we don’t have enough choices. But first, let’s talk about weak preferences, and why they make the problem seem harder than it really is.

Who Cares?

If we actually have a list of 10 or 15 features, odds are good that some of them don’t really matter. In algorithm design, we need a computer to make decisions without asking us, so a binary classifier can have problems picking the best of many choices with lots of features — but people don’t have that issue.

If I were to give you a list of 10 things you might care about for a car, some of them won’t matter to you nearly as much as others. So… if we drop elements of the pro/con list that are less than 1/5 as important as the average, how does the method perform?

[Figure: performance of the pro/con method after dropping features less than 1/5 as important as average]
And this is why I suggested above that when building a Pro/Con list, we normally leave off really low importance items — and that helps a bit, usually.
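
If you want to experiment with the pruning idea yourself, here’s a rough sketch of the many-options variant (again my reconstruction; the R code linked at the end of this post generates the actual graphs):

# k options, n features: prune features less than 1/5 as important as the
# average, then compare the pro/con pick to the exact-weights pick.
set.seed(42)
pruned_hit_rate <- function(n_features, n_options, n_trials = 5000) {
  mean(replicate(n_trials, {
    X <- matrix(rbinom(n_options * n_features, 1, 0.5), nrow = n_options)
    w <- rexp(n_features)
    keep <- w >= mean(w) / 5                      # drop low-importance features
    franklin <- which.max(rowSums(X[, keep, drop = FALSE]))
    truth    <- which.max(X %*% w)
    franklin == truth
  }))
}
pruned_hit_rate(n_features = 15, n_options = 10)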

When we have lots of choices, the low-importance features add noise, not useful information:

[Figure: with many choices, dropping low-importance features improves accuracy]
Of course, we need to be careful, because it’s not that simple! Dropping features when we don’t have very many choices is a bad idea — we’ll miss the best choice.

[Figure: with few choices, dropping features hurts accuracy]
The Curse of Dimensionality versus Irrelevant Metrics

We can drop low-importance features, but why does the method work so much worse with more features in the first place? Because, given a lot of features, there are a huge number of possibilities: 5 features allows 2⁵ possibilities — 32. Any option that has all 5 things we want (or most of them) will be the best choice — and ignoring some of them, even if they are low weight, will miss that. If we have 50 features, though, we’ll never have 2⁵⁰ options to find one that has everything we might want — so we want to pay attention to the most important features. And that’s the curse of dimensionality.
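
You can see the scale of the problem with a one-line calculation: under the same everything-is-a-coin-flip assumption, the chance that at least one of k options has every one of n desired features is 1 − (1 − 2⁻ⁿ)ᵏ.

# Probability that some option among k has all n desired (50/50) features.
p_perfect <- function(n, k) 1 - (1 - 2^-n)^k
p_perfect(n = 5,  k = 100)   # about 0.96: a "perfect" option almost always exists
p_perfect(n = 50, k = 1e6)   # about 1e-9: it essentially never does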

If I were really a statistician, that would be an answer. But as a decision theorist, that actually means that our metric is a problem. Picking bad metrics can be a disaster as I have argued at length elsewhere. And our car buyer shows us why.

There are easily a hundred dimensions we could consider when buying a car. Looking at the engine alone, we might consider torque, horsepower, and top speed, to name a few. But most of these dimensions are irrelevant, so we would ignore them in favor of the 4 things we really care about, listed above; picking the car with the best engine torque that didn’t seat 5 would be a massive failure.

And in our analysis here, these dimensions are collapsed into a binary, both in our heuristic pro/con list, and in the base case we compared against! As mentioned earlier, this ignores the difference between 32 MPG and 40 MPG, or between $14,000 and $14,995 — both differences we do care about.

And that’s where I think Ben Franklin is cleverer than we gave him credit for initially. He says “I find at length where the Ballance lies…I come to a Determination accordingly.” That sounds like he’s going to list the options, think about the Pros and Cons, and then make a decision — not on the basis of which list is longer — but simply by looking at the question with the information presented clearly.

Note: Code to generate the graphs in R can be found here: https://github.com/davidmanheim/Random-Stuff/blob/master/MultiOption_Pro_Con_Graphs.R

Internet Evolution — Complexity

I’m posting these here not because they are particularly good (they weren’t) but because I wanted to leave them online after Internet Evolution, the site which originally hosted them, went away.

How Complexity Is Crippling the Internet

Written by David Manheim 1/11/2011

The layering of protocols and the intelligence of the Internet’s users, and attackers, are driving increased complexity.

The idea of having a seven-layer OSI model for networking was great until we realized that we’ve built another half-dozen layers above the session layer — and HTML and the old application layer are being used as a substrate for multilayer systems of their own. And the layers aren’t stable; they interact in complex ways.

The operating systems we use are an example of the base complexity on which the entirety of our systems rests. The lines of code supporting the infrastructure of the Internet number in the tens of millions per OS, of which there are many competing versions. The code for each of these has been developed over decades and has been layered in ways that are not clear. Microsoft Corp. (Nasdaq: MSFT) still uses code that was originally in DOS, which was “replaced” by Windows 95, and that code hasn’t changed much since then — not since Windows 3.1, 3.0, or possibly even earlier. Do we know what problems exist with that code?

A great example of how code reuse and undiagnosed problems can proliferate is found in the case of Gamma errors in picture scaling. There was a bug in how luminosity was computed for resizing. This was originally a logic bug in how software was implemented, and the color differences were small. The error was copied by almost every major piece of image editing software. It took decades for someone to notice what was wrong. Imagine if something similar happened in a piece of security software: Would anyone notice an extreme case that was included carelessly, or planted maliciously?
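
The bug is easy to show in miniature. Here is my illustration, using a simplified gamma-2.2 curve rather than the full sRGB transfer function: averaging the stored pixel values directly, instead of converting to linear light first, visibly darkens the result.

# Downscaling two neighboring pixels to one, the naive way and the right way.
srgb_to_linear <- function(v) v^2.2        # simplified gamma curve, not full sRGB
linear_to_srgb <- function(v) v^(1 / 2.2)

black <- 0.0
white <- 1.0

naive   <- (black + white) / 2             # averaging encoded values: 0.5
correct <- linear_to_srgb((srgb_to_linear(black) + srgb_to_linear(white)) / 2)

c(naive = naive, correct = correct)        # correct is about 0.73, visibly lighter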

Is this surprising? Not really. The question is: How do you know what bugs are getting written into all of our software stacks? When was the last time someone reimplemented the software stack that runs our now-critical infrastructure? Is the TCP/IP stack routinely reimplemented, or is Windows using a very slight variant of what was implemented decades ago? I know that there are dialog boxes essentially unchanged since Windows 3.1, and in almost every case, no one has rewritten the underlying logic or code; those boxes still use a version of the code that was originally written.

Code bases don’t take kindly to removal of code. Software stacks develop badly understood dependencies, which break when parts are replaced or reworked. We haven’t replaced IPv4 yet, originally created in 1981. IPv6 was first standardized more than a decade ago, and 10 years in, the new standard had a less than 1 percent adoption rate. We can’t replace parts that we know limit our existing networks, and we know we need more secure networks. Parts that only compromise network efficiency are even further down the list.

It is unclear how the dependent code bases that exist almost everywhere will be made to work with changes to complex pieces of code. Testing has grown at speeds exceeding that of the coding itself: according to blogger Larry O’Brien, since Windows 2000, Microsoft’s internal testing teams have been larger than the development teams. Despite this, the code itself is almost unmanageable, and the completeness of testing regimes is approximate at best. We have no idea what will break when we upgrade.

As another possible approach, modularization, already a popular tool, can be utilized more fully. If we can make code bases fully modular, then we only need to worry about requirements changing; when that happens, we are back to the problem of complex dependencies in the code, and kludgy solutions involving compatibility with multiple versions of a protocol.

— David Manheim works as a terrorism insurance modeling specialist at Risk Management Solutions.
Link:

Internet Evolution — Name Game

I’m posting these here not because they are particularly good (they weren’t) but because I wanted to leave them online after Internet Evolution, the site which originally hosted them, went away.

The Name Game Moves to the Web

Written by David Manheim 4/28/2011

Addressable space has been an issue with computers since time immemorial. My first computer had expanded, extended, and HMA memory, just so I could use the second DIMM of RAM. We had 2- or 4-gigabyte limits on addressable RAM on 32-bit machines, and FAT16 limits for hard drives that are similar. Hell, IPv6 faces the same issue. We just don’t know how much space we’ll need when designing a system.

But there’s an impending bigger deal, with a less well defined limit: There aren’t enough words to describe what we need!

You may have noticed this issue with acronyms over the last 10 years: There are too many of them that could stand for anything. The FAT16 I mentioned above stands for File Allocation Table, but that’s obvious only to those who know it. It could just as easily stand for Feature Acceptance Test, Factory Allowance Test, Failure Automation Triangle, and so on. Or fat.

Tech acronyms aren’t the only terms for which we’re running out of alternatives. We’re also running out of names for rock bands, for instance. Dave Barry has been worried about this longer than IPv6 has been around, and he has a useful starter list in case any budding musicians get stuck.

The pharmaceutical industry has a similar problem, but with more profound consequences: OPDRA (one of those acronyms again) is the branch of the FDA making sure that prescription names are not similar enough to be confused with one another. Giving clozapine instead of olanzapine, or Serzone instead of Seroquel, has injured or killed patients. Computerized medical systems would prevent transcription errors, but no one can prevent confusion or misstatements on the part of a prescribing doctor.

In computing, the latest generation of technologies is ignoring the problem and reusing terms with reassigned definitions: I use Windows (wooden framework with a glass pane) and Vista (a distant view or prospect) to get to clouds (a visual mass of water droplets) via LAMP (an artificial source of illumination), so I can access the grid (a regularly spaced pattern of lines).

If we don’t have enough space for new terms, we can always reuse old ones!

But this only works if the technology is widespread, and no one needs to trademark it. Otherwise, we need to expand the namespace, moving from what I’ve called Wordv1 to Wordv2, which squares the size of the available namespace for the low price of using two-word names.

So we have Facebook and Foursquare, Techcrunch, Techflash, and Techdirt. We even had an “Untitled Startup” for awhile, but it finally found a name that wasn’t already trademarked and is now “Simply Measured.”

I guess it won’t be long until we complete the circle and names expand into Wordv3, and we get great three-word names like International Beekeeping Mavens. But, don’t worry, if it sounds too long, we can always use the acronym.

— David Manheim works as a terrorism insurance modeling specialist at Risk Management Solutions.

Link:

https://web.archive.org/web/20130218232428/http://www.internetevolution.com/author.asp?section_id=1181

Expanding definitions and Obsoleting Industries

I’ve already explained why we can’t figure out if bitcoin is “really” a currency. But I think there is a lot more to say — because those 3 characteristics of money (Unit of Account, Store of Value, and Medium of Exchange) are not the only things that make money useful.

What else is needed?

Principally, gold stopped being used directly as currency because it was too hard to carry around — especially safely! But the gold standard was abandoned because the supply was too inflexible. Countries’ economies started to expand much faster than their metal-backed currencies, so that the value in the economy exceeded the available medium of exchange, and their needs for credit couldn’t easily be supplied with gold. This clarifies that those three tasks are not all a currency can do, nor are they the only things we might care about. In fact, long before digital currencies, there were lots of things we need that traditional forms of money don’t provide. Instead, systems were created to fill in the gaps, and these systems sprouted entire industries that software is getting ready to eat.

For example, we frequently need a system for measuring, processing, and communicating transactions. This is traditionally done by bookkeepers (not even necessarily accountants). I’ll refer to it as a “ledger of transactions” (4). Bookkeeping employs a couple hundred thousand people and costs around $50bn in the US alone. Under modern accounting principles, using third-party verifiers, this system also allows an owner to provide a “proof of worth” (5) for a company or an individual. That’s accounting (audit, not tax), and it employs another couple hundred thousand people and costs $100bn. Both of these are bonuses that blockchain ledger currencies like bitcoin can provide, for example, using Merkle hash tree signed proofs. Lacking that, traditional businesses and currencies must rely on those large, expensive systems instead.
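
As a toy illustration of the kind of Merkle-tree commitment involved (a sketch only; real systems like Bitcoin hash binary encodings with double SHA-256 and handle the details differently), here’s a Merkle root computed in R with the digest package:

# Build a Merkle root: hash the leaves, then repeatedly hash adjacent pairs.
# Changing any transaction changes the root, which is what makes the ledger
# verifiable by third parties.
library(digest)

merkle_root <- function(leaves) {
  layer <- vapply(leaves, digest, character(1), algo = "sha256")
  while (length(layer) > 1) {
    if (length(layer) %% 2 == 1)            # duplicate the last hash if odd
      layer <- c(layer, layer[length(layer)])
    layer <- vapply(seq(1, length(layer), by = 2),
                    function(i) digest(paste0(layer[i], layer[i + 1]),
                                       algo = "sha256"),
                    character(1))
  }
  layer
}

merkle_root(c("Alice pays Bob 5", "Bob pays Carol 2", "Carol pays Dan 1"))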

Next, we have more ancillary, non-currency systems that provide “tokens of ownership” (6) for non-currency goods, like real estate deeds or stock certificates. In order for these to be liquid and saleable, a “verifiable exchange market” (7) is needed, such as a county clerk for real estate, so that people can reliably sell their property without worrying that someone else will appear with a different deed to it. If you’ve ever bought a house, you know the “title search” fee of $250+ you pay, instead of a one-line query of a distributed blockchain database. And then you probably still need title insurance, in case the search missed anything.

The verifiable exchange can also provide a “transaction price log” (8) like the stock market’s ticker, where everyone can see the value of the goods currently being traded, and therefore be able to make decisions about whether to buy and sell. And all three of these can be accomplished using on-chain blockchain tokens, which track ownership, and the blockchain can show the prices paid for them.

We also want a “system of credit” (9) that allows for transactions that are contingent, risky, or require a future payment. This last is closely related to, and requires, a unit of account — but credit cards and most other typical forms of credit use outside systems for their unit of account. This is a feature that blockchain-based currencies don’t do well, yet. Some smart contract platforms seem poised to allow, for example, collateralized loans, but the key difficulty with extending credit via blockchain is that unsecured credit requires known identities, which blockchains don’t provide. (If I lend money to you, and you then stop paying me back and just start using a new wallet for your money, it’s not clear what recourse I have.)

Further, built on top of these systems, we have complex systems for many other features of the modern economy that are enabled by these various services and characteristics of money and related market systems. For example, fractional reserve banking relies on a “system of credit” for loans so that banks can have assets in excess of their reserves. For this to be trustworthy, we need (but don’t fully have) a robust “proof of worth” for these banks.

We have a repurchase agreement (“repo”) market that allows “tokens of ownership” (6) to assist the “system of credit” (9) which helps ensure liquidity on the basis of owned assets.

The Future

This large set of needs, combined with a healthy dose of historical path dependence, leads to the current complicated set of systems that are all intertwined. But as one 2016 Nobel laureate said, “The times, they are a changin’.”

Cryptocurrencies and smart contracts have the ability to do all of these things. Definitions don’t dictate reality; they reflect it. Cryptocurrencies can do things currencies cannot, and focusing on the definitions is a red herring.

Problems Defining Money

There seems to be confusion about the functions of currency, and the definition of what money is. I’d like to explain why the confusion exists — and it starts with the fact that “Money” and “Currency” aren’t defined simply or clearly.

For something to be considered money by economists, it must have certain uses, or characteristics. Economists have identified three. It should be a:

1. Unit of Account, which is used to measure value.

2. Store of Value, which is held in order to ensure it can be used in the future.

3. Medium of Exchange, which is used to allow transactions without direct barter.

That last one is the one even critics agree bitcoin displays most obviously. But bitcoin definitely displays all three of these characteristics, to a greater or lesser extent. On the other hand, almost everything else can display these characteristics to some extent as well. The things historically used as money have been whatever performed these functions best.

Gold has served as a store of value for millennia at least, since it is a fairly compact and valuable item, and has intrinsic worth for its beauty. It was also of limited supply, ensuring that it remained valuable, and it worked well as a store of value. Even before coinage, it was a useful medium of exchange, since relatively large amounts can be carried easily, and it is also divisible. After coinage was created, that divisibility was still sometimes used: Spanish pieces of eight were broken into pieces to make change. Coinage also made gold a much more useful unit of account, no longer requiring careful measurement and sometimes difficult verification of its nature.

Other historical “money” was less well suited for some of these tasks. Cattle were a great medium of exchange in ancient cultures, were easy to validate, and were a reasonable store of value over the short term. They were also a unit of account, but served this purpose less well. (Their value varied between animals and over time, and they were not easily divisible — it tended to get messy when you split them in half.)

Now, many things fall into the grey area created by our linguistic imprecision in grouping these functions. This means that they are money, but aren’t perfectly suited for the purpose. Examples include the UN’s Special Drawing Rights, which are a unit of account (1) and a store of value (2), but not a medium of exchange (3). Similarly, banknotes (or fiat currencies!) and gold certificates are a great unit of account (1), and a very convenient medium of exchange (3), but their reliability as a store of value (2) is contingent on the trustworthiness of the issuer, which can vary.

Cryptocurrencies are great as a medium of exchange, since the transaction system is designed explicitly for transacting, and they are divisible into fairly small amounts. Their usefulness as a unit of account and store of value, however, is less clear. As exchange rates fluctuate, the reliability of these uses can be tricky. As Bitcoin has stabilized, this has become a less critical issue, but digital currencies lack intrinsic worth, so any stability is limited by trust. As noted, fiat as a store of value also depends on trust, albeit of a different kind: trust in a central bank instead of an algorithm and a social agreement.

In terms of just these three characteristics, we seem to have nothing that works quite as well as gold once did, leading some to wonder why we don’t go back to gold. But this is a bad idea, because those three characteristics aren’t a full list of what we need from a currency — notably, gold has a fixed supply, not just a limited one, which has real downsides. (Not just nominal ones. But that’s a different discussion.)

In any case, I think that it’s obvious that nothing fits the platonic ideal of a currency. And arguing about definitions is a silly thing to do. (That doesn’t mean you can use words however you want!) Given the lack of a platonic exemplar, the question of whether to call something money is really just asking what the best trade-off is between different choices — and that’s subjective and contentious. Which, I think, explains the confusion I started with.

Added: But this confusion misses the point; instead of arguing definitions, we need to look at expanding definitions and obsoleting industries.