Comment on: “How to Beat Science and Influence People” —

This is an interesting and well-written policy paper that clearly explains the dynamics in public policy that exploit policymakers' naivete. Unfortunately, I think it goes too far in claiming that the failure occurs “even when policy makers rationally update on all evidence available to them.” This claim follows from a more general failure of many rational-actor models to properly model data sources, a critical characteristic of any model used for inference. (See Chapter 2 of my dissertation, forthcoming.)

This failure results from not appreciating what Yudkowsky calls “filtered evidence.” If rational policymakers instead have a model that acknowledges the process that generates the evidence, the failure mode assumed in the paper largely disappears. To see how, consider the difference between the following two Bayesian models, each of which treats observed data x as evidence about a quantity of interest y:

Naive Model:
x ~ normal(y, σ²)

Filtered Evidence Model:
z ~ normal(y, σ²)
x = z, reported only when (filterer goal − ε) < z < (filterer goal + ε)
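To make the contrast concrete, here is a minimal grid-approximation sketch of the two models. All numbers (the prior, the noise level, the filterer's goal, and the window width) are illustrative assumptions of mine, not values from the paper:

```python
import math

def normal_pdf(v, mu, sigma):
    return math.exp(-0.5 * ((v - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def normal_cdf(v, mu, sigma):
    return 0.5 * (1 + math.erf((v - mu) / (sigma * math.sqrt(2))))

sigma = 1.0            # noise in the evidence-generating process
goal, eps = 2.0, 0.25  # filterer reports only z in (goal - eps, goal + eps)
x = 2.1                # the single piece of published evidence

grid = [i * 0.01 for i in range(-500, 501)]       # candidate values of y
prior = [normal_pdf(y, 0.0, 2.0) for y in grid]   # prior: y ~ normal(0, 2^2)

# Naive model: x ~ normal(y, sigma^2); likelihood is the plain density.
naive_like = [normal_pdf(x, y, sigma) for y in grid]

# Filtered model: x is a draw from normal(y, sigma^2) *conditioned* on
# landing in the filterer's window, so the likelihood is the truncated density.
def window_prob(y):
    return normal_cdf(goal + eps, y, sigma) - normal_cdf(goal - eps, y, sigma)

filtered_like = [normal_pdf(x, y, sigma) / window_prob(y) for y in grid]

def posterior_mean(like):
    w = [p * l for p, l in zip(prior, like)]
    return sum(y * wi for y, wi in zip(grid, w)) / sum(w)

naive_mean = posterior_mean(naive_like)
filtered_mean = posterior_mean(filtered_like)
```

Under these assumptions the naive posterior mean is pulled most of the way toward the filtered report, while the filtered-evidence posterior stays close to the prior: once the policymaker models the selection window, the report carries far less information about y.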

Note that the filterer's goal can usually be inferred from context, and the window need not be symmetric. People with a policy goal will select evidence to match that goal, and policymakers can often infer that goal and use it to account for the potential filtration.

This filtered evidence model relies on the policymaker knowing that the provider of evidence x has a motive to misrepresent the truth, something politicians generally understand, though other motives exist for ignoring this fact. Falling for filtered evidence therefore does not require any particular stupidity or obvious failure, since explicitly accounting for the filtering process is far from typical. The failure to appreciate this fact can be seen, for example, in the recent discussions of p-hacking and of the anomalies that p-curves display.
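The p-hacking connection can be illustrated with a hypothetical simulation of one selection mechanism: run several independent tests of true null hypotheses and report a result only when some test clears the significance threshold. The sample size n, the number of tests k, and the trial count are arbitrary choices for this sketch:

```python
import math
import random

def two_sided_p(sample_mean, n, sigma=1.0):
    # p-value of a two-sided z-test for H0: mean = 0
    z = abs(sample_mean) * math.sqrt(n) / sigma
    return math.erfc(z / math.sqrt(2))

random.seed(0)
n, k, trials = 30, 5, 20000
false_pos = 0
for _ in range(trials):
    ps = []
    for _ in range(k):
        m = sum(random.gauss(0, 1) for _ in range(n)) / n  # null data
        ps.append(two_sided_p(m, n))
    if min(ps) < 0.05:  # "publish" only if some test clears the threshold
        false_pos += 1

rate = false_pos / trials  # roughly 1 - 0.95**k, i.e. about 0.23 for k = 5
```

A reader who naively treats each published result as a single honest test expects a 5% false-positive rate; the selection process inflates it to roughly 1 − 0.95^k, which is the kind of anomaly that p-curve analyses are designed to surface.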