Comments on Arxiv Paper 1712.03198

Compiling some random, nitpicky comments on this generally excellent paper:

Pg 2 — 
Paragraph “This article outlines…” seems mostly unnecessary given the abstract and the following paragraph.

Perhaps clarify that ADMEP is a new method (or cite it).

Sentence: “Section 8.1…” — the order should be reversed so that Section 8 is mentioned before Section 8.1.

“This excludes the article types: tutorial in biostatistics, commentary, book review, correction, letter to the editor and authors’ response. In total, this returned 264 research articles.” I would suggest: “In the volume, there were a total of 264 articles after removing the following article types: tutorial in biostatistics, commentary, book review, correction, letter to the editor and authors’ response” (The phrase “this returned” seemed to imply a different search than what I think you meant.)

Pg 9 — 
“ are, to all intents and purposes, truly random” — I’m annoyed by this only because for cryptographic purposes this is very untrue. I’d prefer “are, for all statistical purposes, truly random”

Review of Brand et al. — MetaPsychology osf.io/6s29n

Overall Comments

Very interesting paper and method, one which seems easy to integrate into current Bayesian modeling practice.

It took me a while to figure out what you meant by posterior passing. It might be worthwhile to explain the method more simply in the abstract: “posterior passing, where the posterior found in a past analysis is used as the Bayesian prior for the subsequent analysis.” This seems simpler to me; others may disagree.
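For concreteness, posterior passing in a conjugate setting could be sketched as follows. This is a hypothetical beta-binomial toy example of my own, not code from the paper:

```python
def update_beta(prior, data):
    """Conjugate beta-binomial update: given a Beta(a, b) prior and
    (successes, trials) data, return the Beta posterior parameters."""
    a, b = prior
    successes, trials = data
    return (a + successes, b + trials - successes)

# Posterior passing: each study's posterior becomes the next study's prior.
studies = [(7, 10), (12, 20), (40, 50)]  # (successes, trials), made-up data
prior = (1.0, 1.0)  # flat Beta(1, 1) prior for the first study
for study in studies:
    prior = update_beta(prior, study)

a, b = prior
posterior_mean = a / (a + b)  # point estimate after all three studies
```

The appeal of the method is visible even in this sketch: evidence accumulates across studies with no separate meta-analysis step.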

Methodological Comments

1) If the simulation was an attempt to compare the various methods fairly, why is the study size fixed for the NHST methods? The posterior passing method allows the Bayesian approach to take advantage of prior data, but the way prior data is incorporated in NHST is at least partially via power calculations; the sample size should vary based on the previously observed effect size.
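What I have in mind for the NHST arm is the standard power calculation, where each simulated study sizes itself using the effect estimate from the previous one. A minimal sketch using the normal approximation (my own illustration, not the paper's code):

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.8):
    """Approximate per-group sample size for a two-sample test of a
    standardized effect size, via the normal approximation."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value, two-sided test
    z_beta = z.inv_cdf(power)           # quantile for the target power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A small previously observed effect forces a much larger next study.
n_small_effect = n_per_group(0.2)
n_large_effect = n_per_group(0.8)
```

In a simulation, feeding each study's observed effect size into such a calculation would let the frequentist arm also "use" prior data, making the comparison with posterior passing more even-handed.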

2) If the intent is for posterior passing to be used in place of meta-analysis, shouldn’t the analysis of frequentist methods include a meta-analysis of the results from the 80 trials, to compare to the result found with posterior passing?

3) You note the importance of file-drawer bias. Would it be possible to run the analysis of the posterior-passing method only allowing passing of results when they are above some threshold, to account for this?
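Such a check could be prototyped by passing the posterior on only when a study's estimate clears a cutoff. A minimal sketch under hypothetical beta-binomial assumptions (the function name, data, and threshold are my own illustration):

```python
def run_with_file_drawer(studies, threshold=0.6, default_prior=(1.0, 1.0)):
    """Pass a posterior on only when the study 'publishes' (its estimate
    clears the threshold); otherwise the next study falls back to the
    default prior, simulating unpublished null results."""
    prior = default_prior
    for successes, trials in studies:
        # Conjugate beta-binomial update.
        a, b = prior[0] + successes, prior[1] + trials - successes
        estimate = a / (a + b)
        # 'File drawer': below-threshold results are never passed on.
        prior = (a, b) if estimate > threshold else default_prior
    return prior
```

Comparing this thresholded chain against unrestricted posterior passing would show how much of the method's advantage survives publication bias.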

General conceptual introduction and attempt to improve science overall:

The presentation in the paper mentions that “the attempts of advocates of Bayesian methods of data analysis to introduce these methods to psychologists… have been without widespread success or response from the field.” To remedy this, some model of why adoption has failed is necessary, and that model should explain the observed lack of uptake.

One plausible explanation is offered earlier in the review; “due to incentives for high numbers of publications, poorer methods that produced false positives persisted and proliferated.” Another plausible explanation is that newer methods are more complex, and people prefer not to learn new methods.

Ideally, at least a comment should explain how the proposal would address the problems presented; the answer eludes me. Perhaps adoption of the proposed method would need to become a standard before it could fix the problem of researchers incentivized to use simpler or easier-to-game methods, in which case, how and why would people start using it?

Alternatively, the background should be cut significantly, and the problem presented should be restricted more closely to “what method would reduce false positive rates and incorporate or replace reproducibility efforts?” (This seems to be what was actually done.)