I just posted an article on my other blog, Reproducible Ideas, called Musical chairs and reproducibility drills. The post is about rotating programmers, in classes and in professional software development. The post ends with some thoughts on having a build master and rotating that position.
On my other blog, Reproducible Ideas, I wrote two short posts this morning about reproducibility.
The first post is a pointer to an interview with Roger Barga about Trident, a workflow system for reproducible oceanographic research using Microsoft’s workflow framework.
The second post highlights a paragraph from the interview explaining the idea of provenance in art and scientific research.
Yesterday I added a blog to the ReproducibleResearch.org web site.
I’d like a couple people to join me in writing this blog, and I would greatly appreciate suggestions, guest posts, etc. If you’re interested, please send a note to contribute at the domain name.
Greg Wilson pointed out an article in The Chronicle of Higher Education about scientists using Photoshop to manipulate the graphs of their results. The article has this to say about The Journal of Cell Biology.
So far the journal’s editors have identified 250 papers with questionable figures. Out of those, 25 were rejected because the editors determined the alterations affected the data’s interpretation.
This immediately raises suspicions of fraud, and some of it may well be fraud. However, I’m more concerned about carelessness than fraud. As Goethe once said,
… misunderstandings and neglect create more confusion in this world than trickery and malice. At any rate, the last two are certainly much less frequent.
Even if researchers had innocent motivations for manipulating their graphs, they’ve made it impossible for someone else to reproduce their results and have cast doubts on their integrity.
I started a new web site this week, http://www.reproducibleresearch.org, to promote reproducible research.
I’d like to see this become a community site. Depending on how much interest the site stirs up, I may add a blog, a Wiki, etc. For now, if you’d like to contribute, send me articles or links and I’ll add them to the site. You can send email to “contribute” at the domain name.
Greg Wilson gave a great interview on the IT Conversations podcast recently. He says the emphasis on HPC draws time and energy away from quality concerns, and may not even help scientists get their results faster. While some problems definitely require HPC, most could be solved faster by developing software in the simpler environment of a single PC and waiting longer for it to run.
I’ve written here about reproducibility problems in statistics and in general software development. Apparently there are similar problems in every area of scientific computing. For example, Wilson quotes a survey of computational economics articles that found that 70% of the results could not be reproduced a year after publication. I doubt that computational economics is worse than other fields.
Wilson says he wants to raise the reproducibility expectations in computational research closer to those common in physical research. I admire his efforts, but it’s a sad commentary that reproducibility standards are lower in computational science than in physical science.
Related: Scientific and statistical computing
I attended a National Cancer Institute workshop yesterday entitled “Barriers to producing well-tested, user-friendly software for cutting-edge statistical methodology.” I was pleased that everyone there realized there is a huge difference between code created for personal use and reliable software that others would willingly use. Not all statisticians appreciate the magnitude of the difference.
I was also pleased that several people at the workshop were aware of the problem of irreproducible statistical analyses. Not everyone was aware how serious or how common the problem is, but those who were aware were adamant that something needs to be done about it, such as journals requiring authors to publish the code used to analyze their data.
It’s infuriating to read published sample code that’s wrong. Sometimes code given in books is not even syntactically correct. I’ve wondered why publishers didn’t have a way to verify that the code at least compiles, and maybe even check that it gives the stated output.
Dave Thomas said in a recent interview that his publishing company, The Pragmatic Programmers, does just that. Authors write in a logical mark-up language, and software turns that into a publishable form, compiling the code samples and inserting their output. Sample code from one of their books is more likely to work the first time you type it in than code from other publishers.
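To make the idea concrete, here is a toy sketch of such a check — not the Pragmatic Programmers’ actual toolchain, and the BEGIN-CODE / END-CODE markers are made up. It pulls a sample out of a manuscript and refuses to pass unless the sample at least parses (here the sample is a shell snippet, checked with `sh -n`).

```shell
# Toy sketch: verify that a code sample embedded in a manuscript parses.
# The manuscript format and markers are hypothetical.

manuscript=$(mktemp)
cat > "$manuscript" <<'EOF'
Some prose explaining loops.
BEGIN-CODE
for i in 1 2 3; do echo "$i"; done
END-CODE
More prose.
EOF

# Extract the lines between the markers into a snippet file.
snippet=$(mktemp)
awk '/BEGIN-CODE/{f=1;next} /END-CODE/{f=0} f' "$manuscript" > "$snippet"

# Syntax-check the snippet without running it; flag the manuscript if it fails.
status=ok
sh -n "$snippet" || status=broken
echo "manuscript check: $status"
```

A real system would go further, as the interview describes: actually run each sample and splice its output back into the typeset book.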
Sweave is a tool for making statistical analyses more reproducible by using literate programming in statistics. Sweave embeds R code inside LaTeX and replaces the code with the result of running the code, much like web development languages such as PHP embed code inside HTML.
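For concreteness, a minimal Sweave file might look like this (the chunk name and data here are invented for illustration):

```latex
% minimal.Rnw -- a sketch of a minimal Sweave document
\documentclass{article}
\begin{document}

<<compute, echo=TRUE>>=
x <- c(1, 2, 3, 4)
mean(x)
@

The mean of the data is \Sexpr{mean(x)}.

\end{document}
```

Running Sweave on this file produces minimal.tex, with the code chunk replaced by the code and its output and the \Sexpr expression replaced by its value; pdflatex then typesets the result.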
Sweave is often launched from an interactive R session, but this can defeat the whole purpose of the tool. When you run Sweave this way, the Sweave document inherits the session’s state. Here’s why that’s a bad idea.
Say you’re interactively tinkering with some plots to make them look the way you want. As you go, you’re copying R code into an Sweave file. When you’re done, you run Sweave on your file, compile the resulting LaTeX document, and get beautiful output. You congratulate yourself on having gone to the effort to put all your R code in an Sweave file so that it will be self-contained and reproducible.

You forget about your project, then revisit it six months later. You run Sweave and to your chagrin it doesn’t work. What happened? Your Sweave file may have depended on a variable that wasn’t defined in the file itself but happened to be defined in your R session. When you open up R months later and run Sweave, that variable is missing. Or worse, you happen to have a variable in your session with the right name that now holds some unrelated value.
I recommend always running Sweave from a batch file. On Windows you can save the following two lines to a file, say sw.bat, and process a file foo.Rnw with the command sw foo.

R.exe -e "Sweave('%1.Rnw')"
pdflatex.exe %1.tex

This assumes R.exe and pdflatex.exe are in your path. If they are not, you could either add them to your path or put their full paths in the batch file.
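On Linux or Mac a shell script plays the same role as the batch file. Here is a sketch; it assumes R and pdflatex are on your path, and it is written as a dry run that only prints the commands it would execute — drop the echo in front of each command to actually run them.

```shell
# sw.sh -- sketch of a shell-script counterpart to sw.bat above.
# Usage: sweave_build foo   (would process foo.Rnw in a fresh R session)
sweave_build() {
  base="$1"
  # Dry run: print each command instead of executing it.
  echo "R -e \"Sweave('$base.Rnw')\""
  echo "pdflatex $base.tex"
}

out=$(sweave_build foo)
echo "$out"
```

The point, as above, is that each run starts a brand-new R session, so the Sweave file cannot silently borrow state from your interactive work.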
Running Sweave from a clean session does not ensure that your file is self-contained. There could still be other implicit dependencies. But running from a clean session improves the chances that someone else will be able to reproduce the results.
See Troubleshooting Sweave for some suggestions for how to prevent or recover from other possible problems with Sweave.
Update: See the links provided by Gregor Gorjanc in the first comment below for related batch files and bash scripts.
* * *
My first assignment as a professional programmer was to build another person’s program. I learned right away not to assume a project will build just because the author says it will. I’ve seen the same pattern repeated everywhere I’ve worked. Despite version control systems and procedures, there’s usually some detail in the developer’s head that doesn’t get codified and only the original developer can build the project easily.
The first step in making software builds reproducible is documentation. There’s got to be a document explaining how to extract the project from version control and build it. Requiring screen shots helps since developers have to rehearse their own instructions in order to produce the shots.
The second step is verification. Documentation needs to be tested, just like software. Someone who hasn’t worked on the project needs to extract the code onto a clean machine and build the project using only written instructions — no conversation with the developer allowed. Everyone thinks their code is easy to build; experience says most people are wrong.
The verifiers need to rotate. If one person serves as build master very long, they develop the same implicit knowledge that the original programmers failed to codify.
The third step is automation. Automated instructions are explicit and testable. If automation also saves time, so much the better, but automation is worthwhile even if it does not save time. Clift Norris and I just wrote an article on CodeProject entitled Automated Extract and Build from Team System using PowerShell that helps with this third step if you’re using Visual Studio and VSTS.
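The three steps come together in a script like the following sketch. It is deliberately generic — not the code from the PowerShell article — and the “checkout” and build command are placeholders: in real use the placeholder lines would be your version control extract (for example, a svn or tf checkout) and your project’s actual build command.

```shell
# Sketch of an automated clean-room build check.
# Extract the project into an empty directory and build it there using
# only scripted steps -- no developer at the keyboard.
set -e

workdir=$(mktemp -d)
cd "$workdir"

# Placeholder "checkout": stands in for the real version control extract.
# Here we fake a tiny project with a one-line build script.
mkdir src
printf '#!/bin/sh\necho built\n' > src/build.sh

# Scripted build from the clean copy, with output captured for inspection.
(cd src && sh build.sh > build.log 2>&1)

result=$(cat src/build.log)
echo "build result: $result"
```

If this script fails on a clean machine, the build is not reproducible, no matter what the original developer’s machine says.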