Castles made of sand in the land of cancer research

Not all problems with scientific practice are statistical. Sometimes, methods and protocols are introduced and accepted without sufficient vetting and quality control. Hopefully this is rare, but in the biological sciences there is an ongoing worry that too many ‘accepted’ techniques might not be well founded. Here is one striking example from the world of cancer research, where it turns out that a cell line long thought to be a useful tool for studying breast cancer is not really representative of breast cancer at all. Mistakes happen, but sadly hundreds of papers have been published using this cell line, many of them after data became available suggesting it was not representative of breast cancer, and it is still taking surprisingly long for the problem to be widely recognized. Fortunately, steps are being taken to ensure that cell line work includes more careful checks, and the NIH is even requiring that such plans be part of funding applications.

Here’s the story, in Slate by Richard Harris, an NPR correspondent and author of a new book about reproducibility in science:

