The need for standardization has been a popular topic at recent microbiome conferences. With technologies having advanced rapidly over the past two decades, is the field now ready to define and adopt standard operating protocols (SOPs) for all stages of microbiome research, from sampling to analysis? Investigators in diverse microbiome-related fields are discussing the same issue: how far should standardization go, and who should invest in creating standards?
Even before obtaining samples, researchers face decisions about what to control in their experiments: the use of different techniques, as well as the inherent limitations of some methods (e.g. cage effects in mouse models), may make results harder to reproduce. Inconsistent vocabulary is often used to describe phenotypes, clinical conditions, and even study designs and protocols.
But it’s generally the subsequent steps that dominate the discussion around standardization. Various collection devices and stabilization buffers introduce differences. Samples are either banked or go directly through several critical steps that isolate the target DNA and prepare it for sequencing. For many of the commonly used next-generation sequencing platforms, this involves careful fragmentation of the DNA and attachment of sequencing adapters. These early steps leading up to DNA sequencing are also when errors and artifacts are most likely to be introduced. Depending on the sequencing platform and the quality and characteristics of the DNA, certain errors or biases may be present: some platforms struggle with repetitive stretches of DNA and tend to add the wrong base, and many also struggle with GC-rich sequences. Furthermore, labs use different bioinformatics tools, and computational errors can be introduced during data analysis. Bioinformatics software is designed to run quality checks, perform genome assemblies, and identify genotypes and/or mutations, but no software package is infallible, and an error in any one step can skew the entire analysis. Not all users understand the settings that govern cut-off points for quality checks and assemblies.
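To make the cut-off issue concrete, here is a minimal sketch of how a read-quality threshold changes which sequencing reads survive filtering. The read names and quality strings are invented for illustration (standard Phred+33 encoding is assumed); real pipelines use dedicated tools, but the sensitivity to the chosen cut-off is the same.

```python
# Hypothetical illustration: the quality cut-off a user picks determines
# which reads survive filtering. All reads below are invented.

def mean_phred(quality_string):
    """Mean Phred score of a read, decoded from ASCII with the standard +33 offset."""
    scores = [ord(c) - 33 for c in quality_string]
    return sum(scores) / len(scores)

def filter_reads(reads, cutoff):
    """Keep only reads whose mean Phred quality meets the cut-off."""
    return [name for name, qual in reads if mean_phred(qual) >= cutoff]

reads = [
    ("read1", "IIIIIIIIII"),   # high quality (Phred ~40)
    ("read2", "5555555555"),   # mid quality (Phred ~20)
    ("read3", "####%%%%%%"),   # low quality (Phred ~2-4)
]

print(filter_reads(reads, 30))  # strict cut-off: only read1 survives
print(filter_reads(reads, 15))  # looser cut-off: read1 and read2 survive
```

Two labs running "the same" quality check with different cut-offs would carry different read sets into assembly, which is exactly how silent incompatibilities between datasets arise.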
It is not surprising, then, that arguments for standardization of these steps are regularly put forth. Standardization is especially relevant to large, collaborative endeavours such as the Human Microbiome Project, wherein multiple labs are collecting, preparing, and sequencing microbiome samples from various subjects and tissues; an artifact introduced in one lab will create confusion and incompatibility across the datasets. But for the field as a whole, factors such as price, added workload, and the real strengths and weaknesses of each protocol (depending on the sample type) all lend weight to the argument that remaining flexible is the way to go.
The case for standardization
The study of the microbiome can involve spotting patterns and differences in microbiome composition among samples from different sources. This usually requires establishing a baseline condition against which samples are compared and differences quantified. Achieving consistent and reproducible results is therefore essential when defining the baseline. Any biases or errors introduced in the data collection and analysis would alter the findings and lead to misinterpretations of both baseline and “altered” conditions.
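As a concrete sketch of how such differences are quantified, one commonly used measure is the Bray-Curtis dissimilarity between taxon count profiles (0 means identical composition, 1 means no taxa shared). The counts below are invented for illustration; the point is that any upstream bias in the counts feeds directly into this number.

```python
# Minimal sketch of quantifying compositional difference between two samples.
# Taxon counts are invented for illustration.

def bray_curtis(counts_a, counts_b):
    """Bray-Curtis dissimilarity between two taxon count vectors
    (0 = identical composition, 1 = no taxa in common)."""
    shared = sum(min(a, b) for a, b in zip(counts_a, counts_b))
    total = sum(counts_a) + sum(counts_b)
    return 1 - 2 * shared / total

baseline = [40, 30, 20, 10]   # taxon counts in a baseline sample
altered  = [10, 5, 50, 35]    # taxon counts in an "altered" sample

print(bray_curtis(baseline, baseline))  # 0.0 -- a sample vs itself
print(bray_curtis(baseline, altered))   # a substantial shift from baseline
```

If collection or sequencing bias inflates one taxon's counts in only some labs, a dissimilarity like this reports a "difference" that reflects protocol, not biology.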
The stakes get even higher when multiple labs are involved in a project, a situation that is common (and often necessary) in collaborative microbiome studies. Collecting and comparing microbiome samples from humans living in different geographic, demographic, and environmental contexts is a focus of many research groups, resulting in several labs, protocols, and datasets being used and produced. To minimize biases, initiatives like The International Human Microbiome Standards or The Microbiome Quality Control project are working to establish SOPs to ensure data from the many groups working on metagenome research are comparable. Left unchecked, such biases can erode confidence in the results of microbiome work and lead to time lost as groups attempt to duplicate and validate each other’s findings.
The case for flexibility
One of the biggest concerns surrounding standardization of research methods in the microbiome space is cost. For many labs, adherence to a particular protocol hinges on the availability of funding. If large, well-funded projects are setting the standards, smaller groups may be left out. What new insights might be missed by setting a financial barrier to contributing to the microbiome databases and collaborations?
There is also a case to be made that a one-size-fits-all approach is not practical in the microbiome field, because differences in starting material and desired downstream application dictate which protocol is best: what works best for environmental samples may not be optimal for human tissue, and what works for one type of human tissue may not work for another. Furthermore, in microbiome research the variable region selected for sequencing depends on the study question, and in turn the selection of bioinformatics tools depends on the experimental design. The argument may also be made, at this relatively early stage of the field’s progress, that experimenting with different techniques is the only way to ultimately move the field forward.
Questions going forward
The conversation about standardization in microbiome research continues – and as the field progresses, the pertinent questions for researchers include:
- In what specific ways can standardization help eliminate biases without sacrificing too much in the way of insights or innovation?
- With techniques still rapidly changing and improving, will any agreed-upon standard soon become obsolete?
- What organization(s) should invest in creating standards? Who should have a voice in determining what these standards are?
What do you think? How will standards affect your own work? Share your thoughts below or on our social media channels.