Researchers carefully protect dangerous pathogens – but how secure are their data?

Limiting open discussion
There are also policies in place that curtail how freely researchers can intentionally share information about their work on these dangerous microorganisms.

Since the implementation of the federal government’s first Dual Use Research Policy in 2012, the notion that some nonclassified research information may need to be withheld has marked a big change from science’s typical culture of openness. Researchers are used to running studies and experiments, then publishing details and results in freely available peer-reviewed journals.

Never before has the U.S. scientific enterprise been as constrained as it currently is. There is even an ongoing moratorium on so-called gain-of-function experiments that involve certain agents potentially capable of causing a pandemic.

Information security at least as vulnerable
Recent safety lapses at government laboratories involving anthrax and H5N1 flu show that, despite all precautions, the system is far from perfect. And the bad news is there may be more to worry about – even if the microbes remain under lock and key and the researchers aren’t deliberately sharing sensitive findings.

Vulnerabilities in information security can directly affect the physical security of dangerous pathogens. For instance, someone gaining access to a computerized key card system could use that information to enter a restricted area.

So-called “dual-use” knowledge, which could be used to weaponize some of these agents, is also at risk. In theory, a hacker could gain access to a researcher’s data on how a particular microbe could become more pathogenic: for instance, by increasing its resistance to available therapeutic or prophylactic drugs.

My colleagues and I recently published an article in the journal Health Security describing these kinds of vulnerabilities. It was the result of a unique collaboration. I am an associate professor of environmental and occupational health who specializes in biosecurity. Nick Lewis brought an information security perspective. And Mark Campbell is a biosafety officer and select agent responsible official at Saint Louis University.

We found that current information security guidelines are inadequate. For instance, government agencies must abide by the Federal Information Security Management Act (FISMA), which is considered the gold standard for a risk-based approach to information security. Unfortunately, current government-mandated information security around dangerous pathogens does not meet even the lowest standard of the act. One example: FISMA specifies in great detail how a firewall should be configured; the select agent information security guidelines, by contrast, mention firewalls but don’t specify how to configure or manage them securely.
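To make that contrast concrete, here is a minimal sketch in Python (the rules are hypothetical and deliberately simplified) of the kind of specific, auditable firewall guidance a FISMA-style control implies: a default-deny posture that can actually be checked, rather than a bare instruction to use a firewall.

```python
# A minimal, illustrative sketch -- the rules below are hypothetical and
# far simpler than a real packet filter. The point is that "configure the
# firewall securely" can be made concrete and checkable.

# Each rule: (action, protocol, port, source network).
RULES = [
    ("ALLOW", "tcp", 22,   "10.0.5.0/24"),  # SSH, administrative subnet only
    ("ALLOW", "tcp", 443,  "0.0.0.0/0"),    # HTTPS to the lab data portal
    ("DENY",  "any", None, "0.0.0.0/0"),    # default-deny: block everything else
]

def audit(rules):
    """Flag rule sets that lack a final default-deny rule, or that
    expose an administrative port (SSH) to the entire internet."""
    findings = []
    if not rules or rules[-1][0] != "DENY":
        findings.append("No default-deny rule at the end of the chain.")
    for action, proto, port, source in rules:
        if action == "ALLOW" and port == 22 and source == "0.0.0.0/0":
            findings.append("SSH (port 22) is open to the entire internet.")
    return findings or ["Rule set passes these basic checks."]

for finding in audit(RULES):
    print(finding)
```

A guideline that merely says “use a firewall” gives an auditor nothing like this to verify; a FISMA-style control does.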

Why isn’t research data security cutting-edge?
Understanding of the threats unique to the academic and research environment is still evolving. There’s very poor communication among the scientific community, the security community and the information technology community.

Scientists themselves are largely uneducated in matters of information security. For instance, many remain unaware that they might be targeted to divulge sensitive information through covert tactics such as social engineering or phishing. Since advances in science often depend on open communication and shared data, scientists aren’t trained to be wary of inquiries about their work.

Many also don’t recognize that shared computer systems and laboratory equipment capable of storing or transmitting data – from microscopes with digital photography capability to freezers that send email alerts when temperatures rise too high – are sources of vulnerability. After all, everything connected to a computer network is at risk, even if it doesn’t look like a computer.
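To see why, consider a minimal sketch (in Python; the subnet and ports are hypothetical) of how an institution might inventory the lab equipment quietly sitting on its network. Anything that answers a probe – a microscope workstation, a freezer alarm gateway – needs the same patching and access controls as any other computer.

```python
# An illustrative sketch, not an operational tool: probe each address on a
# (hypothetical) lab subnet for a few common service ports. Devices that
# answer belong in the asset inventory and under normal security review.

import socket
from ipaddress import ip_network

LAB_SUBNET = ip_network("192.168.50.0/28")  # hypothetical lab network
PROBE_PORTS = [22, 80, 443]  # SSH and web interfaces are common on lab gear

def open_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` accepting a TCP connection on `host`."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((str(host), port)) == 0:  # 0 = connection succeeded
                found.append(port)
    return found

for host in LAB_SUBNET.hosts():
    ports = open_ports(host, PROBE_PORTS)
    if ports:
        print(f"{host}: listening on {ports} – add to inventory and review")
```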

How to lock down the information, too
First (and obviously), the standards FISMA already requires of government agencies should be applied to information related to research with dangerous pathogens. This is simply a matter of carrying out what the law already calls for.

Second, there should be a secure way for research institutions to exchange information about current information security threats, as well as effective strategies for protecting scientific data that could be misused. Implementing these measures now would cost money and time, but far less than the security and research expenses that would follow a major breach and the reactive measures it would force.

Finally, there should be more concrete efforts at effective communication between science, information technology, and security experts, so they may understand each other’s disciplines better. An effective approach could include educational opportunities for individuals who are interested in working at the interface of these very different communities.

My colleagues and I found writing our research paper difficult because we were all outside our comfort zones. Professionals, whether they are life scientists or computer people, do not like to admit that they don’t know or understand something. It was humbling to have to ask one another for explanations of basic concepts in each other’s fields.

But we have proved it can be done. The cross-disciplinary conversations must continue. Information security concerns are not going away, so we need to awaken to this reality before a major disaster happens.

Carole Baskin is Associate Professor of Environmental & Occupational Health at Saint Louis University. This article is published courtesy of The Conversation (under Creative Commons Attribution/No Derivatives).