
By Beth Russell

Intellectual exceptionalism. The idea that scientists have a special quality, honed and amplified by training on difficult problems over copious hours, hit a speed bump this week when video game players beat scientists in a competition to determine the structure of an Alzheimer’s disease-related protein. The game was Foldit, a research-based game designed to let the incremental process of discovery in protein-folding biochemistry be worked out by a group of players building on each other’s best ideas. The 469 players deduced the structure faster than two crystallographers and a team of 61 undergraduate students using computer-based modeling, and faster than two computer algorithms for automatic structure determination.

Building highly accurate models of protein structure from crystallographic data is labor-intensive, and the accuracy of these models can affect downstream science for many years after the initial model is produced. Research games like Foldit have revolutionized our ability to solve problems that require more labor than can practically be obtained. The phenomenal success of these games and other mechanisms for involving the public in scientific research signals a paradigm shift for the research enterprise. What was previously the purview of an elite few highly educated scientists is now, with a little training, the domain of the everyman. We call it Citizen Science. In a brief period, the concept of public participation in scientific research went from a few birdwatchers and butterfly counters to an international phenomenon of such importance that last year the White House issued a memorandum directing federal agencies and American institutions to take better advantage of the opportunities that Citizen Science provides.

The most interesting thing about the most recent Foldit results isn’t that the humans were faster, but that they also developed the better model. This prompts the curious scientist to ask: why? I posit that the gamers beat the scientists and the computers for the same reason that revolutionary science isn’t usually developed through an incremental process, the same reason that we associate the word “Eureka!” with scientific discovery. That reason is plasticity.

Humans really can think outside of the box. Not only can we understand rules, we can also be curious about what happens when we bend or break them. Science has devised these “rules”: the properties of different atoms, functional groups, and structural types. Unlike computers, which must follow the rules coded in their programs or algorithms, or the scientists who drilled those properties into their heads through years of study, the non-scientist can see the new way, the exception, which in the complex world of biology ends up being right pretty often. Throw enough people together and you’ll get a few of these. Some right, some wrong, but the group is self-correcting and doesn’t take long to find the right combination of bends in the rules to solve the puzzle.

We need the scientists to collect the data, to build hypotheses, and to integrate complex ideas that require deep knowledge, but we need the everyman too. As scientists, we can’t keep locking the discipline up in our labs and ignoring the power of bringing the citizenry to the table. For some problems, two (or two thousand) heads really are better than one.

By Kathryn Ziden

The Tsarnaev brothers, who carried out the 2013 Boston Marathon bombings, built their pressure cooker bombs using instructions found in al Qaeda’s English-language online magazine Inspire. The same 2010 issue of Inspire states, “For those mujahid brothers with degrees in microbiology or chemistry lays the greatest opportunity and responsibility. For such brothers, we encourage them to develop a weapon of mass destruction.” The bombs detonated and discovered in New York and New Jersey this past weekend were also pressure cooker bombs, but what if the weapon had been a bioengineered, deadly pathogen? New, inexpensive and readily available gene-editing techniques could provide an easy way for terrorists to stage bioterrorist attacks.

CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats) is a novel gene-editing technique that has the potential to do everything from ending diseases like cystic fibrosis and muscular dystrophy to curing cancer. CRISPR also has the power both to bring back extinct species and to cause living species to go extinct. There is currently hot debate within the scientific and policy communities about the ethical ramifications of this powerful tool and how it should be regulated. However, there is almost no discussion within these communities of the security risks that CRISPR poses, or of the scary scenarios that could result from unintended consequences or its misuse.

The Office of the Director of National Intelligence’s “Worldwide Threat Assessment” included gene-editing techniques like CRISPR on its list of weapons of mass destruction for the first time in 2016. Here, we list some actors that could use CRISPR to create a bioweapon.

Non-state actors: Terrorism specialists have warned that obtaining a biological weapon is much easier than obtaining a nuclear or chemical weapon, given the relative ease with which components can be purchased and developed. Terror groups intent on developing biological weapons could use existing members’ skills, or send recruits to receive adequate education in the biological sciences, similar to al Qaeda’s method of sending attackers to train in U.S. flight schools prior to 9/11.

Rogue scientists: Disgruntled or mentally ill scientists could easily use CRISPR to mount an attack, similar to the 2001 anthrax attacks. However, unlike regulated deadly pathogens, CRISPR is widely available and requires no security clearance or mental health screening for access.

Do-it-yourself biohackers: Do-it-yourself (DIY) scientist movements are growing across the country. DIY centers now offer CRISPR-specific classes, and inexpensive DIY CRISPR kits are widely available for sale online to amateur scientists working out of their basements. Some websites sell in vivo, injection-ready CRISPR kits for creating transgenic rats (rats included), and directly advertise to “full service” and “DIY” users.

Religious groups: The first and largest bioterrorist attack in the U.S. was perpetrated by followers of an Indian mystical leader, who infected 751 people with Salmonella bacteria in 1984. In 1993, the doomsday cult Aum Shinrikyo attempted an anthrax attack in Tokyo, but mistakenly used a non-virulent strain.

Foreign governments: The development of bioweapons is banned under the 1975 Biological and Toxin Weapons Convention; however, many countries, including China, Russia and Pakistan, are widely believed to have bioweapons programs. Each of these countries is also actively using CRISPR in scientific research.

The large potential impacts of gene-editing techniques, combined with the low barriers to obtaining the technology, make it ripe for both unintended and intended misuse. In order to address the security challenges of this emerging technology, all stakeholders need to act.

The scientific community can add value by:

Shifting their focus from ethical concerns to security concerns, or at least giving security concerns equal footing in their discussions.

Engaging with the intelligence and policy communities to identify real-world scenarios that could be actualized by the actors discussed above.

Regulatory bodies can counter the risks posed by the unintended use or potential misuse of gene-editing techniques by:

Designating all precision gene-editing enzyme systems as controlled substances, similar to radioactive isotopes or illicit drug precursors used in research laboratories, and putting use-verification and accounting procedures into place.

Registering, licensing and certifying all laboratory-based and DIY users of CRISPR. Gene-editing technology users could also be required to undergo National Agency Check with Inquiries background investigations.

The intelligence community can lead efforts to counter more serious bioterrorism threats by:

Tracking all gene-editing kits or other system-specific plasmids or components, including materials already purchased during the current pre-regulation timeframe.

Tracking all users of gene-editing technologies, specifically looking for rogue or DIY users who fail to register, individuals actively seeking to buy kits through the black market, or individuals searching for CRISPR instructions or other relevant information online.

These recommendations are just some of the actions that could be taken to minimize the risks of gene-editing technologies. CRISPR is a powerful technology capable of creating a gene drive that could result in mass sterilization and extinction. If it can be used to kill off a species of mosquito, then it can be used to kill off the human race. It is time to think of these gene-editing techniques as an existential threat.

By Beth Russell

If data is the gold standard, then why don’t all scientists agree all the time? We like to say the devil is in the details, but it is really in the analysis and (mis)application of data. Scientific errors are rarely due to bad data; misinterpretation of data and misuse of statistical methods are much more likely culprits.

All data are essentially measurements. Imagine that you are trying to figure out where your property and your neighbor’s meet. You might have a rough idea of where the boundary is, but you are going to have to take some measurements to be certain. Those measurements are data. Maybe you decide to step it off and calculate the distance based on the length of your stride. Your neighbor decides to use a laser range finder. You are both going to be pretty close, but you probably won’t end up in the exact same place. As long as his range finder is calibrated and your stride length is consistent, both methods are reliable and provide useful data. The only difference is the accuracy.

Are the data good or bad? It depends on how accurate you need to be. Data are neither good nor bad as long as the measurement tool is reliable. If you have a legal dispute, your neighbor will probably win; on the other hand, if you are just trying to figure out where to mow the grass, you’re probably safe stepping it off. Neither data set is bad; they just provide different levels of accuracy, as the short simulation below illustrates.
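As a rough illustration of “same kind of data, different accuracy,” here is a minimal sketch in Python, assuming numpy is available; the error figures for pacing and for the laser range finder are invented for illustration, not instrument specifications. It simulates repeated measurements of the same 30-meter boundary with both tools:

```python
import numpy as np

rng = np.random.default_rng(0)

TRUE_DISTANCE_M = 30.0   # assumed true length of the boundary
N_TRIALS = 10_000        # repeated measurements of the same boundary

# Hypothetical error models: pacing is off by a few tens of centimeters
# per measurement, the laser range finder by about a centimeter.
pacing = TRUE_DISTANCE_M + rng.normal(0.0, 0.30, N_TRIALS)
laser = TRUE_DISTANCE_M + rng.normal(0.0, 0.01, N_TRIALS)

for name, measurements in [("pacing", pacing), ("laser", laser)]:
    print(f"{name:>6}: mean = {measurements.mean():.3f} m, "
          f"spread (SD) = {measurements.std():.3f} m")

# Both methods center on the true value (they are reliable); the laser's
# spread is simply much smaller. Same kind of data, different accuracy.
```

In this toy simulation both tools are unbiased; only the spread differs, which is the sense in which neither data set is bad.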

Accuracy is a major consideration in the next source of error: analysis. Just as it is important to consider your available ingredients and tools when you decide what to make for dinner, it is vital to consider the accuracy, type, and amount of data you have when choosing a method of analysis. The primary analysis methods that science uses to determine whether the available data support a conclusion are statistical methods. These are tests that estimate how likely it is that a given assumption is not true; they are not evidence that a conclusion is correct.

Unfortunately, statistical methods are not one-size-fits-all. The validity of any method depends on properties of the data and the question being tested. Different statistical tests can lead to widely disparate conclusions. In order to provide the best available science, it is vital to choose, or design, the best test for a given question and data set. Even then, two equally valid statistical tests can come to different conclusions, especially if there isn’t very much data or the data are highly variable, as the sketch below shows.
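To make the point concrete, here is a minimal sketch in Python, assuming scipy is available; the two small samples are invented for illustration. It runs two commonly used, equally legitimate tests on the same data: Welch’s t-test, which compares means, and the Mann-Whitney U test, which compares ranks. With a small sample that contains one extreme value, the two tests can land on opposite sides of the usual 0.05 threshold.

```python
from scipy import stats

# Two small, invented samples: the "treatment" group contains one extreme
# value, which inflates its variance.
treatment = [12.1, 13.4, 11.8, 12.9, 13.1, 12.5, 13.0, 45.0]
control = [10.0, 10.4, 9.8, 10.9, 10.2, 10.6, 10.1, 10.3]

# Welch's t-test compares means and is sensitive to the inflated variance.
_, t_p = stats.ttest_ind(treatment, control, equal_var=False)

# The Mann-Whitney U test compares ranks, so it cares only about where the
# extreme value falls in the ordering, not how extreme it is.
_, u_p = stats.mannwhitneyu(treatment, control, alternative="two-sided")

print(f"Welch's t-test: p = {t_p:.4f}")
print(f"Mann-Whitney U: p = {u_p:.4f}")

# With data like this, the rank-based test typically reports a very small
# p-value while the t-test does not: same data, two valid tests,
# two different conclusions.
```

Neither test is wrong; they ask slightly different questions of the same data, which is exactly why the choice of method has to be justified rather than inherited.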

Here’s the rub: even scientists don’t always understand the analysis methods they choose. Statistics is a science in itself, and few biologists, chemists, or even physicists are expert statisticians. As the quantity and complexity of data grow, evaluating which analysis method(s) should be used becomes more and more important. Many times a method is chosen for historical reasons: “We’ve always used this method for this type of data because someone did that before.” Errors made by choosing a poor method for the data are sloppy, lazy, bad science.

Better education in statistics will reduce this type of analysis-based error, and open science will make it easier to detect. Another thing we can do is support more team science. If a team includes a statistics expert, it is much less likely to make these kinds of errors. Finally, we need more statistics-literate editors and reviewers. These positions exist to catch errors in the science, and they need to treat the statistics as part of the experiment, not as the final arbiter of success or failure. High-quality peer review, collaboration, and the transparency created by open data are our best defenses against bad science. We need to strengthen them and put a greater emphasis on justifying choices of analysis methodology in scientific discovery.

By Charles Mueller

Sunday was an emotional day: the 15th anniversary of 9/11, one of the most traumatic days in U.S. history. That day is burned into the memories of the American people because its events defied what we believed was possible. We will never forget because we will always remember the day the unthinkable became reality.

The official story that came out of the investigations of 9/11 to explain how it was able to occur highlighted a failure to imagine the kinds of horrors terrorists could unleash upon our nation. In some ways this finding was ironic, because it was our imagination that helped us land on the moon, invent the Internet, and harness the atom, all accomplishments in our climb to become what was then the world’s only remaining superpower. On 9/11, though, imagination somehow became our weakness. By failing to take seriously what might seem impossible, by failing to imagine the extremes people might go to in order to hurt us, we created an opportunity that could be exploited. The sad reality of that day is that many people saw the signs of what was coming, but we still chose to ignore them; we chose to refrain from imagining it could ever take place.

That day showed the real power of imagination. If you can imagine it, you can often make it real. The terrorists imagined all that took place on 9/11 and, because they believed, were able to inflict a wound on this country that may never fully heal. As we move forward, continuing to recover from that day, we must never forget this lesson; we must never forget the power of imagination.

Today we live in a time when what was once the imagination of science fiction writers is becoming reality. We are on the cusp of being able to engineer all types of life, including ourselves, to have the traits and properties we desire. We are on the verge of potentially creating sentient life fundamentally different from our own. We have tools today that are enabling our imagination to translate into reality. As amazing as the future can be, days like 9/11 remind us that there exist those who will ultimately try to use these new technologies and their imaginations to make the future worse. We have to remember this as we start thinking about how to manage this brave new world.

In order to ensure the future is better than today, we have to use our imagination to consider all the different ways it can go right and wrong. We have to imagine the future we want and then work together to figure out the right path to get there. We cannot afford another failure of imagination moving forward, because science and technology have simply made the stakes too high. Let’s use the power of imagination to create a better world and ensure 9/11 is a day we remember, not relive.

By Kathryn Ziden

Robotics and artificial intelligence (AI) are changing the field of healthcare. Doctors are seemingly open to this change, as long as there is still a place for them in the system. But is that realistic? Will we need doctors in the future? In the short term, yes. In the long term, not likely.

A recent study by the market research firm Frost & Sullivan estimates that the AI market in healthcare will exceed $6 billion by 2021. AI is already making big advances in automated soft-tissue surgery, medical imaging, drug discovery, and perhaps its biggest success so far: using big data analytics to diagnose and treat disease. IBM’s Watson is already being used at 16 cancer institutes and recently correctly diagnosed a rare form of leukemia in a Japanese woman after conventional (human) methods had failed.

However, on the question of where this leads in the long term, there is a disconnect between technology forecasts and doctors’ opinions. Article after article on the future of AI in healthcare quotes doctors and healthcare professionals as predicting that computers/robots/AI will never be able to replace them. The reasoning of these professionals seems to fall into one of the following arguments, and reflects a we-are-too-big-to-fail attitude or a god complex on the part of the doctors:

#1.) Doctors will always be needed to give that special, reassuring human factor. One doctor claims that she could never be replaced by a computer because patients routinely leave her office saying how much better they feel after just talking to her. Another doctor adds, “Words alone can heal.”

Rebuttal: Although the human factor may be vital, it does not require a medical degree to provide comfort or solace to a patient. Social workers, nurses or psychologists can fill this role.

#2.) Only doctors can pick up on the nuanced subtleties of a patient’s mood, behavior or appearance to make a diagnosis. For example, one doctor posits that if a woman has ovarian pain or a missed menstrual period, a computer would never be able to pick up on the fact that it could be caused by anxiety, stress, depression, a lack of sleep or over-medication, the way a doctor can.

Rebuttal: AI systems could almost certainly probe for these types of secondary causes through a patient’s facial cues, bloodwork and a line of questioning, the same way a doctor currently does.

#3.) Doctors will be assisted by AI in calculating diagnoses and treatment plans, but doctors will still be needed to make the final decision. A doctor’s decision-making process can account for additional variables.

Rebuttal: An AI framework has already been shown to make better medical decisions than human doctors. Doctors are prone to biases and human error; their decisions can be based on emotions, influenced by fatigue from a long day, and limited by the brain’s capacity to store and recall data.

#4.) Computers will never have a doctor’s intuition. Doctors have a “sixth sense” used in their diagnoses. Practicing medicine is an art.

Rebuttal: Intuition is pattern recognition, something computers are much better at. Medicine is an applied science that requires decision making (i.e., the “art” part). AI algorithms are already better at making medical decisions than doctors.

#5.) Doctors will still be needed to perform physical examinations.

Rebuttal: The idea of having a physical exam by a doctor is already going by the wayside. As one doctor puts it, “The physical exam was critical when it was all we had.” And in cases where a physical exam is needed, there is already a robotic glove that can perform a physical exam.

In the next 10 to 20 years, I believe we will see a human-AI hybrid healthcare system in which individuals will be more in control of their own healthcare. Successful doctors will need to change their practices and attitudes to cope with the emergence of new technologies. In the long term, however, our entire healthcare system will likely undergo an AI-ification, and the idea of human doctors and surgeons will be obsolete. But perhaps there will be an increased market for homeopathy or alternative medicine among patients who aren’t ready for this future either.

Doctors’ reluctance or inability to see a future in which they are not needed is a form of optimism bias, and it is something that all of us are susceptible to. A 2016 Pew Research Center study showed that 65% of respondents thought that robots and computers would take over jobs currently done by humans within the next 50 years; however, 80% of people said that it won’t be their own job that is taken over. There is a disconnect. Nothing is more powerful than human delusion… except perhaps the efficiency and skillfulness of your future AI doctor.