Looking Ahead to 2030: Four industry watchers share their hopes for advances in precision medicine

Original article from Clinical Omics

There is typically much reflection whenever we tip from one decade into the next, as we look back at what has transpired over the past 10 years. When it comes to omics technologies and their application to precision medicine, there have certainly been significant advances. Notable among them is the completion of the 100,000 Genomes Project, which both commenced and reached its final goal of 100,000 whole genomes sequenced within the decade, and which has served as the springboard for the launch of a genomic medicine program by England’s National Health Service. Today, there are more than a dozen country-wide efforts focused on collecting health and sequencing data from large and diverse swaths of each country’s citizens, all with an eye toward using these vast data to improve how we provide healthcare for both large populations of patients and, ultimately, individuals.

But are we yet able to consistently provide precision medicine? In small niches, yes. So, while much progress has been made, it is fair to say more advances are needed before precision medicine and genomic medicine become standard practice. How will we get there? Read on as four industry watchers share their thoughts on what may happen in the next ten years to advance the field.

The World of Genomics in the Decade to Come

Niv Mizrahi
CTO and co-founder, Emedgene

Over the next decade, patient genetic data will cross over from genetic labs into health systems, where it will be incorporated into EMR systems and inform clinical decisions regularly. This will include a shift in the use of genetics from diagnostics to patient care, where we use variant and pathway information to proactively address disease mechanisms. Several trends will converge to make this a reality.

First, next-generation sequencing (NGS) cost reductions and technological advances are leading us to a point where, regardless of the genetic test ordered, it always makes sense to sequence a patient’s whole genome. Once we do this, we gain access to far more information than the original diagnostic question that led to sequencing, provided we can reinterpret the data.

Second, there is growing utility for a wide array of genetic tests performed at various points in a patient’s life, from healthy population screening through pharmacogenomics, carrier screening, and more. We expect the rapid pace of research to introduce more clinically validated applications for genetic testing and to increase the diagnostic yield of each individual test. On the flip side, research will also advance prevention and care, using the insight we gain into disease mechanisms to improve patient outcomes.

However, genome interpretation is currently at a chokepoint, making it difficult to scale genetics-based care. This is due to the rapid growth in the genetic testing market, along with a move to NGS, which is significantly more complex to interpret. Only a few thousand geneticists and genetic counselors worldwide are working to interpret this data. If all of the U.S. certified geneticists and their peers worldwide worked only on rare disease patients, we estimate the worldwide capacity for interpretation in 2020 is capped at roughly 2.4 million tests, just under the predicted volume of rare disease patient testing, which is expected to hit 2.5 million. And that is excluding other types of genetic testing, such as hereditary cancer, healthy population screening, and population genetics projects, all of which are interpreted by the same genetics workforce.
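As a back-of-the-envelope illustration of how such a ceiling can be reasoned about, the sketch below multiplies an assumed workforce size by an assumed per-interpreter throughput. The specific figures are hypothetical, chosen only to be consistent with the article’s “few thousand” interpreters and 2.4-million-test estimate; they are not numbers from Emedgene.

```python
# Rough capacity model for worldwide genome-interpretation throughput.
# All inputs are illustrative assumptions, not figures from the article.
interpreting_workforce = 5_000    # assumed geneticists and genetic counselors doing interpretation
cases_per_week_each = 10          # assumed complex cases each interpreter signs out per week
working_weeks_per_year = 48       # assumed working weeks per year

annual_capacity = interpreting_workforce * cases_per_week_each * working_weeks_per_year
print(f"Estimated interpretation capacity: {annual_capacity:,} tests/year")   # 2,400,000 tests/year

projected_rare_disease_tests = 2_500_000   # the article's projected rare disease testing volume
print(f"Shortfall vs. rare disease demand alone: {projected_rare_disease_tests - annual_capacity:,} tests/year")
```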

Over the next few years, cognitive AI solutions that automate genetic interpretation will be widely adopted, alleviating the interpretation bottleneck and enabling growth in genetic testing. These solutions will deliver succinct results and condense the interpretation research cycle (which can be as long as 16 hours in the case of a rare disease), keeping geneticists in the loop while ensuring they spend significantly less time per test, and enabling high-throughput testing. This type of solution won’t be achieved with any single machine learning algorithm or neural network, but with a whole stack of AI algorithms coming together to replicate the work of geneticists, in what we call cognitive genomics intelligence.
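As a minimal sketch of the kind of staged pipeline such a stack implies (this is not Emedgene’s algorithm; the stages, weights, and thresholds below are invented purely for illustration), the idea is that automated filtering and scoring leave the geneticist reviewing a short ranked list rather than thousands of raw variants.

```python
from dataclasses import dataclass

@dataclass
class Variant:
    gene: str
    population_frequency: float   # allele frequency in reference populations
    predicted_impact: float       # 0..1 output of an assumed pathogenicity model
    phenotype_match: float        # 0..1 overlap between the gene and the patient's phenotype terms

def prioritize(variants, max_frequency=0.01, top_n=10):
    """Filter out common variants, score the rest, and return a short ranked
    list for human review. Weights and cutoffs are illustrative only."""
    rare = [v for v in variants if v.population_frequency <= max_frequency]
    ranked = sorted(
        rare,
        key=lambda v: 0.6 * v.predicted_impact + 0.4 * v.phenotype_match,
        reverse=True,
    )
    return ranked[:top_n]

# Toy input: the automated stages leave only two candidates for the geneticist to review.
candidates = prioritize([
    Variant("MYH7",  0.0002, 0.70, 0.95),
    Variant("TTN",   0.0400, 0.80, 0.20),   # too common in the population; filtered out
    Variant("BRCA2", 0.0001, 0.92, 0.10),
])
print([v.gene for v in candidates])   # ['MYH7', 'BRCA2']
```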

Which leads us to a third trend. Once cognitive genomics intelligence can automate interpretation, it can also be used to automate re-interpretation, or to query the patient’s genome for different diagnostic questions at different points in time. Imagine a cognitive genomics intelligence embedded in the EMR, activated throughout a patient’s life. This is a core enabling technology for genetics to cross over from the lab to the clinic, and from diagnostics to care. Our expectation is that these systems will provide a robust layer of explainable AI, so that we can eventually make the results accessible to any clinician.
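One way to picture re-interpretation on demand is a stored genome that can be queried against different gene panels as new clinical questions arise over a patient’s life. The interface, panel contents, and variant notation below are hypothetical, meant only to sketch how the same sequence could answer different questions years apart.

```python
from datetime import date

# Hypothetical illustration: a patient's stored variant calls queried against
# different diagnostic questions at different points in time.
STORED_VARIANTS = {
    "patient-001": {"MYH7": "c.2389G>A", "CYP2C19": "*2/*2", "LDLR": "c.1061G>A"},
}

GENE_PANELS = {  # assumed panel definitions for different clinical questions
    "cardiomyopathy": {"MYH7", "TNNT2", "LMNA"},
    "clopidogrel response": {"CYP2C19"},
    "familial hypercholesterolemia": {"LDLR", "APOB", "PCSK9"},
}

def reinterpret(patient_id, clinical_question):
    """Return previously sequenced variants relevant to a new clinical question,
    stamped with the date of re-analysis."""
    panel = GENE_PANELS[clinical_question]
    findings = {g: v for g, v in STORED_VARIANTS[patient_id].items() if g in panel}
    return {"question": clinical_question,
            "reanalyzed": date.today().isoformat(),
            "findings": findings}

# Years after the original sequencing, a new prescription triggers a fresh query:
print(reinterpret("patient-001", "clopidogrel response"))
```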

Protection and Individual Control of Personal Health Data Will Fuel Research

Dawn Barry
President and co-founder, LunaPBC

Discovery flows from research, and research requires data. In the decade ahead, how that crucial data is acquired, aggregated, controlled, and protected will change dramatically. This transformation will occur as a consequence of maturation in our thinking around personal data sovereignty, and around accountability and transparency in data stewardship.

A marked change in public sentiment around health data privacy is afoot. Google is under investigation by the Office for Civil Rights in the Department of Health & Human Services because it partnered with Ascension, the nation’s largest nonprofit health system, which provided health data without notifying individuals that their information had been disclosed. This, despite assertions that the arrangement was compliant with the HIPAA regulations governing disclosure of personal health information. And on January 1st, the strictest data privacy law in the U.S., the California Consumer Privacy Act, took effect, strengthening consumer data privacy rights in the country’s most populous state.

There is a growing lack of trust among the public in the institutions holding, buying, and using people’s data, which has fueled fears that data may be used against individuals and their families—from discrimination to “tailored” information feeds. People also now better understand the value of their personal data, and that they are not sharing in the value created from it. With DNA data in particular, people are now keenly aware that this information is shared within families, as numerous 2019 headlines detailing law enforcement applications demonstrated.

Data fuels research, which in turn fuels discovery, but ultimately it is people who fuel this data: sick people, healthy people, old and young people, rich and poor people, people of all colors. They are the best curators of their own health condition. If the past ten years are remembered as the decade that made genome sequencing for disease research possible, I believe the next ten years will see us execute discovery with a more holistic and inclusive lens. We will broaden our study beyond disease research to human health, with ‘health’ defined as more than just the absence of disease, and to quality of life, with the recognition that genetics is a mere 30% contributor to premature death. Human behavioral patterns, social circumstances, health care, and environmental exposure contribute the remaining 70%. I hope transcriptomes, microbiomes, and epigenetics will complement DNA datasets, and that person-reported, real-world, and environmental information will be included.

I’ll use the next decade to champion raising the standing of people from subjects of research to partners in discovery. In our increasingly digital world, and respecting that all personal health data (including DNA, health records, social and structural determinants of health, and clinical outcomes) starts with people, it stands to reason that including people as partners represents a step-function increase in discovery. As partners in discovery, we must win people’s trust, starting with transparency and assurances that they control how their data is used and who it is stored with, and by empowering them to un-share all of their data, at any time, if they wish.
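As an illustration of the kind of person-controlled consent record that would make this possible (a hypothetical sketch, not LunaPBC’s actual data model), permissions can be tracked per data use, logged for transparency, and withdrawn in full at any time.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """Hypothetical sketch of person-controlled consent: granular permissions
    per data use, an audit trail, and the ability to withdraw everything at any time."""
    person_id: str
    permissions: dict = field(default_factory=dict)   # e.g. {"disease_research": True}
    history: list = field(default_factory=list)

    def _log(self, event):
        self.history.append((datetime.now(timezone.utc).isoformat(), event))

    def grant(self, use_type):
        self.permissions[use_type] = True
        self._log(f"granted {use_type}")

    def unshare_all(self):
        # The person withdraws consent for every use; downstream use must stop.
        for use_type in self.permissions:
            self.permissions[use_type] = False
        self._log("withdrew all consent")

record = ConsentRecord("person-123")
record.grant("disease_research")
record.grant("quality_of_life_research")
record.unshare_all()
print(record.permissions)   # {'disease_research': False, 'quality_of_life_research': False}
```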

Research has suffered from a lack of data scale, scope, and depth, including insufficient ethnic and gender diversity, datasets that lack environment and lifestyle data, and snapshots in time rather than longitudinal data. Artificial intelligence is starved for data that reflects population diversity and real-world information. I worry about the impact on research if people disengage from science and digital tools for fear of privacy violations. It’s time to feed discovery with data that reflects the diversity of the population we wish to serve. I believe people are the key to the next generation of discovery, and that protecting their privacy will empower discovery.
