Demystifying Artificial Intelligence (AI)


By Sanjay John, a software engineer at Luna, and Scott Kahn, PhD, Luna’s Chief Information and Privacy Officer.

With the recent launch of ChatGPT, suddenly every tech company has artificial intelligence (AI) capabilities. News stories everywhere are expounding on the promise and threat of AI and its family of applications, including machine learning (ML) and large language models (LLMs). But are these technologies really that new? And what is the truth buried in the confusing technical jargon on which most stories focus? Read on as we try to demystify AI and related applications.

Large Language Models

The fields of AI and ML are over 70 years old. At their foundation is the mathematics of probability and statistics. LLMs, like ChatGPT, are at heart a collection of equations that estimate the probability of an answer to a given question. These probability equations can be defined by similarity to answers that are already known. Consider a model built to identify a cat in an image. A model like this “learns” from a large number of cat pictures. However, if the set of cat pictures shows only white cats or only wild cats, the model may be faulty and return incorrect answers that do not match the creator’s goal. The same model can also be made quantitative, representing the likelihood that an answer is present, for example, by returning the percent likelihood that there is a cat somewhere in an image. There are other kinds of models, such as one that would identify whether an image would make a cat-lover happy or sad, and many different types of (machine) learning used to train them, but these are more technical than we will explore today.
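To make this concrete, here is a minimal sketch in Python of the “similarity with known answers” idea. The feature vectors and numbers are invented for illustration, and treating a similarity score directly as a probability is a simplification, but it captures how a model can turn resemblance to known examples into an estimated likelihood.

```python
# A toy sketch of the "similarity with known answers" idea (all numbers are
# invented; real models learn their features and probability equations from
# millions of labeled images).
import math

# Each known cat picture reduced to a tiny, made-up feature vector
# (e.g., fur texture, ear shape, eye shape scored between 0 and 1).
known_cats = [
    [0.9, 0.8, 0.7],   # a white house cat
    [0.8, 0.9, 0.6],   # a tabby
]

def similarity(a, b):
    # Cosine similarity: 1.0 means the features line up perfectly, 0.0 means unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def cat_likelihood(new_image_features):
    # Treat the best similarity to a known cat as the estimated likelihood
    # that the new image contains a cat (a deliberate simplification).
    return max(similarity(new_image_features, cat) for cat in known_cats)

print(f"{cat_likelihood([0.5, 0.3, 0.9]):.0%} estimated likelihood of a cat")
```

Notice how the bias problem described above shows up immediately: if the known examples held only pictures of white cats, a perfectly good photo of a black cat would score poorly.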

Neural Networks

Now let’s tackle neural networks. As the name implies, neural networks try to emulate the complex reasoning of a human brain. To create a neural network, we must build the instructions and logic that allow this more complex reasoning to occur. First, we build the instructions using a class of algorithms. Algorithms are specific, unambiguous rules that instruct the model in how to react when presented with external data. To create a neural network, algorithms combine multiple models, or “nodes,” using a weighting scheme; for example, an answer might be derived 50% from one model, 30% from a second, and 20% from a third.
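Here is a minimal sketch of that weighting scheme, with invented node functions and scores: three simple “nodes” each offer an opinion, and the answer is their weighted combination. Real networks stack many such weighted layers and learn the weights automatically rather than having them written by hand.

```python
# A toy sketch of combining three "nodes" with a weighting scheme. The nodes
# and feature names are invented; in a real neural network the weights are
# learned during training, not hand-written.

def node_whiskers(features):
    return features.get("whisker_texture", 0.0)   # hypothetical whisker score

def node_ears(features):
    return features.get("ear_shape", 0.0)         # hypothetical ear-shape score

def node_eyes(features):
    return features.get("eye_shape", 0.0)         # hypothetical eye-shape score

def combined_cat_score(features):
    # 50% of one model, 30% of another, and 20% of a third.
    return (0.5 * node_whiskers(features)
            + 0.3 * node_ears(features)
            + 0.2 * node_eyes(features))

image = {"whisker_texture": 0.9, "ear_shape": 0.7, "eye_shape": 0.4}
print(f"Combined cat score: {combined_cat_score(image):.2f}")  # 0.74
```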

Neural networks often have many nodes (thousands or even billions) and many combinations of these nodes to arrive at an answer to a question. One version of a neural network, known as a generative adversarial network (GAN), pits a generator network (a network focused on creating fake data) against a discriminator network (a network focused on determining whether a piece of data is real). These networks have become famous for their ability to create seemingly realistic images, videos, and text. A more complex version of a neural network is the transformer. Transformers learn context and meaning by tracking relationships between data points, such as the words in a sentence. For example, comparing the sentence “The cup was poured into the bowl until it was empty” with “The cup was poured into the bowl until it was full” shows how our understanding of the whole sentence shapes what we take the word “it” to mean. Transformers can decipher and apply this kind of context, allowing for better prediction. ML and feedback loops help networks learn, adjusting the weights of the various nodes accordingly.
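That last sentence, about feedback loops adjusting weights, is the heart of training. Continuing the invented three-node example from the previous sketch, the following minimal sketch shows one way an error signal can nudge the weights toward a known correct answer; real networks do this with gradient descent at vastly larger scale.

```python
# A toy sketch of the feedback-loop idea: compare the network's answer to a
# known correct label and nudge the node weights to shrink the error. The
# numbers are invented; real training runs over millions of examples and
# billions of weights.

node_scores = [0.9, 0.7, 0.4]   # outputs of the three hypothetical nodes above
weights = [0.5, 0.3, 0.2]       # the current weighting scheme
label = 1.0                     # ground truth: this image really is a cat
learning_rate = 0.1

for step in range(25):
    prediction = sum(w * s for w, s in zip(weights, node_scores))
    error = label - prediction                         # how far off were we?
    weights = [w + learning_rate * error * s           # nudge each weight in the
               for w, s in zip(weights, node_scores)]  # direction that reduces the error

print("adjusted weights:", [round(w, 2) for w in weights])
print("final prediction:", round(sum(w * s for w, s in zip(weights, node_scores)), 2))
```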

Natural Language Processing

The final piece of the puzzle involves natural language processing (NLP), in which a model converts everyday written or spoken language into a representation of its meaning. This process is typically performed by neural networks, including probability models that encode the similarity of words and phrases in order to predict the words and phrases that come next (a small sketch of this idea follows below). Combining the processing power of transformer networks, the creative ability of generative networks, and the large data sets available from the internet and other databases, we arrive at LLMs. LLMs are at the cutting edge in their understanding of natural language.

Unfortunately, the data sets drawn from the internet and other databases are often unreliable and incomplete, which, again, can cause the output to be biased, misleading, and sometimes completely wrong. This means AI, ML, and LLMs are only as good as the attention their creators pay to ensuring that the applications learn from valid, representative data sets and that their learning feedback loops incorporate novel data over time rather than simply regurgitating the data they have already consumed. The better creators are at monitoring this, the more useful current and future tools built on these applications will be.
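As promised above, here is a minimal sketch of predicting the next word from word-pair probabilities. The tiny corpus is invented, and real LLMs rely on transformer networks rather than raw counts, but the underlying prediction task is the same.

```python
# A toy sketch of predicting the next word from word-pair (bigram) counts.
# The three-sentence "corpus" is invented; real LLMs learn from billions of
# sentences, yet the core idea of estimating what comes next is the same.
from collections import Counter, defaultdict

corpus = [
    "the cup was poured into the bowl",
    "the bowl was full",
    "the cup was empty",
]

next_word_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, following in zip(words, words[1:]):
        next_word_counts[current][following] += 1

def predict_next(word):
    # Return each candidate next word with its estimated probability.
    counts = next_word_counts[word]
    total = sum(counts.values())
    return {w: round(c / total, 2) for w, c in counts.items()}

print(predict_next("the"))   # {'cup': 0.5, 'bowl': 0.5}
print(predict_next("was"))   # {'poured': 0.33, 'full': 0.33, 'empty': 0.33}
```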


Let’s take ChatGPT, for example. It is the marriage of a powerful LLM with predictive neural network models that can learn from user input. However, it has limitations rooted in the information used to create, or “train,” the underlying models and in the user feedback used to reinforce them. If the data used to train the models is not comprehensive, the resulting models will reflect those gaps. For example, if a model were trained on health information strictly from men 21 years or older, you could not use it to characterize women’s health, or even boys’ health. Further, today’s health data sets typically underrepresent individuals who are not of European descent.

The Takeaway

So, while the headlines are provocative, AI, ML, and LLMs are just tools. Like most tools, they work best when the user knows which jobs they are most suitable for, and where the boundaries and risks lie. At Luna, we focus on using AI to assist researchers with the extraction of clinically relevant information from data that our members share in studies they join. The broader the health experiences of our members, the better these tools become in understanding what is important to help drive research faster and with more successful outcomes. At the end of the day, human intelligence and experience still reign supreme, as we decide where and when to apply these technologies, where they fall short, and when to unplug them.


About Luna

Luna’s suite of tools and services connects communities with researchers to accelerate health discoveries. With participation from more than 180 countries and communities advancing causes including disease-specific, public health, environmental, and emerging interests, Luna empowers these collectives to gather a wide range of data—health records, lived experience, disease history, genomics, and more—for research.

Luna gives academia and industry everything they need, from engagement with study participants to data analysis across multiple modalities, using a common data model. The platform is compliant with clinical regulatory requirements and international consumer data privacy laws.

By providing privacy-protected individuals a way to continually engage, Luna transforms the traditional patient-disconnected database into a dynamic, longitudinal discovery environment where researchers, industry, and community leaders can leverage a range of tools to surface insights and trends, study disease natural history and biomarkers, and enroll in clinical studies and trials.