ARTIFICIAL INTELLIGENCE AND MODERN ANALYTICS IN REGULATORY SETTINGS

Introduction

It is widely acknowledged that the digital revolution has considerable potential to transform medical research and drug development, as has been the case in other areas of human endeavors (Alemayehu and Berger 2016; Mayer-Schönberger et al. 2014). Smart algorithms and powerful computing resources are now available to process and analyze huge volumes of data collected from diverse sources and in a variety of forms to address important medical problems, especially those related to personalized and precision medicine (Panahiazar et al. 2014; Teli 2014). Unsurprisingly, the complexity, speed, and size of the data, as well as the new computing approaches, have presented unprecedented challenges and opportunities for drug development, regulatory reviews, and healthcare utilization and decision-making (Roski et al. 2014).

According to McCarthy (2007), artificial intelligence (AI) is "the science and engineering of creating intelligent machines," and may be viewed, in a broad sense, as the marriage of modern statistical predictive models with expert systems and machine-learning (ML) algorithms. AI has come a long way since its inception by Turing (Turing 1950), with a wide range of applications, including image recognition, speech detection, and robotics. A major factor in the success of AI in medical research is the advance made in the development of powerful predictive models (Emir et al. 2017). Figure 4.3 depicts the ML prediction paradigm, which involves an iterative process of model development and validation, using training and test-data sets (Hastie et al. 2009; Shmueli 2010).

FIGURE 4.3 ML Model Development Paradigm
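
To make the paradigm in Figure 4.3 concrete, the following minimal sketch (in Python with scikit-learn) walks through one iteration of the cycle: a model is tuned by cross-validation on training data and then assessed once on a held-out test set. The synthetic data, the choice of a penalized logistic regression, and the tuning grid are illustrative assumptions, not part of any study cited here.

# Minimal sketch of the train/validate/test cycle (illustrative assumptions only)
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic data standing in for a clinical prediction problem
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Hold out a test set that is touched only once, at the end
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Iterative model development: tune hyperparameters by cross-validation
# on the training data only
search = GridSearchCV(
    LogisticRegression(penalty="l2", solver="liblinear", max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},
    scoring="roc_auc",
    cv=5,
)
search.fit(X_train, y_train)

# Final, one-time assessment on the untouched test set
test_auc = roc_auc_score(y_test, search.predict_proba(X_test)[:, 1])
print(f"Best C: {search.best_params_['C']}, test AUC: {test_auc:.3f}")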

Interestingly, the methodological development in ML has been driven not just by statisticians and mathematicians, but also by researchers in other disciplines, including computer scientists and software engineers (Goodfellow et al. 2016). Classification and regression trees, including random forests (Breiman 2001a; 2001b), as well as penalized-regression methods (Hastie et al. 2009), are now in routine use in medical research. Other analytic tools include k-nearest neighbors (Dixon 1979), neural networks and deep learning (inspired by Rosenblatt (1958) and enhanced by Rumelhart et al. (1986)), and support vector machines (Vapnik 1995).
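
As a brief, hypothetical illustration of two of these routinely used tools, the sketch below fits a random forest and a lasso (penalized) regression to the same synthetic data; the settings and feature counts are assumptions made purely for the example.

# Illustrative comparison of a random forest and a lasso on synthetic data
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LassoCV

X, y = make_regression(n_samples=500, n_features=30, n_informative=5,
                       noise=5.0, random_state=1)

# Random forest: nonparametric, captures interactions, provides importances
forest = RandomForestRegressor(n_estimators=200, random_state=1).fit(X, y)
top_forest = np.argsort(forest.feature_importances_)[::-1][:5]

# Lasso: penalized regression that shrinks many coefficients to exactly zero
lasso = LassoCV(cv=5, random_state=1).fit(X, y)
selected = np.flatnonzero(lasso.coef_)

print("Forest's top-ranked features:", top_forest)
print("Features retained by the lasso:", selected)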

Deep learning models have been used especially widely in medical research. Notable architectures include convolutional neural networks (CNNs), models built from restricted Boltzmann machines (RBMs) such as deep belief networks (DBNs), stacked autoencoders, and recurrent neural networks (RNNs). The range of applications includes bioinformatics (e.g., cancer diagnosis, gene classification, and drug design), medical imaging (e.g., tissue classification and tumor detection), prediction of disease, and the study of infectious disease epidemics (Ravi et al. 2017).
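
The following is a minimal, illustrative convolutional network of the kind referred to above, written with PyTorch. The layer sizes, the two output classes (e.g., lesion versus no lesion), and the 64x64 single-channel input are assumptions made only for this sketch and do not correspond to any published architecture.

# Tiny illustrative CNN (assumed input: 64x64 single-channel images, 2 classes)
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # 1 input channel
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

# One forward pass on a dummy batch of four 64x64 "images"
model = TinyCNN()
logits = model(torch.randn(4, 1, 64, 64))
print(logits.shape)  # torch.Size([4, 2])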

In this section we provide some potential use cases of AI in drug development, including approved examples of algorithms that have gone through FDA review. Since the use of real-world evidence in a regulatory setting is discussed elsewhere in this monograph, the focus here will be on specific applications of AI and ML tools.

AI in Drug Development

In the face of increasing costs of research and development (R&D), a major concern of the pharmaceutical industry is to explore ways of enhancing productivity and efficiency. Pharmaceutical companies are, therefore, attracted by recent advancements in AI technology as a means to transform the drug-development process, from early discovery through loss of exclusivity.

In the initial steps of drug development, ML algorithms can help in the identification of novel compounds with interesting biological activities. Modern predictive models permit the incorporation of information from high-dimensional data, including genomics and relevant biochemical features (see, e.g., Wang et al. 2017). The approach can also be used for drug repurposing, in which an existing treatment is considered for a new disease. In one instance, for example, an AI system involving neural networks was used to classify drugs into categories defined by transcriptional profiles (Aliper et al. 2016).
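
In the spirit of the transcriptional-profile classification just described (Aliper et al. 2016), the following hypothetical sketch trains a small feed-forward neural network to assign compounds to therapeutic categories from simulated gene-expression features; the data, the number of genes, and the category labels are all invented for illustration, and the example does not reproduce the cited study.

# Hypothetical sketch: classifying compounds by (simulated) expression profiles
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n_drugs, n_genes, n_categories = 300, 978, 12

X = rng.normal(size=(n_drugs, n_genes))          # simulated expression profiles
y = rng.integers(0, n_categories, size=n_drugs)  # assumed therapeutic categories

clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0),
)
scores = cross_val_score(clf, X, y, cv=5)
print("Cross-validated accuracy:", scores.mean())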

A promising area of application of AI and ML is the prediction of the outcomes of drug-development programs to support go/no-go decisions, especially in the early phases, using preclinical or Phase I data as well as other relevant data from the literature or the public domain. In a recent publication, Beinse et al. (2019) demonstrated the performance of an ML algorithm in predicting the time to FDA approval in oncology immediately after Phase I. Similarly, reliable prediction of toxicity in the preclinical phase may help inform decisions about the need to run subsequent clinical trials. As reported in Wu and Wang (2018), ML methods such as deep learning, random forests, k-nearest neighbors, and support vector machines have been applied to toxicity prediction, employing data not only from chemical structural descriptors, as is customary, but also from genetic and other information.
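
As a schematic illustration of this kind of comparison, the sketch below cross-validates three of the methods named above (random forest, k-nearest neighbors, and support vector machine) on simulated "chemical descriptor" data with a binary toxicity label; nothing here reproduces the analyses of Wu and Wang (2018).

# Schematic comparison of three toxicity-prediction methods on synthetic data
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Imbalanced binary outcome standing in for "toxic" vs. "non-toxic"
X, y = make_classification(n_samples=600, n_features=50, weights=[0.8, 0.2],
                           random_state=7)

models = {
    "random forest": RandomForestClassifier(n_estimators=300, random_state=7),
    "k-nearest neighbors": make_pipeline(StandardScaler(),
                                         KNeighborsClassifier(n_neighbors=5)),
    "support vector machine": make_pipeline(StandardScaler(),
                                            SVC(probability=True)),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: cross-validated AUC = {auc:.3f}")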

A potential application of AI is in precision medicine, which requires the integration of data from diverse sources, including patient, drug, and environmental factors. Advanced machine-learning models could enable use of this vast digital information to transform medical practice by tailoring treatment to individual patients. While such applications are still largely at the concept stage, there are efforts to accelerate the use of available AI tools and digital data to advance precision medicine (see, e.g., Kim et al. 2019).

AI also has applications in enhancing operational aspects of clinical trials, including site and patient selection, risk-based monitoring, and proactive assessment of data-quality issues. For example, inclusion of patients in a trial may be based on reliable biomarkers identified using modern analytic tools. In one application, a novel AI platform has been implemented to monitor patient compliance (Bain et al. 2017).

Regulatory Experience with Machine Learning and Artificial Intelligence

Regulatory agencies have recognized the significance of AI and ML in providing new and important insights for the delivery of healthcare and are in the process of formulating relevant frameworks for the proper use of these technologies. A case in point is the recently proposed framework that the US FDA issued relating to AI/ML-based software as a medical device (FDA 2019b). The agency has also outlined good ML practices, as part of a total product lifecycle regulatory approach intended to continually improve performance while limiting degradation (FDA 2019b).

In the EU, the EMA has highlighted, among the strategic goals it formulated recently, the exploitation of digital technology and artificial intelligence in decision-making (EMA 2018). As part of this scheme, the agency recommends, among other measures, the establishment of an AI laboratory and the building of capabilities in relevant areas, such as cognitive computing, that have applications in the regulatory system.

The US FDA has accepted the use of certain AI algorithms in medical devices. Topol (2019) provided a list of at least fourteen approvals in 2017 and 2018 across several therapeutic areas. For example, in ophthalmology, the FDA approved an autonomous AI system to detect diabetic retinopathy, using data from a prospective trial conducted in primary-care settings and compared against the historical gold standard (Abramoff et al. 2018). In cardiology, Apple received FDA approval for its electrocardiogram (ECG) algorithm, used with the Apple Watch Series 4 and 5 to detect signs of arrhythmias in users older than 22 years (Victory 2018); however, it is not intended to provide a diagnosis (Buhr 2017; Fingas 2018). In radiology, QuantX was approved by the FDA as a platform that uses AI as an adjunct tool to assist radiologists in analyzing breast ultrasound images of patients with soft-tissue breast lesions (www.accessdata.fda.gov/cdrh_docs/reviews/DEN170022.pdf).

The standard practice for glucose monitoring in diabetic patients is the use of an invasive procedure. In September 2017 the FDA approved a continuous glucose monitoring system in which the sensor measures the glucose level every minute and can also provide graphics and summary statistics of glucose history through a handheld device (Bolinder et al. 2016).

Finally, signal detection is an important framework for identifying the risk of an adverse event following exposure to a drug. vigiRank is a predictive model for emerging safety signals that uses the VigiBase database (Caster et al. 2017). Ordinarily, signal detection relies on disproportionality analysis, which assesses pharmacovigilance data using observed-to-expected ratios (Zink et al. 2013). Caster et al. (2017) showed that vigiRank outperformed disproportionality analysis in real-world pharmacovigilance signal detection. Similarly, the European Medicines Agency developed a predictive signal-detection algorithm and applied it to the EudraVigilance database, with encouraging results (Pinheiro et al. 2018).
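
To illustrate the observed-to-expected idea underlying disproportionality analysis, the sketch below computes two common summaries from a hypothetical 2x2 table of spontaneous reports. The counts are invented, and vigiRank itself, which combines several such features in a predictive model, is not reproduced here.

# Observed-to-expected disproportionality summaries from hypothetical counts
import math

# Hypothetical report counts
a = 40     # reports with the drug of interest AND the adverse event
b = 960    # reports with the drug, without the event
c = 200    # reports with the event, without the drug
d = 18800  # reports with neither

n_total = a + b + c + d
observed = a
expected = (a + b) * (a + c) / n_total  # count expected under independence

# Two common disproportionality summaries
prr = (a / (a + b)) / (c / (c + d))                   # proportional reporting ratio
ic = math.log2((observed + 0.5) / (expected + 0.5))   # shrunken obs/exp (IC-style)

print(f"Observed = {observed}, expected = {expected:.1f}")
print(f"PRR = {prr:.2f}, IC = {ic:.2f}")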

Concluding Remarks

With recent improvements in computer algorithms, many activities in our daily lives increasingly rely on AI and ML applications. Unsurprisingly, regulatory bodies and pharmaceutical companies have begun to recognize the potential of this rapidly growing technology to enhance drug development and medical research. However, unlike in other industries, AI and ML still appear to play very limited roles in the drug development and approval space. The recent FDA approvals of a limited number of algorithms in medical devices illustrate the agency's interest in the new technology.

With the skyrocketing cost of drug development, pharmaceutical companies are aggressively looking into the possibility of leveraging AI technology to improve productivity and efficiency. Advanced predictive models, coupled with rich databases, could be used to inform decision-making about drug discovery, continuation of development programs, or planning of clinical-trial operations. In addition, there is considerable potential to advance the field of precision medicine, which depends on synthesizing vast digital information to tailor treatment to individual patients.

 