12

Understanding Current Industry Trends and Future Applications

We have now covered different ways in which ML is making a difference for healthcare and life sciences organizations. With the help of examples in this book, you have seen that ML is more present than you may have thought and is having more impact on the way we live our lives than you may have believed. We have also explored the role AWS is playing in this transformation, particularly how the services from the AWS ML stack are making it easy for healthcare and life sciences customers to innovate at scale. From SageMaker, which lets you build, train, deploy, monitor, and operationalize ML models, to Comprehend Medical, which allows you to extract meaningful information from unstructured clinical records using pretrained models, the services cater to both experienced power users (such as research scientists) and people who are not very familiar with ML and are just getting started. The depth and breadth of these services make them applicable to all stages of the ML life cycle and help democratize the use of AI in healthcare. We have also acknowledged that there are challenges to tackle; addressing them is not easy, but it can be done with the right knowledge and capabilities.

The question now is, where do we go from here? What will the next decade hold for AI in healthcare? What new capabilities will ML provide to our healthcare organizations and what new breakthroughs will it enable? Well, no one can see into the future, but there are some trends to look out for. Advancements in research have already led to continually better-performing ML models that beat the previous state of the art (SOTA). Better instances have led to faster training and inference times that are breaking previous performance benchmarks. Thanks to Moore’s law (https://en.wikipedia.org/wiki/Moore%27s_law), we now have supercomputers in the palm of our hands! We are at the cusp of something truly exciting as we face the inevitable: the merging of biology and technology.

In this chapter, we will look at some trends that give us a clue as to where the applications of AI in healthcare and life sciences are headed. We will cover examples of the innovative use of AI in healthcare, along with some of the technological advancements enabling those innovations. Next, we will look at some future applications; consider them early experiments (at the time of writing this book) that have shown promise. We'll look at the following sections:

  • Key factors influencing advancements of AI in healthcare and life sciences
  • Understanding current industry trends in the application of AI for healthcare and life sciences
  • Surveying the future of AI in healthcare
  • Concluding thoughts

Key factors influencing advancements of AI in healthcare and life sciences

While many factors influence advances in AI, the following are some of the key ones having a direct impact on the healthcare and life sciences industry. Let's take a closer look at them.

Availability of multimodal data

One key dependency for any AI algorithm is the availability of, and access to, high-quality labeled datasets. An algorithm that is not exposed to enough real-world data points cannot be expected to predict true values in all scenarios. This affects the generalizability of the models and may make them biased and unfair. However, healthcare data cannot simply be shared openly: it contains the protected health information (PHI) of patients and is bound by multiple privacy and security regulations. Moreover, datasets that represent only a single modality are constrained in the amount of information they can capture or pass to the ML model. For instance, a physician making a decision about a patient's medical diagnosis draws on multiple modalities of data: they may look at the patient's imaging results, read historical test results stored in the EHR system, and talk to the patient in person, picking up cues about their condition from the conversation. All these data points come from different modalities. Similarly, for ML models to better reflect the real world, they need to be exposed to information from different data modalities.

Fortunately, we have seen a positive trend in the availability of real-world datasets in healthcare and life sciences. Multimodal datasets such as MIMIC-III (https://physionet.org/content/mimiciii/1.4/) and MIMIC-CXR (https://physionet.org/content/mimic-cxr/2.0.0/) are great examples of public resources for healthcare. For genomics, The Cancer Genome Atlas (TCGA; http://www.tcgaportal.org/index.html) is paving the way for researchers to get easy access to genetic information for a variety of cancer types, leading to a better understanding of these diseases and helping to find ways to detect them early. There is also government support for these kinds of initiatives. For instance, the Centers for Medicare and Medicaid Services (CMS) makes a wide range of public health data available on its portal, Data.CMS.Gov. The FDA has made datasets for a variety of clinical trials and drug recalls available through the OpenFDA website (https://open.fda.gov/data/downloads/). The Registry of Open Data (https://registry.opendata.aws/) and AWS Data Exchange (ADX; https://aws.amazon.com/data-exchange/) are AWS capabilities that allow consumers to easily access, use, share, and subscribe to real-world datasets.

Active learning with human-in-the-loop pipelines

Another important factor influencing the advancement of AI in healthcare is the practice of using active learning pipelines. Active learning is a technique in which an algorithm learns from human input and applies that learning to future events. A common use of active learning is in labeling new data. A lack of labeled datasets can pose a problem for supervised learning algorithms. Moreover, if the labels are highly specialized, it becomes a limiting factor for scaling out and utilizing crowd-sourced labelers to generate labeled records. For example, you can ask almost anyone to label an image containing common objects such as a book, a table, or a tree. However, for a specialized medical imaging task, such as marking a tumor on a brain MRI, you need highly specialized experts such as surgeons or radiologists.

Active learning pipelines with humans in the loop are helping close this gap. You start with a small subset of labeled records, which is used to train an initial model that learns the labels from that subset. The model's output on new data is then sent to a group of physicians, who validate it and can override the model-generated label if it is not accurate. As new data becomes available, the process continuously samples from the physician-generated labels to train new versions of the model. Over time, the model learns from the ground-truth labels provided by the physicians and becomes able to automatically label new records. You can automate this entire pipeline on AWS using SageMaker Ground Truth (https://docs.aws.amazon.com/sagemaker/latest/dg/sms.html).
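To make this concrete, here is a minimal sketch of such a loop using scikit-learn and uncertainty sampling. It illustrates the pattern rather than the Ground Truth service itself; the synthetic data and the simulate_physician_review function are hypothetical stand-ins for clinical records and the human review step:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Hypothetical stand-ins: synthetic data plays the role of clinical records
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
labeled_idx = list(range(50))            # small seed set of labeled records
pool_idx = list(range(50, 2000))         # unlabeled pool

def simulate_physician_review(indices):
    """Stand-in for physicians validating/overriding model-suggested labels."""
    return y[indices]                    # here, humans simply supply ground truth

model = LogisticRegression(max_iter=1000)
for round_num in range(5):
    model.fit(X[labeled_idx], y[labeled_idx])
    # Uncertainty sampling: pick pool records the model is least confident about
    confidence = model.predict_proba(X[pool_idx]).max(axis=1)
    most_uncertain = np.argsort(confidence)[:100]
    batch = [pool_idx[i] for i in most_uncertain]
    y[batch] = simulate_physician_review(batch)   # human-validated labels
    labeled_idx.extend(batch)
    batch_set = set(batch)
    pool_idx = [i for i in pool_idx if i not in batch_set]
    print(f"round {round_num}: labeled records = {len(labeled_idx)}")
```

Each round, the least confident predictions are routed to human reviewers, and the model retrains on the enlarged set of ground-truth labels.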

Another key reason for using human-in-the-loop capability in ML workflows for healthcare is the concept of trust and governance. People feel more comfortable when they know that a physician is validating the output of AI models and that their healthcare decisions are not being made entirely by an AI algorithm. It makes AI more acceptable and changes the public perception of it. SageMaker Augmented AI (A2I; https://docs.aws.amazon.com/sagemaker/latest/dg/a2i-use-augmented-ai-a2i-human-review-loops.html) allows you to easily add a human-in-the-loop step to any ML workflow so that humans (in this case, medical professionals) can validate and certify the output of the models.
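As a rough sketch of what triggering a review might look like in code, the following uses the A2I runtime API to route a low-confidence prediction to a human loop. The flow definition ARN, confidence threshold, and prediction payload are placeholders, and the sketch assumes a flow definition and worker task template have already been created:

```python
import json
import uuid
import boto3

a2i = boto3.client("sagemaker-a2i-runtime")

# Hypothetical model output; the ARN below is a placeholder for a flow
# definition you would create ahead of time in SageMaker
prediction = {"diagnosis": "pneumonia", "confidence": 0.62}

if prediction["confidence"] < 0.80:   # route uncertain cases to human review
    response = a2i.start_human_loop(
        HumanLoopName=f"clinical-review-{uuid.uuid4()}",
        FlowDefinitionArn=(
            "arn:aws:sagemaker:us-east-1:123456789012:"
            "flow-definition/clinical-review"
        ),
        HumanLoopInput={"InputContent": json.dumps(prediction)},
    )
    print(response["HumanLoopArn"])
```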

Democratization with no-code AI tools

The AI technology landscape is undergoing a transformation similar to the one that BI and data warehousing technology went through in the 2000s. Early data warehousing projects relied on complex extract, transform, and load (ETL) jobs to pull data from multiple source systems. SQL was used heavily to process and cleanse the data, with pages and pages of logic embedded in stored procedures. Creating the data warehouse schema involved months of data modeling effort. Querying the warehouse was done with analytical SQL that had to slice and dice facts and dimensions in a performant way; in fact, performance tuning was a key aspect of any analytical application. In today's world, much of this is automated. Discovering source system schemas, pulling data from them, cleansing the data, and visualizing and slicing and dicing it can all be done with a few clicks, or at most a few lines of code. BI projects that used to take years to deploy to production can now potentially be completed in a few months, thanks to modern data transformation and BI tools.

We are seeing a similar trend in AI. Deep learning networks that used to take multiple pages of code to implement can now be built with just a few lines, thanks to AutoML tools. These tools get better every year and make it easier for data scientists to quickly try out multiple modeling approaches. A great example is AutoGluon (https://auto.gluon.ai/), which lets you create a highly accurate ML model with just a few lines of code. We are also seeing exciting innovation in the no-code ML tooling space. These tools provide an easy-to-understand user interface that automates all stages of the ML pipeline, making it easy for anyone to try. For instance, SageMaker Canvas (https://docs.amazonaws.cn/en_us/sagemaker/latest/dg/canvas-getting-started.html) allows you to create ML models for classification, regression, and time series forecasting problems without writing a single line of code, using an intuitive visual interface. We are still early in the journey of no-code ML tools, but they look set to follow the trajectory that BI tooling followed in the early 2000s. Just as no one needs to write multiple pages of SQL to get meaningful analytics from a data warehouse, we may soon not need to write multiple pages of Python to create a highly accurate deep learning model. So, in a way, Python is becoming the new SQL!
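As an illustration of just how few lines are needed, here is a minimal AutoGluon example; the CSV files and the "readmitted" label column are hypothetical placeholders for your own tabular dataset:

```python
from autogluon.tabular import TabularDataset, TabularPredictor

# Hypothetical files: any tabular dataset with a label column works here
train_data = TabularDataset("train.csv")
test_data = TabularDataset("test.csv")

# One call trains and ensembles a suite of models automatically
predictor = TabularPredictor(label="readmitted").fit(train_data)

print(predictor.evaluate(test_data))      # accuracy and related metrics
print(predictor.leaderboard(test_data))   # per-model comparison
```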

Better-performing infrastructure and models

As the availability of data and AI tools increases, we also need improved infrastructure: enough storage and compute to process and train on that data, and enough capacity to handle the greater number of experiments launched as a result. Deep learning models that work on large volumes of data stored in formats such as free text and images can consist of billions of parameters that need to be held in memory during computation. They also need high-performing GPUs and high-throughput storage. Organizations working with these large models and datasets also want to avoid managing a large infrastructure footprint, which can be complex to scale. It is not their core competency, and it takes time away from researchers, who want to concentrate on building better models to solve their research problems.

The advancements made in cloud computing technology allow us to close this gap. Organizations can get access to the best-in-class compute for a variety of workloads that may be CPU-, GPU-, or memory-bound. They can be put together in a cluster or can be containerized to create highly scalable compute environments. These compute environments are supported by multiple high-throughput storage options, making it easy for researchers to customize the environment for any ML task. Moreover, these high-end infrastructure components are available to anyone without any upfront investment. You can utilize as much or as little of the infrastructure as needed and pay for only the capacity you use, making it easy for you to optimize cost and utilization. In addition to this, Amazon SageMaker provides data parallel and model parallel libraries to train large models easily using distributed training. To learn more about these libraries, refer to https://docs.aws.amazon.com/sagemaker/latest/dg/distributed-training.html.
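As a rough sketch of what this looks like in practice, the following shows a SageMaker PyTorch estimator with the data parallel library enabled via a single distribution flag; the script name, role ARN, S3 path, framework versions, and instance settings are placeholders for your own environment:

```python
from sagemaker.pytorch import PyTorch

estimator = PyTorch(
    entry_point="train.py",                    # your training script (placeholder)
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder role
    instance_count=2,                          # data parallelism across instances
    instance_type="ml.p4d.24xlarge",           # GPU instance supported by the library
    framework_version="1.12",
    py_version="py38",
    # One flag enables the SageMaker distributed data parallel library
    distribution={"smdistributed": {"dataparallel": {"enabled": True}}},
)
estimator.fit({"training": "s3://my-bucket/training-data"})  # placeholder S3 path
```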

Better focus on responsible AI

Responsible AI refers to the steps organizations take to design safe, secure, and fair AI technology, with good intentions, that creates a positive impact on society. We all understand that AI has immense potential but can also do a lot of harm when not applied correctly. Because the technology is new, agencies have imposed few regulations to govern and monitor the fair use of AI in critical fields such as defense and healthcare. Big technology firms such as Amazon, Google, Facebook, and Microsoft have all been vocal about their responsible AI initiatives. They closely monitor their data collection and usage policies and have proactively maintained transparency in those practices. This has led to more confidence from the public, who are willing to put their trust in these organizations in return for the immense convenience their services provide. Government organizations are also beginning to create regulatory policies around responsible AI. A good example is the US Department of Defense (DoD), whose Defense Innovation Unit (DIU) launched a strategic initiative in March 2020 to implement the DoD's Ethical Principles for Artificial Intelligence in its commercial prototyping and acquisition programs. You can find the details of these guidelines here: https://www.diu.mil/responsible-ai-guidelines.

The healthcare and life sciences domain is no exception to this increasingly responsible approach to AI. For instance, the FDA issued an action plan titled Artificial Intelligence and Machine Learning in Software as a Medical Device, which outlines five actions the agency plans to take to ensure that the safety and effectiveness of software as a medical device (SaMD) is maintained. Moreover, countries across the globe have their own country-specific regulations for ensuring the fair and responsible use of AI technology. All this has led to more healthcare organizations than ever proactively implementing responsible AI practices in their mission-critical healthcare workloads.

Now that we have an understanding of some of the factors influencing more healthcare and life sciences organizations to implement AI in their workloads, let us look at some industry trends to watch out for.

Understanding current industry trends in the application of AI for healthcare and life sciences

AI is now more present than ever. As we saw in the previous section, the availability of better tools, easier access to technology, and the ability for everyone to try it out are some of the key factors influencing advancements in AI/ML in healthcare and life sciences. Let us now look at some trends that show how AI applications are transforming healthcare and life sciences.

Curing incurable diseases such as cancer

The traditional method of mass-producing generic medications has existed for a while now. These medications have the same formulary, and they have been around for years, treating the same clinical conditions in the same way. One or two of these so-called blockbuster drugs can earn a pharmaceutical company billions of dollars in profits. Blockbuster drugs are produced using the same methods, in large quantities, and are expected to have the same effect on every patient. However, research has shown that variations in individuals and their environment play a critical role in the way they respond to medications. This is especially true for diseases such as cancer. As a result, we are now seeing a trend toward more personalized therapeutics designed for smaller groups of individuals. These therapeutics are developed in smaller batches and are more targeted, for example, focusing on a protein known to cause or spread a particular type of cancer. The design of these therapies makes heavy use of AI to engineer proteins or compounds that react with a particular target and produce the desired effect. AI allows a lot of these simulations to be carried out in silico, which also improves the chances of these drugs succeeding in clinical trials, reducing the risk on a pharmaceutical organization's upfront investments.

New and innovative techniques in therapeutics development have led to methods such as CAR T-cell therapies, which use specially engineered human cells to deliver therapies to targeted sites. Better lab technology has led to improved equipment, such as sequencers that can sequence genes faster and in much more detail than before. Better software and hardware have allowed us to process the extensive amounts of data generated by these processes and draw conclusions from them. Advancements in protein engineering and molecular structure prediction have given researchers more information about how proteins behave in the last few years than ever before. These trends have the potential to cure previously incurable diseases and maybe even create a world free of all diseases.

Telehealth and remote care

Healthcare is more accessible than ever. Thanks to advances in technology, you can talk to your physician from anywhere in the world, with no need to spend time waiting in long lines to talk to a nurse or book an appointment. Everything is available via an app on your mobile phone or from your computer at the click of a button. This delivery of care services via the internet and telecommunications technology is collectively known as telehealth.

Technology allows healthcare companies to provide remote consultations, order prescriptions or tests, and even collect samples via mail without the patient ever setting foot in a care facility such as a clinic or hospital. This trend extends into clinical trials as well. Trial participants can now be remote and distributed across different facilities instead of traveling to the particular site where a trial is being conducted. This concept of decentralized trials allows for better participation rates among the patients who enroll voluntarily. Being able to remotely access your healthcare records, with your physician or nurse available on demand, provides better access to care and promotes health equity, especially in regions of the world where access to healthcare facilities or providers is difficult. Telehealth digitizes your healthcare visits and associates them with your clinical history, removing the need for manual maintenance of records, which can be error prone and inefficient. Moreover, digitizing patient visits generates a treasure trove of information that can be analyzed using ML algorithms to produce insights about your overall health and well-being. It also allows healthcare providers to better monitor you, making sure you are following recommended actions and taking your medications regularly. This trend of increased telehealth use and remote patient monitoring is improving the patient experience and providing multiple new ways for AI to make an impact on healthcare delivery.

Using the Internet of Things (IoT) and robotics

Another trend that is prevalent in healthcare AI is the increased usage of connected devices. These devices can connect to a network and also have the processing power to run ML inference: for instance, an MRI machine that performs image segmentation to highlight a tumor on a patient's brain scan in real time, directly on the machine, or a glucometer that takes regular blood sugar readings from a patient and can detect anomalies that may indicate increased diabetes risk. ML models can be deployed directly on these devices and make real-time inferences on data gathered from them. This on-device deployment of ML models is known as edge deployment, and it improves inference latency: since the model is available on the device, it doesn't need to reach out to an external service to generate predictions. Edge deployments also tolerate the poor connectivity of medical devices that may not be able to maintain a regular network connection. In addition to edge inference, the devices are usually backed by a backend data platform that aggregates the readings collected from many such connected devices over time and performs further aggregated analysis using ML.
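As a toy illustration of on-device inference, the following self-contained sketch flags anomalous glucose readings locally with a rolling z-score, with no call to an external service. The window size, threshold, and readings are invented for illustration and are not clinical guidance:

```python
# Minimal edge-inference sketch: flag anomalous glucose readings on-device
from collections import deque
from statistics import mean, stdev

window = deque(maxlen=20)   # recent readings kept in device memory

def check_reading(mg_dl: float) -> bool:
    """Return True if the new reading looks anomalous vs. recent history."""
    anomalous = False
    if len(window) >= 5:
        mu, sigma = mean(window), stdev(window)
        # Inference happens locally: no network round trip required
        if sigma > 0 and abs(mg_dl - mu) / sigma > 3.0:
            anomalous = True
    window.append(mg_dl)
    return anomalous

# Simulated sensor stream (illustrative values only)
for reading in [98, 102, 99, 101, 100, 97, 103, 100, 180]:
    if check_reading(reading):
        print(f"Anomaly detected: {reading} mg/dL")
```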

The trend of edge deployments and working with real-time information has led to multiple innovations in robotics. For example, in healthcare, we have seen the increasing use of robotic surgery, which involves the complex use of sensors mounted on robotic arms that can carry out a procedure with utmost accuracy and precision. Sometimes, these procedures are so advanced that only a handful of surgeons across the world can perform them. Robotic surgery can help reduce the dependency on these specialized groups of surgeons and also help train new surgeons in those skills.

Simulations using digital twins

One of the ways in which IoT and robotics are being applied is in simulating actual products or processes. For example, IoT devices mounted with sensors are being used to generate computer simulations of large machinery such as wind turbines. Using these kinds of digital simulations, engineers can understand how environmental factors affect the performance of the machinery and alter the design digitally until an optimal design is achieved. This sort of design alteration would be extremely hard to accomplish with actual physical systems, which could take months or years to manufacture and test. The idea also extends to processes: for instance, computer-generated simulations can tell you how to alter your disaster management processes in the event of a failure. This simulation of a process or a physical object using computer algorithms, informed by real-world data, is known as a digital twin.

There are multiple known applications of digital twins in the healthcare industry, and the number continues to grow every year. In life sciences, simulated digital twins of compounds are used in therapeutics design to determine the composition of a therapy based on how it interacts with its target. Simulations of infection have been used to predict how an infectious disease would spread in a population and to determine how to slow its spread. Disease progression models can simulate how a disease would affect an individual from the time it is detected to the time it becomes lethal. Because digital twins are simulated, healthcare and life sciences organizations can safely run such experiments without putting patients at risk.
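As a toy example of the infection-spread simulations mentioned above, the following sketch runs a classic SIR (susceptible-infected-recovered) model; the population size and the beta/gamma rate parameters are invented for illustration:

```python
# Toy digital-twin-style simulation: SIR infectious disease model
def simulate_sir(population=10_000, infected=10, beta=0.3, gamma=0.1, days=120):
    s, i, r = population - infected, infected, 0
    history = []
    for day in range(days):
        new_infections = beta * s * i / population   # contacts between S and I
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((day, s, i, r))
    return history

# Lowering beta simulates an intervention (e.g., distancing) in the "twin"
peak = max(simulate_sir(beta=0.3), key=lambda row: row[2])
print(f"Peak infections on day {peak[0]}: {peak[2]:.0f}")
```

Running the same simulation with different parameters lets you test interventions digitally, which is exactly the kind of experiment that would be risky or impossible to run on a real population.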

All data points to an increasing trend in the adoption of AI in healthcare and life sciences. Technology and biology are merging and creating new opportunities for improving care quality across the board while finding new pathways that could lead to a world free of disease. Let’s now look at the future of AI in healthcare and life sciences by summarizing some nascent areas of research for healthcare AI.

Surveying the future of AI in healthcare

Researchers are continuously pushing the boundaries of what can be achieved in healthcare and life sciences organizations with the help of AI. Some of these areas are new and experimental, while others have shown promise and are in various stages of prototyping. The following sections detail some of the new and upcoming trends in the future of AI in healthcare and life sciences.

Reinforcement learning

Reinforcement learning is an ML technique in which an algorithm learns the right sequence of decisions for a problem through trial and error. An agent takes actions, and the environment evaluates each action according to certain rules: correct decisions are rewarded, while incorrect ones are penalized. The overall goal of the agent is to maximize its cumulative reward. The point to note here is that, unlike supervised learning, where a set of correct and incorrect outputs is fed to the algorithm during training, reinforcement learning doesn't provide labeled data points to begin with. The agent starts with completely random decisions and learns to narrow in on the correct ones with each reward or penalty it receives over many runs. This makes reinforcement learning great for use cases where a sequence of decisions leads to the desired outcome and no labeled training data is available.
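A minimal tabular Q-learning sketch makes this trial-and-error loop concrete. The toy environment below (a five-state corridor in which the agent is rewarded only for reaching the rightmost state) and all of the learning rates are invented for illustration:

```python
# Minimal tabular Q-learning on a toy 5-state corridor: the agent starts at
# state 0 and receives a reward only for reaching state 4
import random

n_states, actions = 5, [0, 1]                  # action 0 = left, 1 = right
Q = [[0.0, 0.0] for _ in range(n_states)]      # value estimates, start at zero
alpha, gamma, epsilon = 0.5, 0.9, 0.3          # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state != n_states - 1:
        # Explore randomly sometimes, otherwise exploit current estimates
        if random.random() < epsilon:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: Q[state][a])
        next_state = max(0, min(n_states - 1, state + (1 if action else -1)))
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # Update the estimate toward reward plus discounted future value
        Q[state][action] += alpha * (
            reward + gamma * max(Q[next_state]) - Q[state][action]
        )
        state = next_state

# Value estimates increase toward the goal (the terminal state stays 0)
print([round(max(q), 2) for q in Q])
```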

Applications of reinforcement learning in healthcare, while still early, have a lot of potential. For example, since reinforcement learning algorithms work on sequential decision-making, they could be applied to a patient's long-term care plan, which can be modified as new information about the disease or the patient's condition becomes available. Sequential decision-making also extends to progressive diseases, such as cancer, where a reinforcement learning algorithm can project the likely course of progression based on data that becomes available as the disease spreads. These initial applications have shown a lot of promise in research and may become more mainstream in the future.

Federated learning

Federated learning is a collaborative ML technique that allows you to train a model across multiple entities, each holding its own sample of the training data. Unlike traditional ML approaches that expect you to centralize all training data in one location, federated learning keeps the data decentralized on each client, with no need to share it or move it to a central server. The process trains a local model on each client using the locally available data and then shares only the weights of the trained model with the central server. The server aggregates the weights, computes overall metrics for the aggregated model, and sends a new version of the model back to each client for further training. This process continues until a desired stopping condition is satisfied.
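The following NumPy sketch illustrates this loop with federated averaging (FedAvg) over three simulated clients. The linear model, client data, and learning rates are invented for illustration; the key point is that only weights, never raw data, travel to the server:

```python
# Minimal federated averaging (FedAvg) sketch: each "client" fits a linear
# model locally, and only model weights are shared with the server
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def make_client_data(n=200):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

clients = [make_client_data() for _ in range(3)]   # data never leaves a client
global_w = np.zeros(2)

for round_num in range(10):
    local_weights = []
    for X, y in clients:
        w = global_w.copy()
        for _ in range(20):                        # a few local gradient steps
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= 0.05 * grad
        local_weights.append(w)                    # only weights are shared
    global_w = np.mean(local_weights, axis=0)      # server aggregates

print(global_w)   # converges toward [2, -1] without centralizing any data
```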

Federated learning has the potential to increase sharing and collaboration among researchers working on common ML problems. In healthcare, open sharing of sensitive information is not possible. Hence, researchers can use federated learning to work collaboratively with other researchers without the need to share any sensitive information. Moreover, models trained on healthcare data from only one hospital have reduced generalizability. By using federated learning, the models can be exposed to datasets across multiple hospitals, giving them exposure to new patterns and signals that they can learn from. This improves the overall generalizability of the models.

Virtual reality

Virtual reality (VR) is a simulated environment generated using computers. It allows users to experience a virtual world, which can mimic the real world or be completely different from it. Users immerse themselves in the virtual world using gadgets such as a VR headset, which presents the virtual world in three dimensions with a 360-degree view. Users may also use sensors and controllers, which give them the ability to interact with objects in the virtual world.

One of the most popular applications of VR is in gaming. The global VR gaming market was valued at $11.56 billion in 2019 and is expected to grow at a compound annual rate of about 30% from 2020 to 2027.

While the gaming industry leads in the adoption of VR, it has valid applications in other industries as well. For example, the media and entertainment industry uses VR to create immersive 360-degree videos for consumers; the education industry can create VR classrooms where students in remote learning situations attend classes and feel more connected with their teachers and peers; and the real estate industry uses VR to create virtual tours of properties that clients can experience from the comfort of their homes. The healthcare industry has also seen interesting applications of VR. For instance, VR assistants can answer medical questions for patients with common problems before routing those with serious conditions to medical professionals, and medical interns can use VR to learn how to conduct complex procedures in a simulated environment that is safe but still close to the real thing. The fitness industry is increasingly adopting VR to let users work out from the comfort of their homes by recreating real-life gymnasium experiences in the virtual world; individuals who hesitated to travel to the gym for lack of time are now more willing to work out, improving their overall health and well-being. There are countless other applications of VR in healthcare, and technology companies are creating more on a regular basis. The next decade is sure to see more ways in which patients experience healthcare virtually. All of this drives the increasing use of ML, which is the engine behind these virtual environments, bringing them ever closer to the real world.

Blockchain

Blockchain technology allows for the decentralized storage of data that cannot be owned by a single entity. Data in a blockchain is updated via transactions that cannot be modified once recorded. This creates a decentralized public ledger of transactions that is transparent and cannot be altered. The ledger is maintained across a network of computers and chains together blocks of information (which is where the name blockchain comes from). Each block contains information about the previous block it is connected to, which keeps the chain traceable and prevents it from being modified. Blockchain has multiple applications in areas where transactional information needs to be maintained for extended periods of time; a common example is the cryptocurrency space. The immutable nature of the blockchain ledger makes it desirable for maintaining healthcare transactional information. For instance, patient history records can be kept in a blockchain to preserve the auditability and traceability of those records, and the tamper-resistant nature of blockchain transactions makes the data much harder for attackers to alter. Another common use of blockchain in healthcare is in the supply chain space. Healthcare facilities require a lot of consumables, such as bandages, reagents, and medicines, and demand for these consumables varies over time. Maintaining supply chain information at each stage in a blockchain brings transparency to supply chain processes and removes inefficiencies.
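A toy hash chain illustrates why such a ledger resists modification: each block stores a hash of the previous block, so altering any historical record breaks every link after it. The patient records below are invented for illustration:

```python
# Toy illustration of how blocks chain together via hashes
import hashlib
import json

def make_block(data, prev_hash):
    """Create a block whose hash covers its payload and its parent's hash."""
    payload = {"data": data, "prev_hash": prev_hash}
    block_hash = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    return {**payload, "hash": block_hash}

def verify(chain):
    """The chain is valid only if every block points at its parent's hash."""
    return all(curr["prev_hash"] == prev["hash"]
               for prev, curr in zip(chain, chain[1:]))

# Invented records, purely for illustration
chain = [make_block("genesis", "0" * 64)]
chain.append(make_block("patient A: visit 2023-01-05", chain[-1]["hash"]))
chain.append(make_block("patient A: lab result", chain[-1]["hash"]))

print(verify(chain))   # True

# Rewriting history changes that block's hash, breaking every later link
chain[1] = make_block("patient A: altered record", chain[1]["prev_hash"])
print(verify(chain))   # False
```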

Once data is stored in the blockchain ledger, it is validated and unalterable, which makes it attractive for training ML models. Blockchain data tends to be less noisy and more complete, with fewer chances of missing records. This helps ML models become more accurate and closer to the real world. While the application of ML to blockchain data is quite new, prototypes have shown enough promise to justify further exploration in the future.

Quantum computing

Quantum computing is a computing technique that harnesses the properties of quantum mechanics to perform computational operations. Quantum computers, the machines that perform these operations, use quantum bits, or qubits. A qubit can be in a superposition of the values 0 and 1 at the same time, which makes it quite different from the bit in classical computing, which is strictly binary (0 or 1). While there are different models of quantum computation, the most common is the quantum circuit, a sequence of steps (such as initializing qubits and applying gates) that make up a quantum computing operation. Quantum computers still lag classical computers on typical computing tasks, but they have shown encouraging results for certain operations involving complex correlations between input parameters, where they have been shown to outperform classical machines.
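For example, the Amazon Braket SDK lets you express a quantum circuit in a few lines of Python and run it on a local simulator; the two-qubit Bell-pair circuit below is a standard introductory example and requires no quantum hardware:

```python
# A minimal quantum circuit with the Amazon Braket SDK: a two-qubit Bell pair
from braket.circuits import Circuit
from braket.devices import LocalSimulator

circuit = Circuit().h(0).cnot(0, 1)   # superposition on qubit 0, then entangle
device = LocalSimulator()              # runs locally, no quantum hardware needed
result = device.run(circuit, shots=1000).result()
print(result.measurement_counts)       # roughly half '00' and half '11'
```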

ML can take advantage of quantum computing to accelerate certain tasks in an ML pipeline. For example, quantum embeddings (instead of traditional embeddings) can be used to train quantum neural networks, which may have advantages over traditional neural networks trained on classical embeddings. Moreover, applying quantum computing to problems in healthcare and life sciences that require large amounts of data processing involving correlated search parameters could offer an advantage over traditional high-performance computing techniques. One potential application in life sciences is the early stage of drug discovery, where billions of compounds need to be evaluated for desirable properties. In the future, when quantum computing is more accessible and available to researchers and developers, we will certainly see more applications of its unique capabilities in applied AI/ML for healthcare and life sciences problems.

While no one has seen the future, these are certainly new areas of innovation that will help define the AI landscape for healthcare and life sciences. I will surely be keeping a close eye on them to see how they evolve.

Concluding thoughts

The US spent a record 20% of its GDP on healthcare in 2020. While the costs are high, it is also promising to note that a large portion of that budget has been used for the modernization of the healthcare and life sciences technology landscape. Overhauling any existing domain with technology does require some upfront investment and time, but the long-term return on these investments is expected to overshadow the short-term costs. For example, investments like these have contributed to a decrease in cancer deaths in the US. A report published by the American Association for Cancer Research (https://cancerprogressreport.aacr.org/wp-content/uploads/sites/2/2022/09/AACR_CPR_2022.pdf) found that deaths from cancer decreased by 2.3% every year between 2016 and 2019 and are on a downward trend. These statistics make me more optimistic that technology, specifically AI and ML, can have a meaningful impact on our health and wellness. Easier access to technology, and the proof that these technologies have a direct impact on improving our health and well-being, are going to lead to transformational change in the way healthcare is conceived. From the discovery of new therapies and drugs to their delivery to patients and the tracking of their effects on patients' lives, the entire value chain of healthcare and life sciences is connected and available for us to analyze through a single pane of glass. Biotech organizations that utilize advanced analytics and ML can draw insights from this data that no one thought were possible. It is no wonder that, according to a publication in Nature (https://www.nature.com/articles/nbt.3491), US biotech sector revenue is estimated to have grown on average more than 10% each year over the past decade, much faster than the rest of the economy. This makes biotech one of the fastest-growing segments of the US economy, contributing $300-400 billion annually. While a lot of this may sound like a dream, it is not. It is real. We are already seeing returns on these investments, and the future is going to provide many more examples.

Summary

In this chapter, we gained an understanding of some of the factors directly responsible for the increasing use of AI in the healthcare and life sciences industry. We also looked at some common trends that AI is influencing, from treating previously incurable diseases to the use of IoT, robotics, and digital twins. Finally, we summarized a few topics to watch out for. These topics are new, with limited real-world evidence of their successes, but they have a lot of potential for impact.

This chapter concludes our journey. The guidance in this book summarizes the years of learning that I have personally gone through and continue to experience. Through the chapters in this book, I have tried to summarize the use of AI/ML in healthcare and life sciences in a structured and accessible manner. The practical, applied ML implementation examples using AWS services should help articulate the role AWS is playing in making AI more accessible to healthcare and life sciences organizations. I am fortunate to have a front-row seat on this innovation journey and I am in awe. I hope I have passed this feeling on to you. With the knowledge you now have, I encourage you to continue on this learning path and apply your creativity and technical abilities to create new applications for healthcare and life sciences that utilize AI/ML. This is just the beginning.
