© Tom Taulli 2019
Tom Taulli, Artificial Intelligence Basics, https://doi.org/10.1007/978-1-4842-5028-0_9

9. The Future of AI

The Pros and Cons
Tom Taulli, Monrovia, CA, USA
 

At the Web Summit conference in late 2017, the legendary physicist Stephen Hawking offered his opinion about the future of AI. On the one hand, he was hopeful that the technology could outpace human intelligence, which could mean that many horrible diseases would be cured and that there might be ways to deal with environmental problems, including climate change.

But there was a dark side as well. Hawking talked about how the technology had the potential to be the “worst event in the history of our civilization.”1 Just some of the potential problems include mass unemployment and even killer robots. Because of this, he urged finding ways to control AI.

Hawking’s ideas are certainly not on the fringe. Prominent tech entrepreneurs like Elon Musk and Bill Gates also have expressed deep worries about AI.

Yet there are many who are decidedly optimistic, if not exuberant. Masayoshi Son, who is the CEO of SoftBank and the manager of the $100 billion Vision venture fund, is one of them. In an interview with CNBC, he proclaimed that within 30 years, we’ll have flying cars, people will be living much longer, and we’ll have cured many diseases.2 He also noted that the main focus of his fund is on AI.

OK then, who is right? Will the future be dystopian or utopian? Or will it be somewhere in the middle? Well, predicting new technologies is exceedingly difficult, almost impossible. Here are some examples of forecasts that have been wide of the mark:
  • Thomas Edison declared that AC (alternating current) would fail.3

  • In his book The Road Ahead (published in late 1995), Bill Gates gave scant attention to the Internet.

  • In 2007, Jim Balsillie, the co-CEO of Research in Motion (the creator of the BlackBerry device), said that the iPhone would get little traction.4

  • In the iconic science fiction movie Blade Runner—released in 1982 and set in 2019—many predictions proved wrong, such as phone booths with video phones and androids (or “replicants”) that were nearly indistinguishable from humans.

Despite all this, there is one thing that is certain: In the coming years, we’ll see lots of innovation and change from AI. This seems inevitable, especially since there continue to be huge amounts invested in the industry.

So then, let’s take a look at some of the areas that are likely to have an outsized impact on society.

Autonomous Cars

When it comes to AI, one of the most far-reaching areas is autonomous cars. Interestingly enough, this category is not really new. Yes, it’s been a hallmark of lots of science fiction stories for many decades! But for some time, there have been many real-life examples of innovation, like the following:
  • Stanford Cart: Its development started in the early 1960s, and the original goal was to create a remote-controlled vehicle for moon missions. But the researchers eventually changed their focus and developed a basic autonomous vehicle, which used cameras and AI for navigation. While it was a standout achievement for the era, it was not practical as it required more than 10 minutes to plan for any move!

  • Ernst Dickmanns: A brilliant German aerospace engineer, he turned his attention to converting a Mercedes van into an autonomous vehicle…in the mid-1980s. He wired together cameras, sensors, and computers, and he was also creative with the software, such as by focusing the graphics processing only on important visual details to save on power. By doing all this, he was able to develop a system that could control a car’s steering, gas pedal, and brakes. He tested the Mercedes on a Paris highway—in 1994—and it went over 600 miles, at speeds of up to 81 MPH.5 Nevertheless, the research funding was pulled because it was far from clear whether commercialization could happen in a timely manner. It also did not help that AI was entering another winter.

But the inflection point for autonomous cars came in 2004. The main catalyst was the Iraq War, which was taking a horrible toll on American soldiers. For DARPA, the belief was that autonomous vehicles could be a solution.

But the agency faced many tough challenges. This is why, in 2004, it set up a contest, dubbed the DARPA Grand Challenge, with a $1 million grand prize to encourage wider innovation. The event involved a 150-mile race in the Mojave Desert, and unfortunately, the results were not encouraging: the cars performed miserably, and none of them finished the race!

But this only spurred even more innovation. By the next year, five cars finished the race. Then in 2007, the cars were so advanced that they could take actions like making U-turns and merging into traffic.

Through this process, DARPA helped spur the creation of the key components for autonomous vehicles:
  • Sensors: These include radar and ultrasonic systems that can detect vehicles and other obstacles, such as curbs.

  • Video Cameras: These can detect road signs, traffic lights, and pedestrians.

  • Lidar (Light Detection and Ranging): This device—which is usually at the top of an autonomous car—shoots laser beams to measure the surroundings. The data is then integrated into existing maps.

  • Computer: This helps with the control of the car, including the steering, acceleration, and braking. The system leverages AI to learn but also has built-in rules for avoiding objects, obeying the laws, and so on.

Now when it comes to autonomous cars, there is lots of confusion about what “autonomous” really means. Is it when a car drives itself completely on its own—or must there be a human driver?

To understand the nuances, there are six levels of autonomy, from 0 to 5:
  • Level 0: This is where a human controls all the systems.

  • Level 1: With this, computers control limited functions like cruise control or braking—but only one at a time.

  • Level 2: This type of car can automate two functions.

  • Level 3: This is where the car automates all the safety functions, though the driver must be ready to intervene if something goes wrong.

  • Level 4: The car can generally drive itself. But there are cases in which a human must participate.

  • Level 5: This is the Holy Grail, in which the car is completely autonomous.
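The list above can be captured as a small data structure. Here is an illustrative Python sketch; the names and one-line descriptions are my paraphrase of the levels, loosely following the SAE’s driving-automation taxonomy:

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Levels of driving automation, 0 (none) to 5 (full)."""
    NO_AUTOMATION = 0       # human controls all systems
    DRIVER_ASSISTANCE = 1   # computer controls one function at a time
    PARTIAL = 2             # two functions can be automated
    CONDITIONAL = 3         # safety functions automated; driver may intervene
    HIGH = 4                # car generally drives itself; human sometimes needed
    FULL = 5                # completely autonomous: the Holy Grail

def requires_human_driver(level: AutonomyLevel) -> bool:
    """Only Level 5 eliminates the need for a human behind the wheel."""
    return level < AutonomyLevel.FULL
```

So, for example, `requires_human_driver(AutonomyLevel.CONDITIONAL)` is `True`, while `requires_human_driver(AutonomyLevel.FULL)` is `False`.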

The auto industry is one of the biggest markets, and AI is likely to unleash wrenching changes. Consider that transportation is the second largest household expenditure, behind housing, and twice as large as healthcare. Something else to keep in mind: The typical car is used only about 5% of the time as it is usually parked somewhere.6

In light of the enormous opportunity for improvement, it should be no surprise that the autonomous car industry has seen massive amounts of investment. This has not only been about venture capitalists investing in a myriad of startups but also innovation from traditional automakers like Ford, GM, and BMW.

Then when might we see this industry become mainstream? The estimates vary widely. But according to a study from Allied Market Research, the market is forecast to hit $556.67 billion by 2026, which would represent a compound annual growth rate of 39.47%.7
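As a sanity check on such forecasts, the compound-annual-growth-rate math is straightforward. The sketch below back-solves the implied starting market size; the 2019 base year is an assumption on my part, since the report’s exact base year isn’t stated here:

```python
# Back out the implied base-year market size from a CAGR forecast:
# future_value = base_value * (1 + cagr) ** years
target_2026 = 556.67   # forecast market size, $ billions
cagr = 0.3947          # 39.47% compound annual growth rate
years = 2026 - 2019    # assumed 2019 base year

implied_base = target_2026 / (1 + cagr) ** years
print(f"Implied base-year market: ${implied_base:.1f}B")  # roughly $54B
```

In other words, growing at nearly 40% per year for seven years multiplies the market roughly tenfold.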

But there is still much to work out. “At best, we are still years away from a car that doesn’t require a steering wheel,” said Scott Painter, who is the CEO and founder of Fair. “Cars will still need to be insured, repaired, and maintained, even if you came back from the future in a Delorean and brought the manual for how to make these cars fully autonomous. We make 100 million cars-per-year, of which 16 million-a-year are in the U.S. And, supposing you wanted the whole supply to have these artificial intelligence features, it would still take 20 years until we had more cars on the road including all the different levels of A.I. versus the number of cars that didn’t have those technologies.”8

But there are many other factors to keep in mind. After all, the fact remains that driving is complex, especially in urban and suburban areas. What if a traffic sign is changed or even manipulated? How about if an autonomous car must deal with a dilemma like having to decide to crash into an oncoming car or plunging into a curb, which may have pedestrians? All these are extremely difficult.

Even seemingly simple tasks can be tough to pull off. John Krafcik, the CEO of Waymo (Alphabet’s self-driving unit), points out that parking lots are a prime example.9 They require finding available spots, avoiding other cars and pedestrians (who can be unpredictable), and moving into the space.

But technology is just one of the challenges with autonomous vehicles. Here are some others to consider:
  • Infrastructure: Our cities and towns are built for traditional cars, so mixing in autonomous vehicles will probably create many logistical issues. How does a car anticipate the actions of human drivers? There may be a need to install sensors alongside roads, or another option is to have separate roads for autonomous vehicles. Governments will also probably need to update driver’s education, providing guidance on how to interact with autonomous vehicles on the road.

  • Regulation: This is a big wild card. For the most part, this may be the biggest impediment as governments tend to work slowly and are resistant to change. The United States is also a highly litigious country—which may be another factor that could curb development.

  • Adoption: Autonomous vehicles will probably not be cheap, as systems like Lidar are costly. This will certainly be a limiting factor. But at the same time, there are indications of skepticism from the general public. According to a survey from AAA, about 71% of the respondents said they are afraid of riding in an autonomous vehicle.10

Given all this, the initial phase of autonomous vehicles will probably be for controlled situations, say for trucking, mining, or shuttles. A case of this is Suncor Energy, which uses autonomous trucks for excavating various sites in Canada.

Ride-sharing networks—like Uber and Lyft—may be another starting point. These services are fairly structured and understandable to the public.

Keep in mind that Waymo has been testing a self-driving taxi service in Phoenix (this is similar to a ride-sharing system like Uber, but the cars have autonomous systems). Here’s how a blog post from the company explains it:

We’ll start by giving riders access to our app. They can use it to call our self-driving vehicles 24 hours a day, 7 days a week. They can ride across several cities in the Metro Phoenix area, including Chandler, Tempe, Mesa, and Gilbert. Whether it’s for a fun night out or just to get a break from driving, our riders get the same clean vehicles every time and our Waymo driver with over 10 million miles of experience on public roads. Riders will see price estimates before they accept the trip based on factors like the time and distance to their destination.11

Waymo has found that a key is education because the riders have lots of questions. To deal with this, the company has built in a chat system in the app to contact a support person. The dashboard of the car also has a screen that provides details of the ride.

According to the blog post, “Feedback from riders will continue to be vital every step of the way.”12

US vs. China

The rapid ascent of China has been astonishing. Within a few years, its economy may be larger than that of the United States, and a key part of the growth will be AI. The Chinese government has set forth the ambitious goal of spending $150 billion on the technology through 2030.13 In the meantime, there will continue to be major investments from companies like Baidu, Alibaba, and Tencent.

Even though China is often considered less creative or innovative than Silicon Valley—its companies are often tagged as “copycats”—this perception may prove to be a myth. A study from the Allen Institute for Artificial Intelligence highlights that China is expected to outrank the United States in the most cited technical papers on AI.14

The country has some other advantages, which AI expert and venture capitalist Kai-Fu Lee has pointed out in his provocative book, AI Superpowers: China, Silicon Valley, and the New World Order15:
  • Enthusiasm: Back in the 1950s, Russia’s launch of Sputnik inspired many people in the United States to become engineers for the space program. Something similar has happened in China. When the country’s top Go player, Ke Jie, lost to the AlphaGo AI system, it was a wake-up call—one that has inspired many young people to pursue careers in AI.

  • Data: With a population of over 1.3 billion, China is rich with data (there are more than 700 million Internet users). The country’s authoritarian government is also a factor: privacy is not considered particularly important, which means there is much more leeway when developing AI models. For example, in a paper published in Nature Medicine, Chinese researchers had access to data on 600,000 patients for a healthcare study.16 While still in the early stages, the research showed that an AI model could effectively diagnose childhood conditions like the flu and meningitis.

  • Infrastructure: As a part of the Chinese government’s investment plans, there has been a focus on creating next-generation cities that allow for autonomous cars and other AI systems. There has also been an aggressive rollout of 5G networks.

As for the United States, the government has been much more tentative with AI. President Trump has signed an executive order—called the “American AI Initiative”—to encourage development of the technology, but the terms are vague and it is far from clear how much money will be committed to it.

Technological Unemployment

The concept of technological unemployment, popularized by the famed economist John Maynard Keynes during the Great Depression, describes how innovations can lead to long-term job loss. However, evidence of this has been elusive. Even though automation has severely impacted industries like manufacturing, the workforce has generally transitioned as people adapt.

But could the AI revolution be different? It very well could. For example, California Governor Gavin Newsom fears that his state could see massive unemployment in areas like trucking and warehousing—and soon.17

Here’s another example: Harvest CROO Robotics has built a robot, called Harv, that can pick strawberries and other crops without causing bruises. Granted, it is still in the experimental phase, but the system is quickly improving. The expectation is that one robot will do the work of 30 people.18 And of course, there will be no wages to pay or labor liability exposure.

But AI may mean more than replacing low-skilled jobs. There are already signs that the technology could have a major impact on white-collar professions. Let’s face it, there is even more incentive to automate these jobs because they fetch higher compensation.

Just one category facing AI-driven job loss is the legal field, as a variety of startups—like Lawgood, NexLP, and RAVN ACE—are gunning for the market. The solutions focus on automating areas such as legal research and contract review.19 Even though the systems are far from perfect, they can certainly process much more volume than people—and they also get smarter as they are used more and more.

True, the overall job market is dynamic, and new types of careers will be created. There will also likely be AI innovations that assist employees, making their jobs easier to do. For example, software startup Measure Square has used sophisticated algorithms to convert paper-based floorplans into digitally interactive ones. Because of this, it has been easier to get projects started and completed on time.

However, in light of the potential transformative impact of AI, it does seem reasonable to expect an adverse impact on a broad range of industries. Perhaps a foreshadowing of this is what happened with manufacturing job losses from the 1960s to the 1990s. According to the Pew Research Center, there has been virtually no real wage growth in the last 40 years.20 During this period, the United States has also experienced a widening wealth gap. Berkeley economist Gabriel Zucman estimates that 0.1% of the population controls nearly 20% of the wealth.21

Yet there are actions that can be taken. First of all, governments can look to provide education and transition assistance. With the pace of change in today’s world, there will need to be ongoing renewal of skills for most people. IBM CEO Ginni Rometty has noted that AI will change all jobs within the next 5–10 years. By the way, her company has seen a 30% reduction of headcount in the HR department because of automation.22

Next, some people advocate a universal basic income, which would provide a minimum amount of compensation to everyone. This would certainly soften some of the inequality, but it also has drawbacks. People get real pride and satisfaction from their careers—so what happens to a person’s morale if he or she cannot find a job? The impact could be profound.

Finally, there is even talk of some type of AI tax. This would essentially claw back the large gains from those companies that benefit from the technology. Although, given their power, it probably would be tough to pass this type of legislation.

The Weaponization of AI

The Air Force Research Lab is working on prototypes for something called Skyborg. It’s right out of Star Wars: think of Skyborg as an R2-D2 that serves as an AI wingman for a fighter jet, helping to identify targets and threats.23 The AI may also be able to take control if the pilot is incapacitated or distracted. The Air Force is even looking at using the technology to operate drones.

Cool, huh? Certainly. But there is a major issue: By using AI, might humans ultimately be taken out of the loop when making life-and-death decisions on the battlefield? Could this ultimately lead to more bloodshed? Perhaps the machines will make the wrong decisions—causing even more problems?

Many AI researchers and entrepreneurs are concerned. To this end, more than 2,400 have signed a statement that calls for a ban on so-called killer robots.24

Even the United Nations is exploring some type of ban. But the United States—along with Australia, Israel, the United Kingdom, and Russia—has resisted this move.25 As a result, a true AI arms race may be emerging.

According to a paper from the RAND Corporation, there is even the potential that the technology could lead to nuclear war, say by the year 2040. How? The authors note that AI may make it easier to target submarines and mobile missile systems. According to the report:
  • Nations may be tempted to pursue first-strike capabilities as a means of gaining bargaining leverage over their rivals even if they have no intention of carrying out an attack, researchers say. This undermines strategic stability because even if the state possessing these capabilities has no intention of using them, the adversary cannot be sure of that.26

But in the near term, AI will probably have the most impact on information warfare, which could still be highly destructive. We got a glimpse of this when the Russian government interfered with the 2016 presidential election. The approach was fairly low-tech as it used social media troll farms to disseminate fake news—but the consequences were significant.

But as AI gets more powerful and becomes more affordable, we’ll likely see it supercharge these kinds of campaigns. For example, deepfake systems can easily create life-like photos and videos of people that could be used to quickly spread messages.

Drug Discovery

The advances in drug discovery have been almost miraculous, as we now have cures for such intractable diseases as hepatitis C and have continued to make strides against a myriad of cancers. But of course, much remains to be done. The fact is that drug companies are having more trouble coming up with treatments. Here’s just one example: In March 2019, Biogen announced that one of its drugs for Alzheimer’s, which was in Phase III trials, failed to show meaningful results. On the news, the company’s shares plunged by 29%, wiping out $18 billion of market value.27

Consider that traditional drug development often involves much trial and error, which can be time consuming. Then might there be a better way?

Increasingly, researchers are looking to AI for help. We are seeing a variety of startups spring up that are focusing on the opportunity.

One is Insitro. The company, which got its start in 2019, had little trouble raising a staggering $100 million in its Series A round. Some of the investors included Alexandria Venture Investments, Bezos Expeditions (which is the investment firm of Amazon.com’s Jeff Bezos), Mubadala Investment Company, Two Sigma Ventures, and Verily.

Even though the team is relatively small—with about 30 employees—they are all brilliant researchers who span areas like data science, deep learning, software engineering, bioengineering, and chemistry. The CEO and founder, Daphne Koller, has a rare blend of experience in advanced computer science and the health sciences, having served as chief computing officer at Calico, Alphabet’s research company focused on aging.

As a testament to Insitro’s prowess, the company has already struck a partnership with mega drug operator Gilead. It involves potential payments of over $1 billion for research on nonalcoholic steatohepatitis (NASH), which is a serious liver disease.28 A key is that Gilead has been able to assemble a large amount of data, which can train the models. This will be done using cells outside of a person’s body—that is, with an in vitro system. Gilead has some urgency for looking at alternative approaches since one of its NASH treatments, selonsertib, failed in its clinical trials (it was for those who had the disease in the later stages).

The promise of AI is that it will speed up drug discovery because deep learning should be able to identify complex patterns. But the technology could also turn out to be helpful in developing personalized treatments—such as geared to a person’s genetic make-up—which is likely to be critical for curing certain diseases.

Regardless, it is probably best to temper expectations. There will be major hurdles: the healthcare industry will need to undergo changes, including widespread education about AI. This will take time, and there will likely be resistance.

Next, deep learning is generally a “black box” when it comes to understanding how the algorithms really work. This could prove difficult in getting regulatory approval for new drugs as the FDA focuses on causal relationships.

Finally, the human body is highly sophisticated, and we still are learning about how it works. And besides, as we have seen with innovations like the decoding of the Human Genome, it usually takes considerable time to understand new approaches.

As a sign of the complexities, consider the situation of IBM’s Watson. Even though the company has some of the most talented AI researchers and has spent billions on the technology, it recently announced that it would no longer sell Watson for drug discovery purposes.29

Government

An article from Bloomberg.com in April 2019 caused a big stir. It described a behind-the-scenes look at how Amazon.com manages its Alexa speaker AI system.30 While much of it is based on algorithms, there are also thousands of people who analyze voice clips in order to help make the results better. Often the focus is on dealing with the nuances of slang and regional dialects, which have been difficult for deep learning algorithms.

But of course, it’s natural for people to wonder: Is my smart speaker really listening to me? Are my conversations private?

Amazon.com was quick to point out that it has strict rules and requirements. But even this ginned up more concern! According to the Bloomberg.com post, the AI reviewers would sometimes hear clips that involved potentially criminal activity, such as sexual assault. Yet Amazon apparently has a policy of not interfering.

As AI becomes more pervasive, we’ll see more of these kinds of stories; and for the most part, there will not be clear-cut answers. Some people may ultimately decide not to buy AI products, but this will probably be a small group. Hey, even with the myriad of privacy issues at Facebook, there has not been a decline in user growth.

More likely, governments will start to wade in with AI issues. A group of congresspersons have sponsored a bill, called the Algorithmic Accountability Act, which aims to mandate that companies audit their AI systems (it would be for larger companies, with revenues over $50 million and more than 1 million users).31 The law, if enacted, would be enforced by the Federal Trade Commission.

There are also legislative moves from states and cities. In 2019, New York City passed its own law to require more transparency with AI.32 There are also efforts in Washington state, Illinois, and Massachusetts.

With all this activity, some companies are getting proactive, such as by adopting their own ethics boards. Just look at Microsoft. The company’s ethics board, called Aether (AI and Ethics in Engineering and Research), decided not to allow the use of its facial recognition system for traffic stops in California.33

In the meantime, we may see AI activism as well, in which people organize to protest the use of certain applications. Again, Amazon.com has been a target, with its Rekognition software that uses facial recognition to help law enforcement identify suspects. The ACLU has raised concerns about the accuracy of the system, especially regarding women and minorities. In one of its experiments, it found that Rekognition incorrectly identified 28 members of Congress as having prior criminal records!34 As for Amazon.com, it has disputed the claims.

Rekognition is only one of various AI applications in law enforcement that are stirring controversy. Perhaps the most notable example is COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), which uses analytics to gauge the probability that someone will commit another crime. The system is often used in sentencing. But the big issue is: Might this violate a person’s constitutional right to due process, since there is a real risk that the AI will be incorrect or discriminatory? For now, there are few good answers. But given the importance AI algorithms will play in our justice system, it seems like a good bet that the Supreme Court will be making new law.

AGI (Artificial General Intelligence)

In Chapter 1, we learned about the difference between strong and weak AI. And for the most part, we are in the weak AI phase, in which the technology is used for narrow categories.

As for strong AI, it’s about the ultimate: the ability for a machine to rival a human. This is also known as Artificial General Intelligence or AGI. Achieving this is likely many years away—perhaps not until the next century, if ever.

But of course, there are some brilliant researchers who believe that AGI will come soon. One is Ray Kurzweil, who is an inventor, futurist, bestselling author, and director of Engineering at Google. When it comes to AI, he has left his imprint on the industry, such as with innovations in areas like text-to-speech systems.

Kurzweil believes that AGI will happen—with the Turing Test being cracked—by 2029, and that by 2045 we’ll reach the Singularity. This is where we’ll have a world of hybrid people: part human, part machine.

Kind of crazy? Perhaps so. But Kurzweil does have many high-profile followers.

But there is much heavy lifting to be done to get to AGI. Even with the great strides with deep learning, it still generally requires large amounts of data and significant computing power.

AGI will instead need new approaches, such as the ability to use unsupervised learning. Transfer learning will likely be critical as well. For example, as we’ve covered earlier in the book, AI has achieved superhuman capabilities in playing games like Go. Transfer learning would mean that such a system could leverage this knowledge to play other games or to learn entirely different fields.

In addition, AGI will need the capacity for common sense, abstraction, curiosity, and finding causal relationships, not just correlations. Such abilities have proven extremely difficult for computers. There will also need to be breakthroughs in hardware and chip technologies. This is the opinion of Yann LeCun, one of the world’s top AI researchers and the chief AI scientist at Facebook.35 He also thinks there needs to be much more progress with batteries and other energy sources.

Something else that will be critical: more diversity within the AI field. According to a report from the AI Now Institute, about 80% of AI professors are men; and among the AI research staffs at Facebook and Google, women accounted for 15% and 10%, respectively.36

This lopsidedness means that research could be more susceptible to bias. Furthermore, there will be the loss of the benefit of broader views and insights.

Social Good

The management consulting firm McKinsey & Co. has written an extensive study entitled “Applying Artificial Intelligence for Social Good.”37 It shows how AI is being used to deal with issues such as poverty, natural disasters, and education. The study covers roughly 160 use cases. Here’s a look at just a few:
  • The analysis of social media platforms can help track the outbreak of a disease.

  • A nonprofit, called the Rainforest Connection, uses TensorFlow to create AI models—based on audio data—to locate illegal logging.

  • Researchers have built a neural network that is trained on videos of poachers in Africa. With this, a drone flies over areas to detect violators, such as by using thermal infrared images.

  • AI is being used to analyze data on 55,893 parcels in the city of Flint to find evidence of lead poisoning. The system relies primarily on a Bayesian model, which allows for more sophisticated predictions of toxicity. This means the health workers can more quickly take action if there are any problems in the city, potentially saving lives.
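The details of the Flint system aren’t covered here, but the core Bayesian idea—starting with a prior belief about a group of parcels’ risk and updating it as inspection results arrive—can be sketched with a simple beta-binomial model. All the numbers below (the prior and the inspection counts) are hypothetical, purely for illustration:

```python
# Illustrative beta-binomial update: begin with a prior belief about the
# rate of lead-contaminated parcels in a neighborhood, then refine the
# estimate as inspection results come in. All numbers are hypothetical.
prior_alpha, prior_beta = 2.0, 8.0   # prior mean risk = 2 / (2 + 8) = 20%

inspected, found_lead = 15, 6        # hypothetical inspection results

post_alpha = prior_alpha + found_lead
post_beta = prior_beta + (inspected - found_lead)
posterior_mean = post_alpha / (post_alpha + post_beta)

print(f"Updated risk estimate: {posterior_mean:.0%}")  # 32%
```

The appeal of this kind of model is that it quantifies uncertainty: neighborhoods with few inspections stay close to the prior, while well-inspected ones are driven by their actual data, which helps health workers prioritize where to act.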

Conclusion

This topic is a good place to end this book, I think. Despite all the potential for harm and adverse consequences, AI truly has the promise to be transformative for the world. And the good news is that many people are focused on making this a reality. It’s not about making huge amounts of money or gaining fame. The goal is to change the world.

Key Takeaways

  • Autonomous cars are far from new. But the inflection point for the development of this technology came in 2004, with a contest sponsored by DARPA.

  • Some of the key components of an autonomous car include video cameras, Lidar (lasers that help process the environment), and sensors (such as for detecting other vehicles and obstacles like curbs).

  • In terms of defining what is “autonomous,” there are six levels, from 0 to 5. Level 5 is when the vehicle is fully autonomous.

  • Some of the challenges for autonomous cars are infrastructure (existing highways are not ideal), regulation, costs, and consumer adoption.

  • The United States is considered the global leader in AI. But this could change soon. China is investing heavily in AI and has major advantages like enormous amounts of data and large numbers of skilled engineers.

  • One of the fears about AI is that it will lead to mass unemployment, whether for blue-collar or white-collar jobs. It’s true that technology has already disrupted industries like manufacturing, but the markets have proven adaptable. Still, if AI is as transformative as expected, it could cause quite a bit of disruption. This is why there will likely be a need for training and re-skilling for new careers.

  • Drones have had a major impact on warfare. But with AI, it’s becoming possible to allow this technology to make the decisions on the battlefield. Now there are many people who see this as a big problem. However, the United States, Russia, and other countries appear to be focused on pursuing autonomous weapons.

  • But when it comes to warfare—at least in the near term—AI may have a more immediate effect on the spread of false information. We saw this with Russia’s interference in the 2016 presidential election.

  • AI is expected to greatly help with the drug discovery process. Already mega pharma operators, like Gilead, are exploring the technology. AI can not only process huge amounts of data but also detect patterns that may not be discernable for humans.

  • As AI becomes more pervasive, there will be growing concerns about privacy and transparency. Because of this, there have been moves in the Congress, including cities and states, to impose regulations. It’s not clear what may transpire, but it seems likely we’ll see more restrictions. In the meantime, some companies are trying to be proactive, such as by setting up ethics boards.

  • Artificial General Intelligence or AGI is where a system has human intelligence. We are likely a long way from this, though. The reason is that there will need to be new innovations in AI, such as with unsupervised learning and the creation of new hardware.
