Chapter 21
Patrick Glauner: Everyone Needs to Acquire Some Understanding of What AI Is

Patrick Glauner, professor of AI, Deggendorf Institute of Technology

Source: Shengqin Yang

Patrick Glauner has been a full professor of artificial intelligence at Deggendorf Institute of Technology in Bavaria, Germany, since the age of 30. In parallel, he is the founder and CEO of skyrocket.ai GmbH, an AI consulting firm. He has published three books: Creating Innovation Spaces: Impulses for Start-ups and Established Companies in Global Competition (Springer, 2021), Digitalization in Healthcare: Implementing Innovation and Artificial Intelligence (Springer, 2021), and Innovative Technologies for Market Leadership: Investing in the Future (Springer, 2020). His works on AI have been featured by New Scientist, McKinsey & Company, Imperial College London, Udacity, the Luxembourg National Research Fund, Towards Data Science, and other publications and organizations. He was previously head of Data Academy at Alexander Thamm GmbH, innovation manager for artificial intelligence at Krones Group, a fellow at the European Organization for Nuclear Research (CERN), and a visiting researcher at the University of Quebec in Montreal (UQAM). He graduated as valedictorian from Karlsruhe University of Applied Sciences with a BS in computer science. He subsequently received an MS in machine learning from Imperial College London, an MBA from Quantic School of Business and Technology, and a PhD in computer science from the University of Luxembourg. He is an alumnus of the German National Academic Foundation (Studienstiftung des deutschen Volkes).

Alexander: You are professor of AI at Deggendorf Institute of Technology. What is your mission?

Patrick: AI has started to transform nearly every industry. In my view, however, that is just the tip of the iceberg, as AI has the potential to fundamentally change our economy. At Deggendorf Institute of Technology, we established an undergraduate program in AI[1] in 2019. Our goal is to turn our students into high-caliber AI experts within three and a half years. I teach a number of courses in this program, including Computer Vision, Natural Language Processing, Big Data, and Algorithms and Data Structures. In addition, my courses include real-world projects. For example, in my Computer Vision course, my students use the NVIDIA Jetbot mobile robot platform depicted in Figure 21.1. They implement some of the algorithms discussed in class on this platform, using the robot's camera. For my students it is a very rewarding experience to see their code running on a robot that interacts with its environment, for example, by following objects. Some of my students also do their projects together with industrial partners. As a result, they quickly get the big picture of how to adapt theory to succeed in solving real-world problems.

Figure 21.1 NVIDIA Jetbot

Alexander: You also teach a course on innovation management. How does this topic relate to AI?

Patrick: Various market reports remind us of the sad truth about AI projects in industry: 80 percent fail or do not make it beyond the proof-of-concept stage. There are multiple reasons for these failures. One of them is that most AI courses at universities do not address the real-world challenges that arise when deploying AI. Those challenges are often nontechnical and include change management, business-process management, and organization management. There is an acute need in industry for experts who understand how AI adds value to companies. To bridge this gap, I am teaching a novel course, Innovation Management for Artificial Intelligence. In my course, I share my experience and best practices and how these lead to deployed applications that add real business value. No other university around the globe teaches a similar course. As a professor, my goal is that my innovative course sets my students apart, turning them into future experts in real-world AI applications. Over time, I can thus make a real and lasting difference to the field of AI and its applications in industry. I have started to promote my course, aiming to inspire other professors to adapt it. With our collective efforts, we are training the next generation of worldwide AI leaders.

Alexander: Which technology or digital capabilities are essential for a digital strategy? How should business models evolve to survive and thrive in an increasingly digital world?

Patrick: I find AI particularly essential as it allows us to automate human or manual decision-making. That is quite a contrast to previous phases of industrial automation that aimed only to automate repetitive tasks. To survive and thrive, business models have to become more efficient. In one of the later answers, I describe an AI application that predicted the power usage of special-purpose machines. Previously, it took domain experts days to do those calculations. Using AI, this task now only takes milliseconds and comes with much more accurate results. AI is part of the solution, but we must not abandon human intelligence and blindly apply AI to any problem. AI can thrive only if we use it for the right tasks and lay the necessary foundation for it.

Alexander: How can technology shift the roles and responsibilities of the workforce?

Patrick: AI is already starting to have a significant impact on the job market, both for businesses and workers. Bear in mind also that we have been seeing tremendous changes in the job market ever since the beginning of the industrial revolution some 250 years ago. For example, look back 100 years: most of the jobs from that time no longer exist. Furthermore, those changes are now happening more frequently. As a consequence, employees may need to undergo retraining multiple times in their careers. Another challenge is that not everyone who loses their job to AI will be able to transition into a highly technical career.

Even though those changes are dramatic, we simply cannot stop them. For instance, China has become a world-leading country in AI innovation.[2] Chinese companies are using that advantage to rapidly advance their competitiveness in a large number of industries. If Western companies do not adapt to that reality, they will probably be out of business in the foreseeable future.

Alexander: Which AI use cases would you see as essential to contribute to any organization's digital strategy?

Patrick: I often see people who start from AI as a solution and then look for use cases for it, which tends to create problems that had not existed in the first place. My approach is very different: I rarely think in terms of AI use cases. Rather, I look at business problems and analyze how these could be solved efficiently. I then assess three criteria for a concrete business problem: high costs, long processing time, and uncertainty (meaning that two or more domain experts try to solve the same problem but come to different solutions). If at least one of those criteria is met, AI may be a solution to the problem. However, you should always strive for the simplest solution that provides you with the best outcome. For example, if a business process includes unnecessarily complex dependencies, an AI-based approach may not be very useful for improving its overall performance. Instead, we should first use our human intelligence and rethink the business process. Nonetheless, AI may then still help to further improve various steps of the revised business process.

Alexander: Because so much data is needed to train AI algorithms, how can organizations and companies stay ahead of legal, regulatory, and ethical issues associated with collecting and applying data?

Patrick: Those topics have recently started to gain momentum and keep evolving very rapidly. Organizations should thus send their AI experts to conferences or other events to stay up-to-date. They should also contribute to standardization committees and thus help shape the rules rather than being shaped by others through such initiatives.

Furthermore, organizations should actively reach out to the politicians who regulate technology. For example, look at the General Data Protection Regulation (GDPR) 2016/679, a regulation in EU law on data protection and privacy in the European Union and the European Economic Area. One of the initial goals of the GDPR was to limit the power of international, predominantly U.S.-based, service providers. However, it has turned out that those providers are the ones best able to comply with this overly complex and ambiguous regulatory framework. As a consequence, smaller European service providers have come under even more pressure because of the GDPR. That is why I strongly believe the GDPR has been a disaster and needs to be revised.

When I look at the European Union's current attempts to regulate AI, concerns and fears seem to be at the center of attention. For example, one of those fears concerns so-called black-box models, whose decision-making cannot be fully explained. Oftentimes, however, human decision-making is a black box, too. Even drug discovery is not fully explainable; it relies on statistical analyses to assess outcomes. In my opinion, the European Union would do better to embrace opportunities instead of fears. It should regulate only those application domains of AI where regulation is absolutely necessary, such as medical applications, safety-critical systems, and so on. I doubt that those domains actually require much additional regulation, as plenty of regulations for such systems already exist. Whether those systems include traditional software, AI-based software, or no software at all does not really matter from a regulatory perspective.

Alexander: You have previously headed the corporate AI competence center at Krones Group, the world's leading manufacturer of bottling lines. Tell us more about how AI adds value in mechanical engineering.

Patrick: Most articles on AI in mechanical engineering seem to focus on predictive maintenance. For a lot of market reports and people in industry, predictive maintenance seems to be the only AI use case or perhaps the ultimate objective in mechanical engineering. However, predictive maintenance is just the beginning of AI in this domain. During my time at Krones Group, my team and I built a number of AI applications, including some that predicted the power usage of special-purpose machines entirely from the machine requirements defined by customers. This information could then be provided to customers in the quotes so that they could build the necessary infrastructure in their plants before the machines were built and shipped. I have described this use case and others in detail in my book Innovative Technologies for Market Leadership: Investing in the Future.
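
To make this concrete, here is a minimal, hypothetical sketch of how such a power-usage prediction could be framed as a supervised regression problem. The feature names, data, and model choice are invented for illustration only; this is not the actual Krones application.

```python
# Hypothetical sketch: predicting machine power usage from customer-defined
# requirements, framed as supervised regression (illustrative data and features).
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Invented example features describing machine requirements from past quotes.
data = pd.DataFrame({
    "bottles_per_hour":   [20000, 36000, 50000, 28000, 42000, 60000],
    "bottle_volume_l":    [0.5, 1.0, 1.5, 0.33, 1.0, 0.5],
    "num_filling_valves": [60, 80, 120, 72, 96, 144],
    "power_usage_kw":     [310, 420, 580, 350, 470, 650],  # historical measurements
})

X = data.drop(columns="power_usage_kw")
y = data["power_usage_kw"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print("MAE (kW):", mean_absolute_error(y_test, model.predict(X_test)))

# A prediction for a new quote then takes milliseconds instead of days of manual work.
new_quote = pd.DataFrame({
    "bottles_per_hour": [33000], "bottle_volume_l": [0.75], "num_filling_valves": [90]})
print("Predicted power usage (kW):", model.predict(new_quote))
```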

Alexander: What types of professional roles will we see evolve alongside the development and increasing use of AI across the industries?

Patrick: Any corporate AI strategy needs to align to the corporate digitalization strategy. Who is actually in charge of digitalization in an organization? Most companies have a chief information officer (CIO) — or head of IT — who is in charge of the company's IT department and defines the corporate IT strategy. IT departments are usually very conservative and seem to favor the status quo instead of innovation. That makes sense: the most important duty of an IT department is to provide key services such as Internet access, emails, enterprise resource planning systems, data storage, or phone services as reliably and securely as possible. Focusing on reliability typically comes with a limited user experience, however. For example, users may not be able to install any third-party software on their computers or browse random websites.

In contrast, to take full advantage of modern digitalization technology, your digitalization specialists need more freedom and must be able to try new tools, programming languages, frameworks, or cloud services. Successful corporate digital transformations typically have one thing in common: a chief digital officer (CDO) who runs the company's digital department and does not report to the CIO. CDOs typically report to a board member[3] and thus have the autonomy to run a department that has its own IT infrastructure and that is able to develop new products and services quickly.

Most companies also do not have a central data infrastructure. Instead, data sources are typically spread all over the company in many different locations and formats (for example, spreadsheets, databases, and so on). This makes it challenging for machine learning projects to take full advantage of the company's data. Therefore, one of the CDO's responsibilities is establishing a central and harmonized data storage environment that can be used by subsequent projects aiming to get insights from all the corporate data.

Alexander: How do you personally develop and update your AI skills?

Patrick: I recently read the fourth edition of Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig. It provides an excellent overview of the state of the art of AI, in particular how the field has evolved in the last ten years. However, the state of the art in AI keeps evolving very rapidly, so I also regularly read papers on arXiv.org to stay up-to-date. In addition, I regularly take massive open online courses (MOOCs). For example, I recently took the edX “Quantum Machine Learning”[4] MOOC and loved it!

I actually started learning the foundations of machine learning and AI through MOOCs back in 2011. At that time, I signed up for a number of them, including Andrew Ng's MOOC on machine learning.

Alexander: What skills will managers need to develop to be able to work with AI?

Patrick: I keep telling decision-makers, in particular my customers: “Everyone, from the board members down to the factory workers, needs to acquire some understanding of what AI is and how your business can take advantage of it.” That understanding helps every employee identify tasks that can be automated using AI. For example, a factory worker may recognize an opportunity to improve a process using previously collected data. Supervisors also need some understanding of how AI works in order to assess the worker's proposal. If the assessment is positive, they can forward it to their own supervisor or directly to the organization's AI competence center. The top management of a company likewise needs to be aware of what AI is and how it can improve the company. That understanding helps top management challenge the status quo and allocate resources to innovation accordingly. Training your staff in AI is actually quite cheap, as there are plenty of MOOCs available.

Alexander: What projects would you consider are at the forefront of AI to support the digital transformation? How can deep learning fill the gap?

Patrick: Looking back at the three criteria I mentioned previously, any project that improves a business process in terms of costs, running time, and certainty or repeatability of the outcomes tends to be helpful. I honestly have mixed feelings about deep learning. There is no doubt that deep learning has recently led to major progress in a large number of popularized AI applications, in particular in computer vision and natural language processing. However, I have also noticed a number of exaggerated claims about deep learning in recent years. Those include that deep learning works like the human brain or that deep learning models are generally better than other machine learning models. Both claims are wrong in my opinion. First, deep learning models are, at the very most, loosely inspired by the human brain. Neurons and the neural networks that our brain is made of are much more complex than the contemporary deep learning models used in research. Second, I consider deep learning to be one of many items in a modern machine learning tool set. Deep learning may be helpful for some problems, in particular in computer vision or natural language processing, where large training data sets are available. In contrast, deep learning often tends to be an overly complex choice for many other machine learning problems and may actually lead to worse results. Deep learning models are not generally better than other machine learning models; see Wolpert's no-free-lunch theorem,[5] which proves that there is no one model that works best for every problem. When I start applying machine learning to a problem, I first build a simple baseline model. In supervised learning that is usually a random forest model, which often works surprisingly well. If its outcome is sufficient, I am done, at low cost and with a short training time. If it is not, I then try increasingly complex models, including deep learning.
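
To make that baseline-first workflow concrete, here is a minimal sketch, assuming a scikit-learn setup; the dataset and the required accuracy threshold are placeholders, not values from the interview.

```python
# Baseline-first workflow: evaluate a simple random forest before reaching for
# anything more complex such as deep learning (illustrative sketch).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)  # placeholder dataset

baseline = RandomForestClassifier(n_estimators=200, random_state=0)
score = cross_val_score(baseline, X, y, cv=5, scoring="accuracy").mean()
print(f"Random forest baseline accuracy: {score:.3f}")

# Placeholder business requirement: only escalate to more complex models
# (gradient boosting, neural networks, ...) if the baseline is insufficient.
REQUIRED_ACCURACY = 0.95
if score >= REQUIRED_ACCURACY:
    print("Baseline is good enough; stop here at low cost.")
else:
    print("Baseline insufficient; try increasingly complex models next.")
```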

Alexander: What should modern university courses in and around AI look like so that the students are able to add value in industry?

Patrick: Most courses at universities or on MOOC platforms focus on methodology, such as optimizing machine learning models in order to learn patterns. However, real-world AI projects require many other competences, as depicted in Table 21.1.

In reality, training a model for a given task is usually not that challenging, unless you need a completely new methodology, which is rarely the case. In supervised learning, choosing and training a machine learning model can even be automated to a large extent, for example by employing automated machine learning (AutoML)[6] approaches, which often return reasonable results (see the sketch after Table 21.1). In contrast, other tasks such as defining KPIs, collecting data, and so on tend to be awfully challenging and time-consuming in real-world projects. Universities need to address all of those topics so that their graduates are able to add value in industry quickly. We cover all of those aspects (and many others) in our AI undergraduate program at Deggendorf Institute of Technology, an approach confirmed by the feedback we get from our industrial partners.

Table 21.1 Relative Time Needed in an Industrial AI Project

Step                                        In university courses   In real-world projects
Defining KPIs                               Small                   Large
Collecting data                             Small                   Large
Exploratory data analysis                   Small                   Large
Building infrastructure                     Small                   Large
Optimizing ML models                        Large                   Medium
Integration into existing infrastructure    Small                   Large
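
As referenced above, here is a rough illustration of the AutoML idea using auto-sklearn, the library cited in Endnote 6. Treat it as a sketch under assumptions: the dataset and time budgets are placeholders, and the exact API and defaults may vary between library versions.

```python
# Illustrative AutoML sketch with auto-sklearn (Endnote 6): the library searches
# over candidate models and hyperparameters itself, automating the
# "Optimizing ML models" step from Table 21.1.
# Assumes auto-sklearn is installed (pip install auto-sklearn).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
import autosklearn.classification

X, y = load_digits(return_X_y=True)  # placeholder dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

automl = autosklearn.classification.AutoSklearnClassifier(
    time_left_for_this_task=300,  # total search budget in seconds (placeholder)
    per_run_time_limit=30,        # budget per candidate model (placeholder)
)
automl.fit(X_train, y_train)

print("Test accuracy:", accuracy_score(y_test, automl.predict(X_test)))
```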

Alexander: What are your thoughts about the path to artificial general intelligence (AGI)? Can AI ever achieve general intelligence? How far are we from AGI?

Patrick: Today's AI applications tend to be very narrow, performing well (sometimes even outperforming humans) on exactly one task. AGI is the hypothetical intelligence of a machine that performs similarly to humans on a wide spectrum of tasks. The point when computers become more intelligent than humans is referred to in the literature as the technological singularity. There are various predictions as to when, or even whether, the singularity will occur. They span a wide range: from within the next 20 years, to more realistic predictions of achieving it around the end of the twenty-first century, to the prediction that the singularity may never materialize. Since each of these predictions rests on different assumptions, making a reliable assessment is challenging. Today it is impossible to predict how far away the singularity is. Murray Shanahan's book[7] is excellent; he provides a first-class and extensive analysis of this topic and a discussion of the consequences of the various predictions.

We have seen some initial work towards AGI in the last few years, in particular the work of DeepMind, such as the deep reinforcement learning methodology underlying AlphaGo. However, a lot of those results hold predominantly in fully observable and discrete environments. Our real environment, by contrast, is only partially observable and continuous, and thus runs counter to the assumptions made by most of those works. I get the impression that DeepMind has, in the meantime, somewhat deviated from its original aim of building AGI and now focuses mainly on nearer-term AI applications.

Furthermore, today's computers, no matter how impressively fast they have become, are really not that different from the computers of the late 1940s and early 1950s when the von Neumann architecture was introduced. They have the same limitations that a Turing machine has. In order to achieve artificial general intelligence, we probably first need to rethink computational models and computer architecture as a whole and finally come up with a novel, more powerful model. (Note that quantum computers will probably not be a solution to this problem.)

Alexander: There are concerns that AI can become a threat once it reaches the level of an artificial super intelligence (ASI). What do you think about ASI and a recursive self-improvement loop?

Patrick: In recent years, various stakeholders have in fact warned about so-called killer robots and other possible unfortunate outcomes of AI advances. The fear of an out-of-control AI is exaggerated today and probably for the near and foreseeable future as well. We are still far away from artificial general intelligence and thus from possible artificial super intelligence scenarios, too.

I really like the much-noticed comparison made by Andrew Ng a few years ago:8 Andrew's view is that science is still very far away from the potential killer robot threat scenario. The state of the art of AI can be compared to a planned manned trip to Mars, which is currently being prepared by researchers. Andrew further states that some researchers are also already thinking about how to colonize Mars in the long term, but no researcher has yet tried to explore how to prevent overpopulation on Mars. He equates the scenario of overpopulation with the scenario of a killer robot threat. That danger would also be so far into the future that he was simply not able to work productively to prevent it at the moment, as he first had much more fundamental work to do in AI research.

The many negative connotations of AI, such as killer robots, are unnecessary distractions. We should rather embrace the opportunities of tomorrow. AI is nothing but the next phase of the industrial revolution, and it will further increase our prosperity just as previous phases have. Nonetheless, there are definitely tangible threats to people from AI in the near and foreseeable future, such as job losses. We need to address those issues and make sure that everyone benefits from the advances of AI and its applications.

Alexander: Thank you, Patrick. What quick-win advice would you give that is easy for many companies to apply within their digital strategies?

Patrick: Train each and every single person in your organization in digitalization and AI for 30 minutes.

Alexander: What are your favorite apps, tools, or software that you can't live without?

Patrick: I really like IPython and Jupyter Notebook, two interactive programming environments. Both allow me to quickly write a few lines of code, run them, improve them, reuse results of previous lines, and so on. This approach substantially reduces the time needed to design and implement algorithms. What a difference from the old days, when we had to compile and link C/C++ code and start from scratch after any minor change!
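
As a small, hypothetical illustration of that interactive style, the snippet below is arranged as it might appear across Jupyter cells; each commented block stands for a cell whose result the next one reuses.

```python
# Illustrative only: the kind of incremental exploration IPython/Jupyter enables.
# Each block below would typically live in its own cell, so earlier results
# stay in memory and can be reused without rerunning everything.
import numpy as np

# Cell 1: load or generate some data once.
data = np.random.default_rng(seed=0).normal(loc=5.0, scale=2.0, size=10_000)

# Cell 2: inspect it interactively.
print("mean:", data.mean(), "std:", data.std())

# Cell 3: reuse the previous results to try a transformation.
normalized = (data - data.mean()) / data.std()
print("normalized mean is approximately 0:", round(normalized.mean(), 6))
```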

Alexander: Do you have a smart productivity hack or work-related shortcut?

Patrick: I am an early bird and get up at 4 a.m. two or three days a week. At that time, I do not receive any emails or phone calls and can entirely focus on research, writing, or coding for a couple of hours. Usually, I get most of my work done before the daily routine of answering emails, making phone calls, or attending meetings. During my PhD research time, I got up that early five days a week. Over the years, however, I noticed that I tended to be tired all day long. Nowadays, I only get up that early two or three times a week and feel healthier, better rested, and even more productive!

Alexander: What is the best advice you have ever received?

Patrick: A few years ago, when I had completed about half of my PhD, I noticed that most of the recent PhD graduates in our lab struggled to find jobs in industry. They then typically had to stay in our lab as postdocs, which further complicated finding a job in industry. I really did not want to get stuck in our lab after my PhD graduation. One of my friends then told me that I should acquire some extra skills that are very different from what I was doing in my PhD research. That could mean, for example, earning a project management certificate, becoming a patent attorney, or pursuing an MBA. I chose the third option and pursued an MBA in parallel to my PhD. The MBA enriched me with an additional skill set, which proved very helpful: I got a major management position in industry immediately upon completing my PhD.

Key Takeaways

  • Organizations should send their AI experts to conferences or other events to stay up-to-date. They should also contribute to standardization committees and thus help shape the rules rather than being shaped by others through such initiatives.
  • To take full advantage of modern digitalization technology, your digitalization specialists need more freedom and must be able to try new tools, programming languages, frameworks, or cloud services.
  • Deep learning models are not generally better than other machine learning models, because there is no one model that works best for every problem. When you start applying machine learning to a problem, first build a simple baseline model. In supervised learning, that is usually a random forest model, which often works surprisingly well. If its outcome is sufficient, you are done, at low cost and with a short training time. If it is not sufficient, try increasingly complex models, including deep learning.

Endnotes

  1. th-deg.de/ki-b-en.
  2. Lee, Kai-Fu, AI Superpowers: China, Silicon Valley, and the New World Order. Boston: Houghton Mifflin Harcourt, 2018.
  3. With digitalization and AI becoming ever more crucial to the success of a company, more companies will likely appoint CDOs directly to their boards in the future.
  4. www.edx.org/course/quantum-machine-learning.
  5. Wolpert, David, “The Lack of A Priori Distinctions between Learning Algorithms.” Neural Computation 8.7 (October 1996): 1341–90.
  6. automl.github.io/auto-sklearn.
  7. Shanahan, Murray, The Technological Singularity. Cambridge, MA: MIT Press, 2015.
  8. www.theregister.com/2015/03/19/andrew_ng_baidu_ai/.