AI, Politics and Society

AI has become a large part of our modern world, and it's hard to overestimate its influence on our daily lives. Whether you search on Google, ask for directions, scroll social media, or watch Netflix, machine learning algorithms are behind every daily task.

The 2016 US presidential election brought the malicious harvesting of data and privacy issues to public attention. It resulted in Mark Zuckerberg, Facebook's CEO, testifying before Congress, and in more scrutiny of the tech community as a whole. Ten years ago, Silicon Valley still seemed like a promised land. Today, books like The Age of Surveillance Capitalism by Shoshana Zuboff point to imperfections and inherent flaws in the California startup ecosystem.

That's why, when introducing AI to organisations, communities, or societies as a whole, we need to clearly state our goals and explain what AI can and can't do. People should not have to worry about their jobs; if they are directly affected by automation, they should have access to retraining and education to remain relevant to the job market. AI will augment human capabilities, and the most likely scenario for our future is human-machine collaboration at every level, resulting in more effective and more pleasant work. The tedious, repetitive parts will be taken care of by machines, while human workers focus on original, creative work.

However, for this to happen, AI has to be democratised, and by that I mean we need to democratise access to AI tools, AI education, and computing power. The broader the workforce that can use AI, the better for our future and stability. That's why regulating the AI business is vital to avoid monopolies and to make sure that everyone can play a role in this booming industry, which influences every other sector.

It's worth mentioning that even jobs that rely on empathy and reading emotions are not immune to AI automation. I've discussed the example of a machine winning at Texas Hold'em, where bluffing is a common tactic and a player not only needs to count cards well but also to read human emotions. Thus it is possible that AI will be able to read our emotions better than we do ourselves.

On the other hand, the role of the human will shift towards steering AI, applying it to the most relevant problems, and guiding its steps to a solution.

It's interesting to note how global news coverage of Artificial Intelligence has grown in recent years, from the first mentions of breakthroughs to discussions of ethics, for example around autonomous vehicles or weapons. It is essential to regularly educate the general public about the possible opportunities and dangers of AI, and to discuss honestly what it can and can't do. This way, we will all be prepared for the inevitable changes that AI holds for us in the near future.

AI for social good

Artificial Intelligence can do a lot of social good if it is applied wisely. Automatic analysis and insights in real-time deployed in critical areas can help:

  • minimise the damage done by earthquakes, cyclones, and other extreme weather;
  • guide blind people;
  • personalise education to anyone anywhere;
  • lower costs of drugs;
  • decrease loneliness among the elderly;

and much more. However, this will require a considerable effort from the research community, enterprises, and regulators to make the whole AI ecosystem work together.

With AI being applied in every domain of our lives, it's worth thinking about general social issues that might be solved using AI. McKinsey, in their report on AI for social good,86 mapped them into the following categories:

Crisis response: Disease outbreak, Migration crises, Natural and human-made disasters, Search and rescue missions.

Economic empowerment: Agricultural quality and yield, Financial inclusion, Initiatives for economic growth, Labor supply and demand matching.

Education: Access and completion of education, Maximizing student achievement, Teacher and administration productivity.

Environment: Animal and plant conservation, Climate change and adaptation, Energy efficiency and sustainability, Land, air, and water conservation.

Equality and inclusion: Accessibility and disabilities, Marginalized communities.

Health and hunger: Treatment delivery, Prediction and prevention, Treatment and long-term care, Mental wellness, Hunger.

Information: Verification and validation, Fake news, Polarization.

Infrastructure management: Energy, Real estate, Transportation, Urban planning, Water and waste management.

Public and social sector: Effective management of the public sector, Effective management of the social sector, Fundraising, Public finance management, Services to citizens, Transparency.

Security and justice: Harm prevention, Fair prosecution, Policing.

We can map these AI social goals directly onto the UN Sustainable Development Goals, which are:

  • Life below water,
  • Affordable and clean energy,
  • Clean water and sanitation,
  • Responsible consumption and production,
  • Sustainable cities and communities,
  • Gender equality,
  • Partnerships for the goals,
  • Zero hunger,
  • Decent work and economic growth,
  • Climate action,
  • Reduced inequalities,
  • Industry, innovation, and infrastructure,
  • No poverty,
  • Life on land,
  • Quality education,
  • Peace, justice, and strong institutions,
  • Good health and well-being.

Can AI solve all our problems? Not by itself. AI is just a tool, and we will need to collaborate across organisations, societies, and nations to build a common AI future that serves everyone and is inclusive.

Public programs

Realising the importance of AI, governments have started joining the race as well. In recent years, dozens of countries have drafted AI strategies, which usually concentrate on boosting the local startup ecosystem and incentivising R&D in large enterprises.

The biggest AI spender in the world seems to be China, which some estimate will have spent around $70 billion on AI-related research and businesses by 2020.87 Precise estimates are hard to make due to the lack of transparency; however, China's annual AI budget seems to be in the range of a few to a dozen billion dollars.88 The USA seems to have a similar yearly budget, though due to US-China competition, we should see growing investment in AI in the coming years, as this technology is of strategic importance.

In general, 2019 was the biggest year yet for funding of artificial intelligence ventures, both federal and private, and 2020 should be even bigger. This trend should continue as the technology matures.

In the USA, in June 2019,89 the White House's AI R&D Strategic Plan defined eight strategic priorities:

  1. continued long-term investments in AI;
  2. developing effective methods for human-AI collaboration;
  3. understanding and addressing the ethical, legal, and societal implications for AI;
  4. ensuring the safety and security of AI;
  5. developing shared public datasets and environments for AI training and testing;
  6. measuring and evaluating AI technologies through standards and benchmarks;
  7. better understanding the National AI R&D workforce needs;
  8. expanding public-private partnerships to accelerate AI advances.

On the other hand, China launched its national program in 2017, with a goal to become the global leader in AI by 2030.

European countries are also joining this AI arms race, but with much more humble budgets. In 2018, France unveiled a €1.5B plan to transform the country into a global leader in AI. The main points of the plan were to launch a network of AI research institutes, adopt an open data policy to boost the adoption of AI, create a regulatory and financial framework for local companies, and introduce ethical regulations.

In 2018, Germany allocated €3B for investment in AI R&D in its national AI strategy, with goals similar to the French ones.

The European Union as a whole also allocates funds to AI, which should reach €1.5B for the period 2018-2020 under its Horizon 2020 programme. The goals are to:

  • connect AI research centers across Europe,
  • improve access to relevant AI resources in the EU for all users by creating an AI-on-demand platform,
  • support the development of AI applications in key sectors.

Not only are countries allocating budgets to AI; cities also strive to become AI hubs and attract top talent. There are various initiatives and strategies to meet these goals, such as:

  • developing infrastructure,
  • tax incentives for companies,
  • investing in education and research,
  • an easier path to implementation of AI and city-startup collaborations,
  • regulatory frameworks.

All in all, through these programs around the world, the public sector is trying to join the AI race and make sure that particular countries and cities stay relevant in the global technological economy. Equal access to AI is not a given and has to be created through smart regulations, partnerships, and incentives. That's why it's crucial to educate policy-makers and decision-makers in the public sector, so that in the new AI world everyone will have the same chance of succeeding.

Ethics and Regulations

AI systems raise a variety of ethical and regulatory challenges. However, observing public discussion as well as governmental and academic debates, it seems that we're going in a good direction. Awareness of these problems is growing, and more funds are being allocated to make sure that everyone has the same access to technology.

The most common challenges that appear in the context of AI are:

  1. Interpretability and explainability, especially when it comes to AI-driven decision making;
  2. Transparency, fairness, and accountability;
  3. Democratisation of AI: diversity, inclusion;
  4. Automation and job loss;
  5. Data privacy and security;
  6. Reliability and certainty of models;
  7. Sustainability;
  8. Compliance;
  9. Human control.

The Organisation for Economic Co-operation and Development (OECD) has drafted the OECD Principles on Artificial Intelligence90 to promote AI that is trustworthy and respects human rights and democratic values. The OECD recommends the following principles for building trustworthy AI systems:

  • AI should benefit people and the planet by driving inclusive growth, sustainable development, and well-being.
  • AI systems should be designed in a way that respects the law, human rights, democratic values, and diversity, and they should include appropriate safeguards to ensure a fair and just society.
  • There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.
  • AI systems must function in a robust, secure, and safe way throughout their life cycles, and potential risks should be continually assessed and managed.
  • Organisations and individuals developing, deploying, or operating AI systems should be held accountable for their proper functioning in line with the above principles.

Consistent with these value-based principles, the OECD provides five recommendations to governments:

  • Facilitate public and private investment in research and development to boost innovation in trustworthy AI.
  • Foster accessible AI ecosystems with digital infrastructure and technologies and mechanisms to share data and knowledge.
  • Ensure a policy framework that will open the way to deployment of trustworthy AI systems.
  • Empower people with AI skills and support workers for a fair transition to AI-enhanced jobs.
  • Co-operate across borders and sectors to progress on responsible stewardship of trustworthy AI.

The growing role of AI in our lives calls for better regulation. Old regulations must be updated to account for the transformative effect AI can have on any business, organisation, or individual. Many countries have already realised this, and national AI strategies explicitly mention the role of good regulatory frameworks in building trustworthy AI systems.

Risks of using AI

Developing Artificial Intelligence is not without risk. One of the biggest risks is that machine learning algorithms will be used maliciously to harm individuals, organisations, and society. We have already discussed how deepfakes and fake news can be created at scale using recent deep learning breakthroughs. That's why it's important to know how we can defend ourselves and what to look for.

Another risk is the displacement of the workforce or, in other words, the scale at which people could lose jobs due to automation. With AI becoming more sophisticated, it might be hard for less skilled staff to find new job openings. That's why it's crucial to democratise access to AI and technological education, so that anyone can re-enter the job market. This concerns not only physical jobs but also office jobs, which are likewise under pressure from AI systems such as the RPAs we have discussed.

Yet another risk of using AI is the lack of transparency and bias in models. Machine learning models are trained on data provided by people and, as such, can inadvertently inherit biases present in that data. Imagine you've built an AI model to automate college admissions. If you've trained it on past data, chances are it is far from today's diversity standards and is biased towards students similar to past alumni. This is why transparency and explainability of models are essential: we want to know why AI makes a particular decision.
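
The admissions example can be sketched in a few lines. This is a deliberately simplified, hypothetical illustration (the groups, records, and the 80/20 split are all invented): a model that simply learns historical admission rates per group reproduces the historical disparity instead of correcting it.

```python
# Hypothetical historical admissions records: (applicant_group, admitted).
# Group "A" resembles past alumni; group "B" was historically underrepresented.
history = ([("A", True)] * 80 + [("A", False)] * 20 +
           [("B", True)] * 20 + [("B", False)] * 80)

def admit_rate(records, group):
    """Fraction of applicants from `group` who were admitted."""
    outcomes = [admitted for g, admitted in records if g == group]
    return sum(outcomes) / len(outcomes)

# A naive "model" that learns each group's historical base rate
# simply inherits the bias baked into the training data.
learned_policy = {g: admit_rate(history, g) for g in ("A", "B")}
print(learned_policy)  # {'A': 0.8, 'B': 0.2}
```

Auditing such per-group rates is one of the simplest fairness checks; a real audit would use metrics like demographic parity or equalised odds on held-out data.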

Uncertainty of models is a risk that arises when we deal with sensitive data, such as medical data or crisis response. We can't rely on models that are not certain of their predictions. In other words, models have to be trained to account for what they don't know, instead of always making predictions even outside the scope of their expertise.
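
One way to account for what a model doesn't know is to let it abstain. Below is a minimal sketch (the labels, probabilities, and threshold are invented for illustration) of a prediction function that defers to a human whenever the model's confidence falls below a threshold:

```python
def predict_with_abstention(probabilities, threshold=0.9):
    """Return the most likely label, or defer if the model is not confident.

    probabilities: dict mapping label -> model-estimated probability.
    """
    label, confidence = max(probabilities.items(), key=lambda kv: kv[1])
    return label if confidence >= threshold else "defer to human"

# Confident prediction: the model commits to a label.
print(predict_with_abstention({"benign": 0.97, "malignant": 0.03}))  # benign
# Uncertain prediction: the model hands the case to a human expert.
print(predict_with_abstention({"benign": 0.55, "malignant": 0.45}))  # defer to human
```

In practice, raw model probabilities are often over-confident, so such thresholds are usually combined with calibration techniques before deployment.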

Putting it all together, we can extract four categories of risks:

  • the risk of malicious use of AI,
  • the risk of negative impact on the job market and workers,
  • the risk of bias and lack of transparency,
  • the risk of privacy violation.

As a rule of thumb, the more sensitive the data we deal with, the more cautious we should be about deploying AI. As machine learning is a statistical method, we should decide how much risk of bias we allow, how certain the AI should be, and how explainable. Each of these requirements increases the difficulty of building a compliant AI model that is still able to solve the problem.

Summing up, AI can be a great tool to foster growth and build the wealth of a society, but one should be careful about applying it on a massive scale. Good regulations, together with consistent strategies, will help us use technology to our benefit. Policymakers need to balance fostering AI growth against managing the risks associated with it. As we have already discussed, an AI strategy should account for democratisation, that is, accountability, explainability, and transparency, while remaining secure and trustworthy. There's still a lot to do when it comes to establishing standards in ethics, regulation, and access. AI can be hugely beneficial to our society, but it can also do harm. It depends on us and our decisions. Our goal should not be to constrain the adoption of AI, but rather to encourage its safe use for social good.

