Artificial intelligence (AI) ethics and compliance is a new and evolving field of study and practice that aims to ensure that AI systems are designed, developed, deployed, and used in a way that respects human dignity, rights, and values, as well as current legal and social norms. AI ethics and compliance also seeks to address the potential risks and challenges that AI may pose to individuals, groups, and society at large, such as bias, discrimination, privacy, security, accountability, and social impact.
The Stats:
- A report by Goldman Sachs estimates that AI could replace the equivalent of 300 million full-time jobs, or roughly a quarter of work tasks in the US and Europe, and that in the occupations that are exposed, roughly a quarter to as much as half of the workload could be automated.
- A 2019 study by the Brookings Institution found that workers with graduate or professional degrees will be almost four times as exposed to AI as workers with just a high school degree.
- A 2020 report by the OECD states that greater exposure to AI was associated with higher employment in occupations where computer use is high, suggesting that workers with strong digital skills may benefit from AI.
It is in our best interest to continuously develop ways to monitor the evolution of this technology!
AI ethics and compliance are influenced by the regulatory frameworks and policies that different countries and regions adopt to govern AI. Some examples of current or proposed AI regulations are:
- The EU AI Act, which is expected to be passed later this year: a comprehensive legal framework that sets out rules and requirements for high-risk AI systems, such as those used in health, education, law enforcement, and recruitment.
- The UK AI White Paper, which outlines the government’s vision and approach for AI regulation, based on five principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.
- The US Executive Order on Safe, Secure, and Trustworthy AI, issued by President Biden in October 2023, which directs federal agencies to adopt policies and standards to ensure the responsible and ethical use of AI in government, and to promote innovation and competitiveness in AI.
- The UNESCO Recommendation on the Ethics of AI, adopted unanimously by its 193 Member States in November 2021 after two years of extensive consultation and negotiation, which establishes a set of common values and principles for the development and use of AI, such as human dignity, human rights, inclusion, diversity, transparency, and accountability.
The US strategy for ensuring ethics in AI is based on the following goals and actions, as stated in the Executive Order:
- Advance the responsible use of AI in healthcare and the development of affordable and life-saving drugs, by creating a new interagency council to coordinate AI research and innovation in health, and by supporting the FDA’s regulatory framework for AI-based medical devices and software.
- Shape AI’s potential to transform education by creating resources to support educators deploying AI-enabled educational tools, such as personalized tutoring in schools.
- Enhance national security and economic competitiveness by investing in AI research and development, strengthening the AI workforce, and fostering international cooperation on AI standards and norms.
- Protect consumers while ensuring that AI can make Americans better off, by directing federal agencies to review and update their AI regulations and guidance, and by establishing an AI Advisory Committee to provide independent advice and recommendations on AI policy issues.
Other countries have also adopted or proposed their own strategies and initiatives for ensuring ethics in AI, such as:
- Canada’s Directive on Automated Decision-Making, which sets out principles and requirements for the design, development, and deployment of automated decision systems by federal departments and agencies, consistent with the law, human rights, and Canadian values.
- Singapore’s Model AI Governance Framework, which provides a set of voluntary guidelines and best practices for private sector organizations to ensure the ethical and responsible use of AI, such as ensuring human oversight, transparency, and accountability.
- Japan’s Social Principles of Human-Centric AI, which articulate a vision and values for the development and use of AI that respects human dignity, diversity, and autonomy, and promotes social welfare, justice, and democracy.
- The UK’s guidance on understanding AI ethics and safety, which offers a practical tool for organizations to assess and mitigate the ethical and safety risks of their AI systems, based on four principles: fairness, accountability, sustainability, and transparency.
One of the key challenges for ensuring accountability in AI is the complexity and opacity of AI systems, which may make it difficult to identify, monitor, and mitigate the potential harms and impacts of AI decisions and actions. To address this challenge, some possible measures and mechanisms are:
- Implementing audit and oversight processes to evaluate the performance, behavior, and outcomes of AI systems, and to ensure compliance with ethical and legal standards and obligations (a minimal sketch of what such audit logging might look like follows this list).
- Establishing clear roles and responsibilities for the different actors involved in the design, development, deployment, and use of AI systems, and ensuring that they have the necessary skills, knowledge, and authority to fulfill their duties.
- Providing transparency and explainability for the data, algorithms, and logic behind AI systems, and enabling meaningful human involvement and intervention in AI decision-making processes.
- Creating feedback and redress mechanisms that allow affected parties to express their concerns, complaints, or grievances, and to seek remedy or compensation for any adverse effects or harm caused by AI systems.
Ethical compliance for companies that use AI is a complex and evolving challenge. There is no one-size-fits-all solution for writing guidelines, but some general steps that can help are:
- Developing a global view of AI compliance: considering the laws, regulations, standards, and best practices that apply to different regions, sectors, and use cases.
- Building AI compliance intelligence: staying updated on the latest developments and trends in AI ethics and governance, and participating in relevant forums and initiatives.
- Enabling an AI compliance mapping capability: identifying the specific AI compliance requirements and risks that affect each AI project, and mapping them to the relevant ethical values and principles (a toy sketch of such a mapping follows this list).
- Investing in AI compliance enablement: providing the necessary resources, tools, guidance, and training to support the ethical design, development, deployment, and monitoring of AI systems.
- Enforcing AI compliance positively: creating a culture of accountability and responsibility for AI ethics, and rewarding good practices and behaviors.
- Monitoring impacts and engaging stakeholders: measuring and evaluating the actual outcomes and impacts of AI systems, and soliciting feedback and input from diverse and inclusive groups of users, customers, employees, and other affected parties.
These steps can help companies to build trust and confidence in their AI systems, and to avoid potential harms and liabilities. For more information and examples, you can check out these resources:
- A Practical Guide to Building Ethical AI, by Reid Blackman
- Ethics and AI: 3 Conversations Companies Need to Have, by Reid Blackman and Beena Ammanath
- What is ‘ethical AI’ and how can companies achieve it?, by The Conversation
- Start Preparing For AI Regulatory Compliance Now, by Forbes Tech Council
As AI continues to grow and develop, it is in our best interest to continuously develop ways to monitor the evolution of the technology, to regulate its development and use so that it serves the best interests of humankind, and to allow (or disallow) the technology to evolve in a safe and ethical manner. I believe that, ironically, the regulation of AI will end up creating more jobs in other areas than it takes away. I am almost certain that AI ethics and compliance will be recognized as a growing employment sector by the end of 2024!
In what ways do you think we should regulate AI to ensure ethical compliance?