Introduction
Since the rapid mainstream emergence of generative AI in 2023, the AI regulatory landscape has begun to formalize, marked by a global shift towards more structured and comprehensive policies. This evolution reflects a growing recognition of the profound impact AI technologies are having on society, the economy, and governance.
Countries and international organizations are actively working to establish regulatory frameworks that balance the need for innovation with ethical considerations, data privacy, and security. These frameworks are essential to ensure that AI development and deployment are conducted responsibly, align with data-protection laws, and keep pace with a rapidly advancing field.
As these policies take shape, they will define the boundaries and responsibilities of AI developers, businesses, users, and regulatory authorities, shaping the future of AI integration into our daily lives and industries.
Below, we have outlined some of the key takeaways from our closed-door interactions with leading stakeholders in this domain.
The EU AI Act
On 9 December 2023, after three days of marathon talks, the European Union reached a landmark agreement on comprehensive legislation to regulate artificial intelligence. The EU AI Act is being hailed as a global first in setting the standard for this rapidly advancing technology.
The AI Act was originally proposed by the European Commission, the EU's executive arm, in 2021, and has since gained significant momentum following the widespread impact of generative AI. The legislation represents a significant step forward in regulating AI development and usage within the EU, aiming to ensure that AI is developed and used in a responsible, ethical, and human-centric manner.
Key Features
Central to the EU's AI policy is the debate around the regulation of foundation models. Key member states, such as France, Germany, and Italy, pushed for a balanced approach that doesn't stifle innovation while still imposing clear ethical guardrails.
Key aspects of the EU AI Act include:
- Scope and Objectives: The AI Act aims to ensure the safe and ethical use of AI technologies, focusing on safeguarding the rights of people and businesses. It establishes a unique legal framework for the development of AI that can be trusted.
- Risk-Based Approach: AI systems are categorized based on their potential risk, with stricter requirements for high-risk systems such as those used in healthcare, law enforcement, and critical infrastructure. The bottom line: the higher the risk, the stricter the rules. Systems posing only limited risk face basic transparency obligations; for instance, they must disclose that content is AI-generated, allowing users to make informed decisions about its further use.
- Prohibited Practices: Certain AI applications are banned outright: cognitive-behavioral manipulation (including manipulative toys), untargeted scraping of facial images, emotion recognition in workplaces and schools, social scoring by governments, biometric categorization to infer sensitive attributes such as sexual orientation or religious beliefs, and certain predictive-policing methods.
- Foundation Models: The agreement sets specific rules for foundation models, the large, multi-purpose AI systems that underpin many applications. These include transparency obligations before market placement, with a more stringent regime for 'high-impact' foundation models. The Act also contains provisions governing general-purpose AI (GPAI) systems, including cases where GPAI technology is subsequently integrated into another high-risk system.
- Regulation of High-Risk Applications: The Act requires tech companies operating in the EU to disclose the data used to train their AI systems and to conduct thorough testing of products, especially those used in high-risk applications such as self-driving vehicles or healthcare.
- Transparency: The EU AI Act mandates that before launching high-risk AI systems, deployers must assess their impact on fundamental rights of citizens. Public entities using such AI systems are required to register on the EU's high-risk AI database. Additionally, users of emotion recognition systems are obligated to inform individuals when they are being monitored by these systems.
- Sandbox Model: The agreement establishes AI regulatory sandboxes: controlled environments for the development, testing, and validation of innovative AI systems, including testing under real-world conditions with appropriate safeguards. To support smaller companies and reduce their administrative load, the agreement outlines specific support measures and limited, well-defined exceptions.
- Penalties for Non-Compliance: Penalties for AI Act breaches are set at the higher of a fixed sum or a percentage of the company's global annual turnover from the previous year: €35 million or 7% for prohibited AI uses, €15 million or 3% for breaching the Act's obligations, and €7.5 million or 1.5% for providing false information (a worked sketch follows this list). The provisional agreement also includes scaled-down fines for SMEs and start-ups.
- Implementation Timeline: The final legislation will be worked out in the coming days, with the expectation that it could come into force by 2025.
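To make the fine structure concrete, here is a minimal sketch of how the "higher of a fixed sum or a share of turnover" rule plays out. The tier values come from the provisional agreement described above; the function name and company figures are purely illustrative.

```python
# Illustrative sketch of the EU AI Act penalty rule: the fine is the
# HIGHER of a fixed sum or a percentage of the previous year's global
# turnover. Tier values are from the provisional agreement; the
# turnover figures below are hypothetical.

PENALTY_TIERS = {
    # violation type: (fixed fine in EUR, share of global annual turnover)
    "prohibited_use":    (35_000_000, 0.07),   # banned AI practices
    "obligation_breach": (15_000_000, 0.03),   # breach of the Act's obligations
    "false_information": (7_500_000,  0.015),  # supplying false information
}

def max_fine(violation: str, global_turnover_eur: float) -> float:
    """Return the maximum fine: the higher of the fixed sum or the turnover share."""
    fixed, share = PENALTY_TIERS[violation]
    return max(fixed, share * global_turnover_eur)

# A large company (EUR 2B turnover) caught in a prohibited use:
# max(35M, 7% of 2B = 140M) -> the turnover share dominates.
print(max_fine("prohibited_use", 2_000_000_000))   # 140000000.0

# A small firm (EUR 50M turnover) supplying false information:
# max(7.5M, 1.5% of 50M = 0.75M) -> the fixed sum acts as a floor.
print(max_fine("false_information", 50_000_000))   # 7500000.0
```

Note how the fixed sum acts as a floor for smaller companies, which is precisely why the agreement also provides scaled-down fines for SMEs and start-ups.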
The EU's AI Act is widely seen as setting a global benchmark for AI regulation, influencing other countries and regions. The United States, India, China, and others are exploring similar rules to balance the benefits of AI with the need for oversight. At the same time, the Act was deliberately framed to avoid overregulation that could hinder the growth of European AI companies such as Mistral AI and Aleph Alpha and thereby cede the field to US companies.
The EU AI Act also specifies that its rules will not extend to areas outside the scope of EU law and will not interfere with member states' national-security competences. It excludes AI systems used solely for military or defense purposes, and it does not apply to AI used exclusively for research and innovation, or by individuals for non-professional purposes.
NIST and Evolving AI & Cloud Framework in the US
The National Institute of Standards and Technology (NIST) in the US has developed a framework for guiding the development and deployment of trustworthy AI and cloud-based systems. This framework emphasizes a scientific approach to policy drafting, focusing on evidence-based decision-making and stakeholder engagement.
Given its comprehensive methodology, the NIST framework has garnered significant attention and its principles have been incorporated into several key initiatives, including the US Executive Order on AI.
The NIST framework consists of five core components:
- Trustworthiness Principles: These principles outline the fundamental characteristics of trustworthy AI and cloud-based systems, such as fairness, accountability, transparency, and explainability.
- Risk Management Framework: This framework provides a structured approach to identifying, assessing, and mitigating risks associated with AI and cloud-based systems (a simplified sketch of such a risk register follows this list).
- Technical Standards and Guidelines: These standards and guidelines provide technical specifications for implementing trustworthy AI and cloud-based systems.
- Conformity Assessment: This process helps to ensure that AI and cloud-based systems comply with relevant standards and regulations.
- Workforce Development: This component focuses on developing the skills and knowledge needed to design, develop, deploy, and operate trustworthy AI and cloud-based systems.
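To make the risk-management component more tangible, here is a deliberately simplified sketch of a risk register: identify a risk, score it, and record a mitigation. The field names and the 1-5 scoring scale are our own illustrative simplification, not terminology defined by NIST.

```python
# A minimal, illustrative risk register in the spirit of NIST's
# risk-management component: identify, assess, and mitigate risks.
# The fields and 1-5 scales below are our own simplification,
# not NIST-defined terms.
from dataclasses import dataclass

@dataclass
class Risk:
    system: str        # the AI or cloud system the risk applies to
    description: str   # what could go wrong
    likelihood: int    # 1 (rare) .. 5 (almost certain)
    impact: int        # 1 (negligible) .. 5 (severe)
    mitigation: str    # planned control or safeguard

    @property
    def score(self) -> int:
        """Simple likelihood-times-impact score used to rank risks."""
        return self.likelihood * self.impact

register = [
    Risk("loan-approval-model", "Disparate impact across demographic groups",
         likelihood=3, impact=5, mitigation="Bias audit before each release"),
    Risk("support-chatbot", "Hallucinated answers presented as fact",
         likelihood=4, impact=2, mitigation="Cite sources; human escalation path"),
]

# Triage: address the highest-scoring risks first.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"[{risk.score:>2}] {risk.system}: {risk.description} -> {risk.mitigation}")
```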
The framework is intended to be a living document, evolving as technology and societal needs change. Its scientific approach and focus on stakeholder engagement make it a valuable tool for policymakers and industry leaders seeking to ensure the responsible development and deployment of AI and cloud-based technologies.
In fact, the US Executive Order on AI, signed by President Biden in October 2023, incorporates several key elements of the NIST framework. A similar approach is likely to shape the future of AI policy in other countries around the world as well.
US Executive Order on AI
In 2023, the US administration, under President Joe Biden, issued a comprehensive executive order to establish a new framework for AI governance, marking a shift towards greater transparency and safety in AI development.
This executive order is the most extensive set of AI rules and guidelines issued by the US government to date. Primarily, it mandates increased transparency from AI companies about how their models work and establishes new standards for labeling AI-generated content. The overarching goal is to enhance AI safety and security, including a requirement that developers share safety test results for new AI models with the US government, especially where these models could pose a risk to national security.
This order, titled ‘Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence’, has several key features:
Trustworthy AI
- Transparency and Explainability: Developers will be encouraged to make their AI systems more transparent and understandable to users.
- Fairness and Non-Discrimination: The order aims to mitigate potential bias and discrimination in AI systems.
- Privacy and Security: The order emphasizes the importance of protecting user privacy and data security.
Mitigating AI Risks
- Development of Safety Standards: The order calls for the development of voluntary standards and best practices for the safe development and deployment of AI systems.
- Addressing Algorithmic Bias: It directs federal agencies to assess and address potential biases in their AI systems.
- Promoting Public Awareness: It encourages public education and awareness about AI risks and benefits.
Promoting Innovation
- Investment in Research and Development: The order calls for increased federal investment in AI research and development.
- Fostering Talent: It aims to attract and develop a skilled AI workforce.
- Facilitating Collaboration: It encourages collaboration between government, industry, and academia on AI initiatives.
Global Leadership
- Promoting International Cooperation: The order encourages the US to collaborate with other countries on AI governance.
- Protecting National Security and Economic Interests: It aims to ensure that AI is developed and used in a way that protects national security and economic interests.
Key Initiatives
- Federal AI Research and Development Initiative: This initiative aims to accelerate AI research and development across the federal government.
- National AI Advisory Committee: This committee will provide expert advice to the government on AI policy and strategy.
- National AI Workforce Initiative: This initiative aims to address the growing demand for skilled AI workers.
Overall, the Executive Order represents a significant step forward in the United States' approach to AI governance. It sets ambitious goals for promoting responsible AI development and use, while also acknowledging the challenges and risks associated with this technology.
India’s Stance on AI & Cloud
India's approach to AI regulation is undergoing a significant transformation with the evolution of the Digital India Act (DIA), which is set to replace the IT Act of 2000.
This new act is currently in its drafting stage, but it is already becoming clear that India is likely to adopt a guidelines-based approach, rather than a strict implementation framework, for AI development. This reflects the need for flexibility as the technology and the global regulatory landscape continue to evolve.
As I write this article, India is hosting the inaugural ceremony of the Global Partnership on Artificial Intelligence (GPAI) Summit 2023 in Delhi, with participation from 29 countries. The Prime Minister of India, Narendra Modi, has announced the launch of an Artificial Intelligence Mission to promote the use of AI in sectors such as agriculture, healthcare, and education, with a national AI portal playing a key role. He also raised concerns around deepfakes, cybersecurity, and data theft, emphasizing the need for transparency in the use of AI. India is a founding member of the GPAI.
Current Status
India currently doesn't distinguish between cloud service models such as IaaS, PaaS, and SaaS. Singapore is expected to soon release a policy that treats each service model separately, and it will be important to track it.
The Telecom Regulatory Authority of India (TRAI) proposed consolidating control over cloud and telecom policies under a single regulatory body, suggesting the integration of cloud policies under the Department of Telecommunications (DoT). However, this proposal was rejected, and the response indicated that the regulation of cloud services would remain under the purview of the Ministry of Electronics and Information Technology (MeitY).
The MeitY Secretary is closely overseeing forthcoming policies, with a specific focus on those being introduced by other Asian countries. This is noteworthy because the Digital India Act is anticipated to draw on a combination of these policies.
Evolving Approach: Glocalizing Data for AI
Looking at the way frameworks are evolving globally, we are moving towards an approach that balances global integration with local compliance: one that acknowledges the need for data to flow across borders while keeping checks and balances in place.
A key aspect of this is the selective enforcement and policing of data. Instead of imposing universally rigid regulations, there is a growing consensus for tailored enforcement measures based on specific contexts and needs. The goal is to strike a balance between securing data and pushing for innovation, allowing for adaptability across diverse regulatory frameworks while addressing concerns like privacy, security, and the ethical use of data.
Another critical consideration is the concept of geofencing. Geofencing involves creating virtual boundaries around geographical areas to regulate data based on location.
This concept aligns with the strategy of selective enforcement, where data governance measures are customized for specific geographic regions. It helps ensure compliance with domestic privacy laws within states or countries and adds a further layer of granularity to data governance.
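As an illustration, here is a minimal sketch of geofenced data governance: each piece of data is tagged with its region of origin, and a policy table decides where it may be stored or processed. The regions and rules are hypothetical examples, not drawn from any specific law.

```python
# Illustrative geofencing sketch: route data only to storage regions
# permitted by the policy of its region of origin. The regions and
# rules here are hypothetical, not taken from any actual regulation.

# region of origin -> regions where that data may be stored or processed
RESIDENCY_POLICY: dict[str, set[str]] = {
    "EU": {"EU"},              # strict localization: EU data stays in the EU
    "IN": {"IN", "SG"},        # may flow to a trusted partner region
    "US": {"US", "EU", "IN"},  # comparatively permissive
}

def is_transfer_allowed(origin: str, destination: str) -> bool:
    """Check whether data originating in `origin` may be sent to `destination`."""
    return destination in RESIDENCY_POLICY.get(origin, set())

def pick_storage_region(origin: str, preferred: list[str]) -> str:
    """Pick the first preferred region that the origin's policy permits."""
    for region in preferred:
        if is_transfer_allowed(origin, region):
            return region
    raise ValueError(f"No permitted storage region for data from {origin}")

print(is_transfer_allowed("EU", "US"))                # False: geofenced to the EU
print(pick_storage_region("IN", ["US", "SG", "IN"]))  # SG: first permitted choice
```

In practice, checks like these sit in the data-access layer, so selective enforcement becomes a per-request routing decision rather than a blanket ban on cross-border flows.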
The Future of AI Policy-Making
The global AI regulatory landscape is rapidly evolving, characterized by a shift towards comprehensive and structured policies. Governments and international organizations are taking proactive steps to ensure responsible AI development and deployment, aiming to balance innovation with ethical considerations, data privacy, and security.
Key initiatives to track are the NIST framework, the US Executive Order on AI, the EU AI Act, and India's Digital India Act, all of which demonstrate a commitment to establishing responsible AI governance frameworks. These frameworks often share common principles, including:
- Trustworthiness: Ensuring AI systems are fair, transparent, explainable, and accountable.
- Risk Management: Mitigating potential risks associated with AI technologies.
- Human Decision-Making: Emphasizing the importance of human control and decision-making.
- Data Governance: Protecting user privacy and security through data regulations.
We are witnessing a trend towards ‘glocalizing’ data for AI, where frameworks seek to balance global data flow with local compliance needs. This involves selective enforcement, geofencing, and adapting regulations to specific contexts.
Overall, these developments point towards a global convergence on responsible AI, shaping the future of AI integration into our lives and industries. It is important for all cloud players, developers, startups, and enterprises to track the evolving regulatory landscape in order to build AI in an ethical and responsible way.