Navigating Brazil’s proposed AI regulation in a global context

The pressing need for AI regulation in Brazil

Artificial Intelligence (AI) is rapidly transforming various sectors in Brazil, from healthcare to finance, bringing significant advancements. However, these advancements also present risks and ethical dilemmas, such as biases in algorithms, privacy concerns, and job displacement. Against this backdrop, the reasons why AI regulation is urgently needed in Brazil include:

  • Implementation of AI systems by the Public Power: The Brazilian state appears increasingly enthusiastic about adopting new AI tools in its digital systems, particularly in the administration of justice. While this enthusiasm indicates Brazil’s readiness to play a significant role in developing and integrating new technologies for societal benefit, the lack of regulation poses risks related to predictability and safety, especially in handling sensitive data. These risks could not only slow down the pace of AI implementation for public policy benefits but also lead to significant human rights violations.
  • Strengthening of Brazilian digital sovereignty: As a developing country with significant potential in research and technology, a harmonized regulatory framework would enable Brazil to reposition itself globally and strengthen its digital sovereignty.
  • Impact on democracy: Since 2018, AI systems have significantly impacted Brazilian democracy and political life by generating and spreading fake news. This has prompted the judiciary to take a leading role in regulation to improve its efficiency and safeguard democratic processes. However, this has not been sufficiently coordinated with the legislative framework.
  • Digital inclusion: As AI fundamentally transforms society, Brazil, as a voice from the Global South, can contribute to building a human-centred, inclusive, development-oriented, responsible, and ethical approach to AI, aiming to improve people’s lives and bridge the digital divide.

In Brazil, the urgency of AI regulation has been recognized at the highest levels. Senator Rodrigo Pacheco, President of the Brazilian National Congress, has designated AI regulation as a priority for 2024. This has led to a surge of legislative activity, with around 40 bills introduced across the House of Representatives and the Senate. Among these, Bill 2338/2023 by Senator Pacheco and Bill 21/2020 by Federal Deputy Eduardo Bismarck have become central to the national debate. In April 2024, Senator Eduardo Gomes presented a report consolidating these bills, aiming to align Brazil’s AI regulations with those of the European Union (EU) and the United States (US) and incorporating public feedback. This is the most recent version now under debate in Brazil.

Brazil’s newly proposed AI regulatory framework

Brazil’s SIA: an integrated approach

One of the major proposals contained in the substitute text presented in April is the introduction of the National System of Regulation and Governance of Artificial Intelligence (SIA), which integrates elements of self-regulation and certification for AI systems. The SIA is empowered to regulate high-risk AI systems in critical areas such as public safety, health, and justice, among others. It also defines scenarios for relaxing governance obligations and updates impact assessments for high-risk AI systems.

Brazil’s approach here reflects the structure seen in the EU’s AI Act, which classifies AI systems based on risk levels and establishes rigorous oversight for high-risk applications. Both frameworks emphasize the importance of a central regulatory authority working alongside sector-specific bodies, aiming for a balanced approach that encourages innovation while safeguarding public interests. This approach is slightly different from the path the US has been demonstrating, as noted in the “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” (EO 14110), of 2023, which established new standards for AI safety and security in the US, focusing on sector-specific guidelines and less centralized oversight.

Targeting high-risk AI

The recent regulatory proposal also narrows the law’s scope to target predominantly AI systems with significant potential impacts, excluding personal, national defence, research, and open-source AI systems unless they pose substantial risks. This approach aligns with global trends: the EU AI Act likewise targets high-risk AI systems, particularly those affecting fundamental rights, safety, and critical infrastructure. Both Brazil and the EU aim to balance innovation with protection by focusing on high-impact systems and avoiding overregulation of low-risk applications. Similarly, the US regulatory approach, including EO 14110 and sector-specific guidelines such as those by the FDA, NHTSA, and FTC, targets high-risk sectors like healthcare and transportation.

New definitions for AI systems and agents

The new regulatory proposal introduces refined definitions, emphasizing conceptual distinctions between different types of AI systems and agents, compared to previous proposals in Brazil. It categorizes AI systems into “functional models,” designed for diverse tasks using large-scale data, and “generative AI systems,” intended to create or modify content autonomously. This approach aligns Brazil’s regulatory language with the EU AI Act, which similarly defines and categorizes AI systems to clarify regulatory requirements.

Risk categories and biometric identification

The revised proposal categorizes certain AI systems as excessively risky and prohibits their use, including systems that facilitate child exploitation, assess crime risk, or operate autonomous weapons without human control.

This prohibition closely aligns with the EU AI Act, which also bans AI systems deemed excessively risky, such as those involving biometric identification without consent and social scoring. Both frameworks aim to safeguard fundamental rights by preventing the deployment of the most dangerous AI applications. In the US, stricter regulations exist for specific high-risk AI uses, but comprehensive bans on high-risk systems are less common. Executive Order 14110 requires stringent reporting and oversight for high-risk AI, particularly those with dual-use potential, but it does not yet reach the level of outright prohibitions seen in the EU and Brazil.

Guidelines for public sector AI systems

The report also provides that high-risk AI systems already implemented by the Public Power be adjusted to comply with the law within a reasonable timeframe, as defined by the competent authority. Similar to the EU AI Act, which imposes strict requirements on public sector AI systems, Brazil’s guidelines aim to ensure responsible and transparent AI use in the public sector.

Copyright and synthetic content

The preliminary report also introduces new rules to protect copyrighted material used in AI systems. AI developers must disclose which copyrighted works, like books and articles, were used for training. Copyright holders can prevent their content from being used, except for lawful uses like educational or non-commercial purposes. Discriminating against those enforcing their copyright is prohibited.

The SIA will establish guidelines for compensating copyright holders and ensuring transparency when copyrighted content is used in high-risk AI systems. Public access to impact assessments will be provided while protecting trade secrets. The proposal also states that copyright holders will have control over the use of their content, and that, for AI-generated content, providers of AI systems must include labels to verify its authenticity or origin, particularly for high-risk AI systems. This aligns with the EU AI Act’s strong emphasis on transparency and accountability in AI data usage. The US, with a more fragmented regulatory landscape, addresses similar issues through sector-specific guidelines from agencies like the FTC, focusing on transparency and fair use in AI development.

Brief evaluation of Brazil’s AI regulatory approach

Brazil’s new AI regulatory framework takes a progressive stance, incorporating elements from the EU AI Act. Emphasizing self-regulation, certification, and a hybrid model, it aims to foster innovation while protecting public interests. The proposal addresses diverse stakeholder demands from the market, academia, and civil society by implementing comprehensive measures for high-risk AI systems and ensuring transparency in using copyrighted content.

Despite its strengths, the proposal may need refinement. The introduction of complex technical concepts and the rapid adoption of EU nomenclature and risk levels raise concerns about legal certainty and practicality. Issues like regulating high-risk AI systems and compensating for copyrighted content could lead to disagreements over their effectiveness and implementation. Moreover, the principled approach, while sound in theory, requires careful scrutiny for practical viability. The effectiveness of various provisions, particularly those related to governance measures and internal processes, depends on how well they integrate into Brazil’s existing legal and regulatory frameworks. It is worth noting that quick alignment with international standards, although proactive, might not fully account for Brazil’s unique market needs. This rapid integration risks overlooking specific challenges and priorities in the Brazilian context. Thorough research into the impact of these technical concepts and thresholds is crucial to ensure they align with Brazil’s strategic goals.

Fernanda Dias

Picture by Brett Sayles
