Senate discusses rules for artificial intelligence with ‘respect for democratic values’ as a principle | Policy

The report presented for the bill that would regulate artificial intelligence (AI) in Brazil lists “respect for democratic values”, “freedom of expression” and “non-discrimination” among its principles.

The document, still preliminary, was presented by Senator Eduardo Gomes (PL-TO) on Wednesday (24). There is still no date set for a vote on the proposal, but the Internal Temporary Commission on Artificial Intelligence in Brazil (CTIA), created to debate the subject, is due to conclude its work by May 23.

The rapporteur highlighted the difficulties of legislating on topics that are constantly changing, such as artificial intelligence itself and disinformation.

“We will have to learn how to make a living law, so this is a complicated business. (…) We will have to work on this legislative dynamic that provides legal certainty and at the same time updates rights and duties,” Gomes said.

The report makes clear that the proposal will not affect the population’s personal use of artificial intelligence, focusing only on commercial use of the technology. The text also leaves room for national defense to be exempt from the regulation.

Gomes unified in the report all the bills being processed in the Senate on artificial intelligence, including the proposal that gave rise to the commission, authored by Senate President Rodrigo Pacheco (PSD-MG).

President of the Senate, Rodrigo Pacheco, has already expressed concern about the issue. — Photo: TV Senado/Reproduction

The Minister of the Institutional Relations Secretariat, Alexandre Padilha, participated in the commission meeting and highlighted the relevance of a regulatory framework to encourage public and private investments in the topic.

“The first need in this regulatory debate is to establish very clearly what the rules are, so that investors have legal certainty, so that they can collaborate, and so that Brazil can be one of the main hubs for this production,” said the minister.

Among the proposals presented in the report is the creation of the National Artificial Intelligence Regulation and Governance System (SIA), a structure to “implement and monitor” compliance with the law.

According to the text, the SIA will be coordinated by a federal public administration body, but the specific body is not defined in the project. During the session, the rapporteur suggested that the National Authority for Personal Data Protection (ANPD) could be expanded to meet this task.

As proposed, the SIA’s duties will include regulating high-risk artificial intelligence systems that, for example, negatively impact the exercise of users’ rights and freedoms.

For systems classified as high-risk, the proposal requires companies to provide preliminary assessments of the system and of the impact the algorithm they are developing will have once it is operational. This data will be evaluated by independent professionals not linked to the National Artificial Intelligence Regulation and Governance System.

Project foresees the creation of a National Artificial Intelligence Regulation and Governance System. — Photo: GETTY IMAGES

The rapporteur kept one of Pacheco’s proposals, which allows a fine of up to R$50 million or 2% of revenue, in the case of a legal entity, for each violation committed by technology companies, and subjects them to the penalties of the 2018 General Personal Data Protection Law (LGPD).

The text provides, however, that companies will not be held liable if they prove that third parties acted in bad faith when using the tool to cause harm to victims.

Prohibitions and criminal identification

The report provides an extensive list of actions prohibited for the agent responsible for artificial intelligence and provides for a type of preliminary self-assessment of the systems. Among the prohibitions are:

  • subliminal techniques that induce the user to behave in a harmful way or pose a risk to their own health or safety or that of third parties;
  • exploitation of user vulnerabilities;
  • ranking of people based on social behavior or personality attributes;
  • production, dissemination or creation of material that characterizes child sexual abuse or exploitation;
  • use of AI as autonomous weapons, which select targets and attack without human intervention.

Regarding privacy and adaptation to the LGPD, the proposal prohibits the use of real-time remote biometric identification systems in public spaces. Public security systems, however, could use such tools to monitor the population in certain specific cases:

  • in the case of an investigation or criminal proceeding, with authorization from the court – as long as it is not for the investigation of a “criminal offense of lesser offensive potential”;
  • to search for missing people, victims of crimes or circumstances of serious threat to physical integrity;
  • in cases of investigation and repression of crimes caught in the act, as long as the crimes carry a minimum sentence of two years; and
  • for the recapture of escaped defendants, execution of arrest warrants and restrictive measures.

Another possibility brought by the proposal is for companies to join together to create a type of private agency to self-regulate the system. According to the text, this self-regulation may establish technical criteria for the systems on issues such as:

  • sharing experiences on the use of artificial intelligence;
  • contextual definition of governance structures provided for in this Law;
  • action of the competent authority and other agencies and authorities of the SIA to employ precautionary measures; and
  • a channel for receiving relevant information about the risks of using artificial intelligence from its associates or any interested party.
