Before the AI hype, TB’s project pioneered the mapping of technologies in use by public authorities

The 2020 Algorithmic Transparency mapping included an analysis of the possible risks that artificial intelligence tools pose to privacy, rights and civil liberties
Publication date: 23/12/2025
Nathália Mendes
Data protection · Technology and artificial intelligence · News reports

In 2020, the Superior Court of Justice used an automated system to analyze each appeal sent to the Court. The algorithm consulted previous decisions in the case, normative sources and legal precedents, and recommended a ruling, leaving the final word to the minister.

A tool of this kind represents a risk of rights violations by its very nature, since the algorithm learns from previous decisions and can reproduce prejudices – especially in cases involving historically vulnerable groups. If the algorithm errs in its recommendation on an appeal, a citizen could receive an ill-founded and biased court decision.

This tool and 43 others were identified in the pioneering survey by Transparência Algorítmica, a Transparência Brasil project that mapped the technologies in use by public authorities in 2020. Before the AI hype, the initiative sought to identify what systems were being contracted in the public sector and the possible risks they posed to privacy, rights and civil liberties.

The idea came about after TB’s executive director, Juliana Sakai, who at the time was operations director, took part in a machine learning immersion, Tech Camp for Civic Defenders, at Stanford University in the United States. Transparência Algorítmica was one of five projects selected for seed funding from the International Center for Not-for-Profit Law, offered to participants in the immersion.

“We saw the need to look into the possible threats to civil society posed by AIs contracted and developed in the public sector, given the growing use of technologies in policies and processes, without any transparency about their use. There was no survey of this kind,” recalls Sakai.

She adds that, at the time, discussions about algorithmic transparency were centered on big tech.

In order to broaden the scope of the mapping, TB entered into a partnership with the Office of the Comptroller General (CGU), the Ministry of Science, Technology and Innovation and the NIC.br Center for Web Technology Studies (Ceweb.br). A questionnaire, requesting information on the use of AI technologies, was drawn up jointly, under the coordination of researcher Tamara Burg, and sent by the CGU to the 319 federal executive bodies under its purview. More than 200 responses were received.

The same questionnaire was sent, via the Access to Information Law, to the federal legislative and judicial bodies and to the Federal Court of Auditors (TCU). In addition, with the support of Northwestern University, which was carrying out similar work in the United States, the initiative developed an algorithm that automatically searched official federal government portals for mentions of AI systems in use. Thus, beyond identifying the technologies already deployed, the project created a model for civil society to monitor newly contracted or developed AI.

The 44 tools mapped, which included chatbots, bank fraud detectors and document classification systems, varied in type of use and target audience. Most (20) were aimed at government agents themselves, supporting them in making decisions or taking action. These are the ones that most concern civil society, as they affect people’s lives and, directly or indirectly, the exercise of fundamental rights.

The Bem-te-vi tool, for example, was used by the Superior Labor Court to classify cases and predict how each case would proceed through the justices’ offices – automated decision-making that could affect the fundamental rights of access to justice and due process of law.

The process of identifying the possible risks of these systems involved the collaboration of 12 organizations specializing in different areas, so that the evaluation could address the impacts on different groups, such as children and adolescents, black people and consumers. The different expertise qualified and broadened the work of Transparência Algorítmica and, given the novelty of the initiative, the organizations showed great willingness to collaborate, says TB’s program director, Marina Atoji, project coordinator at the time.

“It was important to bring in the views not only of technology experts, but also of groups whose rights are notably more vulnerable in the face of the use of AI by public authorities, such as black people, as well as groups working in areas such as consumer law and the defense of human rights,” says Atoji.

The main risk pointed out in the analysis of the tools concerns precisely the training databases and the criteria used by the automated predictive and classification models. The conditions that government agencies set for these algorithms could lead them to reproduce pre-existing social discrimination.

According to TB’s executive director, there was a lack of transparency about how the algorithms worked and the bases used to train them, which limits social control to minimize the risk of rights violations. “Even if there are no threats to fundamental rights, society must be able to assess whether there is gain or harm in the use of a particular AI by the public authorities, since most of the systems identified had an impact on people’s lives,” says Sakai.

In light of this diagnosis, TB drew up a multi-sectoral framework for evaluating the state’s use of AI. It serves as a guide for public oversight bodies and civil society organizations to identify potential threats to rights and to social control and, on that basis, to draw up recommendations, demand the publication of information, corrections or tests, and even the eventual discontinuation of a tool.

The model covers four dimensions: risks to rights due to the nature of the tool; risks to rights due to algorithmic discrimination; risks to the right to privacy; and potential authoritarian abuse of civic space. It also includes an assessment of the level of transparency given to the entire process of using the tool.

In 2021, the framework served as a reference for the TCU in developing a methodology for auditing AI systems in the federal public administration. The Court’s report highlights that the Algorithmic Transparency mapping is the first of its kind and incorporates the initiative’s recommendations into its own assessment, demonstrating the novelty and impact of TB’s work.

The AI regulation imbroglio

The Algorithmic Transparency mapping indicated widespread use of tools in public security activities, and these are the systems that pose the greatest risks, “with known serious consequences such as the deprivation of an individual’s liberty or even a threat to life”.

The growing adoption of technologies for these purposes, and their serious impacts on civil society, underscored the urgency of regulating AI in Brazil. The absence of legal parameters leaves a legal, regulatory and ethical gap, with all the harmful consequences that ungoverned use of these technologies can bring.

These concerns led TB to investigate AI in public security in 2024. Through the Surveillance and Technology project, the organization found that, more often than not, public security agencies contract data management and online activity monitoring tools without clear provisions for the protection of personal data and without applying the General Data Protection Law (LGPD).

According to director Marina Atoji, the scenario has changed little since the diagnosis of Transparência Algorítmica. “There is still little dedication to producing impact reports on rights, especially in the use of AI in public security, and little transparency for citizens about whether and how technology is used in customer service and even in screening for access to public services,” she says.

But the first legislative discussions on regulating AI came only in 2023, with Bill 2.338/2023, and they continue today, amid intense lobbying from big tech. TB has followed the debate since its inception, including advocacy before the federal legislature, seeking to ensure that the regulation is transparent and protects rights – premises carried over from the pioneering initiative on AI in the public sector.
