
German Constitutional Court strikes down predictive algorithms for policing


The German Federal Constitutional Court declared the use of Palantir surveillance software by police in Hesse and Hamburg unconstitutional in a landmark ruling on Thursday (16 February).

The ruling concludes a case brought by the German Society for Civil Rights (GFF) last year, hearings for which began in December. The plaintiffs argued that the software could be used for predictive policing, raising the risk of mistakes and discrimination by law enforcement.

The German state of Hesse has been using the software since 2017, though it is not yet in place in Hamburg. The technology is provided by Palantir, a US data analytics firm which received early backing from intelligence agencies, including the CIA, FBI and NSA.

The case was brought on behalf of 11 plaintiffs and rested on the argument that the software programme – named ‘Hessendata’ – facilitates predictive policing by using data to create profiles of suspects before any crime has been committed.

The legal basis of the acts authorising these systems was questioned by the GFF, which said that Hesse and Hamburg had not made clear the sources police could use for obtaining data or how much and on what grounds data mining could be conducted by law enforcement.

According to the court, the powers granted to the police in Hesse have been used thousands of times per year via the Hessendata platform.

However, state representatives have argued that the software is key to preventing crime and simply gathers and processes data collected elsewhere.

Palantir, from whose Gotham AI system Hessendata is derived, has said that it only provides the software for data analysis rather than the data itself.

“It is our customers who determine which data is relevant to the investigation in accordance with the relevant legal provisions,” the company said.

On Thursday, however, the constitutional court in Karlsruhe struck down acts which provided a statutory basis for police to process stored personal data through automated data analysis, in the case of Hesse, or automated data interpretation, in Hamburg.

The systems were deemed unconstitutional as they violated the right to informational self-determination.

“Given the particularly broad wording of the powers, in terms of both the data and the methods concerned, the grounds for interference fall far short of the constitutionally required threshold of an identifiable danger”, the Court said in a statement.

The use of automated measures that interfere with people’s rights in this way, it was added, “is only permissible to protect particularly weighty legal interests – such as life, limb or liberty of the person.”

The ruling voids the Hamburg act, meaning the system will not be installed there. The state of Hesse, however, where the tech is already in use, now has until 30 September to reform its legislation. In the meantime, the system will remain in use, subject to restrictions.

The case will also have broader implications, said Bijan Moini, head of GFF’s legal team: “Today, the Federal Constitutional Court prohibited the police from looking into the crystal ball and formulated strict guidelines for the use of intelligent software in police work. This was important because the automation of policing is just the beginning.”

In December, a report by the EU’s fundamental rights agency called on policymakers to ensure that AI algorithms used by law enforcement for predictive policing are tested for biases that could result in discrimination. The recommendation is particularly relevant to the AI Act, on which lawmakers are currently working.

The use of AI-driven tools by law enforcement is also a controversial point in the negotiations on the AI Act, landmark EU legislation to regulate artificial intelligence. The EU Council of Ministers has been pushing to give police forces more leeway, whilst progressive MEPs argue for a more restrictive approach.

EU fundamental rights agency warns against biased algorithms

The European Union Agency for Fundamental Rights (FRA) published a report on Thursday (8 December) dissecting how biases develop in algorithms applied to predictive policing and content moderation models.