How Should Local Governments Approach AI and Algorithms?

How can government agencies avoid causing more harm than good when they use artificial intelligence and machine learning? A new report attempts to answer this question with a framework and best practices for agencies pursuing algorithm-based tools.

The report comes from the Pittsburgh Task Force on Public Algorithms. The task force studied municipal and county governments' use of AI, machine learning and other algorithm-based systems that make or assist with decisions impacting residents' opportunities, access, liberties, rights and/or safety.

Local governments have adopted automated systems to support everything from traffic signal changes to child abuse and neglect investigations. Government use of such tools is likely to grow as the technologies mature and agencies become more familiar with them, predicts the task force.

This status quo leaves little room for public or third-party oversight, and residents often have little information about these tools, who designed them or whom to contact with complaints.

"The goal isn't to quash tech adoption, just to make it responsible," said David Hickton, task force member and founding director of the University of Pittsburgh Institute for Cyber Law, Policy and Security.

The task force included members of academia, community organizations and civil rights groups, and received advice from local officials.

"We hope that these recommendations, if implemented, will offer transparency into government algorithmic systems, facilitate public participation in the development of such systems, empower outside scrutiny of agency systems, and create an environment where appropriate systems can responsibly flourish," the report states.

While automated systems are often intended to reduce human error and bias, algorithms make mistakes, too. After all, an algorithm reflects human judgments: developers choose what factors it will assess, how heavily each factor is weighted, and what data the tool will use to make decisions.
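
To make that concrete, here is a minimal sketch of a scoring rule; the factors, weights and threshold are invented for illustration and are not drawn from any real government system. The point is that the developer's choices, not the data alone, determine who gets flagged.

# Illustrative only: the factors, weights and threshold below are invented,
# not taken from any real government tool.
FACTOR_WEIGHTS = {
    "prior_referrals": 2.0,      # the developer decided this factor matters most
    "missed_appointments": 1.0,
    "age_under_25": 0.5,
}

def risk_score(record):
    # Weighted sum of whichever factors the developer chose to include.
    return sum(w * record.get(factor, 0) for factor, w in FACTOR_WEIGHTS.items())

def flag_for_review(record, threshold=3.0):
    # The cutoff for flagging a case is another human choice baked into the tool.
    return risk_score(record) >= threshold

print(flag_for_review({"prior_referrals": 1, "missed_appointments": 1, "age_under_25": 1}))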

Governments therefore should avoid adopting automated decision-making systems until they've consulted the residents who would be most impacted, through multiple channels, not just public comment sessions.

Residents must understand the tools and the ways they'll be used, believe the proposed approach tackles the problem at hand in a productive way, and agree that the potential benefits of an algorithmic system outweigh the risk of errors, the task force said.

"Sufficient transparency allows the public to ensure that a system is making trade-offs consistent with public policy," the report states. A common trade-off is balancing the risk of false positives against the risk of false negatives. A programmer may choose to weigh those differently than policymakers or the public might prefer.

Constituents and officials must decide how to balance the risk of an automated system making a mistake. For instance, Philadelphia probation officials have used an algorithm to predict the likelihood of people released on probation becoming reoffenders. These officials have required individuals on probation to receive more or less supervision based on the findings. In this case, accepting more false positives means increasing the chance that people will get inaccurately flagged as higher risk and be subjected to unnecessary intensive supervision, while accepting more false negatives may lead to less oversight for individuals who are likely to reoffend.
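
A small sketch, with invented scores and outcomes, shows the trade-off in miniature: moving the decision threshold up or down changes the mix of false positives and false negatives produced by the very same predictions.

# Illustrative only: scores and outcomes are made up to show how one model
# yields different error mixes at different thresholds.
cases = [
    # (predicted risk score, actually reoffended)
    (0.9, True), (0.8, False), (0.7, True), (0.6, False),
    (0.5, False), (0.4, True), (0.3, False), (0.2, False),
]

def error_counts(threshold):
    false_positives = sum(1 for score, reoffended in cases
                          if score >= threshold and not reoffended)
    false_negatives = sum(1 for score, reoffended in cases
                          if score < threshold and reoffended)
    return false_positives, false_negatives

for threshold in (0.3, 0.5, 0.7):
    fp, fn = error_counts(threshold)
    print(f"threshold {threshold}: {fp} false positives, {fn} false negatives")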

For example, an individual may be flagged by a pretrial risk assessment algorithm as unlikely to make their court date. But there's a big difference between officials jailing the person before the court date and officials following up with texted court date reminders and transportation assistance.

Community members told the task force that the safest use of algorithms may be to identify root problems (especially in marginalized communities) and allocate services, training and resources to strengthen community support systems.

Residents also emphasized that issues can be complex and often require decision-makers to consider individual circumstances, even if also using algorithms for help.

Systems should be vetted before adoption and reviewed regularly, such as monthly, to see whether they're performing well or need updates. Ideally, independent specialists would evaluate sensitive tools and employees' training on them, and in-house staff would examine the workings of vendor-provided algorithms.
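
As a hedged sketch of what a routine review could involve, the snippet below recomputes basic error rates from a handful of made-up decision records; a real audit would draw on the agency's actual logs and far more thorough fairness checks.

# Hypothetical periodic check: the records and field names are placeholders,
# not any agency's real schema or data.
decisions = [
    # (flagged_by_system, adverse_outcome_observed)
    (True, True), (True, False), (False, False), (False, True),
    (True, True), (False, False), (True, False), (False, False),
]

tp = sum(1 for flagged, outcome in decisions if flagged and outcome)
fp = sum(1 for flagged, outcome in decisions if flagged and not outcome)
tn = sum(1 for flagged, outcome in decisions if not flagged and not outcome)
fn = sum(1 for flagged, outcome in decisions if not flagged and outcome)

# Share of people without an adverse outcome who were flagged anyway,
# and share of people with an adverse outcome whom the system missed.
print(f"false positive rate: {fp / (fp + tn):.0%}")
print(f"false negative rate: {fn / (fn + tp):.0%}")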

Contract terms should require vendors to provide details that can help evaluate their algorithms' fairness and effectiveness. This step could prevent companies from hiding behind claims of trade secrecy.

Local governments face few official limitations on how they can use automated decision-making systems, Hickton said, but residents could put pressure on elected officials to make changes. Governments could also appoint officials or boards responsible for overseeing and reviewing algorithms to improve accountability.

"I can't predict where this will all go, but I'm hopeful that what we've done is put a spotlight on a problem and that we are giving the public greater access and equity in the discussion and the solutions," he said.
