The Roles and Limitations of AI in Government

By Gov CIO Outlook | Friday, November 23, 2018

The UK government's newly created Centre for Data Ethics and Innovation held a public consultation seeking guidance on governing the use of AI in the public sector as well as data usage in the private sector. Governments ready to adapt to the use of AI are spread across the Americas, Australia, Asia, and the Middle East. With their ties to the technology sector likely to intensify over time, governments have begun to approach start-ups to provide AI solutions for the public sector, and state-backed "GovTech" funding nurtures the growth of this sector.

Public-private AI collaborations are complicated: while the government answers to its citizens, the private sector is responsible to its shareholders. This raises questions about the protection and control of citizens' data and the economic value of the insights derived from it. Citizen-centric governments need ample in-house expertise to navigate the clashing priorities and ambitions across the policy-making and technological sides of government.

The role and limits of the private sector in developing AI for government are a rising concern as technology firms contribute more and more to government work. Several governments, including those of the U.S., France, China, and Russia, have updated their digital strategies and augmented them with an AI chapter. However, because governments are not well equipped to work with disruptive technologies, they need to keep certain points in mind. These include focusing on principles rather than the details of delivery, since public services are provided to citizens rather than consumers. Governments can sub-contract to the private sector for operational and project-management expertise, but this delegation of services should be regulated and actively enforced. Doing so will bring government operations on par with twenty-first-century technology; failing that, governments risk being forced to bow to private-sector players.

Relying on the private sector for digital innovation has brought successive governments costly IT contracts, a loss of in-house expertise, and poor user experiences; the public sector therefore needs to take charge of its own digital destiny. To build a true partnership with the private sector, AI researchers should be funded within government departments, and senior policymakers should receive appropriate training so they can benefit from that research. AI start-ups should be encouraged to focus on public-service issues through R&D investment and funding. The government should prioritize ethical AI usage and address issues of data protection and control.

As the use of AI in government becomes increasingly common, it is crucial to remember that there is no "objectively right" way of governing. The use of algorithms in governance, however, turns value judgments into code. These judgments often go unrecognized, which becomes problematic when they are encoded in machine learning systems built and maintained by the private sector. The government bears the responsibility of ensuring that the normative choices made in building machine learning systems are deliberate and legitimate.

If used correctly, AI and machine learning hold great promise for government.
