
How Is Our State Government Using Artificial Intelligence?

Connecticut’s chief information officer said artificial intelligence is used in different ways in the state.


ChatGPT has been all the buzz in the tech world.

The software is a type of artificial intelligence, or AI, that can have human-like conversations, write term papers and more.

ChatGPT is powered by algorithms, sets of rules or instructions that are the basic building blocks of artificial intelligence, and those algorithms are used by governments and businesses, too. But how?
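To make the term concrete, here is a minimal sketch of an algorithm in Python. The keyword rule and sample messages are invented purely for illustration and have nothing to do with any state system:

```python
# A minimal sketch of an algorithm: a fixed set of rules applied to input.
# The keywords and messages here are hypothetical examples, not real data.

SUSPICIOUS_KEYWORDS = {"password", "wire transfer", "urgent"}

def flag_message(text: str) -> bool:
    """Return True if the message contains any suspicious keyword."""
    lowered = text.lower()
    return any(keyword in lowered for keyword in SUSPICIOUS_KEYWORDS)

messages = [
    "Quarterly report attached",
    "URGENT: confirm your password now",
]

for msg in messages:
    print(f"{flag_message(msg)!s:>5}  {msg}")
```

Security tools of the kind Raymond describes apply far more sophisticated rules, but the principle is the same: instructions, applied to data, producing an output.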

In an exclusive interview with NBC Connecticut Investigates, Connecticut's Chief Information Officer Mark Raymond said AI is not used to make key decisions about people.

Raymond explained the state does use AI to keep state data secure and protect state computers from viruses and ransomware.

“We don't really use the technology in a way that makes decisions, right, but we use it to look at the environment in which we operate,” Raymond said.

But recent research by Yale Law School concluded our state has used algorithms to assign students to schools and to try to save at-risk children from life-threatening episodes.

Raymond disputed that: “Some of these things can be taken a little out of context. You know, DAS (Department of Administrative Services) does not use it that way. It's our understanding that DCF (Department of Children and Families) does not as well.”

The American Civil Liberties Union of Connecticut said the use of AI for government work remains in its early stages and the time is now to set boundaries.

“There's an absolute black box around what the government is doing with algorithms. We don't know anything,” Executive Director David McGuire said.

ACLU Connecticut said it wants legislation requiring state agencies to:

  • Disclose which algorithms they're using
  • Test the algorithms before they're used
  • Conduct regular audits of the algorithms' results afterward
  • Provide an avenue for appeal if someone believes an algorithm incorrectly processed a decision

McGuire said it all comes down to the data the algorithms are fed.

“[Datasets], you will see, in terms of arrest data and incident data, will oversample Black and brown folks because of the nature of our law enforcement deployment. If you then feed that data into an algorithm and ask it to make, for example, risk assessments, you're going to have risk assessments that are overly harsh and oftentimes completely inaccurate about Black and brown communities,” McGuire said.
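To make McGuire's point concrete, here is a purely hypothetical sketch in Python: two neighborhoods with identical offense rates, one patrolled twice as heavily, so a naive risk score built from recorded arrests rates the oversampled neighborhood as twice as risky. Every name and number below is invented for illustration; this is not any actual risk-assessment system.

```python
# Hypothetical illustration of sampling bias, not a real risk-assessment model.
# Suppose two neighborhoods have identical offense rates, but Neighborhood A
# is patrolled twice as heavily, so its offenses are recorded twice as often.

true_offense_rate = 0.05               # identical in both neighborhoods
population = 10_000                    # per neighborhood
patrol_factor = {"A": 2.0, "B": 1.0}   # A is oversampled in arrest records

# Recorded arrests scale with patrol intensity, not underlying behavior.
recorded_arrests = {
    hood: int(population * true_offense_rate * factor)
    for hood, factor in patrol_factor.items()
}

# A naive "risk score" computed from recorded arrests inherits the bias.
risk_score = {hood: arrests / population for hood, arrests in recorded_arrests.items()}

for hood in ("A", "B"):
    print(f"Neighborhood {hood}: recorded arrests={recorded_arrests[hood]}, "
          f"naive risk score={risk_score[hood]:.2%} (true rate={true_offense_rate:.2%})")
```

Running the sketch, Neighborhood A scores 10% risk against a true rate of 5%, solely because of how the data was collected.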

A bipartisan bill, Senate Bill 1103, has been making its way through the legislature. It addresses many of the concerns about algorithms and AI, but not all of them.

One of the bill’s chief architects, Sen. James Maroney of Milford, said “Too many times we let things get out of the barn before we try to regulate them. I think it's important that we get started now before we get too far along with AI and we lose control.”

Raymond said he prefers state use of algorithms to focus less on decision-making and more on evaluating outcomes of work performed by state employees.

“I can see a world a couple of years down the line where, you know, we are checking, saying: you as a human, we're going to do something, you might want to take another look at this, right? So using the technology to hold us accountable in decisions we're making, instead of recommending or making them on our behalf,” Raymond said.
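As a purely illustrative sketch of the pattern Raymond describes, assuming a hypothetical review tool rather than any actual state system, the idea is to compare a human's decision against a model's expectation and flag disagreements for a second look:

```python
# Hypothetical sketch of "AI as a second look": flag human decisions that
# diverge from a model's expectation, rather than making decisions itself.
# All case IDs and outcomes below are invented for illustration.

from dataclasses import dataclass

@dataclass
class Decision:
    case_id: str
    human_outcome: str   # what the person decided
    model_outcome: str   # what a model would have expected

def needs_review(decision: Decision) -> bool:
    """Flag the case for a human re-review when the two outcomes disagree."""
    return decision.human_outcome != decision.model_outcome

decisions = [
    Decision("case-001", human_outcome="approve", model_outcome="approve"),
    Decision("case-002", human_outcome="deny", model_outcome="approve"),
]

for d in decisions:
    if needs_review(d):
        print(f"{d.case_id}: flagged for a second look "
              f"(human={d.human_outcome}, model expected={d.model_outcome})")
```

The key design point in Raymond's framing is that the model never overrides the person; it only prompts another look.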

As for ChatGPT use by the state, Raymond said his office is not investigating that at this time. However, he added that vendors like Microsoft and Salesforce.com have talked about embedding emerging technologies like ChatGPT into their products.
