BYLINE: Balaji Padmanabhan

News — The recent executive order on artificial intelligence (AI) is designed to streamline high-skilled immigration, create a raft of new government offices and task forces, and pave the way for the use of more AI in nearly every facet of life touched by the federal government, from health care to education, trade to housing, and more.

Reviewing the order, it appears well positioned to do just that. It is extremely timely and comprehensive in scope while remaining balanced.

The order clearly identifies major opportunities to improve government functions and deliver services more efficiently. It also moves with urgency to initiate specific plans for monitoring and mitigating potential harm. Moreover, it answers a growing consensus in both the tech industry and academia that clear federal guidance on AI is needed, especially with the 2024 elections looming.

Given the trillions in government spending on education and health care, the incentive to leverage AI for "standing up for consumers, patients and students" should come as little surprise. AI can significantly improve the quality of educational content and delivery and level the playing field. With the technology and AI capabilities we have today, it seems unconscionable that education quality varies so much based on a student's physical location. The same holds for health care, especially given AI's potential to improve the quality of care for seniors while cutting costs.

Elsewhere, AI can enhance routine, government-provided services for ID, traffic, travel, immigration and, of course, defense. These areas require federal agencies to both significantly enhance their own capabilities and contract with industry for expertise. The latter brings real risks in terms of data use, copyright, security, fairness and other types of harm, so oversight is critical. The executive order addresses this and lays the foundation for activities across agencies, such as the National Institute of Standards and Technology (NIST) and the Department of Commerce, to help realize these goals.

We have seen, time and again, bad actors exploiting the capabilities of technology, sometimes even better than the rest of us. The executive order recognizes this with specific mitigation objectives, which can also unleash a wave of innovation that starts within the federal enterprise and then percolates externally, with second-order gains for the economy. Some of the initiatives, such as building methods to detect AI-enabled fraud and to verify AI-generated content, point to significant potential for innovation.

As an example of industry and government collaborating to mitigate risk, consider how technology greatly facilitated interbank transactions and money flows over the last two decades. Many of these channels were also used for money laundering and other criminal activities. The federal government responded by developing oversight capabilities, which in turn led banks to invest in effectively flagging and reporting suspicious transactions to the relevant agencies.

We might need similar structures online as the use of AI becomes widespread.

Tech firms will need to demonstrate responsible use of AI and perhaps report cases where they find harm, and federal agencies might need new capabilities to take on such oversight. The executive order alludes to this with its "red-team safety tests," and ideally additional strategies will follow. Such monitoring is critical to ensuring the safety and integrity of our information channels, and ultimately of our democracy and our values as a society.

The order also addresses consumer privacy, an area where few guardrails exist, including guardrails that discourage platforms from using behavioral data to serve "clickbait" and send children down the path to harmful social media addiction. AI capabilities can exacerbate some of these problems for many reasons, including the ability to generate powerful, engaging (and blatantly false) "deepfakes." The order should prompt more thorough discussion of ways Congress can protect children from such harm.

In academia, MBA and other graduate programs, such as those at UMD's Smith School, are developing AI curricula that can guide industry and government in many of these directions. In addition to building a deeper understanding of AI capabilities, these new programs explore the careful design of incentives and penalties, governance and management strategies, the balancing of short-term and long-term interests, and the future of labor itself.

While these are exciting times for some of us, they are challenging times for many others, and they call for an "all hands on deck" approach to the way forward. It's fair to say that which direction is "forward" has become a lot clearer thanks to the scope and specificity of the executive order itself.

Balaji Padmanabhan is a professor in the Department of Decision, Operations & Information Technologies at the University of Maryland's Robert H. Smith School of Business.
