The U.S. AI Safety Institute is facing an uncertain future.

One of the few U.S. government offices dedicated to assessing the safety of AI could be zeroed out unless Congress decides to fund it.

The AISI is a new federal research office that studies the risks of AI systems, created in November 2023 as a follow-through to President Joe Biden's AI Executive Order. It sits within NIST, a Commerce Department agency responsible for developing guidance on the deployment of various categories of technology.

But though the AISI has a budget, a director, and a research partnership with its counterpart, the U.K. AI Safety Institute, it could be wound down with a simple repeal of Biden's executive order.

Chris MacKenzie, a senior communications director at Americans for Responsible Innovation, a lobby group that represents AI developers, laid out the argument to TechCrunch: "If another president were to come into office and repeal the AI Executive Order, he would dismantle the AISI. And he has promised to repeal the AI Executive Order. So Congress formally authorizing the AI Safety Institute would ensure its continued existence regardless of who's in the White House."

In addition to securing the AISI's future, formal authorization could also bring more predictable, sustained appropriations from Congress for its work. The AISI's budget is roughly $10 million, a modest sum given the concentration of many of the world's major AI labs in Silicon Valley.

Appropriators in Congress tend to give higher budgeting priority to entities formally authorized by Congress, MacKenzie said, "with the understanding that those entities have broad buy-in and are here for the long run, rather than just a single administration's one-off priority."

In a letter submitted today, more than 60 companies, nonprofits, and universities implored Congress to pass a law codifying the AISI by year's end. The signatories include OpenAI and Anthropic, both of which have entered into agreements with the AISI to collaborate on AI research, testing, and evaluation.

Both the Senate and House have advanced bipartisan measures codifying the work of the AISI, but some conservatives in both chambers have opposed them; Sen. Ted Cruz (R-Texas) has pushed for the Senate bill to roll back its diversity programs.

Granted, the AISI is a pretty anemic organization when it comes to enforcement: its standards are voluntary. But think tanks and industry coalitions — and even tech titans like Microsoft, Google, Amazon, and IBM, all of which signed the letter above — see the AISI as the best available path to AI benchmarks that could eventually be used as a foundation for policy.

Other interest groups fear that shuttering the AISI would mean ceding AI leadership to other countries. At an AI summit in Seoul in May 2024, international leaders agreed to form a network of AI Safety Institutes, comprising agencies from Japan, France, Germany, Italy, Singapore, South Korea, Australia, Canada, and the European Union, alongside the U.K. and the U.S.

"As others rapidly advance, members of Congress can ensure the U.S. is not left out of the global AI competition by permanently authorizing the AI Safety Institute and providing certainty for its role in advancing U.S. AI innovation and adoption," Jason Oxman, president and CEO of the Information Technology Industry Council, an IT industry trade association, said in a statement. "We call on Congress to heed the call to action today by industry, civil society, and academia to pass necessary bipartisan legislation before the end of the year."
