The Colorado Springs Gazette

How should we go about regulating AI?

Paolo Mastrangelo is the co-founder and co-president of American Policy Ventures. Trey Price is a technology policy analyst for the American Consumer Institute.

The writers address the regulation of artificial intelligence.

POINT: Paolo Mastrangelo

Artificial intelligence is developing faster than many of us can imagine and is now becoming an integral part of everyday life. So far, businesses are the primary catalysts for this deployment. Studies show that within one year of the introduction of a new type of AI, one-third of respondents reported that their organizations were using the technology in some form, and 40% expected to increase their investment in it.

As we saw with the development of computers, once the workforce gets a taste of an advantageous technology, it is unlikely to let go of it. That is why our policymakers must get off the sidelines and regulate AI.

Despite AI’s burgeoning usage, Americans are only starting to warm up to the technology, in large part because of the “unknown” factor. In the same vein, they are skeptical that elected officials can grasp the gravity of AI’s capabilities well enough to regulate it. Recent polling found that 57% of voters said they were extremely or very concerned about the government’s ability to regulate AI in a way that effectively promotes innovation and protects citizens. While these voters are rightfully concerned and calling for action, it’s important that our lawmakers approach this issue delicately, neither applying too broad a brush nor letting the perfect be the enemy of the good.

Some legislators in Washington are catching on to this. Sen. Todd Young, R-Ind., is part of the bipartisan Senate AI working group formed to create rules that protect Americans from AI while promoting its potential uses. At an event in October, Young said the United States should use a “light touch” when regulating AI. He emphasized the importance of harnessing the power of AI, stating that “our bias should be to let innovation flourish.”

He also argued that the government must safeguard Americans from the risks that come with AI. He said this will involve carefully filling the gaps in existing laws.

There are several possibilities for how to go about regulating AI. In October, the White House issued an executive order regarding AI that will require safety assessments, research on labor market effects and consumer privacy protections.

The Department of Defense also announced a Generative AI Task Force. In Congress, there is the Senate working group, while House Democrats have also announced an active group working to enact legislation that authorizes more programs and policies geared toward harnessing the power of AI. No matter what form it takes, the regulation of AI will involve a tightrope walk: protecting national security without choking off potential innovation. Regardless of what future legislation and rulemaking look like, most voters have indicated they are concerned about the government’s actions so far.

While discussion of how we could regulate AI continues, tangible movement is only just beginning.

Americans, regardless of their political perspectives, are concerned about inaction. The ball is in Congress’s court to take these concerns and enact meaningful, bipartisan legislation to keep pace with the development of AI by regulating its risks and capitalizing on its rewards.

COUNTERPOINT: Trey Price

With new technology come new possibilities. A side effect is the question of where these possibilities fit into existing law. Dynamic, or algorithmic, pricing is a strategy in which artificial intelligence uses data collected about market conditions to determine pricing in real time.
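In concrete terms, a dynamic pricing rule can be as simple as a function that reads a few market signals and returns a price. The short Python sketch below is purely a hypothetical illustration of that idea, with made-up names and numbers; it is not the software at issue in any case discussed here.

# Hypothetical sketch of a dynamic pricing rule; every name and number
# here is an illustrative assumption, not drawn from this article.
def dynamic_price(base_price, competitor_prices, occupancy_rate):
    """Return a price adjusted in real time to observed market conditions."""
    avg_competitor = sum(competitor_prices) / len(competitor_prices)
    if occupancy_rate < 0.6:
        # Soft demand: undercut the average competitor slightly.
        return min(base_price, avg_competitor * 0.95)
    # Strong demand: move toward the top of the market as occupancy rises.
    return max(base_price, avg_competitor) * (1 + 0.5 * (occupancy_rate - 0.6))

# Example: three competitors' room rates and 80% occupancy.
print(dynamic_price(200.0, [210.0, 195.0, 220.0], 0.8))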

Algorithmic pricing has been a concern for antitrust regulators for years, even before the AI boom. While re-examining laws as circumstances change is a normal part of progress, there is currently insufficient evidence to suggest that dynamic pricing is leading to collusion. The potential for issues is not enough to show that they are actually occurring, and lawmakers should not create a solution for a problem that doesn’t exist.

Earlier this year, a class-action lawsuit was filed against several Las Vegas companies, including MGM Resorts and Caesars Entertainment, arguing that the companies used the same dynamic pricing software. The lawsuit claims this led to elevated hotel prices. The U.S. District Court in Nevada dismissed the case, finding that the plaintiffs did not provide enough evidence of collusion between the companies, but gave the claimants 30 days to submit a revised complaint addressing the issue.

While the lawsuit was recent, concern about dynamic pricing using algorithms is nothing new. In the field of antitrust, some have raised the possibility that these programs could lead to collusion.

The Federal Trade Commission released a public statement in 2017 discussing this possibility. The statement explored how algorithms could facilitate collusion in two ways: purposefully, by making it easier for colluding companies to respond when a rival tries to undercut them on price, or autonomously, as the AI behind the software learns and settles on an anti-competitive strategy.
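To see why the first mechanism worries regulators, consider a hypothetical Python sketch of a pricing bot that instantly matches any rival’s price cut; the names and values are assumptions made for this illustration only, not a description of any real system.

# Hypothetical illustration of algorithm-assisted collusion: matching any
# undercut at machine speed removes the usual payoff from deviating.
COORDINATED_PRICE = 100.0

def respond_to_rival(rival_price):
    """Match a rival's undercut immediately; otherwise hold the agreed price."""
    if rival_price < COORDINATED_PRICE:
        return rival_price        # punish the deviation within the same cycle
    return COORDINATED_PRICE      # otherwise keep prices where they are

# A rival that undercuts to 90 gains nothing, because every seller's
# software drops to 90 almost instantly.
print(respond_to_rival(90.0))    # 90.0
print(respond_to_rival(100.0))   # 100.0

The second, autonomous scenario is harder to illustrate simply, since it depends on learning algorithms discovering such a strategy on their own rather than being programmed with one.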

The complaint in the case against the hotels argues that using the same software means that the hotels don’t have to price independently. The complaint also claims that academic research supports the idea that dynamic pricing algorithms lead to anti-competitive behavior and higher consumer prices. Some have argued that regulators should explore implementing new regulations in response to the possibility of anti-competitive effects from algorithmic pricing.

Despite claims that algorithmic pricing could lead to anti-competitive behavior and collusion, the evidence is inconclusive. While the potential for intentional and tacit collusion exists in theoretical models, there are numerous obstacles to implementing such strategies in a real-world setting. A primary barrier is that algorithms are not advanced enough to achieve optimal pricing in real-world applications, as many do not consider essential factors such as product differentiation and the effect of advertising on consumer choice.

The first major real-world test of dynamic pricing in antitrust law also found that evidence for collusion was lacking. In dismissing the case, the court found insufficient evidence to suggest a conspiracy between the defendants. Instead, the plaintiffs merely inferred collusion, which is not enough to prove it occurred. Lawsuits like this one, and calls for regulation based on what may happen, should be tempered by that lack of evidence.

While automation may make collusion easier in the future, that is not sufficient to justify expanding regulation. If there comes a time when dynamic algorithmic pricing is used to create market conditions detrimental to consumers, the topic can be revisited. As it stands now, there is not enough evidence to warrant regulatory intervention; intervening now would mean creating a solution without a problem.
