

Is it time for limits on artificial intelligence and the news?


Credit: Unsplash/CC0 Public Domain

If you’re like most people, when you saw an update to the terms and conditions for using the New York Times’ website last week, you just accepted them and moved on.

But there was something unusual in this particular update—a prohibition on artificial intelligence.

“A lot of folks who are creating content—reporters and writers, but also artists and others—are discovering that their work is essential for these models to function,” said Robin Burke, professor and chair of the information science department at CU Boulder’s College of Media, Communication and Information. “And yet there’s this business model for which this is the input, but there’s no compensation for it.”

Generative AI platforms like ChatGPT create content based on user input. If you ask one to write a thank-you note to your grandmother for the sweater she knitted for your birthday, it draws on all the text it has “read” online and generates a fairly convincing note. But there is no recognition for the writers whose prose supplied the source material that makes the AI’s output possible.

The Times’ updated terms forbid AI systems from scraping its content to train machine learning systems. So far, it’s the most influential shot fired as AI’s impact looms over newsrooms, creative fields and beyond.

“The first round with AI has kind of been a free ride, because nobody was paying attention to what they were doing,” Burke said. “Now, I think it makes sense that the organizations producing content are thinking, ‘Do I really agree with this as a usage of my work?'”

A unique perspective on news, AI

Burke has unique expertise in this arena. He’s the son of a newspaper publisher and a scholar who is part of a team that’s creating tools for the close study of news recommender systems and their impacts on users, including journalists and editors.

The Times is facing the same challenges as other papers in this new chapter of the digital age. But with a robust subscriber base and a global audience, it is not in the same category as the daily newspapers that have been squeezed by technologies that moved audiences online and siphoned away significant advertising revenue. It’s easy to read journalists’ concerns over AI as a chance to correct what the industry got wrong at the dawn of the internet, when publishers made their news free to everyone online, counting on the new technology of digital advertising to pay the bills.

“In the early days of the Internet, people had a lot of different crazy ideas,” Burke said. “And certain models came out of that—some thrived, some failed—but as it relates to AI, we’re not far enough along to understand who the winners are.”

Need proof? A month before the Times changed its terms, the Associated Press signed a deal to allow ChatGPT to scrape its archive going back to 1985.

“AP is a little different, in that their model is very different from the Times—they get their money mostly from publishers for using their content,” Burke said. “It might also be the case that OpenAI saw the writing on the wall and looked to AP as a reliable source, especially in case other publishers start to lock them out.”

It’s something Burke feels is worth watching as he continues his research, particularly as smaller papers face the choice of whether to restrict access to their reporting or embrace AI’s role in the newsroom. If you can task AI with analyzing government records in search of scandal, it’s not a far leap to simply ask an algorithm to write the story, leaving human judgment out altogether.

“Part of that recommendation equation is this question of credibility,” Burke said. “So when an article is recommended to you, what does the system need to do to ensure it’s credible—even if I might prefer some version of the news that suits my ideal ideological inclinations better?

“It’s why I think it’s such an important research goal to explore more of this space.”

Provided by
University of Colorado at Boulder

Is it time for limits on artificial intelligence and the news? (2023, August 18), retrieved 20 August 2023.

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.

