If 2023 was a year of wonder about artificial intelligence, 2024 was the year to try to get that wonder to do something useful without breaking the bank.
There was a “shift from putting out models to actually building products,” said Arvind Narayanan, a Princeton University computer science professor and co-author of the new book “AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell The Difference.”
The first 100 million or so people who experimented with ChatGPT upon its release two years ago actively sought out the chatbot, finding it amazingly helpful at some tasks or laughably mediocre at others.
Now such generative AI technology is baked into an increasing number of technology services whether we’re looking for it or not — for instance, through the AI-generated answers in Google search results or new AI techniques in photo editing tools.
“The main thing that was wrong with generative AI last year is that companies were releasing these really powerful models without a concrete way for people to make use of them,” said Narayanan. “What we’re seeing this year is gradually building out these products that can take advantage of those capabilities and do useful things for people.”
At the same time, since OpenAI released GPT-4 in March 2023 and competitors introduced similarly performing AI large language models, these models have stopped getting significantly “bigger and qualitatively better,” resetting overblown expectations that AI was racing every few months to some kind of better-than-human intelligence, Narayanan said. That’s also meant that the public discourse has shifted from “is AI going to kill us?” to treating it like a normal technology, he said.
On quarterly earnings calls this year, tech executives often fielded questions from Wall Street analysts looking for assurances of future payoffs from huge spending on AI research and development. Building the AI systems behind generative AI tools like OpenAI’s ChatGPT or Google’s Gemini requires investing in energy-hungry computing systems running on powerful, expensive AI chips. Those systems demand so much electricity that tech giants announced deals this year to tap nuclear power to help run them.
“We’re talking about hundreds of billions of dollars of capital that has been poured into this technology,” said Goldman Sachs analyst Kash Rangan.
Another analyst at the New York investment bank drew attention over the summer by arguing AI isn’t solving the complex problems that would justify its costs. He also questioned whether AI models, even as they’re being trained on much of the written and visual data produced over the course of human history, will ever be able to do what humans do so well. Rangan has a more optimistic view.
“We had this fascination that this technology is just going to be absolutely revolutionary, which it has not been in the two years since the introduction of ChatGPT,” Rangan said. “It’s more expensive than we thought and it’s not as productive as we thought.”
Rangan, however, is still bullish about its potential and says that AI tools are already proving “absolutely incrementally more productive” in sales, design and a number of other professions.
Some workers wonder whether AI tools will be used to supplement their work or to replace them as the technology continues to grow. The tech company Borderless AI has been using an AI chatbot from Cohere to write up employment contracts for workers in Turkey or India without the help of outside lawyers or translators.
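For illustration only, here is a minimal sketch of what a contract-drafting request might look like using Cohere’s Python SDK. The prompt, model name and placeholder API key are assumptions, not details of Borderless AI’s actual system, and any generated contract language would still need review by a qualified lawyer.

```python
# Minimal sketch of drafting a contract clause with Cohere's Python SDK.
# The prompt, model choice and API-key placeholder are illustrative
# assumptions, not Borderless AI's actual pipeline.
import cohere

co = cohere.Client("YOUR_API_KEY")  # hypothetical key placeholder

response = co.chat(
    model="command-r",
    message=(
        "Draft a probationary-period clause for a full-time employment "
        "contract governed by the laws of Turkey. Use plain English."
    ),
)

print(response.text)  # draft text; a lawyer should still review it
```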
Video game performers with the Screen Actors Guild-American Federation of Television and Radio Artists who went on strike in July said they feared AI could reduce or eliminate job opportunities because it could be used to replicate one performance across a number of other movements without their consent. Concerns about how movie studios will use AI helped fuel last year’s film and television strikes by the union, which lasted four months. Some game companies have signed side agreements with the union codifying certain AI protections so they can keep working with actors during the strike.
Musicians and authors have voiced similar concerns over AI scraping their voices and books. But generative AI still can’t create unique work or “completely new things,” said Walid Saad, a professor of electrical and computer engineering and AI expert at Virginia Tech.
“We can train it with more data so it has more information. But having more information doesn’t mean you’re more creative,” he said. “As humans, we understand the world around us, right? We understand the physics. You understand if you throw a ball on the ground, it’s going to bounce. AI tools currently don’t understand the world.”
AI can mimic what it learns from patterns, he said, but can’t “understand the world so that they reason on what happens in the future.” That, he said, is where AI falls short.
“It still cannot imagine things,” he said. “And that imagination is what we hope to achieve later.”
Saad pointed to a meme about AI as an example of that shortcoming. When someone prompted an AI engine to create an image of salmon swimming in a river, he said, the AI created a photo of a river with cut pieces of salmon found in grocery stores.
“What AI lacks today is the common sense that humans have, and I think that is the next step,” he said.
That type of reasoning is a key part of the process of making AI tools more useful to consumers, said Vijoy Pandey, senior vice president of Cisco’s innovation and incubation arm, Outshift. AI developers are increasingly pitching the next wave of generative AI chatbots as AI “agents” that can do more useful things on people’s behalf.
That could mean being able to ask an AI agent an ambiguous question and have the model reason through and plan out the steps needed to solve the problem, Pandey said. A lot of technology, he said, is going to move in that direction in 2025.
Pandey predicts that eventually, AI agents will be able to come together and perform a job the way multiple people team up to solve a problem, rather than simply accomplishing tasks as individual AI tools. The AI agents of the future will work as an ensemble, he said.
Future Bitcoin software, for example, will likely rely on the use of AI software agents, Pandey said. Those agents will each have a specialty, he said, with “agents that check for correctness, agents that check for security, agents that check for scale.”
“We’re getting to an agentic future,” he said. “You’re going to have all these agents being very good at certain skills, but also have a little bit of a character or color to them, because that’s how we operate.”
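As a rough illustration of that idea, the sketch below shows several specialist “agents” reviewing the same piece of work in parallel while a coordinator merges their findings. The `call_model` function is a hypothetical stand-in for whatever LLM API a real system would use, and the specialist roles mirror the correctness, security and scale checks Pandey describes.

```python
# Illustrative sketch of an "ensemble of agents": specialist reviewers
# critique the same artifact in parallel; a coordinator merges their notes.
from concurrent.futures import ThreadPoolExecutor

def call_model(system_prompt: str, task: str) -> str:
    """Hypothetical stand-in for a real LLM call (OpenAI, Cohere, a local
    model, etc.). Returns a canned string here so the sketch runs end to end."""
    return f"({system_prompt.split()[3]}) notes on: {task[:40]}..."

SPECIALISTS = {
    "correctness": "You review code strictly for correctness and logic errors.",
    "security": "You review code strictly for security flaws.",
    "scale": "You review code strictly for scale and performance limits.",
}

def review(code: str) -> dict[str, str]:
    # Each specialist agent examines the same artifact independently.
    with ThreadPoolExecutor() as pool:
        futures = {
            name: pool.submit(call_model, prompt, code)
            for name, prompt in SPECIALISTS.items()
        }
        return {name: f.result() for name, f in futures.items()}

def coordinate(code: str) -> str:
    # A coordinator agent merges the specialists' findings into one report.
    findings = review(code)
    summary = "\n".join(f"[{name}] {note}" for name, note in findings.items())
    return call_model("You merge the reviewer notes into one report.", summary)

if __name__ == "__main__":
    print(coordinate("def add(a, b): return a - b  # buggy on purpose"))
```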
AI tools have also streamlined work in the medical field, in some cases lending a literal helping hand. This year’s Nobel Prize in chemistry, one of two Nobels awarded to AI-related science, went to protein-structure work led by Google DeepMind that could help discover new medicines.
Saad, the Virginia Tech professor, said that AI has helped bring faster diagnostics by quickly giving doctors a starting point to launch from when determining a patient’s care. AI can’t detect disease, he said, but it can quickly digest data and point out potential problem areas for a real doctor to investigate. As with other arenas, however, it poses a risk of perpetuating falsehoods.
Tech giant OpenAI has touted its AI-powered transcription tool Whisper as having near “human level robustness and accuracy,” for example. But experts have said that Whisper has a major flaw: It is prone to making up chunks of text or even entire sentences.
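One common mitigation, sketched below, is to transcribe audio with the open-source `openai-whisper` package and flag low-confidence segments for a human to check. The thresholds and file name are illustrative assumptions, not values recommended by OpenAI.

```python
# Sketch of one way to guard against Whisper's invented text: transcribe
# locally with the open-source `openai-whisper` package, then flag
# low-confidence segments for human review. Thresholds are assumptions.
import whisper

model = whisper.load_model("base")
result = model.transcribe("meeting_audio.mp3")  # hypothetical file

for seg in result["segments"]:
    # avg_logprob: mean token log-probability for the segment.
    # no_speech_prob: chance the audio was silence or noise the model
    # may have "filled in" with fabricated text.
    suspicious = seg["avg_logprob"] < -1.0 or seg["no_speech_prob"] > 0.6
    flag = "REVIEW" if suspicious else "ok"
    print(f"[{flag}] {seg['start']:.1f}s-{seg['end']:.1f}s: {seg['text']}")
```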
Pandey, of Cisco, said that some of the company’s customers who work in pharmaceuticals have noted that AI has helped bridge the divide between “wet labs,” in which humans conduct physical experiments and research, and “dry labs” where people analyze data and often use computers for modeling.
When it comes to pharmaceutical development, that collaborative process can take several years, he said — with AI, the process can be cut to a few days.
“That, to me, has been the most dramatic use,” Pandey said.