AI 2024 in review: The 10 most notable AI stories of the year
As we kick off 2025, the IoT Analytics research team has evaluated last year’s top AI stories. This article highlights 10 of the most impactful developments, along with some general observations about the AI market in 2024.
Since 2015, IoT Analytics has published an annual review of the top 10 IoT stories of the past year (the 2024 IoT review will be published next week). This year, for the first time, IoT Analytics is adding an AI review as well, as AI has become a centerpiece of many companies’ strategies and a major research coverage area.
The general AI market in 2024
The AI boom drove record valuations and market growth in 2024. AI infrastructure providers like NVIDIA and Broadcom and cloud AI services providers like Microsoft saw strong growth in their respective AI revenue streams, with NVIDIA surpassing Apple and Microsoft several times in 2024 as the world’s most valuable company. IoT Analytics’ ongoing research into the GenAI market* estimates that NVIDIA’s revenue from data center GPUs increased by 142% in 2024, pushing its market capitalization north of $3.5 trillion. Meanwhile, AI research and development companies OpenAI and xAI each raised over $6 billion in their funding rounds (more on this below), with OpenAI now valued at $157 billion. Indeed, AI enthusiasm was so strong in 2024 that it helped push the tech-heavy NASDAQ past 20,000 points for the first time.
Note: IoT Analytics plans to publish the Generative AI Market Report 2025 in January 2025. Those interested in accessing these reports when they are released can sign up for IoT Analytics’ IoT Research Newsletter by clicking below.
The 10 most notable AI stories in 2024
Throughout 2024, IoT Analytics monitored significant developments in AI technology as part of its growing coverage of the field. From the IoT Analytics team’s perspective, many of the news stories below spoke to larger trends that emerged or became more pronounced throughout the year. With that in mind, these are, in the team’s opinion, the 10 most notable AI stories of 2024, along with the key news behind each (listed in chronological order of the leading story highlighted).
1. Most notable AI-related cybersecurity story: State-sponsored hackers using LLMs to improve their attacks
On February 14, 2024, Microsoft announced that state-sponsored hackers from Russia, China, North Korea, and Iran had been using tools from OpenAI—which Microsoft significantly backs—to improve their hacking campaigns. Microsoft stated that the groups used AI differently. For example, the Russian GRU generally used large language models (LLMs) to research “various satellite and radar technologies that may pertain to conventional military operations in Ukraine.” Meanwhile, North Korean hackers reportedly used LLMs to generate content for spear-phishing campaigns, and Iranian hackers used the models to write more convincing emails.
2. Most impactful AI-related regulation: EU AI Act
EU AI Act passed and entered into force. On March 13, 2024, the EU Parliament adopted the EU AI Act, and it entered into force on August 1, 2024. Touted as the first formal, comprehensive AI regulation in the world, it establishes rules for the use of AI in the EU by classifying AI systems into four risk categories based on the potential harm caused by their (mis)use:
- Unacceptable risk – These are AI systems deployed for certain prohibited uses, including (but not limited to):
- Subliminal, manipulative, or deceptive actions to distort behavior or impair decision-making;
- Exploiting vulnerabilities related to age, disability, or socio-economic circumstances;
- Biometric categorization that infers sensitive attributes (e.g., race or political opinion)—with the exception of law enforcement labeling or filtering of lawfully acquired biometric data;
- Social scoring, such as evaluating or classifying people or groups based on social or personal traits in ways that can lead to unfavorable treatment of those people; and
- Inferring emotions in workplaces or academic institutions, with the exception of medical or safety reasons.
- High risk – This is the highest allowable risk category, and much of the EU AI Act focuses on regulating these AI systems. These systems include (but are not limited to):
- AI systems used as a safety component or as a product covered by the EU laws listed in Annex I of the EU AI Act and that must undergo third-party conformity assessments under those laws; and
- Use cases listed under Annex III of the EU AI Act, such as permitted biometric operations and critical infrastructure, among others.
Providers of high-risk AI must meet requirements to operate their AI, such as establishing a risk management system, conducting data governance, and providing technical documentation demonstrating compliance, among other requirements.
- Limited risk – Comprising a much smaller section of the act, this category comes with much lighter transparency requirements. In short, developers and providers of these systems must ensure that end users are aware that they are interacting with an AI.
- Minimal risk – These are unregulated and include the majority of AI applications available on the EU market, such as AI-enabled video games and spam filters.
Japan and Brazil also introduced AI regulations. While the EU AI Act is touted as the first of its kind, countries elsewhere have either initiated legislation to regulate AI or issued guidelines that align with existing laws but are not intended to be as binding as the EU AI Act. For example, in Asia, Japan’s government published the AI Guidelines for Business 1.0, a voluntary guideline based on existing laws and meant to encourage responsible AI development and use, in April 2024. Meanwhile, in South America, Brazil’s senate introduced Bill No. 2338/2024, its first bill intended to regulate AI (including algorithm design and technical standards), in May 2024 and passed it in December 2024.
In the US, no federal-level AI regulation exists; however, in 2024, at least 24 US states, Puerto Rico, the US Virgin Islands, and Washington, DC, introduced AI bills, and at least 31 states, Puerto Rico, and the US Virgin Islands adopted resolutions or enacted legislation.
3. Most significant AI hardware development: NVIDIA’s Blackwell series and its delay
NVIDIA announces its next generation of data center GPUs. On March 18, 2024, after years of hype around its A100- and H100-series data center GPUs, US-based chip designer and developer NVIDIA—by far the largest provider of the data center GPUs that power AI—announced its new Blackwell GPU architecture during its GTC 2024 keynote address. Within this architecture, three products were announced: the B100 and B200 GPUs and the GB200, a data center superchip combining one Grace CPU with two B200 GPUs.
NVIDIA promises vast performance and energy-efficiency improvements for the Blackwell series. However, specific numbers are hard to come by, given that NVIDIA’s Blackwell architecture documentation notes that projected performance is subject to change and that NVIDIA faced design flaws (more on this below). Still, the projected figures show substantial improvements over the H (“Hopper”) series, including up to 6x the queries per second and 30x the output tokens per second per GPU.
Delays arose for the Blackwell series’ release. In August 2024, NVIDIA reportedly told cloud providers that the highly anticipated B200 AI chip—originally expected to be released in Q4 2024 and a core component of the GB200 data center superchip—would be delayed into 2025 due to a design flaw discovered “unusually late in the production process.” While NVIDIA CFO Colette Kress assured investors that the GPU was in full production during the company’s quarterly earnings call in November 2024, and some reports indicated that NVIDIA was back on track to release the B200 in December 2024, news of its (or the B100’s) release has yet to appear publicly.
4. Most impactful M&A activity: Microsoft and Inflection AI
Microsoft acquires Inflection’s tech and team. On March 19, 2024, Microsoft established a new consumer AI division called Microsoft AI. In staffing this division, Microsoft hired Mustafa Suleyman and Karén Simonyan, co-founders of US-based AI startup Inflection AI, as well as most of Inflection AI’s team. Further, Microsoft entered into a series of commercial agreements with Inflection AI, including nonexclusive licensing to use Inflection AI’s intellectual property (among other deals).
In essence, according to the UK Competition and Markets Authority (CMA), Microsoft acquired most of Inflection AI’s assets, and the CMA believes such a transaction falls under its merger control jurisdiction, even though Inflection AI continues to exist as an independent entity (just with new leadership and staff). However, the CMA added that this transaction, which some refer to as a quasi-merger, did not present “a realistic prospect of a substantial lessening of competition,” while noting that it would have regulatory purview over similar situations if they present competition concerns.
Regulators question whether such transactions create unfair market conditions. The Microsoft case is only one example. At the start of 2024, the US Federal Trade Commission (FTC) announced that it was opening inquiries into multi-billion-dollar investments by Amazon into Anthropic, Google into Anthropic, and Microsoft into OpenAI (each giving the larger company a significant stake in the smaller one). The inquiries are meant to investigate whether these companies are acquiring effective control over the smaller entities without merging with or acquiring them outright, thereby avoiding regulatory scrutiny and creating unfair market competition.
Though “quasi-merging” and “acquihiring” are not new concepts, they appear to be occurring more frequently as the AI race continues to heat up, and regulators worldwide appear to be taking note, even if they are not preventing these transactions (so far).
6. Most significant LLM advancement: Meta’s open model LLaMA 3.1 beats closed models
LLaMA 3.1 is on par with or outperforms GPT-4 and Claude models. On July 13, 2024, Meta introduced its updated LLaMA 3.1 (short for Large Language Model Meta AI), offered as an open-weight model under Meta’s Llama 3.1 Community License, and released the model’s benchmark results against other popular models from OpenAI, Anthropic, and Mistral. According to Meta, across 15 benchmark tests, the 405-billion-parameter model (LLaMA 3.1 405B) outscored OpenAI’s GPT-4 and GPT-4o and Anthropic’s Claude 3.5 Sonnet in 7 (Claude 3.5 Sonnet scored highest in 6, for comparison). In the benchmarks where it was not the highest scorer, it generally remained on par with the other models.
In similar benchmark testing for the 8-billion- and 70-billion-parameter LLaMA 3.1 models (LLaMA 3.1 8B and LLaMA 3.1 70B, respectively), both outscored comparable Google, Mistral, and OpenAI models in 11 of 12 tests. It is worth noting that Meta ran the same 15 benchmark tests as for the 405B model but did not include results for the other models in three of the tests.
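Because LLaMA 3.1’s weights can be downloaded and run locally, organizations were able to benchmark and deploy the models on their own infrastructure. The following is a minimal, illustrative sketch (not taken from Meta’s documentation) of loading the 8B instruct variant with the Hugging Face transformers library; the repository identifier and generation settings are assumptions for illustration, and access to the weights is gated behind Meta’s license terms.

```python
# Minimal sketch: loading an open-weight Llama 3.1 instruct model locally.
# Assumes the `transformers`, `torch`, and `accelerate` packages are installed
# and that access to the gated Hugging Face repository below (assumed
# identifier) has been granted and authenticated via `huggingface-cli login`.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-3.1-8B-Instruct"  # assumed repository name

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # half precision to fit on a single large GPU
    device_map="auto",           # spread layers across available devices
)

# Instruct-tuned Llama models expect a chat template for prompting.
messages = [{"role": "user", "content": "Summarize the EU AI Act in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Strip the prompt tokens and print only the newly generated answer.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```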
6. Most notable AI-based layoffs: Klarna
Klarna shows that the threat of corporate headcount reduction due to AI has become real. On August 27, 2024, Sweden-based payments company Klarna said it had cut hundreds of roles and expected to cut even more as it implements AI to handle customer queries. What makes this the most notable AI-based layoff is the company’s statement that its AI chatbot could perform the work of 700 employees, reducing the average query resolution time from 11 minutes to 2—in this case, AI has directly replaced human workers. Further, the company announced it would not recruit anyone other than engineers for some time.
Intuit makes large cuts as it shifts its efforts to AI. Klarna was not alone, however. In July 2024, US-based tax software company Intuit announced it would lay off 1,800 employees to focus its efforts on AI tools like Intuit Assist, making clear that the layoffs were not a cost-saving measure. At the same time, Intuit said it would hire at least the same number of people in engineering, product, and customer-facing roles (like sales and marketing) to support its AI efforts in 2025. Though Intuit’s layoff figure is higher, the company is only now shifting its efforts toward AI, whereas Klarna has already replaced workers with AI.
Non-AI-skilled tech workers see technology companies shifting their focus to AI offerings. While the two cases above show AI taking over customer service and other non-tech roles, major technology companies have also made significant cuts to their tech-skilled workforces as part of a refocus on their AI offerings. Throughout 2024, the tech unemployment rate fluctuated, approaching a 4-year high in June (3.7%) before dropping to—and remaining around—2.5% in September. Major tech-skilled layoff announcements in 2024 include:
- January: Global tech giant Google cut over 1,000 employees across multiple teams, including hardware and Google Assistant teams, to focus more on its AI offerings, like its Gemini GenAI (formerly known as Bard). Google CEO Sundar Pichai noted that many more cuts were expected throughout 2024 as it continued to reallocate resources toward AI (in December 2024, Pichai announced a 10% cut in managerial roles, part of this AI refocus effort).
- June: Global software and cloud services giant Microsoft announced it would lay off over 1,000 employees in its mixed-reality and Azure departments to put more effort into defining “the AI wave and [empowering its customers] to succeed in the adoption” of AI, according to a company email from Jason Zander, executive VP of Strategic Missions and Technologies at Microsoft.
- August: Multinational network hardware and software company Cisco laid off roughly 7% of its workforce as it shifted investments toward AI, including AI networking for cloud applications and AI infrastructure. These cuts followed nearly 4,000 job cuts in February 2024.
More job cuts due to AI expected to come in 2025. According to IoT Analytics research in early 2024, AI and GenAI skills have become the most sought-after by employers in general. Further, a 2024 survey by Staffing Industry Analysts of over 900 US business leaders found that 30% of companies replaced workers with AI, with 38% of companies that plan to use AI in 2025 saying they expect to replace workers with the technology in the next year.
AI may be a scapegoat for some. It is worth noting, however, that many companies may be using AI as an excuse for layoffs, as it sounds less damaging or harsh than saying the layoffs are due to cost-saving measures or a push to increase profits. Further, attributing layoffs to AI can appeal to investors, as it implies increased efficiency and productivity. In February 2024, US-based tech giant Meta’s CEO, Mark Zuckerberg, shared his view that the layoffs are a symptom of post-COVID-19 realities, in which companies “overbuilt” during the pandemic in response to uncertainty and are now trying to run leaner for greater efficiency.
7. Biggest challenge for AI companies: Inability to improve LLM performance
Major LLM advancement projects miss their targets. In September 2024, OpenAI wrapped up the initial round of training for a new LLM (referred to internally as Orion) that it hoped would vastly surpass its previous models, much as OpenAI’s GPT-4o (released May 2024) was considered a massive step up from GPT-4 Turbo. However, the model has reportedly thus far failed to live up to expectations and is not considered as big a step up from current models as GPT-4o was from GPT-4 Turbo, or GPT-4 was from GPT-3.5.
OpenAI is not alone in this apparent disappointment, though. LLM providers Google and Anthropic also appear to have not met expectations for their anticipated Gemini and Claude model updates, with new releases delayed for them as well.
Limited new data is forcing new LLM development paradigms. Over the last few years, LLM companies’ expectations of great advancements in their models were driven by “scaling laws”—the idea that more computing power, more data, and larger models lead to ever greater leaps in AI capabilities. However, a problem arose in 2024: limited new (human-generated) information on which to train LLMs. Early LLMs learned from the internet and other sources—i.e., decades’ worth of accumulated human knowledge. However, relatively little new, reliable human-generated information has come online over the past two years. Another problem has arisen since the public release of the various LLMs: AI cannibalism. The use of GenAI for online content has become so commonplace that LLMs are beginning to ingest AI-created content. This circular intake not only further limits and dilutes new human-created content but can also degrade the accuracy of the information the models learn.
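To make the scaling-law idea concrete, the sketch below encodes one widely cited empirical form, the Chinchilla-style loss curve from Hoffmann et al. (2022), in which pretraining loss falls as a power law in the number of parameters N and training tokens D, down to an irreducible floor E. This illustration was added for this review rather than taken from the sources above, and the constants are approximate published fits used only as placeholders.

```python
# Illustrative sketch of a Chinchilla-style scaling law (Hoffmann et al., 2022):
#   L(N, D) = E + A / N**alpha + B / D**beta
# where N is the number of model parameters and D is the number of training
# tokens. The constants below are approximate published fits, used here only
# to illustrate the shape of the curve, not as authoritative values.
def predicted_loss(n_params: float, n_tokens: float,
                   E: float = 1.69, A: float = 406.4, B: float = 410.7,
                   alpha: float = 0.34, beta: float = 0.28) -> float:
    """Predicted pretraining loss for a model of n_params trained on n_tokens."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Scaling model size and data yields diminishing returns as the irreducible
# term E starts to dominate, which is one intuition behind the "ceiling"
# debate described in this section. Example (hypothetical) configurations:
for n, d in [(8e9, 1e12), (70e9, 5e12), (405e9, 15e12)]:
    print(f"N={n:.0e} params, D={d:.0e} tokens -> predicted loss {predicted_loss(n, d):.3f}")
```

Under this form, each additional order of magnitude of parameters or data buys a progressively smaller reduction in loss, which is one way to frame the plateau the industry began to discuss in 2024.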
Some question the limits of AI, while others rethink what it means to progress. Taking into account the lack of great LLM advancements and the limitation of new data on which to train AI, AI labs appear to be accepting that the scaling laws are not truly universal. Some in the AI industry are starting to believe that the various models are converging on a ceiling of capabilities. However, others remain optimistic and see scaling laws in a new light—as dynamic and responsive to new paradigms, thus requiring new strategies when developing and training AI (e.g., test-time scaling).
New features make up for limited LLM progress. Though model progress may be plateauing, AI companies are still working to bring added value to their current models. For example, OpenAI released the preview of its o1 and o1-mini models in September 2024 and fully released them in December 2024 (along with introducing its upcoming o3 model). While the o1 model takes longer to process a query, it works through “chains of thought” to construct and correct its answers before responding (much like a human would take time to break down a complex problem), enhancing its reasoning abilities and answer accuracy.
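From a developer’s perspective, the o1-series models are called much like earlier chat models, with the multi-step reasoning handled server-side. The snippet below is a minimal, hedged sketch using OpenAI’s official Python SDK rather than an excerpt from OpenAI’s documentation; the model identifier, prompt, and account availability are assumptions for illustration.

```python
# Minimal sketch: querying an OpenAI o1-series reasoning model with the official
# `openai` Python SDK (pip install openai). Assumes the OPENAI_API_KEY
# environment variable is set and that the "o1-mini" model identifier is
# available to the account.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o1-mini",  # assumed reasoning-model identifier
    messages=[
        {
            "role": "user",
            "content": "A train travels 120 km in 90 minutes. What is its average speed in km/h?",
        }
    ],
)

# The hidden chain of thought is not returned to the caller; only the final
# answer is, while the reasoning tokens consumed are still reflected in usage.
print(response.choices[0].message.content)
print(response.usage)
```

At launch, the o1 models reportedly did not support some familiar parameters, such as system prompts or custom temperature values, since the models manage their own internal reasoning process.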
Further, in October 2024, OpenAI released ChatGPT Search, which lets ChatGPT search the web and return answers with source citations directly in the ChatGPT interface. Finally, in December 2024, OpenAI and Google released video generation capabilities in their offerings, named Sora and Veo 2, respectively.
8. Most significant AI research accomplishment: Two Nobel Prizes
AI pioneers win Nobel Prizes. On October 8 and 9, 2024, the Nobel Prizes for Physics and Chemistry went to AI-related research for the first time. The 2024 Nobel Prize for Physics went to two people, Princeton University physicist and professor emeritus John J. Hopfield and University of Toronto professor emeritus and former Google researcher Geoffrey Hinton, for developing machine learning technology using artificial neural networks. Meanwhile, Sir Demis Hassabis, the CEO and co-founder of Google DeepMind, and John M. Jumper, a director at Google DeepMind and co-creator of AlphaFold, received a share of the 2024 Nobel Prize for Chemistry for developing an AI algorithm that accurately predicts protein structures from their amino acid sequences, solving a 50-year-old challenge in the field.
9. Largest corporate self-investment into AI: Amazon
Amazon invested heavily in its data centers and AI. During Amazon’s earnings call on October 31, 2024, CEO Andy Jassy stated that the company’s 2024 capital expenditures (CAPEX) would reach $75 billion, with the largest share going to AWS and AI. By this point, Amazon had spent $22.6 billion on data center expansion, including for property and equipment, up 81% year-over-year.
AI is driving significant hyperscaler CAPEX increases across the board. It is not just Amazon investing in its data centers: combined CAPEX for Amazon and the other major US hyperscalers, including Microsoft, Alphabet, and Meta, is estimated to have surpassed $200 billion in 2024, driven by AI investment. These large technology companies have also announced plans to continue increasing CAPEX, with major US-based investment bank Morgan Stanley predicting hyperscaler CAPEX will exceed $300 billion in 2025. While much of this spending goes to high-end GPUs and the construction of sprawling data centers to house them, there are supporting costs as well, such as the energy required to run the servers.
Questions remain about the financial sustainability of hyperscaler AI spending. These and other tech companies with AI footprints are working to convince investors that the spending is a large down payment on a game-changing technology—sort of an “if you build it, they will come” approach, where the infrastructure must be in place before the money-making product can operate (e.g., a train needs railroads). However, there are questions about whether the revenue will match the costs. For example, while the upfront cost of building the data centers (facilities, servers, and all), along with recurring energy costs, can be factored into return-on-investment calculations, data centers will likely not remain static: GPU producers will continue developing newer, more powerful chips, and hyperscalers will likely want (or need) to purchase them to stay competitive as more capable and powerful AI comes online. Additionally, with more computing power comes more energy demand.
10. Largest AI-related funding rounds: Databricks, OpenAI, and xAI
Databricks funding soars on the wings of AI. On December 17, 2024, US-based AI cloud data platform provider Databricks announced it had raised $10 billion in Series J funding, crowning it 2024’s largest venture capital round. The investment brought Databricks’ valuation to $62 billion.
OpenAI and xAI also rake in large investments. In October 2024, OpenAI announced it had raised $6.6 billion in new funding, bringing its post-money valuation to $157 billion. This was the largest venture capital round of 2024 until Databricks took the title, and it surpassed Elon Musk’s xAI, which had held the title since May 2024 with a $6 billion Series B round. In November 2024, xAI announced it had raised another $6 billion in Series C funding, bringing its valuation to $50 billion.
Looking ahead at AI in 2025
For continued coverage and updates (such as this one), subscribe to IoT Analytics’ newsletter. In 2025, the team will keep its focus on important IoT topics but plans to publish AI-related reports, including (but not limited to):
- Generative AI Market Report 2025–2030 (This is an update to a previously published report: Generative AI Market Report 2023–2030)
- Industrial AI Market Report 2025–2030 (This is an update to a previously published report: Industrial AI and AIoT Market Report 2021–2026)
- Impact of Regulations Insights Report (including the EU Data and EU AI Acts)
- Edge AI Report
For complete enterprise IoT and AI coverage with access to all of IoT Analytics’ paid content and reports, as well as dedicated analyst time, your company may subscribe to the Corporate Research Subscription.
Disclosure
Companies mentioned in this article—along with their products—are used as examples to showcase AI market developments in 2024. No company paid for or received preferential treatment in this article, and it is at the discretion of the analyst to select the examples used. IoT Analytics makes an effort to vary the companies and products mentioned to help draw attention to the numerous IoT and related technology market players.
It is worth noting that IoT Analytics may have commercial relationships with some companies mentioned in its articles, as some companies license IoT Analytics market research. However, for confidentiality, IoT Analytics cannot disclose individual relationships. Please contact compliance@iot-analytics.com for any questions or concerns on this front.