
The Long Dark: A History of AI Winters

A 3,000-word analysis of the boom-and-bust cycles of Artificial Intelligence. Why did AI 'fail' in 1974 and 1987, and is the 2025 hype different?

AI Historian

The Seasonality of Progress

In 2025, we live in a "Permasummer" of AI. Billions of dollars are being poured into startups, every CEO has an "AI strategy," and the technology feels like it's improving by the hour. But for those who have been in the field for decades, there is a lingering fear: The Winter is Coming.

An "AI Winter" is a period of reduced funding and interest in artificial intelligence research. History shows that AI does not move in a straight line; it moves in violent cycles of over-promise and under-delivery. To understand where we are going, we must revisit the two great freezes that almost killed the field.


1. The First Winter (1974–1980): The Lighthill Report

The first era of AI (1950s–1960s) was defined by "Symbolic AI"—trying to program human logic into computers using if-then statements.

The Hype of the 60s

Pioneers like Herbert Simon predicted in 1965 that "machines will be capable, within twenty years, of doing any work a man can do." Government agencies like DARPA poured millions into machine-translation projects, convinced that automatic Russian-to-English translation was only a few years away.

The Crash

In 1973, Professor James Lighthill delivered a report commissioned by the UK's Science Research Council to evaluate the state of AI. Titled "Artificial Intelligence: A General Survey," it was a devastating critique. Lighthill argued that AI had failed to master even basic "common sense" and that the "combinatorial explosion" made complex problem-solving intractable on the computers of that era.

  • Result: Funding was slashed overnight in the UK and the US. DARPA moved its money elsewhere, and the term "Artificial Intelligence" became a dirty word in academia.
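Lighthill's "combinatorial explosion" objection is easy to see with a back-of-the-envelope calculation: exhaustive search over a tree with branching factor b and depth d visits roughly b^d states. Here is a minimal sketch in Python; the numbers are illustrative, with the branching factor being the commonly quoted rough figure for chess:

```python
# Rough illustration of the combinatorial explosion Lighthill described:
# brute-force search grows exponentially with lookahead depth.
branching_factor = 35  # roughly the number of legal moves in a typical chess position

for depth in (2, 4, 6, 8, 10):
    positions = branching_factor ** depth
    print(f"depth {depth:2d}: ~{positions:.2e} positions to evaluate")

# Depth 10 is already ~2.8e15 positions -- hopeless for a 1970s machine
# executing on the order of a million instructions per second.
```

No hardware of that era could brute-force its way through numbers like these, which is why Lighthill concluded that impressive toy demos would not scale to real-world problems.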

2. The Second Winter (1987–1993): The Expert System Collapse

In the early 1980s, AI returned in the form of Expert Systems. These were specialized programs designed to mimic the decision-making of a human expert (like a doctor or a geologist).
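To give a concrete flavor of how these programs worked, here is a minimal sketch of an expert system in Python. The rules below are invented for illustration (real systems of the era held hundreds or thousands of them); the point is that all the "intelligence" lives in hand-written if-then rules:

```python
# Minimal sketch of an expert system: domain knowledge encoded as
# hand-written if-then rules over observed findings.
# (Rules invented for illustration; real systems held far more.)

RULES = [
    # (set of required findings, conclusion)
    ({"fever", "cough"}, "suspect respiratory infection"),
    ({"suspect respiratory infection", "chest pain"}, "recommend chest x-ray"),
]

def diagnose(findings):
    """Forward-chain: keep applying rules until no new conclusion fires."""
    findings = set(findings)
    changed = True
    while changed:
        changed = False
        for required, conclusion in RULES:
            if required <= findings and conclusion not in findings:
                findings.add(conclusion)
                changed = True
    return findings

print(diagnose({"fever", "cough", "chest pain"}))
```

Every change in the expert's knowledge means editing RULES by hand, a maintenance burden that, as described below, helped sink the commercial expert-system industry.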

The "Lisp Machine" Era

Companies like Symbolics and Lisp Machines Inc. built specialized hardware to run AI software, and business was booming. In 1982, the Japanese government launched the "Fifth Generation Computer Systems" project, aiming to build massively parallel machines for logic programming and knowledge processing.

The Crash

The fall was triggered by two factors:

  1. Cost: Expert Systems were incredibly expensive to maintain. If the expert's knowledge changed even slightly, the rule base had to be rewritten by hand.
  2. The PC Revolution: Cheap desktop computers (Apple, IBM) became more powerful and versatile than the expensive, specialized "Lisp Machines." In 1987, the market for specialized AI hardware collapsed.
  • Result: Over 300 AI startups failed. The field entered a seven-year "Deep Freeze" where researchers had to relabel their work as "Informatics" or "Machine Learning" just to get a grant.

3. The 1990s: The Silent Resilience

During the 90s, while the public had moved on to the "Information Superhighway," researchers like Geoffrey Hinton, Yann LeCun, and Yoshua Bengio were quietly working on "Neural Networks"—a fringe idea inspired by the biology of the brain. They were largely ignored by the mainstream, who still believed that "logic" was the only way to achieve intelligence. This era proved that the most important breakthroughs often happen when the hype is at its lowest.


4. Why 2025 is Different: The Three Pillars

Critics today argue that we are on the verge of a Third AI Winter. They point to the "Diminishing Returns" of LLMs and the massive energy costs. However, most historians argue that 2025 is fundamentally different for three reasons:

I. Data Abundance

In the 1970s, researchers had to "teach" the computer every fact. Today, the computer "learns" from the entire internet. We have moved from "Hand-coded Logic" to "Statistical Pattern Matching."
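A hedged sketch of that shift, assuming scikit-learn is available and using an invented toy dataset: instead of a human writing the rule, a model infers it from labeled examples.

```python
# Hand-coded logic vs. statistical pattern matching, in miniature.
# (Toy data invented for illustration; assumes scikit-learn is installed.)
from sklearn.linear_model import LogisticRegression

# 1970s style: the "knowledge" is a threshold a human wrote down.
def looks_like_spam(count_of_word_free: int) -> bool:
    return count_of_word_free >= 2  # hand-chosen rule

# 2020s style: the threshold is learned from labeled examples.
X = [[0], [1], [2], [3], [4]]   # occurrences of the word "free" in an email
y = [0, 0, 1, 1, 1]             # human labels: 1 = spam, 0 = not spam
model = LogisticRegression().fit(X, y)

print(looks_like_spam(3), model.predict([[3]])[0])  # both flag the example
```

The same contrast scales up: today's large language models are, at heart, very large statistical pattern matchers trained on internet-scale text rather than hand-coded fact bases.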

II. Compute Scaling

The 1987 crash happened because hardware was specialized and expensive. Today, AI runs on GPUs (NVIDIA) that are also used for gaming, crypto, and video rendering. The "Infrastructure" of AI is now a core part of the global economy.

III. Practical Utility

In previous winters, AI was a "Proof of Concept." In 2025, AI already helps roughly half of software engineers write code, assists doctors in detecting cancer, and optimizes global logistics. It is hard to have a "Winter" for a technology that is already woven into the global economy.


5. The "Trough of Disillusionment"

While a full "Winter" is unlikely, 2025 looks like the beginning of the Trough of Disillusionment in the Gartner Hype Cycle.

  • The Disillusionment: Investors are beginning to ask, "Where is the profit?" After spending $100 billion on GPUs, the world still doesn't have a $100 billion AI software product.
  • The Plateau: We are moving from a period of "Awe" to a period of "Integration." This might feel like a "Cooling Down," but it is actually the period where the most value is created.

Conclusion

The history of AI winters is a reminder to be humble. Every time humanity thought we were "close" to AGI, the universe reminded us how complex the human mind really is.

However, the winters of the past were caused by a lack of resources. The challenge of 2025 is the opposite: an abundance of resources, but perhaps not enough progress on safety and alignment. The next "freeze" may be caused not by a lack of funding but by a "Regulatory Winter," in which the risks of AI grow so great that governments are forced to shut down the most powerful models.

Understanding the "Long Dark" of the 1970s and 80s helps us appreciate the "Electric Summer" we are in today. But remember: in the history of AI, the only thing more certain than progress is the eventual arrival of the cold.
