History shows us something important: most technological revolutions eventually became part of the public fabric of life.
The printing press didn’t just lower the cost of books, it broke down walls. Knowledge that once belonged to the elite began reaching merchants, farmers, and ordinary families. Literacy spread, ideas crossed borders, and entire movements (like the Reformation) were made possible because access was no longer locked away.
Consider the telephone. It started as a novelty for businesses and the wealthy, but within decades, phones were hanging on the walls of regular households. Distance shrank. Families stayed connected across miles. A tool once reserved for the few became an expectation for the many.
The same was true with electricity. At first, it powered factories and illuminated the homes of the rich. But within a generation, power lines stretched across cities and towns. Eventually, even rural households could flip a switch and change the rhythm of their lives. Electricity didn’t stay exclusive; it became essential.
And then the internet. The early days were clunky, noisy dial-up connections, but it wasn’t confined to Silicon Valley insiders for long. Schools, libraries, coffee shops, and homes all gained access. The internet didn’t just belong to tech giants; it belonged to anyone with a modem and a little patience for AOL’s screechy login tones.
Each of these revolutions had flaws. They created disruption, inequity, and sometimes exploitation. But over time, they moved outward. They became shared. They became ours.
That’s what makes Artificial Intelligence feel so different.
A revolution that isn’t spreading
Unlike earlier breakthroughs, AI isn’t marching toward universal access. Yes, AI is ubiquitous for end users. Even appliances seem to be “enhanced” with AI. But are consumers all we ever will be? Training today’s most powerful systems requires staggering computing power, mountains of data, and billions of dollars. That’s not something universities, small businesses, or hobbyists can replicate.
Right now, there are maybe four major players at the forefront: Microsoft (through OpenAI), Google (through DeepMind and Gemini), Anthropic, and Amazon. And if trends continue, that number could shrink to two. AI is concentrating in fewer and fewer hands. And that’s dangerous.
This is not a broad-based revolution. It’s consolidation.
When power and money gather in a few hands, the rest of us don’t just lose out on opportunity. We lose control:
- A few voices dictate the future. Microsoft’s partnership with OpenAI means one company already wields outsize influence on how AI is integrated into businesses, education, and daily tools. Google’s models are quietly shaping search, advertising, and the flow of online information. Anthropic, funded heavily by Amazon, positions itself as the “safer” alternative, but at the end of the day, it’s still a private company answering to investors.
- Wealth piles up at the top. OpenAI’s valuation hit tens of billions within a few short years. Google and Microsoft stock prices surged on the promise of AI. Meanwhile, the average worker is being told to “upskill” before their job becomes obsolete. That’s not shared prosperity, it’s extraction.
- Fragility sets in. If two or three companies control the technology, what happens when one makes a catastrophic mistake? Or decides to cut corners in pursuit of profit? When power is concentrated, failure doesn’t just hurt a company, it destabilizes the system.
- We lose our sense of ownership. Electricity, books, and phones became part of daily life that people could buy, use, and understand. With AI, we’re not participants, we’re customers at the mercy of a few gatekeepers.
This isn’t just about markets. It’s about the kind of society we’re building.
Why this should worry all of us
It would be easy to say, “This is a leadership problem,” or “It’s up to regulators.” And yes, leaders and policymakers carry a huge share of responsibility. But the truth is, this concentration of AI power impacts all of us.
As citizens, we risk losing democratic influence over how AI evolves. Do we want a handful of unelected executives deciding how the most powerful tools in human history are used?
As workers, we risk being replaced, monitored, or squeezed for efficiency gains that benefit shareholders, not employees.
As consumers, we risk being locked into ecosystems where one company controls the platforms, the data, and the outcomes, and we have no real alternatives.
As communities, we risk technologies being built without local values, cultural diversity, or public good in mind. AI is in danger of becoming an echo chamber that regurgitates our own content back to us.
AI isn’t just another business tool. It’s shaping the future of communication, education, healthcare, and governance. And if we’re not paying attention, that future will be built for profit, not people.
And let’s be clear: AI carries enormous potential. It could accelerate medical breakthroughs, personalize education at scale, and help us tackle massive challenges like climate change or global poverty. Used responsibly, it could open new doors for creativity, innovation, and human connection in ways we’re only beginning to imagine. The power of the technology itself isn’t the problem. The problem is who controls it, how it’s developed, and whether its benefits will actually be shared.
We aren’t powerless
The genie is out of the bottle. The toothpaste has left the tube. Use whatever idiom makes you happy; they all boil down to one thing: AI is here to stay. We need to be intentional about making AI augment, not control, our lives. I’m not the fanciest expert, but here are some thoughts on how we can get started:
- Policy and regulation. We know legislation lags innovation, but that doesn’t mean we can’t set up some guardrails to save us from ourselves. Transparency and accountability aren’t optional.
- Support open ecosystems. Open-source AI won’t rival the giants tomorrow, but it creates alternatives. It keeps innovation from being locked behind closed doors. It means educators, nonprofits, and smaller businesses can participate in shaping the field.
- Treat AI like infrastructure. Just as we fund public roads, schools, and healthcare, we should treat AI as a public good. Imagine national or global initiatives focused not solely on profit, but on solving real-world problems like climate, healthcare, and education.
- Make conscious choices as leaders. Don’t assume “bigger” means “better.” Push vendors for transparency. Invest in alternatives. Reward diversity of thought and innovation, not just scale.
- Demand accountability as citizens. Ask questions. Vote for representatives who understand technology and its risks – who want the benefits of AI but are wary of false promises. We need more transparency in legislative wheeling and dealing.
What the future holds
Responsibility isn’t just about steering the ship today. It’s about making sure the people who come after us inherit something better. It’s about our legacy. This moment isn’t just a “leadership” test – it’s a societal one.
AI will shape the future whether we like it or not. The only question is whether that future is written with us or for us. If power stays concentrated in the hands of a few, progress will no longer belong to everyone, it will belong to “them.”
We can’t afford to be passive. Leaders need to act. Citizens need to speak up. Workers need to demand accountability. Because if we don’t, we’ll look back one day and realize we handed over the next great revolution without ever insisting it be ours too.