Life is not a highlight reel

Like so many of us in the corporate world, I use LinkedIn (sometimes grudgingly) to connect with others. Look, it’s not my favorite site in the world, but in many ways it’s a necessity, so I try to focus on the benefits. ANYWAY… I was scrolling through my feed the other day and realized how little of it felt unfinished. Everything was presented as a completed thought. The lessons were clear. The vulnerability was controlled. Even the hard stuff showed up only after it had been resolved, when it was safe to talk about. Some of the posts felt like they were just missing the “tonight, on a very special episode of Terry’s LinkedIn feed….”

Each post was a single moment in time, presented as if it stood on its own.

And it made me think about how much we’re being trained to see the world in fragments.

LinkedIn isn’t unique in this, but it’s the platform that hits the business world hardest. It takes long, messy careers and compresses them into highlight reels. It turns ongoing work into finished insights. It rewards confidence, clarity, and closure, even when none of those things are honest representations of real life.

On the cutting room floor

I should probably admit that I can overthink this type of thing. I was a history major, and that way of looking at the world tends to stick with you. You get trained early to be suspicious of tidy explanations and single moments presented as truth.

In history, context isn’t optional. A quote only makes sense when you know who said it, when they said it, and what was happening around them. An artifact on its own might be interesting, but without knowing how it was used or what problem it was meant to solve, it’s easy to get it wrong.

That’s probably why LinkedIn, and social media in general, often makes me uneasy.

Most things that matter don’t make sense in isolation. Careers, decisions, leadership, relationships, etc., are shaped over time by pressure, tradeoffs, and accumulated experience. The way we consume information online trains us to do the opposite. We take in a post, a screenshot, a clip, a “stitch” (and yes, I know putting that in quotes makes me look old) – each one detached from what came before and what followed. Over time that starts to feel normal, even sufficient.

It isn’t.

When we start treating fragments as facts, it becomes remarkably easy to draw confident conclusions from very little information. We forget something archaeologists and historians take for granted: meaning lives in the context, not the artifact.

And once the fragment starts to feel like the truth, it changes how we judge people. We become less generous with our assumptions. Quicker to decide we understand someone based on a moment that was never meant to carry that much weight.

What we miss in the edit

When we lose context, we lose patience…and empathy usually goes with it.

It becomes easier to assume intent instead of complexity. Easier to believe that someone else’s decision was obvious, careless, or self-serving because all we’re seeing is the outcome, not the constraints, the history, or the tradeoffs that led there. We react to the end of the story without having seen any of the middle.

What makes this harder is that social media doesn’t just encourage that kind of thinking; it actively trains us for it. The constant stream of short, emotionally charged moments rewards quick reactions over reflection. Our brains get used to novelty, speed, and certainty. We’re nudged toward snap judgments because they feel efficient and satisfying, even when they’re wildly incomplete.

So we get quicker to judge and slower to wonder.

That shows up everywhere – not just in leadership, but in how we relate to colleagues, friends, and strangers. How willing we are to extend grace. How quickly we write people off over a single out-of-context moment. A post becomes a personality. A decision becomes a character flaw.

At the same time, we start editing ourselves.

If every moment can be isolated and judged, we learn to present only the most defensible versions of ourselves. We share conclusions, not process. Results, not uncertainty. We wait until things are resolved before we talk about them, because unresolved things require context…and context doesn’t suit the algorithm.

Over time, this creates a strange feedback loop. Everything feels more staged, so we trust less of what we see. And because we trust less, we harden our judgments even further. The system rewards polish, but it quietly erodes understanding.

Finding the director’s cut

I don’t think the answer is to stop using these platforms or pretend they don’t shape us. They do.

But I do think we can be more intentional about how we show up within them, and how we interpret what we see.

That might mean slowing down before forming an opinion based on a single post. Reminding ourselves that most of someone’s story is off-screen. Choosing curiosity over certainty when we don’t actually have enough information to justify either.

And when we’re the ones posting, it might mean resisting the urge to over-polish. Offering a little more context than the format encourages. Letting things be unresolved. Accepting that real life, real work, and real people are rarely as tidy as a feed would suggest.

Context doesn’t fix everything. But without it, misunderstanding becomes the default.

If people who study ancient civilizations know better than to judge meaning from a single artifact, maybe the rest of us can learn to sit with a little more uncertainty when all we’re seeing is a moment, carefully curated into a post.

Seven years in (and still surprised)

There’s something almost mythical about the number seven.

Hollywood certainly seems to think so. Seven Years in Tibet, The Seven Year Itch, Seven Samurai, The Magnificent Seven, Seven Pounds, Se7en (What’s in the box?!)…okay, that last one got a little dark.

Then there’s our cultural fascination with it. Lucky number seven, the seven wonders of the ancient world, the seven seas, seventh heaven, the seven deadly sins…sheesh, that keeps coming up. Anyway, seven seems to recur as a number of significance.

I bring this up because last week, I marked seven years with IA.

That may not sound remarkable on its own, but for me, it’s quietly monumental. It’s the longest I’ve ever stayed at a single company, and honestly, I can’t quite believe it. Some days it feels like I just started, like I’m still learning the rhythms, still discovering new edges to the work. Other days, it feels like I’ve been here forever – in the best possible way – grounded by history, but never stuck in it.

For most of my career, longevity wasn’t something I was aiming for. I always told people: I build, I don’t maintain. I was motivated by learning, by momentum, by the pull toward harder, more interesting problems. When that sense of stretch faded in past roles, I moved on. Not because I’m Gen X and apparently destined to job hop, but out of a desire to keep growing and learning. Staying felt riskier than moving on.

So when I look back at seven years here, the real question isn’t “why did I stay?” It’s “what kind of work makes staying make sense?”

My work here at IA sits at the intersection of strategy, design, and transformation. In practice, that means we’re rarely solving the same problem twice. We partner with organizations navigating meaningful change – how they operate, how they decide, how they serve people, how they evolve over time. That kind of work doesn’t settle neatly. It resists templates and tidy endings.

What’s kept the work feeling alive for me is that I’m constantly encountering new systems and new challenges. Each engagement resets the context. I can’t rely on muscle memory when helping clients. I need to listen again, learn again, and adapt again. That exposure to “new” work across different industries, cultures, and moments of change has given me the sense of renewal I used to associate with changing jobs, without losing the grounding that comes from staying in one place.

Just as important as the work are the relationships we build with clients along the way. Transformation only works when there’s trust, and trust takes time. Being able to return to organizations, deepen partnerships, and see how ideas evolve from recommendation to execution adds a layer of meaning that’s hard to replicate. It turns the work from a series of engagements into an ongoing conversation, one where learning flows in both directions. I’ve made several connections that have lasted long after the client engagement ended.

Doing this work alongside my colleagues at IA makes all the difference. I’m lucky to be surrounded by people who are thoughtful, curious, and willing to sit with complexity rather than rush past it. People who ask better questions, challenge assumptions, and care as much about how we work as what we deliver. That keeps the work demanding, human, and deeply engaging.

Seven years in, I no longer think of staying as the opposite of growth. I see it as a different expression of it, one grounded in continuous learning, meaningful relationships, and work that keeps evolving in genuinely interesting ways.

I’m grateful to be part of work that keeps changing, with people who make that change meaningful. I didn’t expect to find a long-term home this late in my career. But here feels like exactly the right place to be.

When progress stops belonging to all of us

History shows us something important: most technological revolutions eventually became part of the public fabric of life.

The printing press didn’t just lower the cost of books, it broke down walls. Knowledge that once belonged to the elite began reaching merchants, farmers, and ordinary families. Literacy spread, ideas crossed borders, and entire movements (like the Reformation) were made possible because access was no longer locked away.

Consider the telephone. It started as a novelty for businesses and the wealthy, but within decades, phones were hanging on the walls of regular households. Distance shrank. Families stayed connected across miles. A tool once reserved for the few became an expectation for the many.

The same was true with electricity. At first, it powered factories and illuminated the homes of the rich. But within a generation, power lines stretched across cities and towns. Eventually, even rural households could flip a switch and change the rhythm of their lives. Electricity didn’t stay exclusive; it became essential.

And then the internet. Its early days meant clunky, noisy dial-up connections, but it wasn’t confined to Silicon Valley insiders for long. Schools, libraries, coffee shops, and homes all gained access. The internet didn’t just belong to tech giants; it belonged to anyone with a modem and a little patience for AOL’s screechy login tones.

Each of these revolutions had flaws. They created disruption, inequity, and sometimes exploitation. But over time, they moved outward. They became shared. They became ours.

That’s what makes Artificial Intelligence feel so different.

A revolution that isn’t spreading

Unlike earlier breakthroughs, AI isn’t marching toward universal access. Yes, AI is ubiquitous for end users; even appliances seem to be “enhanced” with AI. But are consumers all we’ll ever be? Training today’s most powerful systems requires staggering computing power, mountains of data, and billions of dollars. That’s not something universities, small businesses, or hobbyists can replicate.

Right now, there are maybe four major players at the forefront: Microsoft (through OpenAI), Google (through DeepMind and Gemini), Anthropic, and Amazon. And if trends continue, that number could shrink to two. AI is concentrating in fewer and fewer hands. And that’s dangerous.

This is not a broad-based revolution. It’s consolidation.

When power and money gather in a few hands, the rest of us don’t just lose out on opportunity. We lose control:

  • A few voices dictate the future. Microsoft’s partnership with OpenAI means one company already wields outsize influence on how AI is integrated into businesses, education, and daily tools. Google’s models are quietly shaping search, advertising, and the flow of online information. Anthropic, funded heavily by Amazon, positions itself as the “safer” alternative, but at the end of the day, it’s still a private company answering to investors.
  • Wealth piles up at the top. OpenAI’s valuation hit tens of billions within a few short years. Google and Microsoft stock prices surged on the promise of AI. Meanwhile, the average worker is being told to “upskill” before their job becomes obsolete. That’s not shared prosperity, it’s extraction.
  • Fragility sets in. If two or three companies control the technology, what happens when one makes a catastrophic mistake? Or decides to cut corners in pursuit of profit? When power is concentrated, failure doesn’t just hurt a company, it destabilizes the system.
  • We lose our sense of ownership. Electricity, books, and phones became parts of daily life that people could buy, use, and understand. With AI, we’re not participants, we’re customers at the mercy of a few gatekeepers.

This isn’t just about markets. It’s about the kind of society we’re building.

Why this should worry all of us

It would be easy to say, “This is a leadership problem,” or “It’s up to regulators.” And yes, leaders and policymakers carry a huge share of responsibility. But the truth is, this concentration of AI power impacts all of us.

As citizens, we risk losing democratic influence over how AI evolves. Do we want a handful of unelected executives deciding how the most powerful tools in human history are used?

As workers, we risk being replaced, monitored, or squeezed for efficiency gains that benefit shareholders, not employees.

As consumers, we risk being locked into ecosystems where one company controls the platforms, the data, and the outcomes, and we have no real alternatives.

As communities, we risk technologies being built without local values, cultural diversity, or public good in mind. AI is in danger of becoming an echo chamber that regurgitates our own content back to us.

AI isn’t just another business tool. It’s shaping the future of communication, education, healthcare, and governance. And if we’re not paying attention, that future will be built for profit, not people.

And let’s be clear: AI carries enormous potential. It could accelerate medical breakthroughs, personalize education at scale, and help us tackle massive challenges like climate change or global poverty. Used responsibly, it could open new doors for creativity, innovation, and human connection in ways we’re only beginning to imagine. The power of the technology itself isn’t the problem. The problem is who controls it, how it’s developed, and whether its benefits will actually be shared.

We aren’t powerless

The genie is out of the bottle. The toothpaste has left the tube. Use whatever idiom makes you happy, but they all boil down to one thing – AI is here to stay. We need to be intentional about how we want AI to augment, not control, our lives. I’m hardly an expert, but here are some thoughts on how we can get started:

  • Policy and regulation. We know legislation lags innovation, but that doesn’t mean we can’t set up some guardrails to save us from ourselves. Transparency and accountability aren’t optional.
  • Support open ecosystems. Open-source AI won’t rival the giants tomorrow, but it creates alternatives. It keeps innovation from being locked behind closed doors. It means educators, nonprofits, and smaller businesses can participate in shaping the field.
  • Treat AI like infrastructure. Just as we fund public roads, schools, and healthcare, we should treat AI as a public good. Imagine national or global initiatives focused not solely on profit, but on solving real-world problems like climate, healthcare, and education.
  • Make conscious choices as leaders. Don’t assume “bigger” means “better.” Push vendors for transparency. Invest in alternatives. Reward diversity of thought and innovation, not just scale.
  • Demand accountability as citizens. Ask questions. Vote for representatives who understand technology and its risks – who want the benefits of AI but are wary of false promises. We need more transparency in legislative wheeling and dealing.

What the future holds

Responsibility isn’t just about steering the ship today. It’s about making sure the people who come after us inherit something better. It’s about our legacy. This moment isn’t just a “leadership” test – it’s a societal one.

AI will shape the future whether we like it or not. The only question is whether that future is written with us or for us. If power stays concentrated in the hands of a few, progress will no longer belong to everyone, it will belong to “them.”

We can’t afford to be passive. Leaders need to act. Citizens need to speak up. Workers need to demand accountability. Because if we don’t, we’ll look back one day and realize we handed over the next great revolution without ever insisting it be ours too.