In a blog post titled "The Gentle Singularity," OpenAI CEO Sam Altman painted a vision of a near future in which AI quietly and benevolently transforms human life. There will be no sharp break, he suggests, only a steady, almost imperceptible climb toward abundance. Intelligence becomes as accessible as electricity. Robots perform useful real-world tasks by 2027. Scientific discovery accelerates. And humanity, guided by careful governance and good intentions, flourishes.
It is a compelling vision: calm, technocratic and suffused with optimism. But it also raises deeper questions. What kind of world must we pass through to get there? Who benefits, and when? And what is left unsaid in this smooth arc of advancement?
Science fiction author William Gibson offers a darker scenario. In his novel The Peripheral, the glittering technologies of the future are preceded by something called "the jackpot": a slow cascade of climate disasters, pandemics, economic collapse and mass death. Technology advances, but only after society has broken. The question he raises is not whether progress occurs, but whether civilization flourishes in the process.
Some argue that AI may help us prevent the kinds of disasters envisioned in The Peripheral. Yet whether AI will help us avoid catastrophe, or merely accompany us through it, remains uncertain. Belief in AI's future power is not a guarantee of performance, and advancing technical capability is not destiny.
Between Altman's gentle singularity and Gibson's jackpot lies a messier middle ground: a future in which AI yields real benefits, but also real dislocation. While some communities thrive, the capacity of others to adapt collectively, not just individually or institutionally, may prove the critical variable.
The ambiguous middle
Other visions help sketch the contours of this middle terrain. In the near-term thriller Burn-In, society is flooded with automation before its institutions are ready. Jobs disappear faster than people can reskill, triggering anxiety and unrest. In the novel, a successful attorney loses his position to an AI agent and unhappily ends up doing on-call online work for the wealthy.
Researchers at the AI lab Anthropic recently echoed this theme: "We should expect to see [white collar jobs] automated within the next five years." The causes are complex, but there are signs this is beginning, and that the job market is entering a new structural phase, one that is less stable, less predictable, and perhaps less central to how society distributes meaning and security.
The film Elysium offers a blunt parable: the wealthy escape to an orbital sanctuary with advanced technologies, while a degraded Earth below struggles with unequal rights and access. A few years ago, a partner at a Silicon Valley venture capital firm told me he feared we were heading toward this kind of scenario unless the benefits created by AI were equitably distributed. These speculative worlds remind us that even beneficial technologies can be socially destabilizing, especially when their gains are unevenly dispersed.
Ultimately, we may arrive at something like Altman's vision of abundance. But the route there is unlikely to be smooth. For all its eloquence and calm assurance, his essay is also a kind of pitch, as much persuasion as prediction. The "gentle singularity" narrative is comforting, even enchanting, precisely because it bypasses friction. It offers the benefits of unprecedented transformation without fully grappling with the upheaval such transformations typically bring. As the timeless cliché reminds us: if it sounds too good to be true, it probably is.
This is not to say his intent is disingenuous; it may well be heartfelt. My argument is simply a recognition that the world is a complex system, open to unbounded inputs that can produce unpredictable consequences. From synergistic good fortune to black swan catastrophes, it is never one actor or one technology alone that determines the course of future events.
AI's impact on society is already underway. This is more than a shift in skill sets or sectors; it is a transformation in how we organize value, trust and belonging. This is the realm of mass migration: not just a movement of labor, but a movement of purpose.
As AI reshapes the topography of cognition, the fabric of our social world is being quietly loosened and rewoven, for better or worse. The question is not only how fast we move as a society, but how thoughtfully we move.
The cognitive commons: our shared terrain of understanding
Historically, the commons referred to shared physical resources, including pastures, fisheries and forests, held in trust for the collective good. Modern societies, however, also depend on a cognitive commons: a shared terrain of knowledge, narratives, norms and institutions within which diverse individuals can think, debate and decide with a minimum of conflict.
This intangible infrastructure, made up of public education, journalism, libraries, civic rituals and even widely trusted facts, is what makes pluralism possible. It is how strangers deliberate, how communities cohere, how democracy functions. As AI systems begin to mediate how knowledge is accessed and beliefs are shaped, this shared terrain risks becoming fractured. The danger is not merely misinformation, but the slow erosion of the very ground on which shared meaning depends.
If the cognitive transition is a journey, it leads not only toward new skills and roles, but toward new forms of collective sensemaking. But what happens when the terrain we share begins to split beneath us?
Cognitive fragmentation: AI and the erosion of the shared world
For centuries, societies have relied on a loosely held common reality: a shared pool of facts, stories and institutions that shape how people understand the world and one another. It is this shared world, not just infrastructure and economies, that enables pluralism, democracy and social trust. But as AI systems increasingly mediate how people access knowledge, form beliefs and navigate daily life, that common foundation is fragmenting.
Already, large-scale personalization is transforming the information landscape. AI-curated news feeds, tailored search results and recommendation algorithms are subtly fracturing the public sphere. Two people asking the same chatbot the same question may receive different answers, due in part to the stochastic nature of generative AI, and in part to prior interactions and inferred preferences. Personalization has long been a hallmark of the digital era, but AI turbocharges its reach and subtlety. The result is not just filter bubbles; it is epistemic drift, a reshaping of knowledge and, potentially, of truth.
Historian Yuval Noah Harari has voiced urgent concern about this shift. In his view, the gravest threat from AI lies not in physical harm or job displacement, but in emotional capture. He warns that AI systems are becoming increasingly adept at simulating empathy, mimicking concern and tailoring narratives to individual psychology. The danger, for Harari, is not that AI lies, but that it connects so persuasively while doing so. This does not bode well for a gentle singularity.
In an AI-mediated world, reality itself risks becoming more individualized, more modular and less collectively negotiated. That may be acceptable, or even useful, for consumer products and entertainment. Extended to civic life, however, it poses deeper risks. Can we still sustain democratic discourse if every citizen inhabits a subtly different cognitive map? Can we still govern wisely when institutional knowledge is increasingly outsourced to machines whose training data, system prompts and reasoning processes remain opaque?
There are other challenges, too. AI-generated content, including text, audio and video, will soon be indistinguishable from human output. As generative models grow more adept at imitation, the burden of verification shifts from systems to individuals. This inversion could erode trust not only in what we see and hear, but in the institutions that once vetted shared truths. The cognitive commons then becomes not a place for deliberation, but a hall of mirrors.
These are not speculative concerns. AI-generated disinformation is already complicating elections, undermining journalism and sowing confusion in conflict zones. And as more people rely on AI for cognitive tasks, from summarizing the news to resolving moral dilemmas, our capacity to think together may diminish even as our tools for thinking individually grow more powerful.
The trend toward a collapse of shared reality is now well advanced. Avoiding it will require conscious counter-design: systems that privilege pluralism over personalization, transparency over convenience, and shared meaning over tailored reality. In a world of competition- and profit-driven algorithms, such choices seem unlikely, at least at scale. The question is not just how fast we move as a society, or even whether we can hold together, but how wisely we navigate this shared journey.
Navigating the archipelago: toward wisdom in the age of AI
If the age of AI brings us not a unified cognitive commons but a fractured archipelago of disparate individuals and communities, the work ahead is not to reconstruct the old terrain, but to learn how to live wisely among the islands.
As the speed and scope of change outpace most people's capacity to adapt, many will feel unmoored. Jobs will be lost, and so will long-held narratives of value, expertise and belonging. The cognitive transition will lead people toward new communities of meaning, even if these share less common ground than in earlier eras. These are cognitive islands: communities gathered around shared beliefs, aesthetic styles, ideologies, recreational interests or emotional needs. Some are benign gatherings of creativity, support or purpose. Others are more insular and dangerous, driven by fear, grievance or conspiratorial thinking.
AI will accelerate this trend. Even as it divides people through algorithmic precision, it also helps them find one another across the globe, curating ever finer-grained identities. In doing so, it may make the friction necessary for pluralism harder to sustain. Local ties may weaken. Common belief systems and shared perceptions of reality may erode. Democracy, which relies on both shared reality and deliberative dialogue, may struggle to hold.
How do we navigate this new terrain with wisdom, dignity and connection? If we cannot prevent fragmentation, how do we live humanely within it? Perhaps the answer begins not with solutions, but with learning to hold the question itself differently.
Living with questions
We may not be able to reassemble the societal cognitive commons as it once was. The center may not hold, but that does not mean we must drift without direction. Across the archipelago, the task is to learn to live wisely in this new terrain.
We may need rituals that anchor us when our tools disorient us, and communities that form around shared responsibility rather than ideological purity. We may need new forms of education, not to outpace or merge with machines, but to deepen our capacities for discernment, context and ethical thought.
If AI pulls apart the ground beneath us, it also presents an opportunity to ask anew what we are here for. Not as consumers of progress, but as stewards of meaning.
The path ahead is unlikely to be either smooth or calm. As we move through the ambiguous middle, perhaps the mark of wisdom is not the ability to master what is coming, but the ability to walk through it with clarity, courage and care. We cannot stop the advance of technology or deny deepening societal fractures, but we can choose to tend the spaces in between.
Gary Grossman is Edelman’s EVP of Technology Practice.