2025 will be the year that big technology companies move from selling us increasingly powerful tools to selling us increasingly powerful abilities. The difference between tools and abilities is subtle but profound. We use tools as external artifacts that help us overcome our organic limitations. From cars and planes to phones and computers, tools greatly expand what we can accomplish as individuals, in large teams, and as a vast civilization.
Abilities are different. We experience abilities in the first person; they feel self-embodied, internal, and instantly accessible to our minds. For example, language and mathematics are human-created technologies that we load into our brains and carry with us throughout our lives, expanding our abilities to think, create, and collaborate. They are superpowers that feel so inherent to our existence that we rarely think of them as technologies. Fortunately, they don’t require a service plan.
But the next wave of superpowers won’t just happen to us. Like the ability to think in words and numbers, we will experience these new powers as self-embodied abilities that we carry with us throughout our lives. I call this emerging field augmented mentality, and it will arise from the convergence of AI, conversational computing, and augmented reality. In 2025, an arms race will break out among the world’s biggest companies to sell us superhuman abilities.
These new superpowers will be delivered by context-aware AI agents loaded into body-worn devices (such as AI glasses) that travel with us throughout our lives, seeing what we see, hearing what we hear, and enhancing our ability to perceive and interpret our world. In fact, I predict that by 2030 a majority of us will live our lives with the help of context-aware AI agents that bring digital superpowers into our normal daily experiences.
How will our superhuman future unfold?
First and foremost, we will whisper to these intelligent agents, and they will whisper back, acting like an omniscient alter ego that gives us context-aware recommendations, knowledge, guidance, advice, spatial reminders, directional cues, haptic nudges, and other verbal and perceptual content that coaches us through our daily lives and educates us about our world.
Consider this simple scenario: You’re walking downtown and spot a store across the street. You wonder what time it opens, so you pull out your phone and type (or say) the store’s name. You quickly find the opening hours, and maybe other information about the store, on its website. That is the basic tool-based computing model that is prevalent today.
Now let’s look at how big technology companies will move us to an ability computing model.
Stage 1: You’re wearing AI-powered glasses that can see what you see, hear what you hear, and process your surroundings through a multimodal large language model (LLM). Now, when you spot that store across the street, you simply whisper, “When does it open?” and almost instantly a voice in your ears replies, “10:30 a.m.”
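To make the mechanics concrete, here is a minimal sketch of that Stage 1 loop in Python. Every interface in it (capture_frame, transcribe_whisper, query_multimodal_llm, speak) is a hypothetical stand-in for whatever APIs a glasses vendor actually exposes, not a real SDK:

```python
# Minimal sketch of the Stage 1 interaction loop described above.
# Every function here is a hypothetical stand-in, not a real glasses SDK.

def capture_frame() -> bytes:
    """Stand-in for grabbing the current camera frame from the glasses."""
    return b"<jpeg bytes of the street scene>"

def transcribe_whisper() -> str:
    """Stand-in for on-device speech-to-text of the wearer's whisper."""
    return "When does that store across the street open?"

def query_multimodal_llm(image: bytes, question: str) -> str:
    """Stand-in for a vision-language model call: the model receives the
    wearer's first-person view plus the question and returns a short,
    spoken-style answer."""
    return "It opens at 10:30 a.m."

def speak(text: str) -> None:
    """Stand-in for text-to-speech routed to the glasses' speakers."""
    print(f"(in your ear) {text}")

def assistant_turn() -> None:
    frame = capture_frame()            # what the wearer currently sees
    question = transcribe_whisper()    # what the wearer just whispered
    answer = query_multimodal_llm(frame, question)  # answer grounded in the scene
    speak(answer)                      # whisper the answer back

if __name__ == "__main__":
    assistant_turn()
```

The key design point is the third step: the question is answered in the context of the wearer’s first-person view, which is what separates this from a phone search.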
I know, that seems only slightly different from looking up the store’s name on your phone, but it will feel profoundly different. That’s because the context-aware AI agent shares your reality. It doesn’t just track your location like GPS; it sees, hears, and attends to whatever you’re paying attention to. This will make it feel far less like a tool and far more like an internal ability linked to your first-person reality.
And when the AI-powered alter ego in our ears asks us a question, we will often respond simply by nodding our heads in affirmation (detected by the glasses’ motion sensors) or shaking them in rejection. It will feel so natural and seamless that we may not even consciously realize we have responded.
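For the curious, here is a toy sketch of how that nod/shake detection might work on the glasses’ motion sensors: a nod shows up as angular-velocity energy around the pitch axis, a shake as energy around the yaw axis. The sample format and threshold are assumptions for illustration, not a real device API:

```python
# Toy sketch of nod/shake detection from a head-worn IMU, as described above.
# Assumes a hypothetical stream of gyroscope samples in rad/s.

from typing import Iterable, Literal

def classify_head_gesture(
    gyro_samples: Iterable[tuple[float, float, float]],
    threshold: float = 1.0,
) -> Literal["nod", "shake", "none"]:
    """Each sample is (pitch_rate, yaw_rate, roll_rate) in rad/s.
    A nod concentrates energy around the pitch axis; a shake around yaw."""
    pitch_energy = yaw_energy = 0.0
    for pitch, yaw, _roll in gyro_samples:
        pitch_energy += pitch * pitch
        yaw_energy += yaw * yaw
    if max(pitch_energy, yaw_energy) < threshold:
        return "none"  # too little motion to count as a gesture
    return "nod" if pitch_energy > yaw_energy else "shake"

# Example: a burst of up-down head motion reads as a "yes".
samples = [(0.8, 0.05, 0.0), (-0.9, 0.02, 0.0), (0.7, 0.04, 0.0)]
print(classify_head_gesture(samples))  # -> "nod"
```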
Stage 2: By 2030, we won’t need to whisper to the AI agents that travel through life with us. Instead, we will be able to simply mouth the words, and the AI will know what we are saying by reading our lips and detecting activation signals from our facial muscles. I am confident that mouthing will be deployed because it feels more private, is more resilient in noisy spaces and, most importantly, feels more personal, internal, and self-embodied.
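As a rough illustration of the silent-speech idea, here is a toy decoder that maps facial-muscle (EMG) energy patterns to mouthed words by nearest-centroid matching against calibrated templates. The channel count, features, and vocabulary are invented for this sketch; production silent-speech systems use far richer models:

```python
# Toy sketch of decoding mouthed words from facial-muscle (EMG) signals,
# as in Stage 2 above. All templates and channel layouts are hypothetical.

import numpy as np

# Hypothetical per-word "templates": mean rectified EMG energy across
# 4 electrode channels, learned during a calibration session.
TEMPLATES = {
    "yes":  np.array([0.9, 0.2, 0.1, 0.4]),
    "no":   np.array([0.1, 0.8, 0.5, 0.2]),
    "open": np.array([0.5, 0.5, 0.9, 0.7]),
}

def decode_word(emg_window: np.ndarray) -> str:
    """emg_window: shape (samples, 4). Reduce to per-channel mean energy,
    then pick the nearest calibrated template (nearest-centroid)."""
    features = np.abs(emg_window).mean(axis=0)
    return min(TEMPLATES, key=lambda w: np.linalg.norm(TEMPLATES[w] - features))

# Example: a noisy window whose energy profile resembles the "open" template.
window = np.tile([0.5, 0.5, 0.9, 0.7], (100, 1)) + np.random.normal(0, 0.05, (100, 4))
print(decode_word(window))  # most likely -> "open"
```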
Stage 3: By 2035, we may not even need to mouth the words. That’s because the AI will learn to interpret our muscle signals with such sensitivity and precision that we will only need to think about mouthing words to convey our intent. We will be able to focus our attention on any object or activity in our world, ask for useful information, and hear the answer from our AI glasses like an omniscient voice in our heads.
Of course, the AI will do far more than answer questions about your surroundings. That’s because the onboard AI that shares your first-person reality will learn to anticipate the information you want before you even ask for it. For example, when a colleague approaches you in the hallway and you can’t remember his name, the AI will sense your unease and whisper, “Greg from engineering.”
Or when you pick up a can of soup in a store and wonder about the carbs, or whether it’s cheaper at Walmart, the answer will simply ring in your ears or appear visually. The AI might even give you the superhuman ability to read the facial emotions of the people around you, predict their moods, goals, and intentions, and coach you during real-time conversations to make you more compelling, appealing, or persuasive (see this fun video example).
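Conceptually, that proactive behavior is a sensing-plus-policy loop: interject only when the agent both recognizes something and infers that the wearer needs help. Here is a deliberately simplified sketch of the name-whispering case; the face-match and hesitation signals are hypothetical placeholders for real perception models:

```python
# Toy sketch of the proactive "whisper the name" behavior described above.
# The face-recognition and hesitation signals are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class Sighting:
    face_id: str | None     # identity match from the camera, if any
    wearer_hesitated: bool  # e.g., inferred from gaze and pause patterns

# Hypothetical on-device contact store built during prior encounters.
CONTACTS = {"face_0042": "Greg from engineering"}

def maybe_whisper(sighting: Sighting) -> str | None:
    """Interject only when the agent recognizes the person AND infers
    the wearer is struggling to recall the name; otherwise stay silent."""
    if sighting.face_id and sighting.wearer_hesitated:
        return CONTACTS.get(sighting.face_id)
    return None

print(maybe_whisper(Sighting("face_0042", True)))  # -> Greg from engineering
```

The "stay silent by default" policy is the important part: an assistant that shares your senses must earn each interruption.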
While some may be skeptical of the scale and speed of adoption I predict above, I don’t make these claims lightly. I have spent much of my career developing technologies that augment and expand human abilities, and there is no doubt in my mind that the mobile computing market is headed aggressively in this direction.
Over the past 12 months, two of the world’s most influential and innovative companies, Meta and Google, have revealed their intention to give us these self-embodied superpowers. Meta made the first big move by adding context-aware AI to its Ray-Ban glasses and by unveiling its Orion mixed-reality prototype, which adds impressive visual capabilities. Meta is now extremely well positioned to leverage its big investments in AI and extended reality (XR) to become a major player in the mobile computing market, and it will likely do so by selling us superpowers.
Google is not far behind, recently announcing Android XR, a new AI-powered operating system designed to augment our world with seamless, context-aware content. Google also announced a partnership with Samsung to bring new glasses and headsets to market. With more than 70% market share in mobile operating systems and an increasingly strong AI presence through Gemini, I believe Google is well positioned to become a leading provider of technology-enabled human superpowers in the years ahead.
Of course, risks also need to be considered.
To quote the famous 1962 Spider-Man comic: “With great power comes great responsibility.” That wisdom applies literally here. The difference is that the great responsibility falls not on the consumers who buy these techno-powers, but on the companies that provide them and the regulators that oversee them.
After all, when wearing AI-powered augmented reality (AR) glasses, each of us could find ourselves living in a new reality in which technologies controlled by third parties can selectively alter what we see and hear while an AI-powered voice whispers advice, information, and guidance into our ears. While the intentions are positive, even magical, the potential for abuse is just as profound.
To avoid dystopian outcomes, my primary recommendation to both consumers and manufacturers is to adopt a subscription business model. If the arms race to sell superpowers is driven by which company can offer the most amazing new abilities for a reasonable monthly fee, we will all benefit. If instead business models become a race to monetize superpowers by delivering the most effective targeted influence into our eyes and ears throughout our daily lives, consumers could easily be manipulated with a precision and pervasiveness we have never faced before.
Ultimately, these superpowers will not feel optional. After all, lacking them could put us at a cognitive disadvantage. It is now up to industry and regulators to ensure that these new abilities are rolled out in ways that are not intrusive, manipulative, or dangerous. I am confident this can be a magical new direction for computing, but it will require careful planning and oversight.
Louis Rosenberg founded Immersion Corporation, Outland Research, and Unanimous AI, and is the author of “Our Next Reality.”