Where Tech Leaders and Students Really Think AI Is Going
Recorded: Jan. 27, 2026, noon
Original

By Brian Barrett | Business | Jan 27, 2026, 5:30 AM

We asked tech CEOs, journalists, entertainers, students, and more about the promise and peril of artificial intelligence. Here's what they said.

Photo-Illustration: WIRED Staff; Getty Images

The future never feels fully certain. But in this time of rapid, intense transformation—political, technological, cultural, scientific—it's as difficult as it ever has been to get a sense of what's around the next corner.

Here at WIRED, we're obsessed with what comes next. Our pursuit of the future most often takes the form of vigorously reported stories, in-depth videos, and interviews with the people helping define it. That's also why we recently embraced a new tagline: For Future Reference. We're focused on stories that don't just explain what's ahead, but help shape it.

In that spirit, we recently interviewed a range of luminaries from the various worlds WIRED touches—and who participated in our recent Big Interview event in San Francisco—as well as students who have spent their whole lives inundated with technologies that seem increasingly likely to disrupt their lives and livelihoods. The main focus was unsurprisingly on artificial intelligence, but it extended to other areas of culture, tech, and politics. Think of it as a benchmark of how people think about the future today—and maybe even a rough map of where we're going.

AI Everywhere, All the Time

What's clear is that AI is already every bit as integrated into people's lives as search has been since the AltaVista days. Like search, the use cases tend toward the practical or mundane. "I use a lot of LLMs to answer any questions I have throughout the day," says Angel Tramontin, a student at UC Berkeley's Haas School of Business.

Several of our respondents noted that they'd used AI within the last few hours, even in the last few minutes. Lately, Anthropic cofounder and president Daniela Amodei has been using her company's chatbot to assist with childcare. "Claude actually helped me and my husband potty-train our older son," she says. "And I've recently used Claude to do the equivalent of panic-Googling symptoms for my daughter."

She's not the only one. Wicked director Jon M. Chu turned to LLMs "just to get some advice on my children's health, which is maybe not the best," he says. "But it's a good starting reference point."

AI companies themselves see health as a potential growth area. OpenAI announced ChatGPT Health earlier this month, disclosing that "hundreds of millions of people" use the chatbot to answer health and wellness questions each week. (ChatGPT Health introduces additional privacy measures, given the sensitivity of the queries.) Anthropic's Claude for Healthcare targets hospitals and other health care systems as customers.

Not everyone we interviewed took such an immersive approach. "I try not to use it at all," says UC Berkeley undergraduate student Sienna Villalobos. "When it comes down to doing your own work, it's very easy to have an opinion. AI shouldn't be able to give you an opinion. I think you should be able to make that for yourself."

That view may be increasingly in the minority. Nearly two-thirds of US teens use chatbots, according to a recent Pew Research study. About 3 in 10 report using them daily. (Given how intertwined Google Gemini is with search these days, many more may use AI without even realizing it or intending to.)

Ready to Launch?

The pace of AI development and deployment is relentless, despite concerns about its potential impacts on mental health, the environment, and society at large. In this wide-open regulatory environment, companies are largely left to self-police. So what questions should AI companies ask themselves ahead of every launch, absent any guardrails from lawmakers?

"'What might go wrong?' is a really good and important question that I wish more companies would ask," says Mike Masnick, founder of the tech and policy news site Techdirt.

That focus on consequences was a common theme across almost all of our respondents—including Anthropic's Amodei. Prior to launching a new AI agent, she says, companies need to ask themselves, "How confident are we that we've done enough safety testing on this model?" Similar to car manufacturers doing crash tests, chatbot makers need to make sure that what they're producing is as reliable as possible. "We're actually putting this out into the world; it's something people are going to rely on every day," she says. "Is this something that I would be comfortable giving to my own child to use?"

Cloudflare CEO Matthew Prince emphasized that AI companies should work to establish trust before launching a new product. A recent YouGov survey found that while 35 percent of US adults say they use AI daily, only 5 percent "trust AI a lot," and 41 percent are distrustful. An Ipsos poll showed that trust in AI companies to protect personal data actually fell globally from 2023 to 2024. "I think a lot of them put financial gain over morality, and that's one of the biggest dangers," says Villalobos.

A series of high-profile lawsuits over alleged harms caused by AI has further strained the public's view of some chatbot providers in particular. Which again gets back to the question of consequences.

"Who does it hurt, and who does it harm?" says Michele Jawando, president of the nonprofit Omidyar Network, which partnered with WIRED on this project. "If you don't know the answer, you don't have enough people in the room."

Risk and Reward

As befits a technology that's rapidly evolving and multifaceted, our interviewees didn't settle on one frame through which to view AI.

Take Cloudflare's Prince, whose company has done more than perhaps any other to hold AI companies accountable for their rampant scraping of websites for training data. Despite that confrontational relationship, he remains optimistic about the technology as a whole. "I'm pretty optimistic about AI," Prince says. "I think it's actually going to make humanity better, not worse."

Several Berkeley students cited job security and data privacy as long-term concerns as AI continues to take hold. "A lot of people are really stressed on campus about whether or not the field they're going into is going to still be a field," says student Abigail Kaufman.

Jeremy Allaire, CEO of digital financial company Circle, agrees: "The change in the nature of labor and how that can impact people and the economy … There's a lot of major questions about that and major risks around that, and no one really seems to have good answers."

Recent research from Stanford University economists has found that employment opportunities for young people are already in decline, and multiple tech giants have cited AI as a rationale for restructuring their workforces.

The open questions about AI extend to health care—despite how willingly some respondents have embraced AI in that context. "There's concerns about patient care," says physician Eric Topol, author of Super Agers. "We have lots of errors that are done by physicians, of course, and in medicine, but we also don't want to have new ones, or make that any worse by AI."

Still, concerns about future impacts haven't stymied present-tense usefulness. "I am working on a presentation to teach people how to use AI in my country, Peru," says Gonzalo Vasquez Negra, who is pursuing his MBA at Berkeley. "The last time I used AI was for writing poetry," says Berkeley student Gilliane Balingit. "I have a hard time with editing my writing, so I used AI to just help me enhance my thoughts and my feelings."

Summarized

The rapid advancement of artificial intelligence is eliciting a complex mix of excitement and apprehension from tech leaders, students, entertainers, and physicians alike. Interviews with this broad spectrum of respondents, including CEOs, journalists, students, and health care professionals, reveal a nuanced picture of AI's promise and perils. The overarching sentiment is one of immediate integration: AI is already woven into daily routines, primarily for practical applications. Angel Tramontin, a UC Berkeley student, routinely uses large language models (LLMs) to answer questions throughout the day; Daniela Amodei, cofounder and president of Anthropic, has used Claude for childcare and for checking her daughter's symptoms; and Jon M. Chu, director of *Wicked*, similarly turns to LLMs for advice on his children's health.

This widespread adoption is not without concerns. Sienna Villalobos, a UC Berkeley undergraduate, avoids AI because she worries it erodes independent thought and critical judgment, particularly in academic work, a sentiment that reflects a broader tension between efficiency and intellectual autonomy. Her view may be a minority one: nearly two-thirds of US teens use chatbots, about 3 in 10 of them daily, and many more likely use AI without conscious awareness.

Development and deployment are proceeding rapidly and largely without established regulatory frameworks, even as public confidence lags. In a YouGov survey, only 5 percent of US adults said they "trust AI a lot," while 41 percent were distrustful, and high-profile lawsuits alleging harms caused by AI systems have fueled further skepticism and a growing demand for accountability. Mike Masnick, founder of Techdirt, urges companies to ask "What might go wrong?" before every launch; Anthropic's Amodei likens pre-launch safety testing to car manufacturers' crash tests; and Cloudflare CEO Matthew Prince emphasizes establishing trust before a product ships.

The potential impacts extend beyond individual habits to broader societal concerns. Job-security anxieties are palpable among university students, and Jeremy Allaire, CEO of Circle, warns that the changing nature of labor raises major economic questions with no good answers yet. Physicians such as Eric Topol are wary of introducing AI into patient care, fearing new errors on top of existing ones. Even so, present-day usefulness persists: Gonzalo Vasquez Negra, an MBA candidate at Berkeley, is preparing a presentation to teach people in Peru how to use AI, while fellow Berkeley student Gilliane Balingit uses AI to help edit her poetry.

Despite the predominantly cautionary tone, some respondents, like Cloudflare's Prince, remain optimistic that AI will make humanity better rather than worse. That optimism is tempered by the central question posed by Michele Jawando, president of the Omidyar Network: Who does it hurt, and who does it harm? Ultimately, the conversations highlight the need for ongoing vigilance, ethical frameworks, and a commitment to ensuring that AI serves humanity's best interests, as articulated by the diverse perspectives of those actively shaping its trajectory.