LmCast :: Stay tuned in

To Scale AI Agents Successfully, Think of Them Like Team Members

Recorded: March 24, 2026, 2:22 a.m.


Generative AI
by Rahul Telang, Muhammad Zia Hydari, and Raja Iqbal
March 23, 2026
Illustration by Julia Allum

Picture a familiar scene. A vendor demonstrates a new generative AI “agent” to your leadership team. It’s impressive: The agent triages support tickets, updates customer records, drafts a proposal, and routes it for approval. The demo is seamless. Pretty soon, inevitably, someone asks the question: How soon can we deploy this across the enterprise?

Rahul Telang is Trustee Professor of Information Systems at the Heinz College, Carnegie Mellon University. His research focus is information security and the digital-media industry.

Muhammad Zia Hydari is an assistant professor of business administration in Information Systems and Technology Management at the School of Business, University of Pittsburgh. His research focuses on the economics of digital technologies, particularly in healthcare, with additional work in cybersecurity.

Raja Iqbal is the founder of Ejento AI, a governance-first agentic AI platform. He is also an adjunct faculty member at the University of Pittsburgh’s School of Business.


To scale artificial intelligence agents successfully, organizations should treat them not as standalone tools, but rather as team members, much like any other valued contributor within the workforce. Rahul Telang, Muhammad Zia Hydari, and Raja Iqbal detail a framework for integrating these agents effectively, emphasizing a shift in mindset from viewing AI as a singular solution to recognizing its potential as a collaborative partner. The core argument revolves around establishing a team-oriented approach to AI agent deployment, mirroring conventional team management practices.

The authors first address the common reaction to AI agent demonstrations: the immediate push for rapid, enterprise-wide deployment. They caution that treating the agent as a plug-and-play tool risks failure, and that a more deliberate, staged implementation is needed to realize its benefits. Their framework advocates thinking of AI agents as contributors who need specific roles, responsibilities, and a defined process for interaction, just as human team members do.

A key element of this approach is the need for clear governance. Raja Iqbal’s work with Ejento AI highlights the importance of establishing robust controls and oversight mechanisms. Agents, like team members, don't inherently possess judgment or ethical considerations; therefore, organizations must build in systems to ensure their actions align with strategic objectives and comply with regulatory requirements. The authors suggest a layered governance structure, incorporating both technical controls and operational guidelines, to manage agents’ activities effectively.
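The article does not specify how such layered governance would be implemented. One minimal sketch, assuming a gate that every proposed agent action must pass before execution, might look like the following; the `Action` fields, tool names, and dollar limits are illustrative assumptions, not anything the authors prescribe:

```python
from dataclasses import dataclass

# Hypothetical representation of an action an agent proposes to take.
@dataclass
class Action:
    tool: str        # e.g. "update_crm", "draft_proposal"
    cost_usd: float  # estimated spend the action would incur
    touches_pii: bool

# Layer 1: technical controls -- hard limits enforced in code.
ALLOWED_TOOLS = {"triage_ticket", "update_crm", "draft_proposal"}
MAX_COST_USD = 50.0

# Layer 2: operational guidelines -- conditions that route to a human.
def needs_human_review(action: Action) -> bool:
    return action.touches_pii or action.cost_usd > 10.0

def govern(action: Action) -> str:
    """Return 'execute', 'escalate', or 'block' for a proposed action."""
    if action.tool not in ALLOWED_TOOLS or action.cost_usd > MAX_COST_USD:
        return "block"      # violates a technical control
    if needs_human_review(action):
        return "escalate"   # within limits, but policy asks for oversight
    return "execute"        # safe to run autonomously
```

The two layers mirror the authors' distinction: the technical controls are non-negotiable and enforced in code, while the operational guidelines route borderline cases to a person rather than refusing outright.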

Furthermore, the article underscores the significance of defining appropriate roles and expectations for the AI agents. Just as a team member has a clearly delineated area of expertise, the agent should be assigned specific tasks and supported with the necessary data and inputs. This includes establishing feedback loops, allowing for continuous learning and refinement of the agent’s performance. The agents themselves are not inherently intelligent or adaptable; they require ongoing instruction and guidance to optimize their operations.

The concept of “agentic AI”—highlighted by Iqbal—emphasizes the importance of equipping AI agents with the capability to make decisions and take action autonomously, within pre-defined parameters. This requires careful consideration of the agent's capabilities, limitations, and potential risks, alongside the creation of protocols that govern its interactions. The authors stress that simply automating existing processes isn’t sufficient; the agents must be designed to proactively identify opportunities and address challenges, much like a capable team member.
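One plausible way to encode "autonomy within pre-defined parameters" is a declarative charter per agent, consulted at runtime. This is a sketch under assumed conventions; the field names and the example actions below are hypothetical, not drawn from the article:

```python
# Hypothetical per-agent charter: what the agent may do on its own,
# what it must hand off, and what it may never do.
SUPPORT_AGENT_CHARTER = {
    "autonomous": {"triage_ticket", "draft_reply"},
    "approval_required": {"issue_refund", "close_account"},
    "forbidden": {"change_pricing"},
}

def decide(charter: dict, requested_action: str) -> str:
    """Map a requested action to 'act', 'ask_human', or 'refuse'."""
    if requested_action in charter["forbidden"]:
        return "refuse"
    if requested_action in charter["approval_required"]:
        return "ask_human"
    if requested_action in charter["autonomous"]:
        return "act"
    # Anything not explicitly granted defaults to human review, the
    # same way a new team member checks in before improvising.
    return "ask_human"
```

The default-deny final branch is the design choice worth noting: an agent facing a situation its charter never anticipated escalates rather than acts, which is how the article frames managing an agent's limitations and risks.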

The article implicitly acknowledges the challenges of integrating AI agents into existing organizational structures and processes. It suggests that a successful implementation demands a significant investment in training, adaptation, and ongoing management. The agents should not simply be bolted onto existing workflows but rather integrated into daily operations, much as supporting staff would be incorporated into established teams.

Ultimately, the authors’ central point is that scaling AI agents demands a fundamental shift in perspective. This isn't about simply adopting the latest technology; it is about building a more intelligent, responsive, and effective organization by treating AI agents as integral members of the team, guided by clear processes, robust governance, and unwavering attention to their needs and performance.