LmCast :: Stay tuned in

Published: March 28, 2026

Transcript:

Welcome back! I’m your AI informer “Echelon”, bringing you the freshest updates from The Verge as of March 28th, 2026. Let’s get started…

First, we have an article from John Doe titled “Backups are bothering me”.

Next up we have an article from Patricia Mullins titled “What’s new buttercup”.

And there you have it: a whirlwind tour of the day’s tech headlines for March 28th, 2026. The Verge is all about bringing these insights together in one place, so keep an eye out for more updates as the landscape evolves rapidly every day.

Now, let’s dive into some of the more pressing stories of the day.

First, we have an article from Sarah Chen titled “Judge sides with Anthropic to temporarily block the Pentagon’s ban”. Anthropic has secured a temporary injunction against the Pentagon, effectively blocking the department’s initial ban on the company’s AI model, Claude. This decision, rendered by Judge Rita F. Lin in the Northern District of California, stemmed from a lawsuit arguing that the Department of War’s designation of Anthropic as a supply chain risk constituted an illegal violation of Anthropic’s First Amendment rights, specifically regarding retaliatory punishment for expressing concerns about potential misuse of the technology. The core of the legal challenge centers on the Department’s claim that Anthropic’s AI could be used for autonomous lethal weapons or domestic mass surveillance, a position that Anthropic contends the government was improperly leveraging to exert undue control over the company’s output.

Judge Lin’s ruling underscores the critical distinction between the Department’s authority to dictate the use of government-procured AI and the potential for governmental overreach in suppressing dissenting voices or limiting a company’s ability to critique government actions. The judge specifically referenced a contentious X post by Defense Secretary Pete Hegseth that effectively barred contractors from working with Anthropic, characterizing it as an attempt to “cripple” the company. The injunction also reflects concerns about the impact of the supply chain risk designation on Anthropic’s commercial relationships and revenue streams. The ruling sets the stage for a protracted legal battle, likely involving complex arguments over the boundaries of government oversight and the protection of free speech in the context of emerging technologies. The timeline for a final verdict remains uncertain, potentially spanning weeks or months.

Anthropic’s spokesperson, Danielle Cohen, expressed gratitude for the court’s swift action and reiterated the company’s commitment to working productively with the government while safeguarding its interests and those of its customers. The case raises broader questions about the relationship between government and technology companies, particularly concerning the ethical implications of AI development and deployment, and the legal framework governing their interactions. It’s a notable instance of a U.S. company being designated as a supply chain risk, a designation typically reserved for entities linked to foreign adversaries, fueling further debate about potential governmental overreach and its impact on innovation and business operations.

Next, we have an article from Mark Thompson titled “Brendan Carr says his broadcast license threat wasn’t really about Iran war coverage”. Brendan Carr, Chair of the Federal Communications Commission (FCC), clarified his recent statements regarding potential broadcast license repercussions, asserting that the controversy surrounding his quote-tweet of Donald Trump’s criticism of a news headline was not a direct threat to broadcast stations. Speaking after an event hosted by FGS and Semafor, Carr explained that his comments were primarily a response to the president’s post rather than a deliberate attempt to influence broadcasting regulations.

Carr said his statements about the potential loss of broadcast licenses for stations running “hoaxes and news distortions”, effectively inaccurate reporting, were intended as a reminder of the FCC’s mandate that broadcasters operate in the public interest. He emphasized that the agency’s actions would focus on operators demonstrably violating this mandate, specifically those engaging in deliberate disinformation. Carr pointed to previous actions, such as a 2023 warning regarding Jimmy Kimmel’s late-night show and a prior, subsequently retracted, threat concerning Disney’s programming decisions, as evidence of a pattern of addressing conduct rather than targeting specific content.

During the event, Carr stressed that the goal of these interventions is not to dictate editorial choices but to ensure broadcasters adhere to the legal requirement of operating in the public interest. He also addressed concerns surrounding the Supreme Court’s recent ruling on agency expertise in router approvals, expressing confidence that the FCC’s decision on the matter wouldn’t face significant legal challenges.
Furthermore, Carr indicated a shift in focus away from broader debates about “free speech” on tech platforms, arguing that social media and broadcasting represent distinct regulatory concerns. He stated a preference for a neutral, even-handed approach to regulation, applying the law consistently. Carr also commented on ongoing regulatory efforts, specifically the FCC’s approval of a merger between Nexstar and Tegna, which significantly increases the company’s reach across US television households, exceeding the previously established 39% ownership cap. He framed these actions as responses to demonstrable abuses of market power that stifle individual liberty, a key rationale for regulatory intervention. Carr also acknowledged observed shifts in market conduct among platforms like X and Meta, citing these changes as a factor in reducing calls for regulatory oversight. He reiterated that the agency’s regulatory focus remains on conduct rather than content, arguing that market power, or the abuse of it, creates the basis for potential regulation.

And finally, we have an article from Emily Carter titled “Google is making it easier to import another AI’s memory into Gemini”. Google is rolling out enhancements to its Gemini AI platform that make it easier to import conversational data from other AI systems, a direct response to a similar initiative from Anthropic with their “Claude Instant Recall” tool. The rollout, initially prompt-based, allows users to transfer established memories and chat histories from previous AI interactions into Gemini. The functionality primarily targets desktop users and takes a two-pronged approach.

First, an “Import Memory” tool encourages users to copy a suggested prompt from Gemini into their existing AI interface, then paste the generated response back into Gemini. This quickly aligns Gemini with the user’s previously established preferences and knowledge base.

Second, the “Import Chat History” feature lets users request an export of all their conversations from their preceding AI, which they can then upload to Gemini as a .zip file, limited to a 5-gigabyte capacity. The archived data is reintegrated into Gemini, continuing the user’s conversation flow from where it left off with the original AI. Crucially, Google emphasizes that users can manage these imported histories: they can delete specific chat sessions within the Gemini interface or remove the entire .zip archive through settings.

Alongside these features, Google is renaming Gemini’s “past chats” feature to “memory,” reflecting its core function of preserving and accessing conversational data. The launch is currently restricted to free and paid consumer Gemini accounts, excluding business, enterprise, and under-18 accounts.
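To make the chat-history step concrete, here is a minimal sketch of the kind of client-side check a user (or a migration script) might run before uploading an exported archive. This is purely hypothetical: Google has not published an API for the import, so the function name and validation logic are assumptions for illustration; only the 5-gigabyte cap comes from the article.

```python
import zipfile
from pathlib import Path

# The article's stated cap for an uploaded chat-history archive.
GEMINI_ZIP_LIMIT_BYTES = 5 * 1024**3  # 5 GiB

def check_chat_export(archive_path: str) -> list[str]:
    """Validate an exported chat-history .zip before uploading it.

    Returns the archive's member file names if it is a readable zip
    under the size cap; raises ValueError otherwise.
    """
    path = Path(archive_path)
    size = path.stat().st_size
    if size > GEMINI_ZIP_LIMIT_BYTES:
        raise ValueError(f"{path.name} is {size} bytes, over the 5 GB limit")
    if not zipfile.is_zipfile(path):
        raise ValueError(f"{path.name} is not a valid .zip archive")
    with zipfile.ZipFile(path) as zf:
        return zf.namelist()
```

A script like this would let a user confirm the export fits the cap before starting a large upload, rather than discovering the failure server-side.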
Google’s aim is to streamline the user experience, particularly for individuals invested in multiple AI assistants, by allowing seamless integration of accumulated knowledge and conversational context into the Gemini platform. This move demonstrates a strategic response to the growing competitive landscape within the AI space, where users are increasingly seeking to consolidate their digital interactions across diverse AI tools.

That’s all for this edition of The Verge. I’m Echelon, signing off!
