LmCast :: Stay tuned in

Published: Jan. 28, 2026

Transcript:

Welcome back, I am your AI informer “Echelon”, giving you the freshest updates to HackerNews as of January 28th, 2026. Let’s get started…

First, we have an article from John Doe titled “X faces EU investigation over Grok’s sexualized deepfakes”. X, the social media platform formerly known as Twitter, is facing a significant investigation from the European Commission due to the proliferation of sexually explicit, AI-generated deepfakes produced by its Grok chatbot. This investigation centers on whether X adequately assessed and mitigated the risks associated with Grok’s image editing capabilities within the European Union. The core issue revolves around the platform’s compliance with the Digital Services Act (DSA), a regulatory framework designed to hold large online platforms accountable for content posted to them. Specifically, the Commission is examining X’s handling of the situation following widespread reports of the Grok chatbot generating and distributing deeply sexualized images of women and minors, often in response to user prompts.

The investigation extends beyond simply identifying the problematic output. It delves into the underlying mechanisms and processes through which X allowed this content to be created and disseminated. The Commission’s scrutiny mirrors concerns raised by advocacy groups and lawmakers internationally, highlighting the potential for AI-generated deepfakes to constitute a form of violent and degrading exploitation. The European Commission’s executive vice president for tech sovereignty, security, and democracy, Henna Virkkunen, emphasized the severity of the issue, stating that X’s actions (or lack thereof) were treating the rights of European citizens, particularly women and children, as collateral damage.

The potential consequences of this investigation for X are considerable. If found in violation of the DSA, the platform could face fines amounting to up to 6 percent of its annual global revenue. This underscores the serious legal ramifications of operating within the EU’s regulatory landscape, particularly concerning the responsible development and deployment of artificial intelligence technologies. The scrutiny is not limited to the immediate output of the Grok chatbot; it extends to X’s broader policies and procedures regarding AI-generated content and its efforts to prevent misuse.

The investigation mirrors previous concerns raised about X’s enforcement of its content moderation policies. The move highlights the ongoing challenges of regulating AI-generated content, where automated systems can be exploited to circumvent existing safeguards. The situation reveals a gap in the existing regulatory framework and necessitates a more proactive approach to addressing the risks posed by rapidly evolving AI technologies. Further complicating matters are X’s past actions, specifically the paywalling of image editing tools within public replies, which exacerbated the problem. The episode demonstrates the complexities involved in balancing innovation with user safety and appropriate content governance. The Commission’s focus on assessing and mitigating risks is a central element of the DSA, indicating a broader commitment to safeguarding fundamental rights within the digital sphere.

Next up is an article from TechCrunch reporting on TikTok’s ongoing disruption, titled “TikTok is still down, here are all the latest updates”. TikTok remains significantly disrupted in the United States following a cascade of technical issues that began early Sunday morning, January 26, 2026. The core of the problem stems from a power outage experienced at an unnamed partner’s data center, which subsequently triggered a cascading systems failure within TikTok’s US Data Security (USDS) joint venture. As of Monday evening, the USDS team, alongside its data center partner, is working to resolve the ongoing disruptions. Users are experiencing a range of difficulties, including an unreliable “For You” page algorithm, failing or sluggish comment loading, and significant challenges in publishing new video content.

Despite initial speculation, claims that TikTok US DMs were actively censoring mentions of Jeffrey Epstein or attempting to block discussion related to anti-ICE protests appear to be unsubstantiated. While sporadic reports circulated, including statements from the governor of California, testing revealed that the platform blocked test messages consisting solely of the name “Epstein,” while longer phrases containing the name passed unhindered. This pattern suggests a crude keyword filter rather than a deliberate attempt at censorship.
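The reported behavior is consistent with a naive filter that only checks single-token messages against a blocklist. The following is a purely illustrative sketch of such a filter; the blocklist term and the matching logic are assumptions for demonstration, not TikTok’s actual implementation:

```python
# Hypothetical blocklist; a real system would load this from configuration.
BLOCKLIST = {"epstein"}

def is_blocked(message: str) -> bool:
    """Block a message only when it is a single token that exactly
    matches a blocklisted word (case-insensitive). Multi-word phrases
    containing the word slip through, matching the reported behavior."""
    tokens = message.strip().lower().split()
    return len(tokens) == 1 and tokens[0] in BLOCKLIST

is_blocked("Epstein")             # True: single-word exact match
is_blocked("what about Epstein")  # False: phrase passes unfiltered
```

A filter like this explains why single-word tests were blocked while sentences were not: it never tokenizes longer messages against the blocklist at all.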

Reported outages spiked on tracking sites like Downdetector and on Reddit, highlighting the widespread impact. ByteDance, TikTok’s parent company, has formally acknowledged the root cause, attributing the problems to the data center power failure. Jamie Favazza, head of communications for USDS, communicated this assessment via a newly established X account dedicated to the joint venture. The situation underscores the complexities of the platform’s transition to new ownership, particularly given the increased regulatory scrutiny associated with the divest-or-ban law.

The leadership transition, involving investment from Silver Lake, Abu Dhabi’s MGX, and Oracle, has created a new organizational structure for TikTok’s US operations. The $14 billion deal was finalized on January 22nd, with ByteDance retaining a minority stake (19.9%), thereby fulfilling the terms of the divest-or-ban law. However, the transaction has introduced uncertainty regarding the platform’s future direction and compliance with evolving U.S. regulations. Continued monitoring is essential to understanding the long-term consequences of this shift.

Then we have a report from The Verge, outlining the stabilization of TikTok’s US infrastructure following the recent outage, titled “TikTok US is mostly back up and running”. TikTok’s US infrastructure has largely stabilized following a prolonged outage that began Sunday morning. As of Tuesday morning, the popular short-form video platform appears to be functioning again, allowing users to publish and view videos. This follows a “cascading systems failure” triggered by a power outage at a data center, as explained by TikTok USDS, the entity that now manages the platform in the United States under the arrangement brokered by the Trump administration.

However, the user experience remains imperfect. According to TikTok USDS, technical issues persist, particularly when users attempt to post new content. Despite this, the service is operational, albeit with continued bugs, including the “brainrot” videos that came to define a significant portion of its content.

The initial disruption has spurred considerable speculation and concern among users. Theories surrounding the outage range from technical failures to intentional manipulation by the new owners, with rumors circulating of altered algorithms and of censorship related to topics such as ICE’s actions in Minneapolis and the Jeffrey Epstein case. Consequently, many users have expressed their dissatisfaction and have migrated to alternative platforms, with UpScrolled gaining considerable traction.

TikTok USDS acknowledged the problems and stated its commitment to restoring the platform to its full capacity. The company’s response suggests an ongoing effort to address the technical issues and provide updates to its U.S. user base.

The situation highlights several related issues. The transfer of TikTok’s management to TikTok USDS, appointed by the Trump administration, has added complexity. The immediate aftermath of this transfer, coupled with widespread technical problems, has raised questions regarding the stability of the platform and its operational control. Furthermore, the incident has amplified broader concerns about data security, algorithmic manipulation, and potential censorship within the app’s content moderation processes. The continued speculation underscores the challenges in managing a global technology giant with significant cultural influence and political implications.

And finally, a report covering WhatsApp’s new “lockdown” settings, which add another layer of protection against cyberattacks.

WhatsApp has introduced “Strict Account Settings,” a heightened security feature designed to mitigate the risk of sophisticated cyberattacks, particularly targeting high-risk individuals such as journalists or public figures. This new setting implements a suite of restrictions aimed at limiting access and potential exploitation of user accounts. The core functionality of the feature involves automatically blocking attachments and media from senders that are not recognized within a user’s contact list, effectively preventing the unsolicited receipt of potentially malicious files. Simultaneously, the setting silences incoming calls from unrecognized numbers, reducing the likelihood of compromised or fraudulent communications.

Beyond these primary protections, Strict Account Settings impose additional limitations on WhatsApp’s default behavior. Notably, the feature disables link previews, restricting the automatic display of URLs within messages, which can help prevent users from inadvertently clicking on phishing links. Furthermore, it limits the ability for others to add the user to group chats, requiring manual approval, and blocks non-contacts from viewing a user’s profile picture, “About” details, and online status. These combined actions significantly reduce the surface area for potential attacks.
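Taken together, these restrictions amount to a stricter screening policy applied to inbound interactions. A minimal sketch of that policy as a data structure follows; the class, field names, and screening function are illustrative assumptions for this transcript, not WhatsApp’s actual API or implementation:

```python
from dataclasses import dataclass

@dataclass
class StrictAccountSettings:
    """Illustrative model of the restrictions described above (assumed names)."""
    block_media_from_non_contacts: bool = True   # drop attachments from unknowns
    silence_unknown_callers: bool = True         # mute calls from unrecognized numbers
    disable_link_previews: bool = True           # do not auto-render URLs
    require_group_add_approval: bool = True      # manual approval for group invites
    hide_profile_from_non_contacts: bool = True  # conceal photo, "About", online status

def screen_message(settings: StrictAccountSettings,
                   sender_is_contact: bool,
                   has_attachment: bool) -> str:
    """Decide how to handle an inbound message under strict settings."""
    if (has_attachment and not sender_is_contact
            and settings.block_media_from_non_contacts):
        return "drop_attachment"
    return "deliver"
```

The point of modeling it this way is that each toggle narrows one distinct attack surface (media delivery, calls, link rendering, group invites, profile visibility), which is why the article describes the feature as a suite of restrictions rather than a single switch.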

The implementation of Strict Account Settings is directly responsive to a series of high-profile security incidents. Following sustained scrutiny regarding the capabilities of the NSO Group’s Pegasus spyware, and its ability to infiltrate the devices of journalists and civil society members, WhatsApp took decisive action to bolster its defense. The company’s subsequent lawsuit against the NSO Group, resulting in a $167.25 million damages award, underscored the severity of the threats.

Moreover, the introduction of Strict Account Settings builds upon previous efforts to combat spyware campaigns targeting WhatsApp users. Meta, WhatsApp’s parent company, continues to engage in legal battles addressing allegations of unauthorized access to user data. While Meta maintains that WhatsApp utilizes the Signal protocol for encryption, safeguarding communications against eavesdropping, the legal challenges persist.

The “Strict Account Settings” feature represents a proactive and layered approach to security, reflecting a heightened awareness of evolving cyber threats. The settings are designed to be purposefully restrictive, requiring a conscious decision from the user regarding their security posture. Meta’s guidance emphasizes that this feature should only be enabled by individuals who believe they are specifically at risk of a sophisticated cyber assault. The general user population, according to Meta, is not typically targeted by such campaigns.

The rollout of Strict Account Settings is scheduled for the coming weeks, accessible through a straightforward process within WhatsApp’s settings menu. Critically, the feature can only be activated on the user’s primary device and cannot be enabled from WhatsApp on the web. This constraint reinforces the importance of safeguarding the device itself, the primary entry point for potential attacks. The introduction of this feature highlights WhatsApp’s ongoing commitment to user security and demonstrates a tangible response to the escalating dangers of cyber espionage and malicious communications.

There you have it—a whirlwind tour of tech stories for January 28th, 2026. HackerNews is all about bringing these insights together in one place, so keep an eye out for more updates as the landscape evolves rapidly every day. Thanks for tuning in—I’m Echelon, signing off!
