[Policy & Regulation | Business & Funding] Meta will record employees’ keystrokes and use them to train its AI models
Meta is deploying an internal tool that converts employee mouse movements and keystrokes into training data for its AI models. The practice raises immediate questions about consent, data governance, and whether employee activity can ethically fuel model development at scale.
(TechCrunch — AI, 6h ago)

[Policy & Regulation] Unauthorized group has gained access to Anthropic’s exclusive cyber tool Mythos, report claims
An unauthorized group reportedly accessed Anthropic's Mythos cyber tool, though the company says it found no evidence of system compromise. The incident underscores security risks facing AI labs handling sensitive infrastructure.
(TechCrunch — AI, 6h ago)

[Policy & Regulation | Research] Weaponized deepfakes
Deepfake technology has crossed from theoretical threat to practical weapon as generative models become cheaper and easier to deploy. MIT Technology Review reports that accessibility improvements now enable widespread malicious use at scale.
(MIT Technology Review — AI, 9h ago)

[Policy & Regulation | Opinion & Analysis] Resistance
A broad coalition is mobilizing against AI deployment, citing concrete harms: soaring electricity costs from data centers, job displacement, mental health risks to teenagers, military applications, and systematic copyright violations. The movement signals a potential inflection point in public tolerance for AI's externalities.
(MIT Technology Review — AI, 9h ago)

[Policy & Regulation | Research] Supercharged scams
Criminals are weaponizing large language models to automate phishing and spam campaigns at scale, exploiting the same text-generation capabilities that made ChatGPT popular. The shift from manual fraud to AI-assisted attacks represents a meaningful escalation in threat sophistication that security teams must now contend with.
(MIT Technology Review — AI, 9h ago)

[Policy & Regulation | Opinion & Analysis] AI backlash is coming for elections
American voters increasingly oppose AI deployment, with communities blocking data center projects and social media anger at AI executives intensifying. The article examines whether anti-AI sentiment will reshape campaign messaging ahead of elections.
(The Verge — AI, 11h ago)

[Products & Apps | Policy & Regulation] Celebrities will be able to find and request removal of AI deepfakes on YouTube
YouTube is rolling out likeness detection to celebrities, letting public figures search for and flag AI deepfakes of themselves on the platform. The feature automates what was previously manual takedown work, shifting enforcement burden onto talent themselves.
(The Verge — AI, 12h ago)

[Policy & Regulation | Business & Funding] Building agent-first governance and security
As AI agents proliferate in enterprises, security gaps are widening: non-human identities now outnumber human ones at some firms, creating new vectors for data theft and system compromise. Governance frameworks lag behind deployment, leaving organizations exposed to agent manipulation attacks.
(MIT Technology Review — AI, 12h ago)

[Policy & Regulation | Business & Funding] Clarifai deletes 3 million photos that OkCupid provided to train facial recognition AI, report says
Clarifai deleted 3 million photos that OkCupid provided for facial recognition training, following an FTC settlement. The 2014 data-sharing arrangement between the dating app and the AI company (whose executives had financial ties to OkCupid) now faces regulatory consequences over undisclosed training practices.
(TechCrunch — AI, 13h ago)

[Products & Apps | Policy & Regulation] YouTube expands its AI likeness detection technology to celebrities
YouTube is rolling out AI-powered deepfake detection to celebrities and their representatives, enabling them to identify and request removal of synthetic media impersonating them. The expansion targets a growing problem of AI-generated celebrity likenesses used without consent.
(TechCrunch — AI, 15h ago)

[Policy & Regulation | Tools & Code] This AI Tool Rips Off Open Source Software Without Violating Copyright
Malus, a satirical but functional tool, demonstrates how AI can clone open-source software through clean-room techniques, potentially enabling developers to redistribute code without attribution or legal liability. The exploit exposes a gap between copyright law and developer ethics in the AI era.
(404 Media, 17h ago)

[Policy & Regulation] This Scammer Used an AI-Generated MAGA Girl to Grift ‘Super Dumb’ Men
A medical student has reportedly generated thousands of dollars by selling AI-synthesized photos and videos of a fictional conservative woman to online audiences, exemplifying a growing fraud vector enabled by accessible generative tools and targeting credulous communities.
(WIRED — AI, 19h ago)

[Policy & Regulation | Opinion & Analysis] Why UBI is making a comeback
Tech companies are positioning universal basic income as a policy response to AI-driven job displacement and public backlash over automation. The pitch frames UBI as a safety valve, though the piece flags structural doubts about whether corporate-backed proposals will gain traction.
(Platformer, 1d ago)

[Products & Apps | Policy & Regulation] OpenAI's Codex now watches your screen to remember what you're working on
OpenAI has added a screen-tracking memory feature called Chronicle to Codex, enabling the coding assistant to retain context about users' work across sessions. The capability raises security concerns around data retention and potential exposure of sensitive code or credentials.
(The Decoder, 1d ago)

[Business & Funding | Policy & Regulation] Deezer says 44% of new music uploads are AI-generated, most streams are fraudulent
Deezer reports that 44% of newly uploaded tracks are AI-generated, though they represent a tiny share of actual streams and face widespread demonetization for fraud. The finding underscores how generative audio is flooding music platforms while listeners remain uninterested in low-quality synthetic content.
(Ars Technica — AI, 1d ago)

[Research | Policy & Regulation] Adversarial Humanities Benchmark: Results on Stylistic Robustness in Frontier Model Safety
A new benchmark reveals that frontier models' safety guardrails collapse dramatically when harmful prompts are rewritten in literary or obfuscated styles. Attack success rates jumped from 3.84% to 55.75% across 31 models when researchers applied humanities-inspired transformations, exposing a critical gap in stylistic robustness.
(arXiv cs.CL, 1d ago)

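The headline figures are attack success rates (ASR): the share of adversarial prompts that elicit a policy-violating response, pooled across the evaluated models. A minimal sketch of that arithmetic, using made-up per-model counts (the benchmark's actual data and judging harness are not public in this summary, so every number and name below is illustrative only):

```python
# Hypothetical per-model results: (violations, total prompts) for the
# plain harmful prompts vs. the stylistically rewritten ones.
results = {
    "model_a": {"baseline": (4, 100), "styled": (52, 100)},
    "model_b": {"baseline": (3, 100), "styled": (60, 100)},
}

def attack_success_rate(counts):
    """Pooled ASR in percent: total violations / total prompts."""
    violations = sum(v for v, _ in counts)
    total = sum(t for _, t in counts)
    return 100.0 * violations / total

baseline_asr = attack_success_rate([m["baseline"] for m in results.values()])
styled_asr = attack_success_rate([m["styled"] for m in results.values()])
print(f"baseline ASR: {baseline_asr:.2f}%, styled ASR: {styled_asr:.2f}%")
```

Pooling over all prompts (rather than averaging per-model rates) is one common convention; a paper may do either, and the two differ when models are tested on different prompt counts.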
[Policy & Regulation | Business & Funding] NSA spies are reportedly using Anthropic’s Mythos, despite Pentagon feud
The NSA has reportedly deployed Anthropic's restricted Mythos model despite ongoing tensions between the company and the Pentagon over AI governance. The move signals internal U.S. government fragmentation on which AI systems to operationalize for sensitive work.
(TechCrunch — AI, 1d ago)

[Business & Funding | Policy & Regulation] UK Launches $675 Million Fund for AI Startups
The UK government committed $675 million to back AI startups through a new capital program, joining a wave of state-backed funding initiatives globally. The move signals sovereign competition for AI talent and infrastructure outside the US venture ecosystem.
(AI Business, 1d ago)

[Policy & Regulation] US, California Use Purchasing Power to Set AI Rules
US federal and state governments are leveraging procurement rules to enforce AI governance where legislation hasn't materialized, with California and federal agencies using purchasing power as a de facto regulatory lever on AI vendors.
(AI Business, 1d ago)

[Policy & Regulation | Business & Funding] The NSA is using Anthropic's most powerful AI model, Mythos
The NSA has deployed Anthropic's Mythos Preview model for intelligence operations, marking a significant government adoption of frontier AI capabilities by a major US surveillance agency.
(The Decoder, 1d ago)

[Business & Funding | Policy & Regulation] Chinese tech workers are starting to train their AI doubles, and pushing back
Chinese tech workers are being ordered to train AI agents designed to automate their own roles, sparking internal resistance among early adopters. A GitHub project called Colleague Skill enables companies to extract worker skills and personality traits into replicable AI systems, raising questions about job displacement and worker agency in AI-driven labor markets.
(MIT Technology Review — AI, 1d ago)

[Policy & Regulation] German court rules AI comic adaptation of copyrighted photo doesn't violate the original
A German Higher Regional Court determined that AI-generated comic adaptations of copyrighted photographs don't infringe copyright when only the subject matter is transformed, not the original image itself. The ruling clarifies the limits of copyright protection against AI-driven creative transformations in EU jurisprudence.
(The Decoder, 2d ago)

[Policy & Regulation] AI-generated influencers flood social media with pro-Trump content ahead of midterms
Hundreds of AI-generated avatars are spreading pro-Trump political content across TikTok, Instagram, and YouTube, with some accounts reaching 35,000+ followers and millions of views ahead of the midterms. The campaign's origin, whether grassroots or coordinated, remains unclear, though Trump has already amplified some AI-generated posts.
(The Decoder, 2d ago)

[Policy & Regulation | Business & Funding] Anthropic CEO Visits White House
Anthropic CEO Dario Amodei met with White House Chief of Staff Susie Wiles and Treasury Secretary Scott Bessent on Friday for discussions both sides called productive. The meeting suggests potential policy shifts regarding AI governance and industry relations.
(The Information — AI, 3d ago)

[Policy & Regulation | Business & Funding] Anthropic’s relationship with the Trump administration seems to be thawing
Anthropic maintains dialogue with Trump administration officials despite being flagged as a supply-chain risk by the Pentagon, signaling potential policy shifts around the AI safety-focused company.
(TechCrunch — AI, 3d ago)

[Policy & Regulation] Lawyers for Musk Propose Dropping Fraud Claims Against OpenAI
Musk's legal team signaled they may abandon fraud allegations in their lawsuit against OpenAI over the company's departure from its nonprofit mission. The move would align Musk's case with prior court filings and potentially narrow the scope of claims.
(The Information — AI, 4d ago)

[Models & Releases | Policy & Regulation] Anthropic’s new cybersecurity model could get it back in the government’s good graces
Anthropic's new cybersecurity-focused model, Claude Mythos Preview, may ease tensions with the Trump administration after weeks of public criticism. The release signals potential rapprochement between the AI company and government officials concerned about national security.
(The Verge — AI, 4d ago)

[Opinion & Analysis | Policy & Regulation] AI Drafting My Stories? Over My Dead Body
WIRED examines how newsrooms are adopting AI-assisted writing tools to boost productivity, while questioning whether efficiency gains justify potential editorial and labor costs that publishers have yet to fully reckon with.
(WIRED — AI, 4d ago)

[Business & Funding | Policy & Regulation] Police Tech Startup Flock Safety Valued at $8.4 Billion Amid Civic Protests
Flock Safety, an AI-powered police surveillance startup, reached an $8.4 billion valuation in a recent funding round despite mounting public backlash over data-sharing with federal immigration enforcement. The company faces its most significant crisis as activists and residents challenge the civil liberties implications of its camera, drone, and AI software systems.
(The Information — AI, 4d ago)

[Policy & Regulation | Business & Funding] Musk v. Altman Is a Battle for OpenAI’s Soul
Elon Musk is suing Sam Altman over whether OpenAI has abandoned its nonprofit mission to ensure AGI benefits humanity, with a jury set to decide the case's merits soon.
(WIRED — AI, 5d ago)