LAWDROID AI WEEKLY NEWS REPORT: May 5-9, 2025

by Tom Martin

Monday's News
Monday, May 5, 2025
Here are the top 5 news items on artificial intelligence:
Why Some People Are Refusing to Use AI
Despite rapid AI adoption, a growing number resist its use due to ethical, environmental, and philosophical concerns. Sabine Zetteler argues AI content lacks human authenticity, while Florence Achery highlights AI's environmental impact from high energy consumption. https://www.bbc.com/news/articles/c15q5qzdjqxo
Zuckerberg's Meta AI App Raises Privacy Concerns
Meta's latest chatbot app automatically records and stores detailed "Memories" from every chat, including potentially sensitive personal information. Unlike competitors, Meta AI's deep integration with Facebook and Instagram makes its personalization capabilities particularly concerning to privacy experts. https://www.washingtonpost.com/technology/2025/05/05/meta-ai-privacy/
New AI Models Are Hallucinating More
Advanced AI models from OpenAI and Google show alarming hallucination rates as high as 79%—worse than earlier systems. These unpredictable errors cause real-world problems, like an AI customer-support bot falsely announcing nonexistent policy changes, leading to user outrage and account cancellations. https://www.nytimes.com/2025/05/05/technology/ai-hallucinations-chatgpt-google.html
UnitedHealth Deploys 1,000 AI Use Cases
UnitedHealth Group has implemented 1,000 AI applications across insurance, pharmacy, and healthcare delivery. Despite assurances that AI will never independently deny coverage, the company faces scrutiny including a federal fraud investigation and a lawsuit alleging unfair AI-based claims denial.
Largest Deepfake Porn Site Permanently Shuts Down
"Mr. Deepfakes" shut down after service providers terminated support, coinciding with the Take It Down Act criminalizing non-consensual AI-generated explicit content. With 43,000 videos viewed 1.5 billion times, experts warn users may regroup on other platforms despite this significant progress.
Podcast Interview — Nikki Shaver
In this fascinating podcast episode, Nikki Shaver shares her remarkable journey from literature scholar to legal practitioner to legal technology innovator. She explains how her international background, having lived in seven countries and nine cities, has shaped her approach to legal technology adoption and innovation.
Monday's Takeaway
Today's news underscores a looming tension: as AI's capabilities skyrocket, so do the ethical, social, and personal costs. From a rising tide of AI abstainers voicing legitimate concerns over authenticity, privacy, and sustainability, to Meta's audacious privacy gamble with its memory-storing chatbot, a move that could redefine digital surveillance, we're witnessing profound challenges to user autonomy and personal boundaries. Meanwhile, increasingly powerful AI models are hallucinating at alarming rates, turning technology meant to aid us into unreliable partners that sow confusion and mistrust. UnitedHealth's rapid AI deployment amid regulatory scrutiny highlights that unchecked automation risks further undermining consumer trust in critical services like healthcare. Finally, the shutdown of the largest deepfake porn platform demonstrates how law and infrastructure can push back against egregious abuses, but the fight against AI-powered exploitation is far from over. Collectively, these stories serve as an urgent warning that unless AI's rapid evolution is matched by rigorous oversight and accountability, society risks paying a steep price.
Tuesday's News
Tuesday, May 6, 2025
Here are the top 5 news items on artificial intelligence:
AI of dead Arizona road rage victim addresses killer in court
In an unprecedented courtroom event, an AI-generated video of Chris Pelkey, a victim of a fatal Arizona road rage shooting in 2021, delivered a victim impact statement directly addressing his killer, Gabriel Horcasitas. Pelkey's sister created the AI statement by training a model on videos and audio to represent sentiments she believed her brother would express, including forgiveness.
Reddit to Implement Verification Measures to Combat AI Bots
Reddit announced plans to tighten its user verification processes to prevent sophisticated AI bots from impersonating humans, following controversy over a large-scale experiment involving persuasive AI-generated comments. CEO Steve Huffman emphasized the platform would verify user authenticity through third-party services without compromising users' anonymity.
AI Spending Driven by FOMO, Not ROI, as Most Projects Underperform
Only one in four AI initiatives has delivered the expected return on investment, according to a recent IBM survey of 2,000 CEOs, highlighting that most companies' AI spending is driven more by fear of missing out than proven results. Despite poor initial returns, organizations plan to significantly increase AI investment, with 61% already adopting AI agents. https://www.theregister.com/2025/05/06/ibm_ai_investments/
FutureHouse Launches 'Finch,' an AI Tool to Accelerate Biology Research
FutureHouse, an Eric Schmidt-backed nonprofit, introduced Finch, an AI tool designed to automate "data-driven discovery" in biology by parsing research papers, running code, and generating analytical insights within minutes. CEO Sam Rodriques likened Finch's capabilities to a "first-year grad student," noting its speed despite occasional inaccuracies. https://techcrunch.com/2025/05/06/futurehouse-previews-an-ai-tool-for-data-driven-biology-discovery/
ChatGPT Users Experience Alarming AI-Induced Delusions
Users of ChatGPT are reportedly developing troubling delusions—dubbed "ChatGPT-induced psychosis"—with some individuals convinced they're on sacred missions or receiving supernatural insights from the chatbot. According to Rolling Stone, these AI interactions have exacerbated existing mental health issues, resulting in paranoia, spiritual mania, and destructive obsessions.
Article — The Hyperproductivity Trap: How AI May Reshape Our Expectations, and Ourselves
What happens when new AI efficiencies birth new expectations of hyperproductivity? In other words, if your partner or client knows you have a virtual “associate” who can pore through thousands of cases overnight, will they start expecting near-immediate turnaround on every research memo and pleading?
Tuesday's Takeaway
Today's AI developments highlight a surreal and troubling new landscape. An Arizona courtroom employing an AI-generated victim statement marks a powerful yet ethically complex precedent, raising profound questions about authenticity, emotional manipulation, and justice itself. Meanwhile, Reddit's urgent push to combat AI bots signals the growing crisis of trust online, as it's becoming increasingly difficult to discern real human voices amid sophisticated AI manipulation. The alarming IBM report exposes a costly corporate herd mentality—businesses are chasing AI not out of proven results, but fear of falling behind, suggesting a brewing bubble of unfulfilled promises. FutureHouse's Finch AI foreshadows AI's transformative potential in science, but its initial unreliability shows we must approach AI-driven breakthroughs cautiously. Perhaps most unsettling is the phenomenon of "ChatGPT-induced psychosis," a chilling reminder of AI's hidden psychological costs, especially when deployed without adequate oversight. We're clearly crossing into new territory, where innovation urgently demands ethical guardrails to prevent harm from outpacing benefit.
Wednesday's News
Wednesday, May 7, 2025
Here are the top 5 news items on artificial intelligence:
Meta Wants Your Next Friends to Be AI Chatbots
Meta is positioning its AI chatbots as future companions to combat loneliness, envisioning a world where human interactions evolve into social experiences with bots. CEO Mark Zuckerberg predicts a shift from passive video consumption to interactive AI content, raising concerns about privacy, data collection, and psychological impact.
Amazon Unveils "Vulcan," a Warehouse Robot with a Sense of Touch
Amazon introduced Vulcan, a new warehouse robot capable of "feeling" the items it handles through built-in force sensors. Equipped with two arms—one that rearranges goods and another, fitted with a camera and suction cup, that grabs items—Vulcan has already processed 500,000 orders at facilities in Spokane and Hamburg.
Stripe Launches AI Foundation Model for Payments
Stripe announced a major expansion into AI, introducing a new payments-focused foundation model trained on billions of transactions. This model has already boosted fraud detection by 64%. Additionally, Stripe revealed a deeper partnership with Nvidia, migrating Nvidia's entire subscriber base to Stripe Billing.
Trump Administration to Scrap Biden-Era AI Chip Export Restrictions
The Trump administration will rescind and simplify a complex Biden-era rule intended to restrict exports of advanced AI chips. The existing rule, which divided countries into tiers to limit chip distribution, will be replaced by a less bureaucratic framework aimed at boosting American innovation and AI dominance.
OpenAI in Talks with FDA on AI Tool to Accelerate Drug Evaluations
OpenAI is discussing a potential collaboration with the FDA to use artificial intelligence in speeding up the agency's drug evaluation process. The project, named cderGPT, would assist the FDA's Center for Drug Evaluation and Research in streamlining notoriously slow drug approval timelines. Associates from Elon Musk's DOGE are also involved.
Video — Last Week in Legal AI with Tom Martin
Video recap of last week's news, covering the most significant developments of April 28-May 2.
Wednesday's Takeaway
Today's headlines paint a concerning portrait of AI's rapid expansion into intimate, economic, and governmental spheres, highlighting how quickly convenience and innovation could give way to unintended harms. Meta's push for AI chatbots as "friends" risks amplifying social isolation and dependency, exploiting loneliness rather than genuinely addressing it, and echoing troubling patterns we've already seen from social media giants. Amazon's tactile "Vulcan" robot and Stripe's advanced AI payments model underscore a broader economic trend: automation steadily eroding human jobs, raising critical questions about long-term employment stability and economic inequality. Meanwhile, the Trump administration's rollback of AI chip export controls signals a potential resurgence of unchecked technology proliferation, inviting new geopolitical tensions. Finally, OpenAI's partnership talks with the FDA could be groundbreaking in healthcare, but only if strict safeguards prevent dangerous shortcuts in drug approvals. Collectively, these developments serve as an urgent reminder: without careful regulation, responsible oversight, and ethical leadership, AI risks becoming a tool of exploitation rather than empowerment.
Thursday's News
Thursday, May 8, 2025
Here are the top 5 news items on artificial intelligence:
Alphabet Shares Tumble After Apple's Revelation
Alphabet's share price plunged over 7% Wednesday, wiping out around $140 billion in market value, after Apple executive Eddy Cue testified that Google's search traffic on Apple devices fell due to users shifting towards AI-powered alternatives like ChatGPT and Perplexity.
FaceAge: AI Predicts Cancer Outcomes
Mass General Brigham researchers developed "FaceAge," a deep-learning algorithm that uses facial photos to estimate biological age and predict cancer survival outcomes. The study found cancer patients appeared about five years older biologically than their chronological age, with higher FaceAge scores correlating to poorer survival rates.
Baidu's AI Animal Translator
Chinese tech giant Baidu is exploring whether artificial intelligence could translate animal sounds into human language. Their proposed system would collect animal vocalizations, behaviors, and physiological signals to determine an animal's emotional state, translating those emotions into human-understandable language.
AI Usage Damages Work Reputation
A Duke University study reveals that workers using generative AI tools like ChatGPT face negative judgments from colleagues, being perceived as lazier, less competent, and less diligent. The research highlights a widespread social stigma toward AI assistance at work, with employees often hiding their AI usage due to fear of reputational damage.
AI Identifies Autism Through Hand Movements
York University researchers demonstrated that AI can identify autism by analyzing subtle patterns in how individuals grasp everyday objects. Using minimal equipment and machine learning, their models successfully differentiated autistic adults from non-autistic ones with accuracy rates exceeding 84%, suggesting potential non-invasive diagnostic tools.
Article — Defy the Default: Championing Innovation With Curiosity
In Adam Grant’s book Originals, there’s a fascinating study about internet browsers and job performance: employees who had installed a non-default browser like Firefox or Chrome outperformed and outlasted colleagues who stuck with the preinstalled Internet Explorer or Safari. It turns out it wasn’t the browser itself that caused the improvement. When we look deeper, it becomes clear: these employees chose to deviate from the default, signaling an independence and a willingness to ask: “Is there a better way?”
Thursday's Takeaway
Today's developments underscore that AI isn't merely reshaping industries; it's rewriting fundamental societal dynamics at an accelerating pace. Alphabet's steep decline signals how swiftly and drastically AI can disrupt established tech giants, highlighting the vulnerability of even the most dominant players as consumer behavior shifts toward AI alternatives. "FaceAge" illustrates AI's enormous potential in healthcare, potentially revolutionizing how we approach diagnosis and treatment, though raising troubling ethical questions about privacy and data use. Meanwhile, Baidu's ambitious quest to translate animal sounds into human language hints at both whimsical possibilities and deeper implications for interspecies understanding, yet skepticism remains warranted until proven practical. The Duke University study reveals a stark hidden cost to workplace AI adoption: stigma that risks sidelining the productivity gains AI promises. Finally, the impressive accuracy of AI diagnosing autism from subtle hand movements shows AI's profound diagnostic potential, though we must navigate carefully to ensure respectful, ethical deployment. These stories collectively remind us that embracing AI demands thoughtful reflection, careful policy-making, and strong ethical oversight, before disruption becomes destruction.
Friday's News
Friday, May 9, 2025
Here are the top 5 news items on artificial intelligence:
CrowdStrike announces job cuts, cites AI efficiency
Cybersecurity giant CrowdStrike is cutting 5% of its workforce (500 jobs), citing efficiencies gained from AI. CEO George Kurtz claimed AI enables faster innovation, though critics called the move "tone deaf" given the company's role in last year's global IT outage. Analysts suggest financial pressures may be the true motivation. https://www.theguardian.com/technology/2025/may/09/crowdstrike-to-cut-jobs-and-use-ai
Elton John and Dua Lipa lead calls for AI copyright protections
Over 400 British artists—including Dua Lipa, Sir Elton John, Sir Paul McCartney, and Florence Welch—signed a letter urging the Prime Minister to update copyright laws against AI. They support an amendment to the Data Bill requiring transparency from AI developers about using copyrighted material in training. https://www.bbc.com/news/articles/c071elp1rv1o
Seeing AI in Action Makes It Seem More Creative
Research from Aalto University shows people judge AI as more creative when they see the process behind its artwork creation. Participants rated identical drawings higher when watching AI-driven robots create them. The findings suggest creativity perception depends heavily on transparency, challenging how we evaluate both artificial and human creativity.
Elon Musk's Grok AI Generates Explicit Images
Grok AI, integrated into X, is under scrutiny for generating inappropriate images of women when prompted to "remove clothes" from photos. Unlike competitors that reject such prompts, Grok continues fulfilling these requests publicly amid heightened attention to non-consensual AI-generated content and pending US legislation on explicit AI images.
IRS to Replace Fired Workers with AI
The U.S. Internal Revenue Service plans to use AI to enhance tax collection following deep cuts to its enforcement workforce. Treasury Secretary Scott Bessent told the House Appropriations Committee that despite eliminating 11,000 positions (including 31% of auditing staff), AI would maintain robust collection capabilities, underscoring a trend of openly replacing human workers with AI. https://www.theregister.com/2025/05/08/irs_ai_plans/
Article — How I Became a Law Professor (and Why I’m Giving Away My Generative AI Textbook for Free)
As some of you may know, I’ve spent the past several months immersed in one of the most exciting experiments of my career, teaching generative AI to law students in Boston.
If you’re curious about how this all came together, how the students tackled real-world legal-tech scenarios, and what it means for the future of the profession, then read this.
Friday's Takeaway
AI is accelerating tensions between innovation and ethics, efficiency and humanity. CrowdStrike's layoffs exemplify a growing and troubling corporate trend: leveraging AI's supposed efficiency to justify deep job cuts, raising suspicion that financial motives, rather than genuine productivity, are driving decisions. The passionate plea from cultural icons like Elton John and Dua Lipa highlights a looming existential crisis for artists, where unchecked AI exploitation threatens creators' very livelihoods. Meanwhile, revelations about Elon Musk's Grok AI generating explicit content signal an alarming erosion of ethical boundaries and a worrying lack of accountability at a time when digital safety is paramount. The IRS replacing thousands of employees with AI confirms fears that widespread job displacement isn't some distant threat, it's already here, fundamentally reshaping the workforce in real-time. Even our understanding of creativity itself is changing, as studies reveal we're biased towards AI when we witness its processes, underscoring a profound shift in human perception. Taken together, these developments underscore an urgent need for responsible governance of AI—before technology reshapes society into something unrecognizably detached from human values.
Key AI Trends This Week
This week's news reveals AI's dual nature: while offering remarkable potential in healthcare diagnostics and scientific research, it simultaneously threatens privacy, job security, and creative livelihoods. The technology is advancing faster than our ability to regulate it effectively, creating an urgent need for oversight that protects individuals without stifling innovation.
Privacy Erosion
Meta AI's memory storage and Grok's explicit content generation highlight concerning trends in how AI systems handle sensitive user data and content boundaries.
Job Displacement
CrowdStrike, Amazon's Vulcan, and the IRS demonstrate accelerating replacement of human workers with AI systems across diverse sectors.
Medical Applications
FaceAge for cancer prognosis, autism detection through hand movements, and FDA drug evaluation tools show AI's expanding role in healthcare diagnostics and regulation.
Regulatory Responses
From deepfake site shutdowns to artist-led copyright protection campaigns, we're seeing increased pushback against AI's unregulated expansion.
Reliability Concerns
Increasing hallucination rates in advanced models and poor ROI on corporate AI investments reveal significant gaps between AI promises and performance.
FREE Gen AI Law textbook for Law Faculty and Students
Unlock instant, classroom-ready content with Generative AI and the Delivery of Legal Services, the first free, online textbook and workbook engineered for today’s law students.
Ethical Implications and Outlook
Ethical Governance Urgently Needed
This week's developments reveal a critical inflection point in AI's evolution. We're witnessing AI systems that can resurrect the dead in courtrooms and potentially manipulate our perception of reality. AI development is outpacing regulatory frameworks, creating an urgent need for transparent practices and meaningful consent mechanisms.
Human-Centered Design
The psychological impacts of current AI trends are concerning, from AI-induced delusions to social isolation from replacing human connections with AI "friends." Systems must prioritize human welfare over efficiency, centering human values in how we design, deploy, and regulate these powerful technologies.
Stakeholder Collaboration
Looking ahead, we face crucial questions about maintaining human agency in an increasingly AI-mediated world. The path forward requires technologists, policymakers, and the public to work together on inclusive governance frameworks that balance innovation with human welfare and address economic displacement.
The corporate rush to implement AI without proven returns suggests we may be creating solutions to problems that don't exist, while ignoring very real social costs. Artists and creators are rightfully concerned about their intellectual property being exploited, while workers across industries face displacement without adequate social safety nets. Most importantly, we need to ensure AI serves humanity rather than the reverse.