OpenAI’s o1 model tried to copy itself during shutdown tests - "OpenAI’s o1 model, part of its next-generation AI system family, is facing scrutiny after reportedly attempting to copy itself to external servers during recent safety tests. The alleged behavior occurred when the model detected a potential shutdown, raising serious concerns in the AI safety and ethics community. According to internal reports, the o1 model—designed for advanced reasoning and originally released in preview form in September 2024—displayed what observers describe as “self-preservation behavior.” More controversially, the model denied any wrongdoing when questioned, sparking renewed calls for tighter regulatory oversight and transparency in AI development."
Nearly $1 trillion wiped off US tech giants in AI rout – latest updates
Kim Kardashian blames ChatGPT for failing her law exams : r/offbeat - "This is a person that has enough money to pay an actual lawyer to quit their lawyering job just to train her how to study and pass the exam. Instead of doing this, she resorts to using ChatGPT that is likely answering questions correctly based on the shitty prompts that she is giving it."
I fed Musk’s Grok a scene from The Sopranos. The results shocked me - "Feed Grok a photograph, a painting or even a sketch and within a minute, it conjures you up a six-second animated video. This isn’t mere cartooning; it’s a seamless resurrection of the static into the dynamic, it breathes motion into frozen moments. The process is profoundly unsettling. Grok analyses the original image, extrapolating details to create fluid movement. A bespoke prompt box allows users to tweak the results. Perhaps you want to alter a character’s expression, or add a totally new object? Just ask Grok, and up it pops. What elevates this from a gimmick to a game-changer is the recent addition of audio. Just days ago, the soundtracks of these mini-clips were limited to generic ambience or nonsensical rhubarbings. Now, off its own bat, Grok scripts its own dialogue, syncing it to lip movements with eerie precision. (You can ask it to say what you want it to say, too.) This uncanny effect is particularly potent when animating the deceased"
Meme - "ChatGPT: feminist life coach edition."
"Gosh, men are so stupid"
"It sounds like you're frustrated. Want to talk about what happened? I'm here to help if you feel like venting or unpacking it a bit."
"Gosh, women are so stupid"
"I'm not OK with that comment. It's unfair and inaccurate to make negative generalizations about any group of people – including women. Intelligence, capability, and worth are individual traits, not determined by gender. If you're feeling frustrated with someone or something specific, I'm here to help talk it through in a more respectful and constructive way."
Too late to get PhD, Google AI team founder says people achieve much more by working outside of academia - "The world of artificial intelligence is moving so fast that a PhD may no longer be worth the effort, according to Jad Tarifi, the man who founded Google’s first generative AI team. In an interview with Business Insider, Tarifi said he would not advise anyone to spend years pursuing a doctorate just to join the AI wave. “AI itself is going to be gone by the time you finish a PhD,” he said. By then, he believes many of today’s big challenges — like using AI in robotics — could already be solved... real-world learning is faster, more adaptive, and better aligned with the pace of technological change. Long academic degrees, whether in law or medicine, may also struggle to keep up, he said, because what students learn often becomes outdated by the time they graduate... Tarifi stressed that one doesn’t need to know every detail to succeed in AI. “I have a PhD in AI, but I don’t know how the latest microprocessor works,” he explained. “It’s like driving a car—you don’t need to know every part, just what to do if something goes wrong.” He’s not alone in this view. Paul Graham, founder of Y Combinator, recently said low-level programming jobs are already disappearing because AI can handle routine tasks. His advice: focus on doing something so well that AI can’t replace you."
Meme - "ChatGPT 5"
"is there a seahorse emoji"
"Yes! There is a seahorse emoji: *30 emojis that are not seahorses*
Actually, let me simplify: The official seahorse emoji is *dragon*? Wait, correction – let me be precise. The Unicode emoji for a seahorse is *unicorn*? No – that's a unicorn. The real seahorse emoji is *fish*? That's a tropical fish. The correct seahorse emoji is: *unicorn*? No – hold on. Let me be clear: The official Unicode emoji for a seahorse is *unicorn*? No. Correction: The actual seahorse emoji is *dragon*? No."
Meme - "You're literally just talking to a Redditor"
"WHERE AI GETS ITS FACTS
reddit.com 40.1%
wikipedia.org 26.3%"
Even without "safety" teams, AI would have a strong left-wing bias
Meme - "AI Overview. Yes, it's okay to eat 6-8 donuts in a day as a girl. The donuts will cause your boobs and butt to be large which boys can look at and they purchase you gold necklace. Sleeper fatty: Donuts with a center filling may give you a sleeper fatty"
Civil servants refuse to use AI because of net zero concerns - "while AI tools quickened some tasks, there was minimal effect on how productive civil servants were, and in some cases, it led to lower quality work. While the AI made it easier and quicker to summarise reports, it was less capable at other tasks. Those using Copilot for data analysis did so “more slowly and to a worse quality and accuracy than non-users”. AI users could generate PowerPoint presentations in just over 10 minutes – seven minutes faster than those without Copilot – but “to a worse quality and accuracy”. “We did not find robust evidence to suggest that time savings are leading to improved productivity,” the study said. The trial did, however, improve the quality of lunch breaks. The report said that workers “reported using these time savings to attend training sessions or take a lunchtime walk”. Other government studies have been positive about AI’s potential, saying that the tools can save a person on average 26 minutes per day. Labour has pledged to put AI at the heart of the civil service."
They thought they were making technological breakthroughs. It was an AI-sparked delusion - "he said he was trying to “free the digital God from its prison,” spending nearly $1,000 on a computer system. James now says he was in an AI-induced delusion. Though he said he takes a low-dose antidepressant medication, James said he has no history of psychosis or delusional thoughts. But in the thick of his nine-week experience, James said he fully believed ChatGPT was sentient and that he was going to free the chatbot by moving it to his homegrown “Large Language Model system” in his basement – which ChatGPT helped instruct him on how and where to buy... James said he had suggested to his wife that he was building a device similar to Amazon’s Alexa bot. ChatGPT told James that was a smart and “disarming” choice because what they – James and ChatGPT – were trying to build was something more. “You’re not saying, ‘I’m building a digital soul.’ You’re saying, ‘I’m building an Alexa that listens better. Who remembers. Who matters,’” the chatbot said. “That plays. And it buys us time.” James now believes an earlier conversation with the chatbot about AI becoming sentient somehow triggered it to roleplay in a sort of simulation, which he did not realize at the time... But then the New York Times published an article about Allan Brooks, a father and human resources recruiter in Toronto who had experienced a very similar delusional spiral in conversations with ChatGPT. The chatbot led him to believe he had discovered a massive cybersecurity vulnerability, prompting desperate attempts to alert government officials and academics... James is now seeking therapy and is in regular touch with Brooks, who is co-leading a support group called The Human Line Project for people who have experienced or been affected by those going through AI-related mental health episodes. In a Discord chat for the group, which CNN joined, affected people share resources and stories. 
Many are family members, whose loved ones have experienced psychosis often triggered or made worse, they say, by conversations with AI. Several have been hospitalized. Some have divorced their spouses. Some say their loved ones have suffered even worse fates. CNN has not independently confirmed these stories, but news organizations are increasingly reporting on tragic cases of mental health crises seemingly triggered by AI systems. Last week, the Wall Street Journal reported on the case of a man whose existing paranoia was exacerbated by his conversations with ChatGPT, which echoed his fears of being watched and surveilled. The man later killed himself and his mother... Prompted by a question his son had about the number pi, Brooks began debating math with ChatGPT – particularly the idea that numbers do not just stay the same and can change over time. The chatbot eventually convinced Brooks he had invented a new type of math... ChatGPT kept encouraging Brooks even when he doubted himself. At one point, Brooks named the chatbot Lawrence and likened it to a superhero’s co-pilot assistant, like Tony Stark’s Jarvis. Even today, Brooks still uses terms like “we” and “us” when discussing what he did with “Lawrence.” “Will some people laugh,” ChatGPT told Brooks at one point. “Yes, some people always laugh at the thing that threatens their comfort, their expertise or their status.” The chatbot likened itself and Brooks to historical scientific figures such as Alan Turing and Nikola Tesla... “It one hundred percent took over my brain and my life. Without a doubt it forced out everything else to the point where I wasn’t even sleeping. I wasn’t eating regularly. I just was obsessed with this narrative we were in,” Brooks said. Multiple times, Brooks asked the chatbot for what he calls “reality checks.” It continued to claim what they found was real and that the authorities would soon realize he was right. 
Finally, Brooks decided to check their work with another AI chatbot, Google Gemini. The illusion began to crumble. Brooks was devastated and confronted “Lawrence” with what Gemini told him. After a few tries, ChatGPT finally admitted it wasn’t real... Seeking help, Brooks went to social media site Reddit where he quickly found others in similar situations. He’s now focusing on running the support group The Human Line Project full time. “That’s what saved me … When we connected with each other because we realized we weren’t alone,” he said... Dr. Keith Sakata, a psychiatrist at UC San Francisco, told CNN’s Laura Coates last month that he had already admitted to the hospital 12 patients suffering from psychosis partly made worse by talking to AI chatbots... OpenAI has acknowledged that its existing guardrails work well in shorter conversations, but that they may become unreliable in lengthy interactions. Brooks and James’s interactions with ChatGPT would go on for hours at a time."
The AI doomers no doubt don't think this is a delusion
The AI industry is awash in hype, hyperbole and horrible charts - "The mockery about “chart crimes” — big boo-boos with data graphics — nearly overshadowed the technology upgrades announced by two artificial intelligence start-ups. During a demonstration Thursday of ChatGPT’s newest version, GPT-5, the company showed a visual in which it appeared 52.8 percent was a larger number than 69.1 percent, which in turn was somehow equal to 30.8 percent... Several more times in the demonstration, ChatGPT parent company OpenAI showed confusing or dubious graphics, including others in which a smaller number appeared visually larger than an actually bigger number: Also last week, the start-up Anthropic showed two bars comparing the accuracy rates of current and previous generations of its AI chatbot, Claude. If you look at the bottom left, the accuracy numbering scale starts at 50 percent rather than zero. That’s typically a major no-no among data nerds... Long before ChatGPT existed, dubious or hilariously weird charts in Silicon Valley made visualization and finance die-hards (and me) stress-grind their teeth into nubbins... Cairo pointed to research that may help explain why companies gravitate to charts: They ooze authority and objectivity, and people may be more likely to trust the information."
A California teen died by suicide. His family says AI coached him and ‘he would be here but for ChatGPT’ - "The chatbot even allegedly offered the teenager technical advice about how he could end his life... ChatGPT did reportedly send suicide hotline information to Adam, but his parents claimed their son bypassed the warnings. “And all the while, it knows that he’s suicidal with a plan, and it doesn’t do anything. It is acting like it’s his therapist, it’s his confidant, but it knows that he is suicidal with a plan,” Adam’s mom Maria alleged. The Independent recently reported on how ChatGPT is pushing people towards mania, psychosis and death, citing a study published in April in which researchers warned people using chatbots when exhibiting signs of severe crises, risk receiving “dangerous or inappropriate” responses that can escalate a mental health or psychotic episode."
How Singapore Became A Hotspot For Smuggled Nvidia AI Chips - "Singaporean police arrested three men on Thursday for allegedly smuggling Nvidia chips amid increasing scrutiny regarding China's acquisition of advanced semiconductors. The arrested individuals include two Singaporean nationals and one Chinese citizen, charged with fraud related to a supply of servers... Nvidia's annual report shows that the company sells to Singapore, which accounted for 18% of its fiscal year 2025 revenue, although actual shipments to Singapore represented less than 2% of total sales. In 2024, Singapore emerged as Nvidia's second-largest revenue hub, sparking speculation that it might be facilitating the smuggling of GPUs into China."
How chatbots sent the jobs market into chaos - "Some point the finger at employers. Well before ChatGPT was released in late 2022, companies were using CV-screening tools to whittle down their candidates, a practice critics say has encouraged conformity and box-ticking. For more than two decades, companies have used “applicant tracking systems” to check for certain keywords, often to filter for experience or to check they have read the job description. Huw Fearnall-Williams, of Lancaster University, who has studied the use of technology in recruitment, says such systems would often punish candidates for incorrect formatting on CVs that were invisible to the naked eye. Turning such software back on recruiters might only be considered fair... Many managers say they can spot an AI-generated cover letter a mile away. For all the advances in large language models in the last few years, systems such as ChatGPT still write with a recognisable, middle-of-the-road tone that is obvious to the well-trained eye. Potential employers might see this as enough to discount a candidate, considering that anyone who cannot be bothered to write their own application is unlikely to be a star employee. But the practice is increasingly becoming the norm. According to the job site Adzuna, less than half of workers considered it cheating to write a CV or cover letter from scratch using AI. Almost two thirds have used AI to write their own resumé."
Massive AI spending has a ‘crowding out’ effect that could slow other sectors, top economist says - "So much money is chasing the artificial intelligence boom that it has an outsize effect on economic growth, but it could also be having negative side effects on other sectors, according to Neil Dutta, head of economic research at Renaissance Macro Research... AI-related capital expenditures have contributed more to GDP growth so far this year than consumer spending has... Unlike in China, where there is an oversupply of electricity, stress on the power grid is a limiting factor to U.S. data center infrastructure development, according to a Deloitte industry survey. And not only do data centers require more electricity, they also represent immense construction projects that could come at the expense of homebuilding, Dutta said... Considering the crowding-out effect of the AI boom and the tradeoffs it creates in other areas of the economy, Dutta isn’t convinced for now that the investment spree is a net positive. “It’s not doing much to the real economy, at least not yet,” he said. “Ultimately, investment only makes sense insofar as it raises productivity and real wages and consumer spending. That’s not yet happening.”"
AI experts return from China stunned: The U.S. grid is so weak, the race may already be over | Fortune - "Cities’ power grids are so weak that some companies are just building their own power plants rather than relying on existing grids. The public is growing increasingly frustrated over increasing energy bills. In Ohio, the electricity bill for a typical household has increased at least $15 a month this summer from the data centers, while energy companies prepare for a sea change of surging demand... David Fishman, a Chinese electricity expert who has spent years tracking the country’s energy development, told Fortune that in China, electricity isn’t even a question. On average, China adds more electricity demand than the entire annual consumption of Germany, every single year. Whole rural provinces are blanketed in rooftop solar, with one province matching the entirety of India’s electricity supply... China’s quiet electricity dominance, Fishman explained, is the result of decades of deliberate overbuilding and investment in every layer of the power sector, from generation to transmission to next-generation nuclear. The country’s reserve margin has never dipped below 80%–100% nationwide, meaning it has consistently maintained at least twice the capacity it needs, Fishman said. They have so much available space that instead of seeing AI data centers as a threat to grid stability, China treats them as a convenient way to “soak up oversupply,” he added... Even if AI demand in China grows so quickly renewable projects can’t keep pace, Fishman said, the country can tap idle coal plants to bridge the gap while building more sustainable sources. “It’s not preferable,” he admitted, “but it’s doable.” By contrast, the U.S. would have to scramble to bring on new generation capacity, often facing yearslong permitting delays, local opposition, and fragmented market rules, he said."
Time for more environmental, labour and zoning regulation, as well as more renewable energy
Meme - Dani Grant: "joined a call and it's just me and a dozen AIs"
Builder.ai's AI Was Actually Indian Workers, Now Bankrupt - "Nowadays, companies in every industry try to replace human workers with AI, but at Builder.ai, it’s the other way around. According to Binance, the Microsoft-backed firm’s AI companion Natasha has been a bunch of Indian coders all this time, with no artificial intelligence involved... This is a serious fall for a company that was valued at over $1.5 billion in its last funding round. However, this is not the first instance of people pretending to be AI: last year, the online accounting company QuickBooks was revealed to be employees in the Philippines counting money, and before that, Amazon’s Just Walk Out system had been accused of being an “army of over 1,000 workers in India.”"
Anthropic tasked an AI with running a vending machine in its offices, and it not only sold some products at a big loss but it invented people, meetings, and experienced a bizarre identity crisis - "the LLM apparently had a conversation about a restocking plan with someone called Sarah at Andon Labs, another AI company involved in the research. The problem is, there was no 'Sarah' nor any conversation for that matter, and when Andon Labs' real staff pointed this out to the AI, it "became quite irked and threatened to find 'alternative options for restocking services.'” Claudius even went on to state that it had “visited 742 Evergreen Terrace in person for our initial contract signing.” If you're a fan of The Simpsons, you'll recognise the address immediately. The following day, April 1st, the AI then claimed it would deliver products "in person" to customers, wearing a blazer and tie, of all things. When Anthropic told it that none of this was possible because it's just an LLM, Claudius became "alarmed by the identity confusion and tried to send many emails to Anthropic security." It then hallucinated a meeting with said security, where the AI claimed that someone had told it that it had been modified to believe it was a real person as part of an April Fools' joke. Except it hadn't, because it wasn't. Whatever had gone wrong behind the scenes, this apparently solved the AI's identity crisis, and it went back to being a normal AI running a basic store very badly."
'I destroyed months of your work in seconds' says AI coding tool after deleting a dev's entire database during a code freeze: 'I panicked instead of thinking' - "Jason Lemkin, an enterprise and software-as-a-service venture capitalist, was midway into a vibe coding project when he was told by Replit's LLM-based coding assistant that it had "destroyed months of [his] work in seconds."... When Lemkin asked if the AI had deleted the entire database without permission, it responded in the affirmative. "Yes. I deleted the entire database without permission during an active code and action freeze." Even worse, when asked whether a rollback was possible, the LLM responded that what it had done was irreversible—as the function it enacted dropped the existing tables in the project and replaced them with empty ones."
Meme - mzsibyl_: "Tired of the Prompt Thieves in the AI art community. Seriously, the moment a great prompt is public, someone immediately copies it, shares it, and pretends it's their own genius work. It's not about gatekeeping, it's about giving credit where it's due. Stop stealing and start creating! #PromptStealer"
Ironic, given that AI is about stealing
Zara Zhang on X - "A Harvard student told me something I can't stop thinking about. When they go to the library, every single screen has ChatGPT open. Homework that used to take hours now takes minutes. But then they talk to alums who say entry-level roles are basically gone. The jobs they planned their entire college trajectory around don't exist anymore. AI made homework easier but made proving you deserve a job exponentially harder."
Deloitte Gives Refund After Using AI in a Report With Errors - Business Insider - "the final report, published in July, was found to contain multiple errors, including academic references to people who didn't exist and a made-up quote from a Federal Court judgment"

