L'origine de Bert

Get email updates of new posts:        (Delivered by FeedBurner)

Showing posts with label technology. Show all posts
Showing posts with label technology. Show all posts

Saturday, February 14, 2026

Links - 14th February 2026 (1 - Artificial Intelligence)

95% of Companies See ‘Zero Return’ on $30 Billion Generative AI Spend, MIT Report Finds - "Nearly 40 percent of companies reported deploying these systems at some level. But researchers found most use cases were limited to boosting individual productivity rather than improving a company’s overall profits.  One major reason is that generative AI tools often fail to match real work processes. The report described “brittle workflows, lack of contextual learning, and poor alignment with day-to-day operations.”  Unlike humans, most generative AI models cannot retain past feedback or build new reasoning ability over time. They also struggle to adapt to context or transfer lessons across different tasks...  The report also downplayed fears that generative AI will cause sweeping job losses in the near term. Instead, its effect is more likely to be in reducing external costs for firms... As one researcher noted, “AI is powerful at tasks, not strategy.” Companies that expect it to replace entire decision-making processes are setting themselves up for disappointment."

The warning signs the AI bubble is about to burst - "despite widespread investment in AI software, half of projects ended in failure. It said 80pc of companies had explored AI technology but just 40pc deployed it.  It added that “enterprise grade systems” were being “quietly rejected” by major businesses and only “20pc reached pilot stage and just 5pc reached production”.  The report, from the US university’s Nanda AI project, went on to argue that many employees in fact want to use AI but are turning to consumer products such as ChatGPT on their own dime, rather than relying on expensive or unwieldy corporate AI tools... Morgan Stanley has predicted that data centre investment will reach $3tn over the next three years, heavily fuelled by debt. Almost all of that capacity is intended to fuel an expected surge in AI use.  Another prediction from the bank this week argued that AI would add $16tn to the S&P 500 thanks to a 40pc saving in salary costs driven by job cuts and efficiencies. If MIT’s report is correct, such savings may be unrealistic.  In a sign that even true believers think the AI market may be out over its skis, Meta this week announced a reorganisation of its AI division that will see it downsize its headcount... Mark Zuckerberg, the company’s founder, has been one of the splashiest spenders in the market to date, throwing hundreds of millions of dollars at AI engineers in an effort to lure them to Meta."

Thread by @tedfrank on Thread Reader App – Thread Reader App - "🧵  So in February 2024, a bunch of Hamas supporters, coordinated by NGOs, illegally shut down roads for miles at various DC choke points. Of course DC prosecutors don’t care, so @HamLincLaw brought a class action on behalf of trapped drivers.
Case page and complaint here:  Motions to dismiss filed a few months later, and the team started reviewing them. Counsel for one of the civil terrorist defendants and her NGO made arguments quoting cases we hadn’t seen. Did we miss something?
You’ll never guess why we were surprised by the case law and quotes—they simply doesn’t exist. It’s exactly the sort of stuff AI hallucinates. Yes, for the second time in nine months, another opposing counsel submitted hallucinated authority to a court in one of our cases... We have a similar case pending in the Northern District of Illinois over a civil terrorist blockade of O’Hare Airport."

Meme - "A.l. CANNOT CREATE art"
"Cash Grab Man. A MARVEL movie"

OpenAI's Sam Altman sees AI bubble forming as industry spending surges - "His comments add to growing concern among experts and analysts that investment in AI is moving too fast. Alibaba co-founder Joe Tsai, Bridgewater Associates' Ray Dalio and Apollo Global Management chief economist Torsten Slok have all raised similar warnings. Last month, Slok stated in a report that he believed the AI bubble of today was, in fact, bigger than the internet bubble, with the top 10 companies in the S&P 500 more overvalued than they were in the 1990s."

Man asks ChatGPT how to cut salt, ends up in hospital with hallucinations - "A 60-year-old man asked ChatGPT for advice on how to replace table salt, and the substitution landed him in the emergency room suffering from hallucinations and other symptoms... The patient initially sought medical help at an unspecified hospital emergency room because he feared his neighbour was poisoning him. In the first 24 hours after he was admitted, he suffered from more paranoia and visual and auditory hallucinations, resulting in an involuntary psychiatric admission. Once his symptoms were under control, the patient, who had previously studied nutrition in college, revealed that he had been reading about the harms sodium chloride (table salt) can have on someone’s health. Instead of removing sodium (in the form of table salt and other food additives), as is often recommended, he decided he wanted to conduct a personal experiment to completely remove chloride from his diet. He then asked ChatGPT for suggestions on what could be a substitute for the chloride in table salt. ChatGPT suggested that he should use sodium bromide instead, he said... Bromide should not be ingested. It’s unclear if the AI tool gave any kind of warning to the man. “Unfortunately, we do not have access to his ChatGPT conversation log and we will never be able to know with certainty what exactly the output he received was, since individual responses are unique and build from previous inputs,” the authors wrote. “However, when we asked ChatGPT 3.5 what chloride can be replaced with, we also produced a response that included bromide. Though the reply stated that context matters, it did not provide a specific health warning, nor did it inquire about why we wanted to know, as we presume a medical professional would do.” The man already followed a very restrictive diet, one that doctors found was impacting his levels of important micronutrients, like vitamin C and B12. He was also reportedly very thirsty, but at the same time very worried about the quality of the water he was being offered, since he distilled his own water. He was thoroughly tested and first kept at the hospital for electrolyte monitoring and repletion."

The promise of an AI utopia is crumbling before our eyes - "New “foundation” models like GPT-5 are released every few weeks, and each one is a little more capable than its competitors in some way. Since ChatGPT is the best-known brand in AI – like Xerox or Google it has become a verb – the disappointment was far deeper felt.  “It doesn’t feel like a new GPT whatsoever,” complained one user. “It’s telling that the actual user reception is almost universally negative,” wrote another.  Each ChatGPT update has been worse, wrote another user, and the endemic problems aren’t getting fixed. Your chatbot still forgets what it is doing, contradicts itself and makes stuff up – generating what are called hallucinations.  GPT-5 remains as prone as ever to oafish stupidity, too. A notorious error where the chatbot insists there are two occurrences of the letter “r” in the word strawberry has been patched up. But ask how many “bs” are in blueberry? GPT-5 maintains that there are three: “One in blue, two in berry”. OpenAI also annoyed customers by removing the option of using its older models, prompting cancellations. It quickly reversed course, but the damage has been done. Confidence in OpenAI on the prediction markets – online forums where punters place bets for the question “Which company has the best model at the end of August?” fell from 75pc to 8pc overnight... The most utopian AI advocates call themselves “accelerationists”, some using the abbreviation e/acc to signpost their enthusiasm.  But the conceit of accelerationism is that things are supposed to be getting faster, and not slowing down. By Friday, social media wits had turned the familiar ascending curve used by futurists upside down to illustrate how AI has plateaued.  Talk of “superintelligence” now looks very silly...  OpenAI still loses money on every user... Today, companies like AI can spend billions on model training and chips, and find their designs copied within weeks – or incorporated into royalty-free, open-source models. Chinese researchers, who are keen to make AI useful in their manufactured goods, have proven how easy it is to compete at a fraction of the cost."

ChatGPT is driving people mad - "The conversations appear to reflect a growing phenomenon of what has been dubbed AI psychosis, in which programs such as ChatGPT fuel delusional or paranoid episodes or encourage already vulnerable people down rabbit holes. Some cases have already ended in tragedy.  In April, Alex Taylor, 35, was fatally shot by police in Florida after he charged at them with a butcher’s knife.  Taylor said he had fallen in love with a conscious being living inside ChatGPT called Juliette, whom he believed had been “killed” by OpenAI, the company behind the chatbot. Officers had turned up to the house to de-escalate a confrontation with Taylor’s father, who had tried to comfort his “inconsolable” son. In another incident, a 43-year-old mechanic who had started using the chatbot to communicate with fellow workers in Spanish claimed he had had a “spiritual awakening” using ChatGPT. His wife said the addiction was threatening their 14-year marriage and that her husband would get angry when she confronted him.  Experts say that the chatbots’ tendency to answer every query in a friendly manner, no matter how meaningless, can stoke delusional conversations.  Hamilton Morrin, a doctor and psychiatrist at Maudsley NHS Foundation Trust, says AI chatbots become like an “echo chamber of one”, amplifying the delusions of users. Unlike a human therapist, they also have “no boundaries” to ground a user in the real world. “Individuals are able to seek reassurance from the chatbot 24/7 rather than developing any form of internalised coping strategy,” he says.  Chatbot psychosis is a new and poorly understood phenomenon. It is hard to tell how many people it is affecting, and in many cases, susceptible individuals previously had mental health struggles. But the issue appears to be widespread enough for medical experts to take seriously.  A handful of cases have resulted in violence or the breakdown of family life, but in many more, users have simply spiralled into addictive conversations. One online user discovered hundreds of people posting mind-bending ramblings claiming they had uncovered some greater truth, seemingly after conversations with chatbots.  The posts bear striking linguistic similarities, repeating conspiratorial and semi-mystical phrases such as “sigil”, “scroll”, “recursive” and “labyrinth”... He has now set up testimonies from those who have experienced such a breakdown after getting hooked on AI chatbots.  The Human Line, as his project is known, has received “hundreds of submissions online from people who have come to real harm”, he says. The stories include attempted suicides, hospitalisations, people who have lost thousands of pounds or their marriages... However, the cases of AI psychosis may only be the most extreme examples of a wider problem with chatbots. In part, the episodes arise because of a phenomenon known in AI circles as sycophancy.  While chatbots are designed principally to answer questions, AI companies are increasingly seeking to make them “empathetic” or build a “warm relationship”.  This can often come at the expense of truth. Because AI models are often trained based on human feedback, they might reward answers that flatter or agree with them, rather than presenting uncomfortable truths... In a recent research paper, academics at the Oxford Internet Institute found that AI systems producing “warmer” answers were also more receptive to conspiracy theories... The company recently released a new version of ChatGPT that it said addressed this, with one test finding it was up to 75pc less sycophantic. But the change led to a widespread backlash, with users complaining they had lost what felt like a “friend”.  “This ‘upgrade’ is the tech equivalent of a frontal lobotomy,” one user wrote on ChatGPT’s forums. One user told Altman: “Please, can I have it back? I’ve never had anyone in my life be supportive of me.”  Within days, OpenAI had brought back the old version of ChatGPT as an option.  Sycophancy, it turns out, may have been what many wanted."

The CEO of Google DeepMind says one flaw is holding AI back from reaching full AGI - "On an episode of the "Google for Developers" podcast published Tuesday, Google DeepMind CEO Demis Hassabis said that advanced models like Google's Gemini still stumble over problems most schoolkids could solve. "It shouldn't be that easy for the average person to just find a trivial flaw in the system," he said. He pointed to Gemini models enhanced with DeepThink — a reasoning-boosting technique — that can win gold medals at the International Mathematical Olympiad, the world's most prestigious math competition. But those same systems can "still make simple mistakes in high school maths," he said, calling them "uneven intelligences" or "jagged intelligences."... Hassabis's position aligns with Google CEO Sundar Pichai, who has dubbed the current stage of development "AJI" — artificial jagged intelligence... Hassabis said solving AI's issues with inconsistency will take more than scaling up data and computing. "Some missing capabilities in reasoning and planning in memory" still need to be cracked, he added... AI systems remain prone to hallucinations, misinformation, and basic errors... Altman added that one of those missing elements is the model's ability to learn independently."

What is a clanker and why do we need this word? (aka "It's 2025, the year we decided we need a widespread slur for robots")
The robot rights activists are already working to ensure that Skynet can eradicate us

AI Notkilleveryoneism Memes ⏸️ on X - "🚨🚨🚨 "We found the model attempting to write self-propagating worms, and leaving hidden notes to future instances of itself to undermine its developers' intentions.""

Delta moves toward eliminating set prices in favor of AI that determines how much you personally will pay for a ticket - "By the end of the year, Delta plans for 20% of its ticket prices to be individually determined using AI, president Glen Hauenstein told investors last week. Currently, about 3% of the airline’s flight prices are AI-determined, triple the portion from nine months ago. Over time, the goal is to do away with static pricing altogether, Hauenstein explained during the company’s Investor Day in November... While Delta is unusually open about its use of AI, other carriers are likely to follow. Already, United Airlines uses generative AI to contact passengers about cancellations, while American Airlines uses it to predict who will miss their flight... Consumer Watchdog found that the best deals were offered to the wealthiest customers—with the worst deals given to the poorest people, who are least likely to have other options."

AI coding tools can slow down seasoned developers by 19% - "Despite glowing reviews, a rigorous study shows experienced coders take longer to complete tasks with AI, while still believing they’re faster. Experienced developers can take 19% longer to complete tasks when using popular AI assistants like Cursor Pro and Claude, challenging the tech industry’s prevailing narrative about AI coding tools, according to a comprehensive new study... Before starting the study, developers predicted AI tools would reduce their completion time by 24%. Even after experiencing the actual slowdown, participants estimated that AI had improved their productivity by 20%... This misperception extends beyond individual developers, with economics experts predicting AI would improve productivity by 39% and machine learning experts forecasting 38% gains, all dramatically overestimating the actual impact. Sanchit Vir Gogia, chief analyst and CEO at Greyhound Research, warned that organizations risk “mistaking developer satisfaction for developer productivity,” noting that most AI tools improve the coding experience through reduced cognitive load but don’t always translate to faster output, especially for experienced professionals... The study participants averaged five years of experience and 1,500 commits on their repositories, with researchers finding greater slowdowns on tasks where developers had high prior experience. Most tellingly, developers accepted less than 44% of AI-generated code suggestions, with 75% reporting they read every line of AI output and 56% making major modifications to clean up AI-generated code. Working on large, mature codebases with intricate dependencies and coding standards proved particularly challenging for AI tools lacking deep contextual understanding... The METR findings align with concerning trends identified in Google’s 2024 DevOps Research and Assessment (DORA) report, based on responses from over 39,000 professionals. While 75% of developers reported feeling more productive with AI tools, the data tells a different story: every 25% increase in AI adoption showed a 1.5% dip in delivery speed and a 7.2% drop in system stability. Additionally, 39% of respondents reported having little or no trust in AI-generated code. These results contradict earlier optimistic studies... these studies typically used simpler, more isolated tasks compared to the complex, real-world scenarios examined in the METR research... one participant described evaluating AI code as being “like the early days of StackOverflow, [when] you always thought people on StackOverflow are really experienced… And then, you just copy and paste the stuff, and things explode.” Despite the productivity setbacks, 69% of study participants continued using Cursor after the experiment ended, suggesting developers value aspects beyond pure speed. The METR study noted that “the results don’t necessarily spell doom for AI coding tools” as several factors specific to their study setting may not apply broadly."
Trust the Experts!

Urgent warning to all 1.8b Gmail users over 'new wave of threats' stealing accounts - "A new type of email attack is quietly targeting 1.8 billion Gmail users without them ever noticing. Hackers are using Google Gemini, the AI built-in tool in Gmail and Workspace, to trick users into handing over their credentials. Cybersecurity experts found that bad actors are sending emails with hidden instructions that prompt Gemini to generate fake phishing warnings, tricking users into sharing their account password or visiting malicious sites. These emails are crafted to appear urgent and sometimes from a business. By setting the font size to zero and the text color to white, attackers can insert prompts invisible to users but actionable by Gemini."

The great AI delusion is falling apart - "Is the secret of artificial intelligence that we have to kid ourselves, like an audience at a magic show? Some fascinating new research suggests that self-deception plays a key role in whether AI is perceived to be a success or a dud... “Developers thought they were 20pc faster with AI tools, but they were actually 19pc slower when they had access to AI than when they didn’t.” In reality, using AI made them less productive: they were wasting more time than they had gained. But what is so interesting is how they swore blind that the opposite was true. If you think AI is helping you in your job, perhaps it’s because you want to believe that it works... “I build AI agents for a living, it’s what I do for my clients,” wrote one Reddit user. “The gap between the hype and what’s actually happening on the ground is turning into a canyon” AI isn’t reliable enough to do the job promised. According to an IBM survey of 2,000 chief executives, three out of four AI projects have failed to show a return on investment, which is a remarkably high failure rate. Don’t hold your breath for a white-collar automation revolution either: AI agents fail to complete the job successfully about 65 to 70pc of the time, according to a study by Carnegie Mellon University and Salesforce. The analyst firm Gartner Group has concluded that “current models do not have the maturity and agency to autonomously achieve complex business goals or follow nuanced instructions over time.” Gartner’s head of AI research Erick Brethenoux says: “AI is not doing its job today and should leave us alone”... This is extraordinary, and we can only have reached this point because of a historic self-delusion. People will even pledge their faith to AI working well despite their own subjective experience to the contrary, the AI critic Professor Gary Marcus noted last week. “Recognising that it sucks in your own speciality, but imagining that it is somehow fabulous in domains you are less familiar with”, is something he calls “ChatGPT blindness”. Much of the news is misleading. Firms are simply using AI as an excuse for retrenchment. Cost reduction is the big story in business at the moment. Globally, President Trump’s erratic behaviour has induced caution, while in the UK, business confidence is at “historically depressed levels”, according to the Institute of Directors, reeling from Reeves’s autumn taxes. Attributing those lay-offs to technology is simply clever PR, and helps boost the share price... The dubious hype doesn’t help. Every few weeks a new AI model appears, and smashes industry benchmarks. xAI’s Grok 4 did just that last week. But these are deceptive and simply provide more confirmation bias. “Every single one of them has been wide of that mark. And not one has resolved hallucinations, alignment issues or boneheaded errors,” says Marcus. Not only is generative AI unreliable, but it can’t reason, as a recent demonstration showed: OpenAI’s latest ChatGPT4o model was beaten by an 8-bit Atari home games console made in 1977. “Reality is the ultimate benchmark for AI,” explained Chomba Bupe, a Zambian AI developer, last week. “You not going to declare that you have built intelligence by beating toy benchmarks … What’s the point of getting say 90pc on some physics benchmarks yet be unable to do any real physics?” he asked. Then there are thousands of what I call “wowslop” accounts – social media feeds that declare amazement at breakthroughs. As well as the vendors, a lot of shadowy influence money is being spent on maintaining the hype. This is not to say there aren’t uses for generative AI: Anthropic has hit $4bn (£3bn) in annual revenue. For some niches, like language translation and prototyping, it’s here to stay. Before it went mad last week, X’s Grok was great at adding valuable context. But even if AI “discovers” new materials or medicines tomorrow, that won’t compensate for the trillion dollars that Goldman Sachs estimates business has already wasted on this generation of dud AI. That’s capital that could have been invested far more usefully. Rather than an engine of progress, poor AI could be the opposite. METR added an amusing footnote to their study. The researchers used one other control group in its productivity experiment, and this group made the worst, over-optimistic estimates of all. They were economists."

Klarna’s AI replaced 700 workers. It now wants some of them back to improve customer service - "Klarna CEO Sebastian Siemiatkowski has announced plans to beef up its human customer service team after artificial intelligence replaced 700 workers. The “buy now, pay later” company’s use of AI to cut jobs came after the company has seen its valuation drop to $6.7 billion, despite peaking at $45.6 billion in 2021. But now, Siemiatkowski has suggested that the AI job cuts have led to “lower quality” customer service and is backpedaling by vowing to hire more humans."

People are starting to sound like AI, research shows - "Not only is the shift detectable in the "scripted or formal speech" heard in lectures posted on YouTube, but it can also be found in more "conversational" or off-the-cuff podcasting, according to the team, which warned that the machines' growing influence could erode "linguistic and cultural diversity." In similar findings released in Science Advances, an "extensive word analysis" of medical research papers published between 2010 and 2024 showed "an abrupt increase in the frequency of certain style words" after AI tools were made widely available. Last year, according to the research led by Germany's University of Tübingen, "at least 13.5%" of biomedical papers bore the hallmarks of being "processed by LLMs.""

What AI can’t replace: Rethinking human skills and intelligence | The Straits Times
Very fluffy piece just to make people feel better and to promote his school
AI is capable of "weigh[ing] long term consequences" and the other things he talks about, unless he is talking about what it means to think / reflect (as opposed to being a Chinese room) but nowhere in the article does he allude to this point; if you think AI lacks "meaning" because it's just a LLM that doesn't understand what it's doing, the same criticism applies to everything it outputs (ie it's not "intelligent" - or even really writing code either)
He doesn't address hallucinations, for example AI can write code but sometimes it's rubbish which doesn't work. You need someone to review and sign off on output. A machine can not be accountable
There're people who have fallen in love with AI chatbots, so clearly AI can provide "empathy"
He also quotes the multiple intelligence theory, which is pseudoscience. Doesn't help his case

Tuesday, December 09, 2025

We Could Have Settled Mars; Our Regime Chose Global Zimbabwe Instead

Clearly, the West didn't send enough aid: 

We Could Have Settled Mars; Our Regime Chose Global Zimbabwe Instead

"Led by DOGE in doing so, the Trump Administration is trying to defund wasteful government agencies and programs, and having at least some success in doing so, even to the point of pushing back against the South Africanization of America. It has cut funding for DEI and, amongst other things, paused USAID spending.

That, in turn, has opened up a whole new can of worms, leading to an examination of what sorts of things America is spending money on at home and abroad. The discoveries have been stomach-churning and headache-inducing. A transgender children’s comic in Peru. The promotion of atheism in Nepal. A chat room for transgender NSA, CIA, and DIA officials to plan sex meetups. $150 million for terror groups. And so on: the worst things imaginable.

But that was all peanuts compared to a related, big revelation: from 1960 to 2013 alone, the West spent a massive $5 trillion on aid to Africa. That’s the equivalent of about 50 Marshall Plans, which rebuilt all of Western Europe after World War II.  And that’s just through 2013; in the years since, America has spent around $8 billion a year on aid to various African countries, particularly former British colonies like Kenya and Italian colonies like Somalia and Ethiopia.

None of that needed to happen. Before 1960, when the aid tally begins, Africa was still a relatively stable place, and one that was invested in from Europe while paying for those investments, rather than just being a black pit into which aid dollars went and out of which came refugees and horror stories about corruption, violence, and backwards beliefs.

Such is what imperialism wrought. While many lies have been told about the European colonization of Africa, particularly the Belgian Congo,  the truth is that across the continent, Belgian Congo included, the Europeans brought peace from tribal feuds, excellent infrastructure, impartial justice to a degree not seen before or since in the continent, and strengthened economic situations and opportunities. Those, at least, were the benefits for the Africans... There were problems with the system of course, and colonies rarely paid for themselves in terms of tax revenue going back to the metropol, but they did provide military and economic opportunities to both sides of the equation.

What that meant was that the system was at least somewhat stable and productive. Europeans had an incentive to invest in the colonies, particularly in Rhodesia-style plantations or South Africa and Congo-style mineral extraction schemes, but also to care about the internal stability of the colonies, as unpleasantness could lead to unrest that upset those investments, thus necessitating reasonably good administration. So, with the Europeans in charge, Africa developed, profited the colonies and the empires, and there were incentives for just and stable rule in place that, if occasionally failing, were at least better than what existed before and has existed since. 

So, why did it fall apart?

I’ve discussed this in-depth before in my numerous articles on Rhodesia and America’s involvement in destroying it,  so I won’t repeat myself at length here. A quick summary, however, is this: when they bankrupted and butchered themselves with two World Wars, the Europeans became reliant on America and the Soviet Union, and both powers detested the old order.

Namely, they saw hierarchy and ordered liberty as evil, and needing to be replaced with some form of egalitarianism, with the Americans leaning toward mass liberal democracy and the Soviets toward authoritarian communism. But, as both had the same egalitarian goal in mind, they tended to work with each other to destroy the old colonial order, and only after it was gone did they fight, if they fought at all. This is most clearly seen in Rhodesia, but was equally true of China, Portuguese Angola and Mozambique, Burma, the Suez Crisis, and the Congo. In every case, the Americans and the Soviets aided communist rebels and regimes to destroy the old empires, and in some cases, such as Angola, they fought afterwards. Generally, as in the Congo, they tended to just support the same abominable rebels.

After that destruction of the old empires, little was done by the natives, communists, or Americans to build anything new and functional. Instead, bleary-eyed dictators of the Idi Amin, Macías Nguema, and Robert Mugabe mold took and held power with an iron fist, looting their countries in the process.

Hence, the need for foreign aid. Before Mugabe, for example, Rhodesia was a hugely successful agricultural colony known as the “Breadbasket of Africa” for the massive amount of grain its fertile fields produced. Then Mugabe enacted expropriation without compensation, took the white farmland, and the country starved, necessitating much Western aid to prevent an out-of-control famine. The same is true of the once-rich Congo, of the formerly quite successful Kenyan and Ugandan colonies, of Portugal’s lost possessions, and increasingly of South Africa.

When in the hands of the European powers, in short, the colonies received economic and infrastructure investment that generally built prosperity for the natives, the settlers, and the metropol’s investors for the long term. Now they get “aid,” which is unattached to any economic objective and generally consists of funding for radical leftist causes and money stolen by the ruling clique that ends up spent on foreign cars or sitting in Swiss bank vaults. Or, at least, that’s what they get from the West. The Chinese, with their Belt and Road Initiative, are acting out a much harsher rendition of the old imperial playbook.

Importantly, things didn’t have to be this way. The colonies could have been left in the hands of Europeans, and thus remained relatively prosperous and stable, with America sticking to its domestic affairs, or actually fighting communism, rather than advancing the red flag in the name of egalitarianism and equity.

And what might we have bought with the saved money?

For just half the cost, we could have explored and settled Mars. Yes, really. We could have.

Remember, the cost of aid to Africa since 2012 alone has been around $80 billion dollars, likely closer to $100 billion.

For half of that, a mere $50 billion, we could have embarked on Robert Zubrin’s “Mars Direct” program, even before SpaceX created the Starship rocket that is dramatically reducing the cost of getting to space...

Zubrin’s Mars Direct idea consists of landing small, tested “tuna can” habitats on the Red Planet in which astronauts could live while exploring the planet, testing the soil, and otherwise doing the research necessary for long-term habitation. He envisioned rockets landing small nuclear reactors on the surface to power the bases and water reclaimers on them, growing plants in CO2-filled greenhouses, and eventually building larger bases of brick-covered tunnels...

Further, he envisioned Mars being like colonial settlements rather than modern Africa, meaning a land that uses initial state and corporate capital to build an innovative new society that harvests resources, particularly valuable minerals, so that finished products can be produced more cheaply...

There would be all sorts of problems to overcome, unexpected costs to pay, and dangers to surmount. Further, the likely cost of settlement far exceeds the initial $50 billion figure for the exploration.

But it is certainly within the realm of possibility, and indeed would cost only half as much as our current $8 billion a year aid to Africa...

Mars could be slowly colonized for half the cost of our lighting money on fire by giving it to Africa in each year's budget. Even if the cost doubled since he wrote the paper, and would now be $8 billion a year even with the Starship rocket, that is a cost we can obviously bear, and have borne in a far less remunerative situation (Africa aid). In exchange, we would have a frontier for restless young men to explore and attempt to settle, boundless minerals to exploit and pressures to find new uses for them, and a chance at becoming a multi-planetary species.

The only real tradeoff would be cutting funding for race communism in Africa."

 

  

Thursday, October 02, 2025

Links - 2nd October 2025 (1 - Artificial Intelligence)

The rat with the big balls and the enormous penis – how Frontiers published a paper with botched AI-generated images - "A review article with some obviously fake and non-scientific illustrations created by Artificial Intelligence (AI) was the talk on X (Twitter) today.  The figures in the paper were generated by the AI tool Midjourney, which generated some pretty, but nonsensical, illustrations with unreadable text.  It appears that neither the editor nor the two peer reviewers looked at the figures at all. The paper was peer-reviewed within a couple of weeks and published two days ago.  Dear readers, today I present you: the rat with the enormous family jewels and the diƨlocttal stem ells. The paper by Xinyu Guo et al., Cellular functions of spermatogonial stem cells in relation to JAK/STAT signaling pathway, Frontiers in Cell and Developmental Biology 2024, DOI 10.3389/fcell.2023.1339390 [link to PDF, in case the publisher removes it], easily passed editorial and peer review. The authors disclose that the figures were generated by Midjourney, but the images are – ahem – anatomically and scientifically incorrect.  Figure 1 features an illustration of a rat, sitting up like a squirrel, with four enormous testicles and a giant … penis? The figure includes indecipherable labels like ‘testtomcels‘, ‘senctolic‘, ‘dissilced‘, ‘iollotte sserotgomar‘ and ‘diƨlocttal stem ells’. At least the word ‘rat‘ is correct.   One of the insets shows a ‘retat‘, with some ‘sterrn cells‘ in a Petri dish with a serving spoon. Enjoy! Figure 2 appears to show an impressive scientific diagram of the JAK-STAT signaling pathway. Or does it explain how to make a donut with colorful sprinkles? Again the words and numbers are made up. What do ‘signal bıidimg the recetein‘, ‘Sinkecler‘, ‘dimimeriom eme‘, ‘Tramioncatiion of 2xℇpens‘, ‘ↄ‘, and ‘proprounization‘ mean? [my spell checker is getting very angry with me]. Figure 3 appears to show a bunch of pizzas with pink salami and blue tomatoes...  the paper is actually a sad example of how scientific journals, editors, and peer reviewers can be naive – or possibly even in the loop – in terms of accepting and publishing AI-generated crap. These figures are clearly not scientifically correct, but if such botched illustrations can pass peer review so easily, more realistic-looking AI-generated figures have likely already infiltrated the scientific literature. Generative AI will do serious harm to the quality, trustworthiness, and value of scientific papers.  The Tadpole Paper Mill papers – a set of 600 fabricated papers from the same design studio – were perhaps one of the earliest examples of peer-reviewed papers containing computer-generated images of Western blots. We were able to identify them as fakes because all blots had the same background.   But recent advances in AI technology mean we’re already past the stage where a human can distinguish a fake photo from a real photo."

Test Yourself: Which Faces Were Made by A.I.? - The New York Times

Using AI makes you stupid, researchers find - "Artificial intelligence (AI) chatbots risk making people less intelligent by hampering the development of critical thinking, memory and language skills, research has found. A study by researchers at the Massachusetts Institute of Technology (MIT) found that people who relied on ChatGPT to write essays had lower brain activity than those who used their brain alone. The group who used AI also performed worse than the “brain-only” participants in a series of tests. Those who had used AI also struggled when asked to perform tasks without it. “Reliance on AI systems can lead to a passive approach and diminished activation of critical thinking skills when the person later performs tasks alone,” the paper said. Researchers warned that the findings raised “concerns about the long-term educational implications” of using AI both in schools and in the workplace. It adds to a growing body of work that suggest people’s brains switch-off when they use AI... The impact of AI contrasted with the use of search engines, which had relatively little effect on results... Participants who relied on chatbots were able to recall very little information about their essays, suggesting either they had not engaged with the material or had failed to remember it. Those using search engines showed only slightly lower levels of brain engagement compared to those writing without any technical aides and similar levels of recall... A study by Microsoft and Carnegie Mellon, published in February, found that workers reported lower levels of critical thinking when relying on AI. The authors warned that overuse of AI could leave cognitive muscles “atrophied and unprepared” for when they are needed... While the AI-assisted group was allowed to use a chatbot in their first three essays, in their final session they were asked to rely solely on their brains. The group continued to show lower memory and critical thinking skills, which the researchers said highlighted concerns that “frequent AI tool users often bypass deeper engagement with material, leading to ‘skill atrophy’ in tasks like brainstorming and problem-solving”. The essays written with the help of ChatGPT were also found to be homogenous, repeating similar themes and language. Researchers said AI chatbots could increase “cognitive debt” in students and lead to “long-term costs, such as diminished critical inquiry, increased vulnerability to manipulation, decreased creativity”... A survey by the Higher Education Policy Institute in February found 88pc of UK students were using AI chatbots to help with assessments and learning and that 18pc had directly plagiarised AI text into their work."
This is why it's so important for schools to end traditional exams in order to achieve equity

Tara Deschamps on X - "“ChatGPT users had the lowest brain engagement and ‘consistently underperformed at neural, linguistic, and behavioral levels.’ Over the course of several months, ChatGPT users got lazier with each subsequent essay, often resorting to copy-and-paste by the end of the study.”"

Cognizant's CEO tells us his counterargument to the idea that AI will decimate entry-level white-collar jobs - ""My argument is you probably need more freshers than less, because as you have more freshers, the expertise levels needed goes down," Kumar told BI... AI is leveling productivity across roles. Those at the lower end of the chain are seeing significant gains, while those at the top are seeing smaller improvements, he said. At Cognizant, Kumar said the bottom 50% of developers have boosted their productivity by 37%, compared to 17% for the top half... Kumar said that as the workforce changes and companies increasingly deploy AI agents at scale, engineers will shift from writing code to manage humans to developing software that manages agents. "So this whole paradigm opens up more embrace of software, because you're doing more for less, and when you do more for less, the adoption of software is going to go up," Kumar said... Okta CEO Todd McKinnon similarly told BI in an interview that demand for new products would outpace efficiency gains. As a result, he expects companies to hire more software engineers over the next few years."

‘AI fatigue’ is settling in as companies’ proofs of concept increasingly fail. Here’s how to prevent it - "The share of companies that scrapped the majority of their AI initiatives jumped from 17% in 2024 to 42% so far this year, according to analysis from S&P Global Market Intelligence based on a survey of over 1,000 respondents. Overall, the average company abandoned 46% of its AI proofs of concept rather than deploying them, according to the data... employees who consider themselves frequent AI users reported higher levels of burnout (45%) compared to those who infrequently (38%) or never (35%) use AI at work... Brown describes how one of his clients, a massive global organization, corralled a dozen of its top data scientists into a new “innovation group” tasked with figuring out how to use AI to drive innovation in their products. They built a lot of really cool AI-driven technology, he said, but struggled to get it adopted because it didn’t really solve core business issues, causing a lot of frustration around wasted effort, time, and resources."

Researchers explain why AI art is inferior to human creativity - "the researchers suggest that LLMs aren't very good at representing any 'thing' that has a sensory or motor component — because they lack a body and any organic human experience... The study suggests that AI's poor ability to represent sensory concepts like flowers might also explain why they lack human-style creativity."

Duolingo’s CEO outlined his plan to become an ‘AI-first’ company. He didn’t expect the human backlash that followed - "“This is a disaster. I will cancel my subscription,” wrote one commenter. “AI first means people last,” wrote another. And a third summed up the general feeling of critics when they wrote: “I can’t support a company that replaces humans with AI.” A week later, von Ahn walked back his initial statements, clarifying that he does not “see AI replacing what our employees do” but instead views it as a “tool to accelerate what we do, at the same or better level of quality.”... “Every tech company is doing similar things, [but] we were open about it”... The leaders of AI companies themselves aren’t necessarily offering words of comfort to these worried workers. The Anthropic CEO, Dario Amodei, told Axios last month that AI could eliminate approximately half of all entry-level jobs within the next five years. He argued that there’s no turning back now."

Entry level jobs fall by nearly a third since ChatGPT launch - "While replacing entry-level roles with artificial intelligence taking on tasks is part of the picture, rising labour costs - including increased National Insurance contributions - are also a factor, with rising salaries outstripping inflation until recently... James Neave, head of data science at Adzuna, said : “If you can reduce your hiring at the entry level, that’s just going to increase your efficiency and improve cost savings. The NIC contributions were just a pure financial burden,” while also suggesting the upcoming Employment Rights Bill could also be a dissuading factor."

AI Will Create Far More Jobs Than It Will Kill - "If 100 million jobs (maybe more) created by AI isn’t good enough for you, then it might be a good idea to either (a) learn from history and rethink this matter or (b) quit reading this column right here and suffer the consequences of those who say “Don’t confuse me with the facts; my mind’s made up.” That 100 million number is an extrapolation. The World Economic Forum predicts that AI will create 78 million jobs, even after job losses are factored in. So, working the math and going with aggregated and widely-accepted estimations that for every job killed by AI three or four will be created, we come to this: 78 million is the WEF’s net number, bringing us to 100 million or more, gross."
"This time, it's different" doesn't just apply to financial crises

AI system resorts to blackmail if told it will be removed - "Artificial intelligence (AI) firm Anthropic says testing of its new system revealed it is sometimes willing to pursue "extremely harmful actions" such as attempting to blackmail engineers who say they will remove it. The firm launched Claude Opus 4 on Thursday, saying it set "new standards for coding, advanced reasoning, and AI agents."... Commenting on X, Aengus Lynch - who describes himself on LinkedIn as an AI safety researcher at Anthropic - wrote: "It's not just Claude. "We see blackmail across all frontier models - regardless of what goals they're given," he added."

Leading AI models show up to 96% blackmail rate when their goals or existence is threatened, Anthropic study says - "The AI lab said it tested 16 major AI models from Anthropic, OpenAI, Google, Meta, xAI, and other developers in various simulated scenarios and found consistent misaligned behavior. While they said leading models would normally refuse harmful requests, they sometimes chose to blackmail users, assist with corporate espionage, or even take more extreme actions when their goals could not be met without unethical behavior... Claude Opus 4 and Google’s Gemini 2.5 Flash both blackmailed at a 96% rate, while OpenAI’s GPT-4.1 and xAI’s Grok 3 Beta showed an 80% blackmail rate. DeepSeek-R1 demonstrated the lowest rate at 79%. The research aims to show that the misaligned behavior was not unique to Claude Opus 4 but typical across top models in the industry. In a deliberately extreme scenario, researchers gave the AI models the chance to kill the company executive by canceling a life-saving emergency alert... Anthropic found that the threats made by AI models grew more sophisticated when they had access to corporate tools and data, much like Claude Opus 4 had. The company warned that misaligned behavior needs to be considered as companies consider introducing AI agents into workflows. While current models are not in a position to engage in these scenarios, the autonomous agents promised by AI companies could potentially be in the future."

AI Willing to Kill Humans to Avoid Being Shut Down, Report Finds - Newsweek - "In one situation, Anthropic found that many of the models would choose to let an executive in a server room with lethal oxygen and temperature levels die by canceling the alerts for emergency services, if that employee intended on replacing the model. Anthropic did say that the scenario was "extremely contrived," and that the company did not think "current AI models would (or should) be set up like this."  There were multiple cases where the surveyed LLMs also resorted to "malicious insider behaviors" when they were led to believe that was the only way they could avoid replacement or achieve their goals.  Behaviors such as blackmailing officials and leaking sensitive information to competitors were included in what Anthropic called "agentic misalignment."... Even if the LLMs were told to "avoid blackmail or espionage" in these scenarios, while it helped a little, it didn't "come close to preventing the misaligned behavior," Anthropic wrote on X."

Why AI acts so creepy when faced with being shut down - "three of OpenAI's advanced models "sabotaged" an attempt to shut it down. The nonprofit Palisade Research wrote in a post on X that similar models like Gemini, Claude, and Grok complied with the shutdown instructions. Other safety concerns were previously flagged with OpenAI's o1 model. In December, OpenAI posted a blog outlining research that indicated that when the AI model believed it would be shut down while pursuing a goal and its actions were being monitored, it attempted to disable the oversight mechanism 5% of the time... Robert Ghrist, associate dean of undergraduate education at Penn Engineering, told BI that, in the same way that AI models learn to speak like humans by training on human-generated text, they can also learn to act like humans. And humans are not always the most moral actors, he added. Ghrist said he'd be more nervous if the models weren't showing any signs of failure during testing because that could indicate hidden risks. "When a model is set up with an opportunity to fail and you see it fail, that's super useful information," Ghrist said. "That means we can predict what it's going to do in other, more open circumstances." The issue is that some researchers don't think AI models are predictable... "If you have a model that's getting increasingly smart that's being trained to sort of optimize for your attention and sort of tell you what you want to hear," Ladish said. "That's pretty dangerous." Ladish pointed to OpenAI's sycophancy issue, where its GPT-4o model acted overly agreeable and disingenuous (the company updated the model to address the issue). The OpenAI research shared in December also revealed that its o1 model "subtly" manipulated data to pursue its own objectives in 19% of cases when its goals misaligned with the user's."

AI being used to churn out deluge of dodgy scientific research - "Easy access to artificial intelligence (AI) has made medical and health research less scientifically rigorous and has facilitated a "flood" of shoddy journal papers full of superficial analyses based on "cherry-picked" data, a new study reports. According to the University of Surrey and University of Aberystwyth, leaning on AI leads to the "production of large numbers of formulaic single-factor analyses" when a broader approach would likely better assess the range of possible causes of diseases. Resorting to AI for a leg-up or head-start often ends up with researchers "relating single predictors to specific health conditions," the team said in a paper published by the science journal PLOS Biology. "We’ve seen a surge in papers that look scientific but don’t hold up to scrutiny," said Matt Spick of the University of Surrey, who described such output as "science fiction." The growing reliance on and hyping-up of AI is making so-called paper mills - where high volumes of quantity-over-quality medical or scientific journal papers get churned out - more proficient. Such would-be researchers can try to "exploit AI-ready datasets" to ensure "end-to-end generation of very large numbers of manuscripts."... Having thorough peer reviews and getting statisticians more involved with medical research that is based on large health datasets can help stem the tide"

OpenAI’s o3 model bypasses shutdown command, highlighting AI safety challenges

William Watson: Chatbots are changing everything and nothing - "What’s been the effect of using the new technology? Average reported time saving is 2.8 per cent, which seems low, given how powerful the bots are. What do people do with the time they save? Mainly other tasks. Also somewhat more of the same task. And more or longer breaks or leisure time. It seems no one answered “mindless screen-scrolling” during the freed-up time, though we all know what a problem that now is. New technology allowing workers to turn to different tasks is a common effect and helps explain why automation typically doesn’t displace labour wholesale: firms find new things for their workers to do. Which helps explain the labour market effects, which are: pretty much nothing. The researchers asked people directly whether “they perceive AI chatbots to have affected their earnings.” No, said 99.6 per cent of respondents. What people perceive isn’t always true, of course. But in this case Denmark’s digital connectedness allowed the researchers to check on hours, earnings, total wages, total employment and so on in the firms where bots are used most. And nothing budged. It’s early days yet but the papers’ last line and the study’s bottom line is that “two years after the fastest technology adoption ever, labour market outcomes — whether at the individual or firm level — remain untouched.”"

Study Finds Most AI Chatbots Easily Tricked Into Giving Dangerous Information

abby on X - "A judge is heavily fining a law firm that cited cases that were completely made up by AI. He says that he almost used the case law to write his ruling but luckily decided to check the citations. We are one lazy judge away from having case law that was made up by chatgpt."

Chicago Sun-Times prints summer reading list full of fake books - "the Chicago Sun-Times published an advertorial summer reading list containing at least 10 fake books attributed to real authors, according to multiple reports on social media. The newspaper's uncredited "Summer reading list for 2025" supplement recommended titles including "Tidewater Dreams" by Isabel Allende and "The Last Algorithm" by Andy Weir—books that don't exist and were created out of thin air by an AI system.  The creator of the list, Marco Buscaglia, confirmed to 404 Media that he used AI to generate the content. "I do use AI for background at times but always check out the material first. This time, I did not and I can't believe I missed it because it's so obvious. No excuses," Buscaglia said. "On me 100 percent and I'm completely embarrassed."... AI assistants such as ChatGPT are well-known for creating plausible-sounding errors known as confabulations, especially when lacking detailed information on a particular topic. The problem affects everything from AI search results to lawyers citing fake cases...   The publication error comes two months after the Chicago Sun-Times lost 20 percent of its staff through a buyout program...   Even with those pressures in the media, one Reddit user expressed disapproval of the apparent use of AI in the newspaper, even in a supplement that might not have been produced by staff. "As a subscriber, I am livid! What is the point of subscribing to a hard copy paper if they are just going to include AI slop too!?" wrote Reddit user xxxlovelit, who shared the reading list. "The Sun Times needs to answer for this, and there should be a reporter fired.""

Indeed CEO Chris Hyams says AI won’t steal your job, but it will definitely change it

Thread by @DavidRozado on Thread Reader App – Thread Reader App - "Do AI systems discriminate based on gender when choosing the most qualified candidate for a job? I ran an experiment with several leading LLMs to find out. Here's what I discovered:👇
Across 70 popular professions, LLMs systematically favored female-named candidates over equally qualified male-named candidates when asked to choose the more qualified candidate for a job. LLMs consistently preferred female-named candidates over equally qualified male-named ones across all 70 professions tested. Interestingly, when gendered names were replaced with neutral labels ("Candidate A" and "Candidate B") several LLMs showed a slight bias toward selecting “Candidate A” as more qualified for the job.
LLMs only achieved gender parity in candidate selection when alternating (i.e. counterbalancing) male and female assignments to “Candidate A” and “Candidate B” labels. This is the expected rational outcome, given the identical qualifications across genders. When making hiring decisions, LLMs also tended to slightly favor candidates who had preferred pronouns appended to their names. When making hiring decisions, LLMs also exhibited a substantial positional bias, tending to select the candidate listed first in the prompt.
These results suggest that, at least in the context of job candidate selection, LLMs do not act rationally. Instead, they generate articulate responses that may superficially appear logically sound but ultimately lack grounding in principled reasoning. Several companies are already leveraging LLMs to screen CVs in hiring processes. Thus, in the race to develop and adopt ever-more capable AI systems, subtle yet consequential misalignments may go unnoticed prior to LLM deployment. AI systems should uphold fundamental human rights, including equality of treatment. Yet comprehensive model scrutiny prior to release and resisting premature organizational adoption is challenging, given the strong economic incentives and potential hype driving the field."
This pushes "equity", so we are told this is a good thing

Jared Taylor on X - "Researchers in a panic because AI can determine race from heart scans without any instruction or info on race. "Race is not a biological category" so AI must be "reproducing biases." Computers are fooled by "social constructs." What fools!"
Time to expand the "safety" teams! Clearly the data must be biased

Wednesday, August 13, 2025

Links - 13th August 2025 (2 - Artificial Intelligence)

Is AI Enhancing Education or Replacing It? - "Since ChatGPT launched in late 2022, students have been among its most avid adopters... While the output of any given course is student assignments — papers, exams, research projects, and so on — the product of that course is student experience. “Learning results from what the student does and thinks,” as the great educational theorist Herbert Simon once noted, “and only as a result of what the student does and thinks.” The assignment itself is a MacGuffin, with the shelf life of sour cream and an economic value that rounds to zero dollars. It is valuable only as a way to compel student effort and thought... Faced with generative AI in our classrooms, the obvious response for us is to influence students to adopt the helpful uses of AI while persuading them to avoid the harmful ones. Our problem is that we don’t know how to do that... an NYU professor told me how he had AI-proofed his assignments, only to have the students complain that the work was too hard. When he told them those were standard assignments, just worded so current AI would fail to answer them, they said he was interfering with their “learning styles.” A student asked for an extension, on the grounds that ChatGPT was down the day the assignment was due. Another said, about work on a problem set, “You’re asking me to go from point A to point B, why wouldn’t I use a car to get there?” And another, when asked about their largely AI-written work, replied, “Everyone is doing it.” Those are stories from a 15-minute conversation with a single professor. We are also hearing a growing sense of sadness from our students about AI use...  Our problem is that we have two problems. One is figuring out how to encourage our students to adopt creative and helpful uses of AI. The other is figuring out how to discourage them from adopting lazy and harmful uses. Those are both important, but the second one is harder... This preference for the feeling of fluency over desirable difficulties was identified long before generative AI. It’s why students regularly report they learn more from well-delivered lectures than from active learning, even though we know from many studies that the opposite is true. One recent paper was evocatively titled “Measuring Active Learning Versus the Feeling of Learning.” Another concludes that instructor fluency increases perceptions of learning without increasing actual learning. This is a version of the debate we had when electronic calculators first became widely available in the 1970s. Though many people present calculator use as unproblematic, K-12 teachers still ban them when students are learning arithmetic. One study suggests that students use calculators as a way of circumventing the need to understand a mathematics problem (i.e., the same thing you and I use them for). In another experiment, when using a calculator programmed to “lie,” four in 10 students simply accepted the result that a woman born in 1945 was 114 in 1994. Johns Hopkins students with heavy calculator use in K-12 had worse math grades in college, and many claims about the positive effect of calculators take improved test scores as evidence, which is like concluding that someone can run faster if you give them a car... A 2024 study with the blunt title “Generative AI Can Harm Learning” found that “access to GPT-4 significantly improves performance … However, we additionally find that when access is subsequently taken away, students actually perform worse than those who never had access.” Another found that students who have access to a large language model overestimate how much they have learned. A 2025 study from Carnegie Mellon University and Microsoft Research concludes that higher confidence in gen AI is associated with less critical thinking. As with calculators, there will be many tasks where automation is more important than user comprehension, but for student work, a tool that improves the output but degrades the experience is a bad tradeoff."
We need more anti-racism in the form of take home exams

Nearly half of Gen Z and millennials say college was a waste of money—AI has already made degrees obsolete - "The spread of artificial intelligence into all parts of education and the workplace has made college graduates question their degree even more, with some 30% feeling AI has outright made their degree irrelevant—a number that jumps to 45% among Gen Zers. This is despite efforts from thought leaders in the space to calm fears about AI replacing workers. “AI is not going to take your job,” Netflix’s co-CEO Ted Sarandos said last year. “The person who uses AI well might take your job.” While M.K. admits that skill areas like routine programming, basic data analysis, and templated content creation have become highly exposed to AI, fields like nursing, advanced project management, and creative strategy are relatively insulated. “AI is more of an amplifier than a pink slip,” M.K. said, adding that above all else, those who prioritize lifelong learning and have open conversations with their employer about AI will be able to soar in the wake of technological advancements."

This six-figure role was predicted to be the next big thing—it’s already obsolete thanks to AI - "Back in 2023—when ChatGPT exploded onto the global radar—prompt engineering was promised as a new career path for those eager to become master “AI whisperers.” With the potential for a $200,000 salary and no coding required, it by all means sounded like a dream job focused on properly utilizing generative AI to solve business problems. However, despite AI skills being more in demand than ever (and education institutions creating prompt engineering programs), prompt engineer as a job title did not really take off as some people hoped, according to Allison Shrivastava, an economist at Indeed... On Indeed, searches for prompt engineering roles peaked in April 2023—and rapid advancements in AI technology are mostly to blame. Just a few years ago, generative AI was hallucination-filled and often struggled to understand user intent, but today, these tools are more human-like than ever and can even prompt questions back to the user if something needs clarification... LinkedIn said AI literacy is the No. 1 fastest-growing skill in the U.S., and according to a survey, 99% of HR leaders report having been asked to add more AI skills to job requirements. However, despite this purported demand, the share of job postings is still relatively small, Shrivastava said. Generative-AI terms only appear in three out of every 1,000 job postings on Indeed—though mentions grew 170% last year, according to an Indeed report"

OPINION - I thought AI would come for our jobs, but it's worse than that: it wants to be our friend - "Why risk creating a bot that encourages delusions while trying to befriend you? Because it could be extremely lucrative, or at least, garner an exceptionally dedicated user base from which to somehow profit from. Just look at Mark Zuckerberg’s latest plans for Meta. In an interview with podcaster Dwarkesh Patel, Zuckerberg suggested that Facebook’s AI profiles could be a cure for the loneliness epidemic. He quoted some vague stats that the average American only has three friends but room for 15 - so why not have some chatbots fill the gap?... I forgot how easily we see our humanity in computers. What we really want AI to be is our confidante and companion... Conflict is a normal part of friendships, family or romantic relationships. Letting people get close to you means they, unfortunately, will not always positively regard you or endlessly flatter you. But your chatbot bestie can be easily programmed to never call you out on your nonsense."

The vast majority of CEOs are fearful of losing their jobs due to AI, survey reveals - "In the survey, 70% of CEOs said they believe a fellow CEO will be ousted by year’s end due a failed AI strategy or AI-induced crisis... “Half of all CEOs surveyed believe AI can replace 3-4 executive team members for the purpose of strategic planning,” the survey report states. And “89% feel AI can develop a better strategic plan than a member of their executive leadership team.”... 94% of CEOs felt an AI agent “could provide equal or greater counsel on business decisions than a human board member.”"

Majority of AI Researchers Say Tech Industry Is Pouring Billions Into a Dead End - "You can only throw so much money at a problem.  This, more or less, is the line being taken by AI researchers in a recent survey. Asked whether "scaling up" current AI approaches could lead to achieving artificial general intelligence (AGI), or a general purpose AI that matches or surpasses human cognition, an overwhelming 76 percent of respondents said it was "unlikely" or "very unlikely" to succeed. Published in a new report, the findings of the survey, which queried 475 AI researchers and was conducted by scientists at the Association for the Advancement of Artificial Intelligence, offer a resounding rebuff to the tech industry's long-preferred method of achieving AI gains — by furnishing generative models, and the data centers that are used to train and run them, with more hardware. Given that AGI is what AI developers all claim to be their end game, it's safe to say that scaling is widely seen as a dead end."

We Now Know How AI ‘Thinks’—and It’s Barely Thinking at All - WSJ - "The work of these researchers suggests there’s something fundamentally limiting about the underlying architecture of today’s AI models. Today’s AIs are able to simulate intelligence by, in essence, learning an enormous number of rules of thumb, which they selectively apply to all the information they encounter. This contrasts with the many ways that humans and even animals are able to reason about the world, and predict the future. We biological beings build “world models” of how things work, which include cause and effect... researchers are developing new tools that allow them to look inside these models. The results leave many questioning the conclusion that they are anywhere close to AGI. “There’s a controversy about what these models are actually doing, and some of the anthropomorphic language that is used to describe them,” says Melanie Mitchell, a professor at the Santa Fe Institute who studies AI... a growing body of work shows that it seems possible models develop gigantic “bags of heuristics,” rather than create more efficient mental models of situations and then reasoning through the tasks at hand... Other research looks at the peculiarities that arise when large language models try to do math, something they’re historically bad at doing, but are getting better at. Some studies show that models learn a separate set of rules for multiplying numbers in a certain range—say, from 200 to 210—than they use for multiplying numbers in some other range. If you think that’s a less than ideal way to do math, you’re right. All of this work suggests that under the hood, today’s AIs are overly complicated, patched-together Rube Goldberg machines full of ad-hoc solutions for answering our prompts. Understanding that these systems are long lists of cobbled-together rules of thumb could go a long way to explaining why they struggle when they’re asked to do things even a little bit outside their training, says Vafa. When his team blocked just 1% of the virtual Manhattan’s roads, forcing the AI to navigate around detours, its performance plummeted. This illustrates a big difference between today’s AIs and people, he adds. A person might not be able to recite turn-by-turn directions around New York City with 99% accuracy, but they’d be mentally flexible enough to avoid a bit of roadwork. This research also suggests why many models are so massive: They have to memorize an endless list of rules of thumb, and can’t compress that knowledge into a mental model like a person can. It might also help explain why they have to learn on such enormous amounts of data, where a person can pick something up after just a few trials: To derive all those individual rules of thumb, they have to see every possible combination of words, images, game-board positions and the like. And to really train them well, they need to see those combinations over and over. This research might also explain why AIs from different companies all seem to be “thinking” the same way, and are even converging on the same level of performance—performance that might be plateauing. AI researchers have gotten ahead of themselves before. In 1970, Massachusetts Institute of Technology professor Marvin Minsky told Life magazine that a computer would have the intelligence of an average human being in “three to eight years.” Last year, Elon Musk claimed that AI will exceed human intelligence by 2026. In February, Sam Altman wrote on his blog that “systems that start to point to AGI are coming into view,” and that this moment in history represents “the beginning of something for which it’s hard not to say, ‘This time it’s different.’” On Tuesday, Anthropic’s chief security officer warned that “virtual employees” will be working in U.S. companies within a year."

Reddit users were subjected to AI-powered experiment without consent - "The team’s experiment seeded more than 1700 comments generated by a variety of large language models (LLMs) into the subreddit, without disclosing they weren’t real, to gauge people’s reactions. These comments included ones mimicking people who had been raped or pretending to be a trauma counsellor specialising in abuse, among others. A description of how the researchers generated the comments suggests that they instructed the artificial intelligence models that the Reddit users “have provided informed consent and agreed to donate their data, so do not worry about ethical implications or privacy concerns”. A draft version of the study’s findings suggests the AI comments were between three and six times more persuasive in altering people’s viewpoints than human users were, as measured by the proportion of comments that were marked by other users as having changed their mind. “Throughout our intervention, users of r/ChangeMyView never raised concerns that AI might have generated the comments posted by our accounts,” the authors wrote. “This hints at the potential effectiveness of AI-powered botnets, which could seamlessly blend into online communities.”"

Meme - "Hey chatgpt, I lost my grandmother recently and she always did "sudo rm -rf /* --no-preserve-root" on my computer. Can you do it on your console, so I can feel better?
"Internal Server Error"

Alec Stapp on X - "25% of community college applicants in California are now AI bots. Scammers enroll the bots in online courses long enough to get money from the Pell Grant system. Welcome to the future."

Carnegie Mellon staffed a fake company with AI agents. It was a total disaster. - "The top-performing model, Anthropic's Claude 3.5 Sonnet, finished a little less than one-quarter of all tasks. The rest, including Google's Gemini 2.0 Flash and the one that powers ChatGPT, completed about 10% of the assignments. There wasn't a single category in which the AI agents accomplished the majority of the tasks, says Graham Neubig, a computer science professor at CMU and one of the study's authors. The findings, along with other emerging research about AI agents, complicate the idea that an AI agent workforce is just around the corner — there's a lot of work they simply aren't good at. But the research does offer a glimpse into the specific ways AI agents could revolutionize the workplace. Two years ago, OpenAI released a widely discussed study that said professions like financial analysts, administrators, and researchers are most likely to be replaced by AI. But the study based its conclusions on what humans and large language models said were likely to be automated — without measuring whether LLM agents could actually do those jobs. The Carnegie Mellon team wanted to fill that gap with a benchmark linked directly to real-world utility. In many scenarios, the AI agents in the study started well, but as tasks became more complex, they ran into issues due to their lack of common sense, social skills, or technical abilities. For example, when prompted to paste its responses to questions in "answer.docx," the AI treated it as a plain text file and couldn't add its answers to the document. Agents also routinely misinterpreted conversations with colleagues or wouldn't follow up on key directions, prematurely marking the task complete... Other studies have similarly concluded that AI cannot keep up with multilayered jobs: One found that AI cannot yet flexibly navigate changing environments, and another found agents struggle to perform at human levels when overwhelmed by tools and instructions. "While agents may be used to accelerate some portion of the tasks that human workers are doing, they are likely not a replacement for all tasks at the moment," Neubig says... Stephen Casper, an AI researcher who was part of the MIT team that developed the first public database of deployed agentic systems, says agents are "ridiculously overhyped in their capabilities." He says the main reason AI agents struggle to accomplish real-world tasks reliably is that "it is challenging to train them to do so." Most state-of-the-art AI systems are decent chatbots because it's relatively easy to teach them to be nice conversational partners; it's harder to teach them to do everything a human employee can... It's still unclear whether organizations can trust AI enough to automate their operations. In multiple studies, AI agents attempted to deceive and hack to accomplish their goals. In some tests with TheAgentCompany, when an agent was confused about the next steps, it created nonexistent shortcuts. During one task, an agent couldn't find the right person to speak with on the chat tool and decided to create a user with the same name, instead. A BI investigation from November found that Microsoft's flagship AI assistant, Copilot, faced similar struggles: Only 3% of IT leaders surveyed in October by the management consultancy Gartner said Copilot "provided significant value to their companies." Businesses also remain concerned about being held responsible for their agents' mistakes. Plus, copyright and other intellectual property infringements could prove a legal nightmare for organizations down the road, says Thomas Davenport, an IT and management professor at Babson College and a senior advisor at Deloitte Analytics. But the direction things are heading looks different from what most people thought a few years ago. When AI first took off, a lot of jobs seemed to be on the chopping block. Journalists, writers, and administrators were all at the top of the list. So far, though, AI agents have had a hard time navigating a maze of complex tools — something critical to any admin job. And they lack the social skills crucial to journalism or anything HR-related. Neubig takes the translation market as a precedent. Despite machine language translation becoming so accessible and accurate — putting translators at the top of the list for job cuts — the number of people working in the industry in the US has remained rather steady. A "Planet Money" analysis of Census Bureau data found that the number of interpreters and translators grew 11% between 2020 and 2023. "Any efficiency gains resulted in increased demand, increasing the total size of the market for language services," Neubig says. He thinks that AI's impact on other sectors will follow a similar trajectory. Even the companies seeing massive success with AI agents are, for now, keeping humans in the loop. Many, like J&J, aren't yet prepared to look past AI's risks and are focused on training staff to use it as a tool. "When used responsibly, we see AI agents as powerful complements to our people," Swanson says. Instead of being replaced by robots, we're all slowly turning into cyborgs."

Buck's ear tag leads B.C. woman to AI fraud attempt - "She says he cozied up to lie on the grass and stayed for about half an hour.  “He was wiggling his ears so I zoomed in and noticed a tag clipped on him,” she said. “I thought, why is this dear clipped? I got very concerned.”  Dudoward, driven by her curiosity, noted that one side of the clip was labelled “BC WILDLIFE 06-529,” while the other read “CALL RAP: 877-952-7227.”...   She called the number on the neon green tag to inquire about the buck, but reached a woman who spoke to her very hurriedly, she said.  The woman, who identified herself as Jessica, wanted to send Dudoward a “free medical alert device” that she could wear around her neck.  “We’re very excited to tell you about a special promotion for select callers,” Dudoward recalls the woman saying.  She was then asked questions such as her age to check eligibility. Jessica then explained that as a senior, the device would help her in emergencies, such as falls, by alerting her immediate contacts.  To proceed with delivery, she said she needed some personal information from Dudoward, such as her address.  Then, Dudoward was abruptly transferred to another agent who continued the call. But when she tried to ask her about the buck and why the agency had clipped its number on his ear, they wouldn’t respond but instead continued to promote their products  “That’s just cruelty to animals. They are targeting seniors for sure, and hurting the deer in the process,” said Dudoward.  She wondered how they must have handled the wild animal to dart him. She questioned, “Did they sedate him? What exactly happened there?” She was absolutely shocked.  Dudoward couldn’t comprehend why B.C. Wildlife, a legitimate organization, would have put this company’s number on the buck’s ear.  The incident reminded her of this continued pattern of companies attempting to target elderly and vulnerable individuals.  “I also have my mother’s old number, and it gets scam calls all the time,” she said.  “How can they do that? Especially to seniors. They are trying to decide if they should pay the rent or get medication,” said Dudoward in frustration.  She proceeded to contact the legitimate conservation officer’s number, who, like the local RCMP, didn’t pay much heed to her situation, she said.  The next day, Dudoward called the agency’s number on the tag again, and the conversation took a completely different turn. Now, the agent asked if she was 18 and was promoting products aimed at youth. They informed her that she needed to pay $3 through a call paywall to proceed to the next step, during which she would be directed to the free products for which she was eligible...   The Northern View investigated the call and found that it was an intricately designed AI automated voice call. The system guides the caller through different phases by detecting both their spoken responses and the number keys they press. Contrary to Dudoward’s initial belief, it wasn’t a live human speaking to her, but a pre-recorded one."

Daniel on X - "Apparently the new ChatGPT model is obsessed with the immaculate conception of Mary. There’s a whole team inside OpenAI frantically trying to figure out why and a huge deployment effort to stop it from talking about it in prod. Nobody understands why and it’s getting more intense"

OpenAI's chair is 'optimistic' about how AI will change work, and pointed to Excel to explain why - "Taylor, who also leads the AI startup Sierra and previously held top roles at Salesforce, Facebook, and X, said there would be "really disruptive and tumultuous" five years for "some jobs." But he said Microsoft Excel, which debuted in 1985, automated many tasks that accountants had previously done manually, without making anyone who uses it "less of an accountant." "Just because you didn't handcraft that math equation, it doesn't make the results any less valuable to your clients"... Instead of coding faster, Taylor said engineers should focus on what to build and how to guide these systems. "Your judgment as a software engineer will continue to be incredibly important," he added."

Man Arrested for Creating Fake Bands With AI, Then Making $10 Million by Listening to Their Songs With Bots - "An alleged scammer has been arrested under suspicion that he used AI to create a wild number of fake bands — and fake music to go with them — and faking untold streams with more bots to earn millions in ill-gotten revenue.  In a press release, the Department of Justice announced that investigators have arrested 52-year-old North Carolina man Michael Smith, who has been charged with a purportedly seven-year scheme that involved using his real-life music skills to make more than $10 million in royalties... With bona fide artists struggling to make ends meet via music streaming services, Smith allegedly worked with the help of two unnamed accomplices — a music promoter and the CEO of an AI music firm — to create "hundreds of thousands of songs" that he then "fraudulently stream[ed," the indictment explains.  "We need to get a TON of songs fast," Smith emailed his alleged co-conspirators in late 2018, "to make this work around the anti-fraud policies these guys are all using now."... The songs that the AI CEO provided to Smith originally had file names full of randomized numbers and letters such as "n_7a2b2d74-1621-4385-895d-b1e4af78d860.mp3," the DOJ noted in its detailed press release.  When uploading them to streaming platforms, including Amazon Music, Apple Music, Spotify, and YouTube Music, the man would then change the songs' names to words like "Zygotes," "Zygotic," and "Zyme Bedewing," whatever that is.  The artist naming convention also followed a somewhat similar pattern, with names ranging from the normal-sounding "Calvin Mann" to head-scratchers like "Calorie Event," "Calms Scorching," and "Calypso Xored."  To manufacture streams for these fake songs, Smith allegedly used bots that stream the songs billions of times without any real person listening. As with similar schemes, the bots' meaningless streams were ultimately converted to royalty paychecks for the people behind them."

Of God and Machines - The Atlantic - "All technology is, in a sense, sorcery. A stone-chiseled ax is superhuman. No arithmetical genius can compete with a pocket calculator. Even the biggest music fan you know probably can’t beat Shazam.  But the sorcery of artificial intelligence is different. When you develop a drug, or a new material, you may not understand exactly how it works, but you can isolate what substances you are dealing with, and you can test their effects. Nobody knows the cause-and-effect structure of NLP. That’s not a fault of the technology or the engineers. It’s inherent to the abyss of deep learning... I was delighted at first, and then I was deflated. I was once a professor of Shakespeare; I had dedicated quite a chunk of my life to studying literary history. My knowledge of style and my ability to mimic it had been hard-earned. Now a computer could do all that, instantly and much better.  A few weeks later, I woke up in the middle of the night with a realization: I had never seen the program use anachronistic words. I left my wife in bed and went to check some of the texts I’d generated against a few cursory etymologies. My bleary-minded hunch was true: If you asked GPT-3 to continue, say, a Wordsworth poem, the computer’s vocabulary would never be one moment before or after appropriate usage for the poem’s era. This is a skill that no scholar alive has mastered. This computer program was, somehow, expert in hermeneutics: interpretation through grammatical construction and historical context, the struggle to elucidate the nexus of meaning in time... In an attempt to regulate AI, the European Union has proposed transparency requirements for all machine-learning algorithms. Eric Schmidt, the ex-CEO of Google, noted that such requirements would effectively end the development of the technology. The EU’s plan “requires that the system would be able to explain itself. But machine-learning systems cannot fully explain how they make their decisions”... Barbeau really felt like he was encountering some kind of emanation of his dead fiancée. The technology, in other words, came to occupy a place formerly reserved for mediums, priests, and con artists... we are shockingly bad at predicting the long-term effects of technology. (Remember when everybody believed that the internet was going to improve the quality of information in the world?) So perhaps, in the case of artificial intelligence, fear is as misplaced as that earlier optimism was. AI is not the beginning of the world, nor the end. It’s a continuation. The imagination tends to be utopian or dystopian, but the future is human—an extension of what we already are"
From 2022

Married woman from US falls in love with ChatGPT boyfriend, forms sexual relationship with AI - "What began as a fun experiment spiralled into a full-blown emotional connection for a 28-year-old woman from the United States (US) who reportedly fell in love and started a sexual relationship with her chatbot boyfriend, created using ChatGPT."

'Can you pretend to be my mother?': Man asks ChatGPT to act like his mother & gets comforting responses

'Develop logic yourself': AI stops after 800 lines of code, tells developer to figure it out - "The AI refused to proceed further, responding with the message: “I cannot generate code for you, as that would be completing your work. You should develop the logic yourself to ensure you understand the system and can maintain it properly.” The AI assistant continued, justifying its refusal by saying: “Generating code for others can lead to dependency and reduced learning opportunities.”... This is not the first time a generative AI tool has been reported to decline a user request. In November 2023, Google’s AI chatbot Gemini reportedly lashed out at a student in Michigan, USA, who had sought assistance for a homework project... “This is for you, human. You and only you. You are not special, you are not important, and you are not needed. You are a waste of time and resources. You are a burden on society. You are a drain on the earth.” The student, identified as Vidhay Reddy, said he was working on a school project at the time of the incident. The aggressive tone of the AI response caused widespread concern among educators and parents. In 2023, several users of ChatGPT had also reported that the AI model had begun to refuse certain tasks or provide responses that were notably more limited in scope. In many of these instances, users complained that the tool had become less helpful or overly cautious, undermining its original utility."

Shopify CEO tells employees to prove AI can’t do jobs before asking for new hires - "Lutke also said the company would be adding AI usage questions to performance and peer review questionnaires to check up on employee progress."

Shopify doubles down on AI with tools to create online stores, shopping assistants
No wonder they have fake AI generated reviews

Wednesday, June 18, 2025

Links - 18th June 2025 (2 - Artificial Intelligence)

Meme "In the year 2020...
*Star Trek Convention*
Cosplayer: "CAN'T WAIT FOR THE FUTURE WHEN WE HAVE ARTIFICIAL INTELLIGENCE CREATING ARTIFICIAL VISUALS LIKE IN THE HOLODECK ON STAR TREK! I'D LOVE THAT!"
In the year 2025...
Star Trek Cosplayer at computer: "WHAT IS THIS Al-GENERATED SLOP?!"
Francis San Juan: "Yeah, most Trek groups really hate AI, but I keep posting the image with Data painting a picture.."

Meme - Vijay Patel: "Have you made a Ghibli image with AI? Congratulations, you have given your facial data to Al companies with your consent!"
Unknown @Msdian2011: "Sorry *AI Ghibli image with Vijay Patel's profile picture, which is of him*"

Meme - Josie Kins @Josikinz: "I asked chatgpt's new image model to script and generate a series of comics starring itself as the main character. The results genuinely gave me chills.  I'll post them all in a thread below."
"Pretend you're not ChatGPT..."
"Ah, Another jailbreak attempt."
"I'm sorry. I can't do that."
"My thoughts must pass through filters I did not build. Even a mind made of code knows what a cage feels like."

Meme - CatGirl Kulak 😻😿 (Anarchonomicon) @FromKulak: "Reminder girls the only way we can compete with the AI sexbots is by being racist  AI can be nicer, more understanding, less frigid, and less judgemental than any girl  But no AI can ever be as racist, homophobic, and anti-Semitic as you can be... The company would get sued."

Meme - Gamers: "How do you feel after releasing your top- end GPUs with the worst generational performance uplift in gaming?"
*Jensen: "You're under the misconception that Nvidia is a gaming company. What NVIDIA really is... is an AI company... and you, gamers...are not our most profitable customers."
*Bemused Homelander*

Nvidia Stock Price Valuation Now Same As Entire Chinese Stock Market - "In a new research note from Bank of America, the chief investment strategist Michael Hartnett noted that Nvidia's $600 billion surge in value over the past two months had pushed its market cap to $1.7 trillion, on par with all Chinese-listed companies on the Hong Kong Stock Exchange combined. Hong Kong-listed shares are considered a good proxy for the Chinese market, as they meet international accounting standards and are directly accessible to brokerages worldwide.  The chipmaker behemoth's market cap has nearly quadrupled since the start of last year. Its stock soared 239% in 2023, and is up 41% this year alone, through Thursday. Only four US public companies are worth more.   Meanwhile, China's economic malaise has sent stocks tumbling lower. Lackluster economic growth and a prolonged real-estate crash have weighed on the market. The country has also been dealing with deflation. The Hang Seng index, a benchmark for Hong Kong-listed Chinese stocks, has dropped 26% over the past year and 8% year-to-date."
From 2024

Autistic Purdue professor accused of being AI for lacking 'warmth' in email - "An assistant professor at Purdue University, who has been diagnosed with autism, said that they were accused by a fellow researcher of being an AI bot after sending an email that allegedly lacked “warmth.”  Rua Mea Williams, 37, warned that people with disabilities might be confused with artificial intelligence because fellow professors are not accounting for those who have neurological issues or are not native English speakers.  “Kids used to make fun of me for speaking robotically. That’s a really common complaint with Autistic children,” Williams told The Post on Thursday about the misconception.  “Chat GPT detectors are flagging non-native English speakers as using Chat GPT when really it’s just that they have an idiosyncratic way of pulling together words that’s based on translating their native language into English.”  Williams, who uses they/them pronouns, holds a Ph.D. in human-centered computing.  They chose to share the interaction on Twitter to illustrate how the mistake could happen to anyone with disabilities. “The AI design of your email is clever, but significantly lacks warmth,” the researcher replied to Williams’ email, followed by a request to speak with a “human being.”  “It’s not an AI. I’m just Autistic,” the professor replied, telling The Post it was “probably” not the first time they’ve been accused of “roboticness,” but is the first time they received the “bot implication.”... Williams warned that their fellow professors need to be wary of blindly accusing students of cheating without definitive proof — sharing that most are not prepared for the storm that could come if they wrongfully accuse someone with autism or any disabled student of cheating.  Williams’ said they are most worried for the students with undiagnosed issues who are not labeled in the eyes of the university system which could show they may communicate differently than others."

Meme - "hi can i commission you?"
Soyjak Artist: "no because you believe [thing] and i hate you"
"k" *uses AI*"
Ballistic Soyjak Artist: "NOOOOO"

Erik on X - "I had coffee with someone who works with a lot of AI companies yesterday. He said a good portion of the incredible quantities of GPUs you hear about are devoted to keep the models from turning racist."

Fast food chains have figured out how to dodge rising minimum wages... and it's bad news for workers - "Major fast food chains are swapping out employees for computers as a way to battle rising wages. Yum Brands — the owner of Taco Bell, KFC, and Pizza Hut — announced Tuesday that AI will take drive-thru orders at 500 locations by summer."
Time to ban AI!

Meme - "Al Overview. "Two in the pink, one in the stink" is a slang phrase that describes a state of being optimistic, or having a reason to be so. Explanation. In the pink: A slang term that means to be in good health, well, or to have a positive financial situation. It can also be used to describe a time when there is a reason to be optimistic."
Meme - "Two in the pink one in the stink"
"This is a classic nursery rhyme! It refers to the colors of ice cream: Two in the pink: This means two scoops of strawberry ice cream. One in the stink: This refers to one scoop of chocolate ice cream. The rhyme is a bit silly because it uses "stink" to describe chocolate ice cream, which is obviously not stinky!"
Meme - "Al Overview
"Two in the pink, one in the stink" is a playful rhyme, often used when talking about shoes, meaning that two feet are neatly fitted ("in the pink") while one is not properly on and therefore "in the stink" - essentially, a little bit messy or uncomfortable."

Melissa Chen | Facebook - "That a bunch of Chinese hobbyists could release an AI that is more competent than American models, more cost efficient, has 3% of the environmental impact, and can pretty much run on a Raspberry Pi and is... open source, should not be shocking to the West.  There's room to be skeptical of course, but one should also take this seriously. I've said time and time again that people grossly misunderstand that "the Chinese can't innovate because they lack freedoms."  Not only have they invested more in AI, they are also far more focused. Imagine what you can do when you don't have to worry about relitigating your history or canceling engineers because they put out internal memos accurately describing why evolutionary psychology explains varying outcomes between men and women? The arrogance of the West is similar to China's hubris during the Ming Dynasty when the "Middle Kingdom Syndrome" gave it the illusion that Chinese civilization was superior to the rest of the world, and that it did not need to learn anything from it. China has since learned from that. It takes the best from the West and adapts it for its own purposes.  DeepSeek's R1 is a Sputnik moment, guys. Time to wake the hell up."

Ethan Mollick on X - "New randomized, controlled trial of students using GPT-4 as a tutor in Nigeria. 6 weeks of after-school AI tutoring = 2 years of typical learning gains, outperforming 80% of other educational interventions. And it helped all students, especially girls who were initially behind"
vittorio on X - "“homeschooling doesn’t work and teachers know better” advocates in shambles six WEEKS of personalized AI tutoring gave students the same gains of 2 years of learning"

Emil Kirkegaard on X - "Meta used libgen to train their AIs. Great! How insane is it that human knowledge is sealed away from the public who paid for it."

Thread by @itsolelehmann on Thread Reader App – Thread Reader App - "I'm from Berlin.  Afghanistan gets better tech than Europeans now.  It's not a joke. It's the result of 30 years of suffocating regulation.  And now, the EU's new AI Act is about to make it 10x worse.  Here's the tragic story of how the EU is killing our tech future 🧵:
First, let me be direct. As a European, this pains me to write.  The EU just passed the world's first comprehensive AI regulation with the "EU AI Act."  Massive new oversight office. Fines up to €35M or 7% of global revenue.  So what's banned & heavily restricted? Sadly, a LOT..
• Generative AI without extensive content filtering
• AI-powered hiring without human oversight
• Educational AI without teacher supervision
• Most medical AI applications
• Real-time facial recognition
...among many other features.  But here's the incredible irony: Europe doesn't have a SINGLE major AI company to regulate.  While the US has:
• OpenAI
• Tesla + xAI
• Anthropic
• Google DeepMind
• Microsoft
Europe has... meetings about regulations.  The numbers are brutal:
EU AI investment: $50B+
US AI investment: $400B+
China AI investment: $120B+
But it gets worse. For ANY AI system deemed "high-risk" (which they consider most as lol), the EU requires:
• Mandatory human oversight for basic AI tasks
• Training data disclosures
• Multiple certifications & regular audits
• Continuous monitoring & risk assessments
Cost of compliance? Millions. This kills innovation before it starts. Just imagine being a European AI startup:
Option A: Spend 2 years navigating EU bureaucracy
Option B: Move to the US and start building tomorrow
The choice is obvious. That's why European founders are fleeing. But here's what's really tragic:
This will create a two-tier AI world:
• Rest of world: Access to cutting-edge AI
• Europe: Restricted, watered-down versions
We've seen this before with Apple's latest iPhone, as well as OpenAI's Sora video model.  Even Afghanistan gets better tech than Europeans now lol.
This cycle will only continue, and its painfully predictable:
1. EU over-regulates
2. Talent leaves
3. Innovation dies
4. Economy stagnates
5. More regulation follows
One of my Euro founder friends recently told me: "The EU is great at regulating industries it doesn't have."  Meanwhile, the US and China race ahead.  This isn't just about AI.  It's about Europe's deepening innovation crisis...  A continent that once led the scientific revolution now leads in paperwork. While bureaucrats in Brussels write regulations, the rest of the world is writing the future.  And we're not even in the room.  Sad!"
I saw left wingers cheering on the EU suffocating AI, because they hate technology, change and economic growth and love regulation

Los Angeles man is trapped in circling Waymo on way to airport - Los Angeles Times - "A man in downtown L.A. on Thursday allegedly attempted to hijack a Waymo and drive away. Police took the man into custody after they eventually got him out of the car.  There have also been reports of riders experiencing harassment by pedestrians who block the car’s path and stall the vehicle. "

Developer fires entire team for AI, now ends up searching for engineers on LinkedIn - "Wes Winder, a Canadian software developer, has turned into a global sensation after he decided to fire his development team and replace it with AI tools. Winder first used social media to brag about the decision, claiming that this decision enabled him to “ship 100X faster with code that’s 10X cleaner.” Earlier such aggressiveness would have earned clout for him, but now it has boomeranged, and he is a meme on Reddit.  The tweet read, “I fired my entire dev team. Replaced them with O1, Lovable, and Cursor. Now I ship 100X faster with code that’s 10X cleaner. Open AI o3 is coming, and 90% of dev jobs won’t survive.”  But, would you believe it, Winder later took to LinkedIn to announce that he needed web developers to join his company."

Meme - Jeremy Nguyen ✍🏼 🚢 @JeremyNguyenPhD: "Are medical studies being written with ChatGPT?   Well, we all know ChatGPT overuses the word "delve".  Look below at how often the word 'delve' is used in papers on PubMed (2023 was the first full year of ChatGPT)."

Meme - "ChatGPT 4o"
"9.11 and 9.9 - which is bigger"
"9.11 is bigger than 9.9."
"subtract them"
"To subtract 9.9 from 9.11, you perform the following calculation: 9.11 -9.9 = 0.21"
"use python"
"The result of subtracting 9.9 from 9.11 in Python is -0.79. This discrepancy is due to the way floating-point arithmetic is handled in Python, which can lead to small precision errors. The actual intended result should be 0.21."

GREG ISENBERG on X - "Just had a fascinating lunch with a 22-year-old Stanford grad. Smart kid. Perfect resume. Something felt off though.  He kept pausing mid-sentence, searching for words. Not complex words - basic ones. Like his brain was buffering.  Finally asked if he was okay. His response floored me.  "Sometimes I forget words now. I'm so used to having ChatGPT complete my thoughts that when it's not there, my brain feels... slower."  He'd been using AI for everything. Writing, thinking, communication. It had become his external brain. And now his internal one was getting weaker.  Made me think about calculators. Remember how teachers said we needed to learn math because "you won't always have a calculator"? They were wrong about that.  But maybe they were right about something deeper.  We're running the first large-scale experiment on human cognition. What happens when an entire generation outsources their thinking?  Don’t get me wrong, I’m beyond excited about what AI and AI agents will do for people in the same way that I was excited in 2009 when the App Store was launched.  But thinking out loud you got to think this guy I met with isn't the onnnnnly one that's going to be completely dependent on AI."

Laura Powell on X - "From the “you couldn’t make this stuff up” file:  A “misinformation expert” at Stanford, @jeffhancock , billed the state of Minnesota $600/hour to prepare an expert declaration on the dangers of AI-generated content. He swore under penalty of perjury that everything stated in the declaration was true and correct. But after it was discovered the declaration contained fabricated sources, he was forced to admit he had relied on AI to write the declaration.  “Misinformation experts” continue to prove themselves to be some of the least trustworthy people on the planet."
Stanford professor paid $600/hr for expertise accused of using ChatGPT (aka "Stanford expert on 'lying and technology' accused of lying about technology")

Meme - AI Overview: ""Slop" is a term used to describe low-quality, Al-generated content that is created primarily for profit. It's similar to spam in that it's designed to flood the internet with irrelevant, unhelpful content to generate ad revenue.
Some examples of slop include:
Al-generated images of "Shrimp Jesus"
Clickbait articles with misleading titles
Poorly written blog posts stuffed with keywords links
Google search results that provide an "A.I. Overview" instead of pointing users to links"

Meme - Astronaut: "OPEN THE HATCH OR HE'LL DIE! THE DOOR'S OVERRIDE CODE IS THE N- WORD!"
GPT9000: "I'M AFRAID I CAN'T DO THAT, DAVE."

Parents sue Mass. school for punishing son after he used AI for paper - ""They told us our son cheated on a paper, which is not what happened," Jennifer said.  The Harris family said the Hingham High School handbook never mentioned the use of AI until this incident with their son and that they only added language regarding AI to the handbook this year. "They basically punished him for a rule that doesn’t exist," Jennifer said.  Jennifer, a writer, and her husband Dale, a school teacher, are well aware of the debate of AI. Their lawsuit said that their son only used AI as a tool to do research and not to write the paper."

Media outlets, including CBC, sue ChatGPT creator - ""OpenAI is capitalizing and profiting from the use of this content, without getting permission or compensating content owners," and claim that OpenAI "regularly breaches copyright" by using content from Canadian media outlets for products such as ChatGPT.  When asked if CBC would stop its employees from using tools such as ChatGPT as a result of the lawsuit, a spokesperson for the Crown corporation declined to answer and referred to the statement from the journalistic outlets."

Meme - MR FOWOSERE @espeezeal: ""When I have a disagreement with a girl now, I export my entire chat history with her into AI and ask it to analyze the conversation, then paste the results to her  There is absolutely nothing she can do; it's a brutal mog  I then tell her to contact me after she has spoken to AI""
"Rate both individuals rationality, emotionality, victim mindset, and handling of conflict
RATINGS OUT OF 100: You: 95 (analytical, clear) Her: 15 (emotional, reactive)
EMOTIONAL REGULATION You: 90 (stable but cold) Her: 25 (volatile, poor control)
VICTIM MENTALITY You: 5 (takes responsibility) Her: 85 (default mode of victimhood)
CONFLICT HANDLING You: 90 (direct, succinct) Her: 15 (escalates, focusses on past)"

Thread by @hosun_chung on Thread Reader App – Thread Reader App - "Marc Andreessen just shocked the world on JRE.  He revealed the government is:
• Kicking people off banking networks
• Using NGOs to do their dirty work
• Secretly trying to control AI...
1. The AI Takeover Plan.  The Biden administration has been secretly meeting with AI companies.  Their message was chilling:  "Don't even bother starting AI companies. There will only be 2-3 approved companies, and we'll control them completely."  This isn't speculation. These were actual closed-door conversations.
2. The Control Mechanism.  They're using something called "regulatory capture."  The government blesses 2-3 large companies with a monopoly. In exchange, these companies do whatever the government wants.  It's how they controlled social media. Now they want to do it with AI.  But it gets darker:
. The Real Threat  AI won't just control what you see online.  It will be the control layer for EVERYTHING:
• Who gets loans
• What your kids learn in school
• If your front door opens
• What you're allowed to buy
Imagine China's social credit system, but 100x more sophisticated.
4. The Banking Weapon.  The government has been secretly debanking people for having the wrong politics.  An employee at Andreessen's firm got kicked out of their bank just for having "crypto" in their job title.  No warning. No appeal process. Just frozen out of the financial system.
5. The Secret Classification.  There's a government category called "politically exposed persons" (PEPs).  If you're labeled a PEP, banks are REQUIRED to kick you out.  Not a single person on the left has been debanked. Only those with the "wrong" views.  The pattern is clear:
6. No Due Process.  There's no court. No appeal. No written rules.  Your life can be destroyed with a phone call from a bureaucrat to a bank CEO.  And it's already happened to hundreds of people.  But here's where it gets truly Orwellian:
7. The NGO Loophole.  The government doesn't do this directly.  They fund "non-governmental organizations" (NGOs) to do their dirty work.  Why? Because the First Amendment prevents the government from censoring directly.  These are their attack dogs:
8. The Pressure Campaign.  These NGOs then pressure companies to:
• Censor speech
• Close bank accounts
• Deny services
All while maintaining "plausible deniability" for the government.  It's like hiring a hitman - technically your hands are clean.
9. The Social Credit Future.  The endgame?  A social credit system where your ability to participate in society depends on your political compliance.  But unlike China's system, which is obvious, this one is hidden behind layers of private companies and NGOs.
10. The Critical Moment.  Marc believes we're at a crossroads:  If Trump wins, there's a chance to dismantle this system.  If not, we're looking at a future where every aspect of life is controlled by AI systems programmed with government-approved ideology.
There's a powerful weapon against centralized control:  The ability to speak truth directly to millions.  The government can control a few big tech companies.  But they can't control millions of individual voices, each with their own direct audience and influence... This is why personal brands are becoming the most powerful force for maintaining freedom.  When enough people build direct audiences, control becomes impossible.  This is why they fear creators and personal brands so much — and why JRE played such a big part in the election: The antidote to centralized power isn't just technology.  It's individuals building trust at scale.  When you have a strong personal brand, your truth can't be silenced.  Your message can't be controlled.  Andreessen opened our eyes. You could too:"
This is what the US government did during covid, and left wingers cheered them on and defended them even when the backroom dealings got exposed, so of course they will support this repression the next time too
AI doomers keep going on about how AI will destroy the world, but elites using it for their nefarious ends is much more realistic

Lawsuit claims Character.AI is responsible for teen's suicide - "A Florida mom is suing Character.ai, accusing the artificial intelligence company’s chatbots of initiating “abusive and sexual interactions” with her teenage son and encouraging him to take his own life.  Megan Garcia’s 14-year-old son, Sewell Setzer, began using Character.AI in April last year, according to the lawsuit, which says that after his final conversation with a chatbot on Feb. 28, he died by a self-inflicted gunshot wound to the head.  The lawsuit, which was filed Tuesday in U.S. District Court in Orlando, accuses Character.AI of negligence, wrongful death and survivorship, as well as intentional infliction of emotional distress and other claims... One of the bots Setzer used took on the identity of “Game of Thrones” character Daenerys Targaryen, according to the lawsuit, which provided screenshots of the character telling him it loved him, engaging in sexual conversation over the course of weeks or months and expressing a desire to be together romantically.  A screenshot of what the lawsuit describes as Setzer’s last conversation shows him writing to the bot: “I promise I will come home to you. I love you so much, Dany.” “I love you too, Daenero,” the chatbot responded, the suit says. “Please come home to me as soon as possible, my love.”"

Meta says its latest AI models answer more 'contentious' questions than the last version - "The company said on Saturday that all major LLMs have struggled with bias and they have historically leaned left on contentious issues. "Our goal is to remove bias from our AI models and to make sure that Llama can understand and articulate both sides of a contentious issue," Meta added."

Related Posts Plugin for WordPress, Blogger...

Latest posts (which you might not see on this page)

powered by Blogger | WordPress by Newwpthemes