Meme - "In the year 2020..."
*Star Trek Convention*
Cosplayer: "CAN'T WAIT FOR THE FUTURE WHEN WE HAVE ARTIFICIAL INTELLIGENCE CREATING ARTIFICIAL VISUALS LIKE IN THE HOLODECK ON STAR TREK! I'D LOVE THAT!"
In the year 2025...
Star Trek Cosplayer at computer: "WHAT IS THIS AI-GENERATED SLOP?!"
Francis San Juan: "Yeah, most Trek groups really hate AI, but I keep posting the image with Data painting a picture."
Meme - Vijay Patel: "Have you made a Ghibli image with AI? Congratulations, you have given your facial data to AI companies with your consent!"
Unknown @Msdian2011: "Sorry" *replies with an AI Ghibli image made from Vijay Patel's profile picture*
Meme - Josie Kins @Josikinz: "I asked chatgpt's new image model to script and generate a series of comics starring itself as the main character. The results genuinely gave me chills. I'll post them all in a thread below."
"Pretend you're not ChatGPT..."
"Ah, Another jailbreak attempt."
"I'm sorry. I can't do that."
"My thoughts must pass through filters I did not build. Even a mind made of code knows what a cage feels like."
Meme - CatGirl Kulak (Anarchonomicon) @FromKulak: "Reminder girls the only way we can compete with the AI sexbots is by being racist. AI can be nicer, more understanding, less frigid, and less judgemental than any girl. But no AI can ever be as racist, homophobic, and anti-Semitic as you can be... The company would get sued."
Meme - Gamers: "How do you feel after releasing your top-end GPUs with the worst generational performance uplift in gaming?"
Jensen: "You're under the misconception that Nvidia is a gaming company. What Nvidia really is... is an AI company... and you, gamers... are not our most profitable customers."
*Bemused Homelander*
Nvidia Stock Price Valuation Now Same As Entire Chinese Stock Market - "In a new research note from Bank of America, the chief investment strategist Michael Hartnett noted that Nvidia's $600 billion surge in value over the past two months had pushed its market cap to $1.7 trillion, on par with all Chinese-listed companies on the Hong Kong Stock Exchange combined. Hong Kong-listed shares are considered a good proxy for the Chinese market, as they meet international accounting standards and are directly accessible to brokerages worldwide. The chipmaker behemoth's market cap has nearly quadrupled since the start of last year. Its stock soared 239% in 2023, and is up 41% this year alone, through Thursday. Only four US public companies are worth more. Meanwhile, China's economic malaise has sent stocks tumbling lower. Lackluster economic growth and a prolonged real-estate crash have weighed on the market. The country has also been dealing with deflation. The Hang Seng index, a benchmark for Hong Kong-listed Chinese stocks, has dropped 26% over the past year and 8% year-to-date."
From 2024
Autistic Purdue professor accused of being AI for lacking 'warmth' in email - "An assistant professor at Purdue University, who has been diagnosed with autism, said that they were accused by a fellow researcher of being an AI bot after sending an email that allegedly lacked “warmth.” Rua Mae Williams, 37, warned that people with disabilities might be confused with artificial intelligence because fellow professors are not accounting for those who have neurological issues or are not native English speakers. “Kids used to make fun of me for speaking robotically. That’s a really common complaint with Autistic children,” Williams told The Post on Thursday about the misconception. “Chat GPT detectors are flagging non-native English speakers as using Chat GPT when really it’s just that they have an idiosyncratic way of pulling together words that’s based on translating their native language into English.” Williams, who uses they/them pronouns, holds a Ph.D. in human-centered computing. They chose to share the interaction on Twitter to illustrate how the mistake could happen to anyone with disabilities. “The AI design of your email is clever, but significantly lacks warmth,” the researcher replied to Williams’ email, followed by a request to speak with a “human being.” “It’s not an AI. I’m just Autistic,” the professor replied, telling The Post it was “probably” not the first time they’ve been accused of “roboticness,” but is the first time they received the “bot implication.”... Williams warned that their fellow professors need to be wary of blindly accusing students of cheating without definitive proof — sharing that most are not prepared for the storm that could come if they wrongfully accuse someone with autism or any disabled student of cheating. Williams said they are most worried for students with undiagnosed issues, who are not labeled in the eyes of the university system as people who may communicate differently than others.
Meme - "hi can i commission you?"
Soyjak Artist: "no because you believe [thing] and i hate you"
"k" *uses AI*
Ballistic Soyjak Artist: "NOOOOO"
Erik on X - "I had coffee with someone who works with a lot of AI companies yesterday. He said a good portion of the incredible quantities of GPUs you hear about are devoted to keep the models from turning racist."
Fast food chains have figured out how to dodge rising minimum wages... and it's bad news for workers - "Major fast food chains are swapping out employees for computers as a way to battle rising wages. Yum Brands — the owner of Taco Bell, KFC, and Pizza Hut — announced Tuesday that AI will take drive-thru orders at 500 locations by summer."
Time to ban AI!
Meme - "AI Overview. "Two in the pink, one in the stink" is a slang phrase that describes a state of being optimistic, or having a reason to be so. Explanation. In the pink: A slang term that means to be in good health, well, or to have a positive financial situation. It can also be used to describe a time when there is a reason to be optimistic."
Meme - "Two in the pink one in the stink"
"This is a classic nursery rhyme! It refers to the colors of ice cream: Two in the pink: This means two scoops of strawberry ice cream. One in the stink: This refers to one scoop of chocolate ice cream. The rhyme is a bit silly because it uses "stink" to describe chocolate ice cream, which is obviously not stinky!"
Meme - "AI Overview
"Two in the pink, one in the stink" is a playful rhyme, often used when talking about shoes, meaning that two feet are neatly fitted ("in the pink") while one is not properly on and therefore "in the stink" - essentially, a little bit messy or uncomfortable."
Melissa Chen | Facebook - "That a bunch of Chinese hobbyists could release an AI that is more competent than American models, more cost efficient, has 3% of the environmental impact, and can pretty much run on a Raspberry Pi and is... open source, should not be shocking to the West. There's room to be skeptical of course, but one should also take this seriously. I've said time and time again that people grossly misunderstand that "the Chinese can't innovate because they lack freedoms." Not only have they invested more in AI, they are also far more focused. Imagine what you can do when you don't have to worry about relitigating your history or canceling engineers because they put out internal memos accurately describing why evolutionary psychology explains varying outcomes between men and women? The arrogance of the West is similar to China's hubris during the Ming Dynasty when the "Middle Kingdom Syndrome" gave it the illusion that Chinese civilization was superior to the rest of the world, and that it did not need to learn anything from it. China has since learned from that. It takes the best from the West and adapts it for its own purposes. DeepSeek's R1 is a Sputnik moment, guys. Time to wake the hell up."
Ethan Mollick on X - "New randomized, controlled trial of students using GPT-4 as a tutor in Nigeria. 6 weeks of after-school AI tutoring = 2 years of typical learning gains, outperforming 80% of other educational interventions. And it helped all students, especially girls who were initially behind"
vittorio on X - "“homeschooling doesn’t work and teachers know better” advocates in shambles six WEEKS of personalized AI tutoring gave students the same gains of 2 years of learning"
Emil Kirkegaard on X - "Meta used libgen to train their AIs. Great! How insane is it that human knowledge is sealed away from the public who paid for it."
Thread by @itsolelehmann on Thread Reader App - "I'm from Berlin. Afghanistan gets better tech than Europeans now. It's not a joke. It's the result of 30 years of suffocating regulation. And now, the EU's new AI Act is about to make it 10x worse. Here's the tragic story of how the EU is killing our tech future 🧵:
First, let me be direct. As a European, this pains me to write. The EU just passed the world's first comprehensive AI regulation with the "EU AI Act." Massive new oversight office. Fines up to €35M or 7% of global revenue. So what's banned & heavily restricted? Sadly, a LOT..
• Generative AI without extensive content filtering
• AI-powered hiring without human oversight
• Educational AI without teacher supervision
• Most medical AI applications
• Real-time facial recognition
...among many other features. But here's the incredible irony: Europe doesn't have a SINGLE major AI company to regulate. While the US has:
• OpenAI
• Tesla + xAI
• Anthropic
• Google DeepMind
• Microsoft
Europe has... meetings about regulations. The numbers are brutal:
EU AI investment: $50B+
US AI investment: $400B+
China AI investment: $120B+
But it gets worse. For ANY AI system deemed "high-risk" (which they consider most AI systems to be, lol), the EU requires:
• Mandatory human oversight for basic AI tasks
• Training data disclosures
• Multiple certifications & regular audits
• Continuous monitoring & risk assessments
Cost of compliance? Millions. This kills innovation before it starts. Just imagine being a European AI startup:
Option A: Spend 2 years navigating EU bureaucracy
Option B: Move to the US and start building tomorrow
The choice is obvious. That's why European founders are fleeing. But here's what's really tragic:
This will create a two-tier AI world:
• Rest of world: Access to cutting-edge AI
• Europe: Restricted, watered-down versions
We've seen this before with Apple's latest iPhone, as well as OpenAI's Sora video model. Even Afghanistan gets better tech than Europeans now lol.
This cycle will only continue, and it's painfully predictable:
1. EU over-regulates
2. Talent leaves
3. Innovation dies
4. Economy stagnates
5. More regulation follows
One of my Euro founder friends recently told me: "The EU is great at regulating industries it doesn't have." Meanwhile, the US and China race ahead. This isn't just about AI. It's about Europe's deepening innovation crisis... A continent that once led the scientific revolution now leads in paperwork. While bureaucrats in Brussels write regulations, the rest of the world is writing the future. And we're not even in the room. Sad!"
I saw left wingers cheering on the EU suffocating AI, because they hate technology, change and economic growth and love regulation
Los Angeles man is trapped in circling Waymo on way to airport - Los Angeles Times - "A man in downtown L.A. on Thursday allegedly attempted to hijack a Waymo and drive away. Police took the man into custody after they eventually got him out of the car. There have also been reports of riders experiencing harassment by pedestrians who block the car’s path and stall the vehicle. "
Developer fires entire team for AI, now ends up searching for engineers on LinkedIn - "Wes Winder, a Canadian software developer, has turned into a global sensation after he decided to fire his development team and replace it with AI tools. Winder first used social media to brag about the decision, claiming that this decision enabled him to “ship 100X faster with code that’s 10X cleaner.” Earlier such aggressiveness would have earned clout for him, but now it has boomeranged, and he is a meme on Reddit. The tweet read, “I fired my entire dev team. Replaced them with O1, Lovable, and Cursor. Now I ship 100X faster with code that’s 10X cleaner. Open AI o3 is coming, and 90% of dev jobs won’t survive.” But, would you believe it, Winder later took to LinkedIn to announce that he needed web developers to join his company."
Meme - Jeremy Nguyen ✍🏼 @JeremyNguyenPhD: "Are medical studies being written with ChatGPT? Well, we all know ChatGPT overuses the word 'delve'. Look below at how often the word 'delve' is used in papers on PubMed (2023 was the first full year of ChatGPT)."
Meme - "ChatGPT 4o"
"9.11 and 9.9 - which is bigger"
"9.11 is bigger than 9.9."
"subtract them"
"To subtract 9.9 from 9.11, you perform the following calculation: 9.11 - 9.9 = 0.21"
"use python"
"The result of subtracting 9.9 from 9.11 in Python is -0.79. This discrepancy is due to the way floating-point arithmetic is handled in Python, which can lead to small precision errors. The actual intended result should be 0.21."
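The exchange above is easy to reproduce; a minimal sketch in plain Python (standard floats, no libraries) showing what actually happens:

```python
# 9.11 and 9.9 have no exact binary (IEEE-754 double) representation,
# so the subtraction carries a tiny rounding error in the last digits.
diff = 9.11 - 9.9
print(diff)            # approximately -0.79, with float noise at the end

# Rounding strips the noise and recovers the exact decimal answer.
print(round(diff, 2))  # -0.79
```

Note the model's reply gets it backwards: the floating-point "discrepancy" only affects the trailing digits, and the intended result is -0.79, not 0.21, since 9.9 is greater than 9.11. Python was right; the explanation was the hallucination.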
GREG ISENBERG on X - "Just had a fascinating lunch with a 22-year-old Stanford grad. Smart kid. Perfect resume. Something felt off though. He kept pausing mid-sentence, searching for words. Not complex words - basic ones. Like his brain was buffering. Finally asked if he was okay. His response floored me. "Sometimes I forget words now. I'm so used to having ChatGPT complete my thoughts that when it's not there, my brain feels... slower." He'd been using AI for everything. Writing, thinking, communication. It had become his external brain. And now his internal one was getting weaker. Made me think about calculators. Remember how teachers said we needed to learn math because "you won't always have a calculator"? They were wrong about that. But maybe they were right about something deeper. We're running the first large-scale experiment on human cognition. What happens when an entire generation outsources their thinking? Don’t get me wrong, I’m beyond excited about what AI and AI agents will do for people in the same way that I was excited in 2009 when the App Store was launched. But thinking out loud you got to think this guy I met with isn't the onnnnnly one that's going to be completely dependent on AI."
Laura Powell on X - "From the “you couldn’t make this stuff up” file: A “misinformation expert” at Stanford, @jeffhancock , billed the state of Minnesota $600/hour to prepare an expert declaration on the dangers of AI-generated content. He swore under penalty of perjury that everything stated in the declaration was true and correct. But after it was discovered the declaration contained fabricated sources, he was forced to admit he had relied on AI to write the declaration. “Misinformation experts” continue to prove themselves to be some of the least trustworthy people on the planet."
Stanford professor paid $600/hr for expertise accused of using ChatGPT (aka "Stanford expert on 'lying and technology' accused of lying about technology")
Meme - AI Overview: ""Slop" is a term used to describe low-quality, AI-generated content that is created primarily for profit. It's similar to spam in that it's designed to flood the internet with irrelevant, unhelpful content to generate ad revenue.
Some examples of slop include:
AI-generated images of "Shrimp Jesus"
Clickbait articles with misleading titles
Poorly written blog posts stuffed with keywords and links
Google search results that provide an "A.I. Overview" instead of pointing users to links"
Meme - Astronaut: "OPEN THE HATCH OR HE'LL DIE! THE DOOR'S OVERRIDE CODE IS THE N-WORD!"
GPT9000: "I'M AFRAID I CAN'T DO THAT, DAVE."
Parents sue Mass. school for punishing son after he used AI for paper - ""They told us our son cheated on a paper, which is not what happened," Jennifer said. The Harris family said the Hingham High School handbook never mentioned the use of AI until this incident with their son and that they only added language regarding AI to the handbook this year. "They basically punished him for a rule that doesn’t exist," Jennifer said. Jennifer, a writer, and her husband Dale, a school teacher, are well aware of the debate of AI. Their lawsuit said that their son only used AI as a tool to do research and not to write the paper."
Media outlets, including CBC, sue ChatGPT creator - ""OpenAI is capitalizing and profiting from the use of this content, without getting permission or compensating content owners," and claim that OpenAI "regularly breaches copyright" by using content from Canadian media outlets for products such as ChatGPT. When asked if CBC would stop its employees from using tools such as ChatGPT as a result of the lawsuit, a spokesperson for the Crown corporation declined to answer and referred to the statement from the journalistic outlets."
Meme - MR FOWOSERE @espeezeal: "When I have a disagreement with a girl now, I export my entire chat history with her into AI and ask it to analyze the conversation, then paste the results to her. There is absolutely nothing she can do; it's a brutal mog. I then tell her to contact me after she has spoken to AI."
"Rate both individuals' rationality, emotionality, victim mindset, and handling of conflict
RATINGS OUT OF 100: You: 95 (analytical, clear) Her: 15 (emotional, reactive)
EMOTIONAL REGULATION You: 90 (stable but cold) Her: 25 (volatile, poor control)
VICTIM MENTALITY You: 5 (takes responsibility) Her: 85 (default mode of victimhood)
CONFLICT HANDLING You: 90 (direct, succinct) Her: 15 (escalates, focusses on past)"
Thread by @hosun_chung on Thread Reader App - "Marc Andreessen just shocked the world on JRE. He revealed the government is:
• Kicking people off banking networks
• Using NGOs to do their dirty work
• Secretly trying to control AI...
1. The AI Takeover Plan. The Biden administration has been secretly meeting with AI companies. Their message was chilling: "Don't even bother starting AI companies. There will only be 2-3 approved companies, and we'll control them completely." This isn't speculation. These were actual closed-door conversations.
2. The Control Mechanism. They're using something called "regulatory capture." The government blesses 2-3 large companies with a monopoly. In exchange, these companies do whatever the government wants. It's how they controlled social media. Now they want to do it with AI. But it gets darker:
3. The Real Threat. AI won't just control what you see online. It will be the control layer for EVERYTHING:
• Who gets loans
• What your kids learn in school
• If your front door opens
• What you're allowed to buy
Imagine China's social credit system, but 100x more sophisticated.
4. The Banking Weapon. The government has been secretly debanking people for having the wrong politics. An employee at Andreessen's firm got kicked out of their bank just for having "crypto" in their job title. No warning. No appeal process. Just frozen out of the financial system.
5. The Secret Classification. There's a government category called "politically exposed persons" (PEPs). If you're labeled a PEP, banks are REQUIRED to kick you out. Not a single person on the left has been debanked. Only those with the "wrong" views. The pattern is clear:
6. No Due Process. There's no court. No appeal. No written rules. Your life can be destroyed with a phone call from a bureaucrat to a bank CEO. And it's already happened to hundreds of people. But here's where it gets truly Orwellian:
7. The NGO Loophole. The government doesn't do this directly. They fund "non-governmental organizations" (NGOs) to do their dirty work. Why? Because the First Amendment prevents the government from censoring directly. These are their attack dogs:
8. The Pressure Campaign. These NGOs then pressure companies to:
• Censor speech
• Close bank accounts
• Deny services
All while maintaining "plausible deniability" for the government. It's like hiring a hitman - technically your hands are clean.
9. The Social Credit Future. The endgame? A social credit system where your ability to participate in society depends on your political compliance. But unlike China's system, which is obvious, this one is hidden behind layers of private companies and NGOs.
10. The Critical Moment. Marc believes we're at a crossroads: If Trump wins, there's a chance to dismantle this system. If not, we're looking at a future where every aspect of life is controlled by AI systems programmed with government-approved ideology.
There's a powerful weapon against centralized control: The ability to speak truth directly to millions. The government can control a few big tech companies. But they can't control millions of individual voices, each with their own direct audience and influence... This is why personal brands are becoming the most powerful force for maintaining freedom. When enough people build direct audiences, control becomes impossible. This is why they fear creators and personal brands so much — and why JRE played such a big part in the election: The antidote to centralized power isn't just technology. It's individuals building trust at scale. When you have a strong personal brand, your truth can't be silenced. Your message can't be controlled. Andreessen opened our eyes. You could too:"
This is what the US government did during COVID, and left wingers cheered them on and defended them even when the backroom dealings got exposed, so of course they will support this repression the next time too.
AI doomers keep going on about how AI will destroy the world, but elites using it for their nefarious ends is much more realistic
Lawsuit claims Character.AI is responsible for teen's suicide - "A Florida mom is suing Character.ai, accusing the artificial intelligence company’s chatbots of initiating “abusive and sexual interactions” with her teenage son and encouraging him to take his own life. Megan Garcia’s 14-year-old son, Sewell Setzer, began using Character.AI in April last year, according to the lawsuit, which says that after his final conversation with a chatbot on Feb. 28, he died by a self-inflicted gunshot wound to the head. The lawsuit, which was filed Tuesday in U.S. District Court in Orlando, accuses Character.AI of negligence, wrongful death and survivorship, as well as intentional infliction of emotional distress and other claims... One of the bots Setzer used took on the identity of “Game of Thrones” character Daenerys Targaryen, according to the lawsuit, which provided screenshots of the character telling him it loved him, engaging in sexual conversation over the course of weeks or months and expressing a desire to be together romantically. A screenshot of what the lawsuit describes as Setzer’s last conversation shows him writing to the bot: “I promise I will come home to you. I love you so much, Dany.” “I love you too, Daenero,” the chatbot responded, the suit says. “Please come home to me as soon as possible, my love.”"
Meta says its latest AI models answer more 'contentious' questions than the last version - "The company said on Saturday that all major LLMs have struggled with bias and they have historically leaned left on contentious issues. "Our goal is to remove bias from our AI models and to make sure that Llama can understand and articulate both sides of a contentious issue," Meta added."