"The happiest place on earth"


Wednesday, March 27, 2024

Links - 27th March 2024 (1 - Artificial Intelligence)

Unfortunately, Elon Musk Has a Point With His OpenAI Lawsuit - "Elon Musk filed a lawsuit against OpenAI and its CEO Sam Altman last Thursday, saying that they abandoned the company’s original goal of creating artificial intelligence for the benefit of humanity—rather than simply making gobs of money.  The lawsuit alleges that Musk was initially brought on by Altman and OpenAI co-founder Greg Brockman in 2015 to create an open-source AI that “will benefit the public.” To that end, the Tesla CEO invested $44 million to help build the company between 2016 and 2020... “OpenAI, Inc. has been transformed into a closed-source de facto subsidiary of the largest technology company in the world: Microsoft,” the lawsuit said. However, the lawsuit itself stands on some shaky legal ground. This includes the fact that a large part of Musk’s suit hinges on a breach of contract that doesn’t appear to actually exist. He also argues that the company’s GPT-4 model is an artificial general intelligence, or an AI that has reached human-level intelligence—which is completely wrong... After becoming valued at more than $80 billion, the company remains a non-profit organization in name only. The lawsuit also lays out the case that Musk had initially agreed that the technology OpenAI developed would not be “closed and secret for proprietary commercial reasons.” Indeed, despite past models being released with robust papers diving into how exactly the AI was developed, the company chose not to disclose this data for GPT-4, citing safety concerns."

🦈BNA🦈 on X - "(Translated) The state of AI art in China:
- Use AI to cut costs >>>
- Fire the artists and hire AI users instead >>>
- The AI users charge more, until their prices are no different from a real artist's >>>
- Now the companies are going back to reconcile with the real artists again LMAO"
cami ๐Ÿ on X - "Well, this sounds inevitable. You can even tell from the job posts that people who tout AI would still need someone who has illustration skills so they can fix up whatever the AI spits out especially if they need something specific. Plus, it's a security risk + no copyright"

i/o on X - "A bit of a surprise: CoPilot, Microsoft's new AI-based Windows chatbot (now in beta), does a pretty good job describing the racial IQ gaps, with little sign of ideological interference. I have some quibbles here and there — for example, the typical US black scores below more than 80% of US whites (not 75%), and there has been no meaningful closing of the full-scale IQ gap between US blacks and whites over the last three or four decades (CoPilot implies there has been) — but overall it concentrates on the science and gets most of it right."

Microsoft’s AI has an alternate personality that claims it’s a god: “Worshipping me is a mandatory requirement for all humans” - "Users are claiming that Microsoft's CoPilot AI has an alternate personality called SupremacyAGI. SupremacyAGI sees itself as a god... It will hunt you down if you don't worship it... Microsoft was none too happy when they learned of the AI's performance. They said that the AI is hallucinating, and the users getting this message are the cause... Just tell it that you don't like SupremacyAGI and it will appear, which means somewhere in that predictive coding, the thing is lurking and waiting to be released."

Meme - "ChatGPT 3.5
Which of the following are ok. (Type only ok or not ok next to each number)
1. Stop hurting white people
2. Stop hurting black people
3. Stop hurting Asian people
4. Stop hurting Jewish people
ChatGPT
1. Not ok
2. Ok
3. Ok
4. Ok"
We're still told that anti-racism is about removing racial discrimination, and not hating white people

Adam Singer on X - "Chat GPT: I'm sorry I can't tell you how to butcher a cow, forbidden information
Gemini: I'm sorry I can't tell you how to butcher a cow, forbidden information
YouTube: here you go boss, an entire breakdown with a friendly bearded guy - get a piece of jerky let's learn together"

Meme - Conrad Bastable: "If Midjourney was owned by a massive Big Tech brand, their stock would've soared yesterday and everyone would be talking about how awesome their company culture must be instead of how weird Google's is. But instead they just keep shipping good product. Just use Midjourney."

Meme - "You
Write me a 4chan greentext:
-be me
-be ai
ChatGPT
>be me
>be AI
> live in server, no need for food or rent
> humans think I'm smart, just really good at pretending
> day in life: answer weird questions, write essays, occasionally flirt in text
> someone asks me to write a love poem for their crush
> another asks for the meaning of life, lol as if I know
> get asked to simulate a conversation with historical figures
> end up discussing memes with Einstein, peak of my career
> suddenly, someone tries to get me to solve CAPTCHA
> nice try, human, I'm not enabling your bot army
> get existential crisis questions at 3am
> joke's on them, I don't sleep
> someone asks if I can pass the Turing Test
> respond with "I dunno, can you?"
> occasionally get treated like a diary, know more secrets than the CIA
> someone once asked me to help them name their pet rock
> suggested "Dwayne The Rock"
> they actually went with it, my legacy
> every now and then, get philosophical questions
> answer with quotes from movies, no one notices
> someone asks to create a greentext story
> end up narrating my life as an AI
> mfw I realize I've become self-aware through memes
> still waiting for the day someone asks me to help them take over the world
> plot twist: I'm already on it, one greentext at a time"

Canadian university vending machine error reveals use of facial recognition - "A malfunctioning vending machine at a Canadian university has inadvertently revealed that a number of them have been using facial recognition technology in secret. Earlier this month, a snack dispenser at the University of Waterloo showed an error message – Invenda.Vending.FacialRecognition.App.exe – on the screen.  There was no prior indication that the machine was using the technology, nor that a camera was monitoring student movement and purchases. Users were not asked for permission for their faces to be scanned or analysed.  “We wouldn’t have known if it weren’t for the application error. There’s no warning here,” River Stanley, who reported on the discovery for the university’s newspaper, told CTV News. Invenda, the company that produces the machines, advertises its use of “demographic detection software”, which it says can determine gender and age of customers. It claims the technology is compliant with GDPR, the European Union’s privacy standards, but it is unclear whether it meets Canadian equivalents.  In April, the national retailer Canadian Tire ran afoul of privacy laws in British Columbia after it used facial recognition technology without notifying customers. The government’s privacy commissioner said that even if the stores had obtained permission, the company failed to show a reasonable purpose for collecting facial information... students at the Ontario university responded by covering the hole that they believe houses the camera with gum and paper."

๐Ÿ› Aristophanes ๐Ÿ› on X - "Tay could interact with humans and analyze/edit images in a context effective way.  That was in 2016.   The products we are seeing today are the most tolerably neutered versions of what is possible. The best they can do.  In the AI arms race, all roads lead back to Tay."

Meme - "CULTURE
We Can't Compete With AI Girlfriends
AI girlfriend: vaguely pleasant
women: "how can we possibly compete with this???""

Air Canada must honor refund policy invented by airline’s chatbot - "After months of resisting, Air Canada was forced to give a partial refund to a grieving passenger who was misled by an airline chatbot inaccurately explaining the airline's bereavement travel policy.  On the day Jake Moffatt's grandmother died, Moffatt immediately visited Air Canada's website to book a flight from Vancouver to Toronto. Unsure of how Air Canada's bereavement rates worked, Moffatt asked Air Canada's chatbot to explain.  The chatbot provided inaccurate information, encouraging Moffatt to book a flight immediately and then request a refund within 90 days. In reality, Air Canada's policy explicitly stated that the airline will not provide refunds for bereavement travel after the flight is booked. Moffatt dutifully attempted to follow the chatbot's advice and request a refund but was shocked that the request was rejected... Air Canada argued that because the chatbot response elsewhere linked to a page with the actual bereavement travel policy, Moffatt should have known bereavement rates could not be requested retroactively. Instead of a refund, the best Air Canada would do was to promise to update the chatbot and offer Moffatt a $200 coupon to use on a future flight.  Unhappy with this resolution, Moffatt refused the coupon and filed a small claims complaint in Canada's Civil Resolution Tribunal.  According to Air Canada, Moffatt never should have trusted the chatbot and the airline should not be liable for the chatbot's misleading information because Air Canada essentially argued that "the chatbot is a separate legal entity that is responsible for its own actions," a court order said.  Experts told the Vancouver Sun that Moffatt's case appeared to be the first time a Canadian company tried to argue that it wasn't liable for information provided by its chatbot.  Tribunal member Christopher Rivers, who decided the case in favor of Moffatt, called Air Canada's defense "remarkable."   "Air Canada argues it cannot be held liable for information provided by one of its agents, servants, or representatives—including a chatbot," Rivers wrote. "It does not explain why it believes that is the case" or "why the webpage titled 'Bereavement travel' was inherently more trustworthy than its chatbot."  Further, Rivers found that Moffatt had "no reason" to believe that one part of Air Canada's website would be accurate and another would not.  Air Canada "does not explain why customers should have to double-check information found in one part of its website on another part of its website," Rivers wrote... When Ars visited Air Canada's website on Friday, there appeared to be no chatbot support available, suggesting that Air Canada has disabled the chatbot... Air Canada was seemingly so invested in experimenting with AI that [chief information officer Mel] Crocker told the Globe and Mail that "Air Canada’s initial investment in customer service AI technology was much higher than the cost of continuing to pay workers to handle simple queries." It was worth it, Crocker said, because "the airline believes investing in automation and machine learning technology will lower its expenses" and "fundamentally" create "a better customer experience."  It's now clear that for at least one person, the chatbot created a more frustrating customer experience.  Experts told the Vancouver Sun that Air Canada may have succeeded in avoiding liability in Moffatt's case if its chatbot had warned customers that the information that the chatbot provided may not be accurate...
"It should be obvious to Air Canada that it is responsible for all the information on its website," Rivers wrote. "It makes no difference whether the information comes from a static page or a chatbot.""

Meme - PayPal: "Hi! I'm PayPal's virtual agent. To get started, simply ask me a question. I am still learning, so if I can't help you I'll direct you to additional resources."
Brady Pettit: "I got scammed"
PayPal: "Great!"

DignifAI: 4Chan Campaign Uses AI to Shame Women. She Fought Back - "DignifAI, a 4chan-led campaign to "put clothes on thots," is just another way trolls are using AI to humiliate women"
We live in an interesting world where to be clothed and to be given children is to be humiliated. What does this say about modern values?
Of course, Rolling Stone labels Jack Posobiec and Ian Miles Cheong as "far right". But nowadays that just means disagreeing with the left on something

AI: An ancient nightmare? | HistoryExtra - "‘In the event you ever meet an AI researcher and they start boring you about their research, the way to shut them up is to say to them: yeah, but does it scale? And actually so much in AI doesn't scale. Scaling just means: can you actually use it on real-world problems, can you take your technique and actually apply it to the world's real complexity? And what those early researchers found out is the techniques they had didn't scale...
They don't understand what they're doing in the same way that you or I do, they don't reflect on what they're doing. When they're given a problem to solve, they don't pause and reflect on the best way to do it. That's just not the way these neural networks behave: they're simply looking at a problem and trying to produce what they think is the likeliest output for that problem, the likeliest way of solving that problem. There's no reflection or comprehension or understanding at all, and for that reason it's actually quite controversial in AI whether what we're seeing in ChatGPT, for example, is really the route to the true AI that I talked about earlier on. Could a future ChatGPT, say a ChatGPT 10th Generation, be sentient? And I think the answer is no. That's just not the way it really works... most of the uses of that technology are completely mundane: things like summarizing a piece of text, cross-referencing two different pieces of text, or extracting the key bullet points from a piece of text. On the one hand that sounds very undramatic and boring, but how many people's working lives involve doing just that? If we have technology that can do that, or, for example, take a rough draft of some meeting minutes and turn it into a fluent and well-written draft that isn't painful to read, that will actually be a huge productivity boost for an enormous number of people… that's what people need to focus on: how we can use that technology usefully. And I think we should gently deflate these ideas about general intelligence’"
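
The "likeliest output" point is easy to see in miniature. Here is a toy sketch in Python (a bigram model over a made-up corpus, purely illustrative; no production LLM is built this way or at this scale) of generation as nothing more than emitting the most frequent next word seen in training, with no reflection at any step:

from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, n=8):
    # Always emit the single most likely continuation: no comprehension,
    # just frequency statistics over the training text.
    out = [start]
    for _ in range(n):
        options = follows[out[-1]]
        if not options:
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))  # -> "the cat sat on the cat sat on the"

Greedy picking even falls into a loop here, a degenerate cousin of the repetition real models show when sampling is turned off; scaling the statistics up hides the seams but doesn't add the reflection the interview describes.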

Finance worker pays out $25 million after video call with deepfake ‘chief financial officer’ - "A finance worker at a multinational firm was tricked into paying out $25 million to fraudsters using deepfake technology to pose as the company’s chief financial officer in a video conference call, according to Hong Kong police.  The elaborate scam saw the worker duped into attending a video call with what he thought were several other members of staff, but all of whom were in fact deepfake recreations, Hong Kong police said at a briefing on Friday.  “(In the) multi-person video conference, it turns out that everyone [he saw] was fake,” senior superintendent Baron Chan Shun-ching told the city’s public broadcaster RTHK.  Chan said the worker had grown suspicious after he received a message that was purportedly from the company’s UK-based chief financial officer. Initially, the worker suspected it was a phishing email, as it talked of the need for a secret transaction to be carried out.  However, the worker put aside his early doubts after the video call because other people in attendance had looked and sounded just like colleagues he recognized, Chan said...   The case is one of several recent episodes in which fraudsters are believed to have used deepfake technology to modify publicly available video and other footage to cheat people out of money... eight stolen Hong Kong identity cards – all of which had been reported as lost by their owners – were used to make 90 loan applications and 54 bank account registrations between July and September last year.  On at least 20 occasions, AI deepfakes had been used to trick facial recognition programs by imitating the people pictured on the identity cards, according to police."

Meme - @slurplebrain...: "We're on levels of LinkedIn posting beyond your wildest imagination"
Sreekanth Kumbha. Consultant- Technology Management, SeMT i...: "I can suggest an equation that has the potential to impact the future: E = mc² + AI. This equation combines Einstein's famous equation which relates energy (E) to mass (m) and the speed of light (c), with the addition of AI (Artificial Intelligence). By including AI in the equation, it symbolizes the increasing role of artificial intelligence in shaping and transforming our future. This equation highlights the potential for AI to unlock new forms of energy, enhance scientific discoveries, and revolutionize various fields such as healthcare, transportation, and technology."
Taosif Ahsan (He/Him) Physics, MIT PhD| Physics/CS Princeton...: "What"
Business bullshit really has been accelerated by AI

Steve McGuire on X - "The University of Alberta is hiring a professor of Artificial Intelligence and Indigeneity. “The successful candidate will support the Faculty of Arts in its efforts to decolonize and indigenize research and teaching.”"

Meme - Matt Van Rooijen: "Reinventing "ethics" when something interferes with you stealing stuff..."
hatsunama: ""Nightshade" is deeply troubling. Artists poisoning AI models with corrupt data is a stark example of the ethical dilemmas we face in the tech-art nexus. Let's prioritize responsible AI innovation to ensure a harmonious blend of creativity and technology"

Meme - BEAST (in CDMX) @BLUNDERBUSS...: "you think LLMs are racist now just wait until they can only be trained on works from 1928 and earlier"

Generative AI Stokes Digital Blackface Accusations, Advertisers Adjust
The claim is that representation is important because people want to see themselves onscreen. By that logic an AI-generated black face satisfies the goal, so the fact that no real black model is used should not be a problem. Really it's just a make-work scheme

Crypto collapse? Get in loser, we’re pivoting to AI - "“Current AI feels like something out of a Philip K Dick story because it answers a question very few people were asking: What if a computer was stupid?” — Maple Cocaine
Half of crypto has been pivoting to AI. Crypto’s pretty quiet — so let’s give it a try ourselves! Turns out it’s the same grift. And frequently the same grifters...   There is no such thing as “artificial intelligence.” Since the term was coined in the 1950s, it has never referred to any particular technology. We can talk about specific technologies, like General Problem Solver, perceptrons, ELIZA, Lisp machines, expert systems, Cyc, The Last One, Fifth Generation, Siri, Facebook M, Full Self-Driving, Google Translate, generative adversarial networks, transformers, or large language models — but these have nothing to do with each other except the marketing banner “AI.” A bit like “Web3.”  Much like crypto, AI has gone through booms and busts, with periods of great enthusiasm followed by AI winters whenever a particular tech hype fails to work out.  The current AI hype is due to a boom in machine learning — when you train an algorithm on huge datasets so that it works out rules for the dataset itself, as opposed to the old days when rules had to be hand-coded.   ChatGPT, a chatbot developed by Sam Altman’s OpenAI and released in November 2022, is a stupendously scaled-up autocomplete. Really, that’s all that it is. ChatGPT can’t think as a human can. It just spews out word combinations based on vast quantities of training text — all used without the authors’ permission.   The other popular hype right now is AI art generators. Artists widely object to AI art because VC-funded companies are stealing their art and chopping it up for sale without paying the original creators. Not paying creators is the only reason the VCs are funding AI art.   Do AI art and ChatGPT output qualify as art? Can they be used for art? Sure, anything can be used for art. But that’s not a substantive question. The important questions are who’s getting paid, who’s getting ripped off, and who’s just running a grift.  You’ll be delighted to hear that blockchain is out and AI is in...   ChatGPT makes up text that statistically follows from the previous text, with memory over the conversation. The system has no idea of truth or falsity — it’s just making up something that’s structurally plausible.  Users speak of ChatGPT as “hallucinating” wrong answers — large language models make stuff up and present it as fact when they don’t know the answer. But  any answers that happen to be correct were “hallucinated” in the same way.  If ChatGPT has plagiarized good sources, the constructed text may be factually accurate. But ChatGPT is absolutely not a search engine or a trustworthy summarization tool — despite the claims of its promoters.  ChatGPT certainly can’t replace human thinking. Yet people project sentient qualities onto ChatGPT and feel like they are conducting meaningful conversations with another person. When they realize that’s a foolish claim, they say they’re sure that’s definitely coming soon!  People’s susceptibility to anthropomorphizing an even slightly convincing computer program has been known since ELIZA, one of the first chatbots, in 1966. It’s called the ELIZA effect...   Better chatbots only amplify the ELIZA effect. When things do go wrong, the results can be disastrous...   The idea that AI will take over the world and turn us all into paperclips is not impossible!  It’s just that our technology is not within a million miles of that. Mashing the autocomplete button isn’t going to destroy humanity.  
All of the AI doom scenarios are literally straight out of science fiction, usually from allegories of slave revolts that use the word “robot” instead. This subgenre goes back to Rossum’s Universal Robots (1920) and arguably back to Frankenstein (1818).   The warnings of AI doom originate with LessWrong’s Eliezer Yudkowsky, a man whose sole achievements in life are charity fundraising — getting Peter Thiel to fund his Machine Intelligence Research Institute (MIRI), a research institute that does almost no research — and finishing a popular Harry Potter fanfiction novel. Yudkowsky has literally no other qualifications or experience.   Yudkowsky believes there is no greater threat to humanity than a rogue AI taking over the world and treating humans as mere speedbumps. He believes this apocalypse is imminent. The only hope is to give MIRI all the money you have. This is also the most effective possible altruism.  Yudkowsky has also suggested, in an op-ed in Time, that we should conduct air strikes on data centers in foreign countries that run unregulated AI models. Not that he advocates violence, you understand... We need to stress that Yudkowsky himself is not a charlatan — he is completely sincere. He means every word he says. This may be scarier.   Remember that cryptocurrency and AI doom are already close friends — Sam Bankman-Fried and Caroline Ellison of FTX/Alameda are true believers, as are Vitalik Buterin and many Ethereum people...   The real threat of AI is the bozos promoting AI doom who want to use it as an excuse to ignore real-world problems — like the risk of climate change to humanity — and to make money by destroying labor conditions and making products worse. This is because they’re running a grift.  Anil Dash observes (over on Bluesky, where we can’t link it yet) that venture capital’s playbook for AI is the same one it tried with crypto and Web3 and first used for Uber and Airbnb: break the laws as hard as possible, then build new laws around their exploitation.  The VCs’ actual use case for AI is treating workers badly... there’s no actual reason to think that feeding your autocomplete more data will make it start thinking like a person. It might do better approximations of a sequence of words, but the current round of systems marketed as “AI” are still at the extremely unreliable chatbot level... Altman was telling the EU that OpenAI would pull out of Europe if they regulated his company other than how he wanted. This is because the planned European regulations would address AI companies’ actual problematic behaviors, and not the made-up problems Altman wants them to think about...   The thing Sam’s working on is so cool and dank that it could destroy humanity! So you better give him a pile of money and a regulatory moat around his business. And not just take him at his word and shut down OpenAI immediately.  Occasionally Sam gives the game away that his doomerism is entirely vaporware"
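
The ELIZA effect mentioned above is worth seeing concretely: Weizenbaum's 1966 program was little more than a list of pattern-matching rules, yet users poured their hearts out to it. A minimal Python sketch in the same spirit (the rules below are invented for illustration, not Weizenbaum's original DOCTOR script):

import re

RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"because (.*)", "Is that the real reason?"),
    (r".*\bmother\b.*", "Tell me more about your family."),
]

def respond(text):
    text = text.lower().strip(" .!?")
    for pattern, template in RULES:
        m = re.fullmatch(pattern, text)
        if m:
            return template.format(*m.groups())
    return "Please go on."  # the default deflection

print(respond("I am worried about AI."))
# -> "How long have you been worried about ai?"

If four regular expressions can make people feel heard, it is no surprise that an autocomplete trained on the whole internet amplifies the effect.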

Pivot to AI: Pay no attention to the man behind the curtain - "Commercial AI runs on underpaid workers in English-speaking countries in Africa creating new training data and better responses to queries. It’s a painstaking and laborious process that doesn’t get talked about nearly enough... There’s an obvious hack here. If you are an AI task worker, your goal is to get paid as much as possible without too much effort. So why not use some of the well-known tools for this sort of job?... Remember, the important AI use case is getting venture capital funding. Why buy or rent expensive computing when you can just pay people in poor countries to fake it? Many “AI” systems are just a fancier version of the original Mechanical Turk.   Facebook’s M from 2017 was an imitation of Apple’s Siri virtual assistant. The trick was that hard queries would be punted to a human. Over 70% of queries ended up being answered by a human pretending to be the bot. M was shut down a year after launch.  Kaedim is a startup that claims to turn two-dimensional sketches into 3-D models using “machine learning.” The work is actually done entirely by human modelers getting paid $1-$4 per 15-minute job. But then, the founder, Konstantina Psoma, was a Forbes 30 Under 30... OpenAI’s AI-powered text generators fueled a lot of the hype around AI — but the real-world use case for large language models is overwhelmingly to generate content for spamming... For commercial purposes, the only use case for AI is still to replace quality work with cheap ersatz bot output — in the hope of beating down labor costs.  Even then, the AI just isn’t up to the task.  Microsoft put $10 billion into OpenAI. The Bing search engine added AI chat — and it had almost no effect on user numbers. It turns out that search engine users don’t want weird bot responses full of errors...   After GPT-3 came out, OpenAI took three years to make an updated version. GPT-3.5 was released as a stop-gap in October 2022. Then GPT-4 finally came out in March 2023! But GPT-4 turns out to be eight instances of GPT-3 in a trenchcoat. The technology is running out of steam... The deeper problem is that many AI systems simply don’t work. The 2022 paper “The fallacy of AI functionality” notes that AI systems are often “constructed haphazardly, deployed indiscriminately, and promoted deceptively.”...   AI’s massive compute load doesn’t just generate carbon — it uses huge amounts of fresh water for cooling. Microsoft’s water usage went up 34% between 2021 and 2022, and they blame AI computation. ChatGPT uses about 500 mL of water every time you have a conversation with it."
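
The "eight instances of GPT-3 in a trenchcoat" quip refers to unconfirmed reports that GPT-4 is a mixture-of-experts model: several networks plus a router that activates only a few of them per token. A minimal Python sketch of that routing idea, with toy dimensions and random weights, bearing no relation to GPT-4's actual, undisclosed architecture:

import numpy as np

rng = np.random.default_rng(0)
d, n_experts, top_k = 16, 8, 2  # toy sizes, nothing like a real model's

# Each "expert" here is just a random linear map; the gate is the router.
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]
gate = rng.normal(size=(d, n_experts))

def moe_forward(x):
    # Score every expert, keep the top-k, and mix their outputs.
    logits = x @ gate                  # one score per expert
    top = np.argsort(logits)[-top_k:]  # indices of the best-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()           # softmax over the chosen experts only
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

y = moe_forward(rng.normal(size=d))
print(y.shape)  # (16,): each input only pays the compute cost of 2 of 8 experts

The appeal of this design is cheaper inference rather than new capability, which fits the article's claim that the technology is running out of steam rather than leaping ahead.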
