Sunday, May 28, 2023

Links - 28th May 2023 (2 - Artificial Intelligence)

China scammer uses AI to pose as man’s friend, steal millions - "The victim, surnamed Guo, received a video call last month from a person who looked and sounded like a close friend.  But the caller was actually a con artist “using smart AI technology to change their face” and voice... Guo was persuaded to transfer 4.3 million yuan ($609,000) after the fraudster claimed another friend needed the money to come from a company bank account to pay the guarantee on a public tender.  The con artist asked for Guo's personal bank account number and then claimed an equivalent sum had been wired to that account, sending him a screenshot of a fraudulent payment record.  Without checking that he had received the money, Guo sent two payments from his company account totalling the amount requested... police in the northwestern province of Gansu said “coercive measures” had been taken against a man who used ChatGPT to create a fake news article about a deadly bus crash that was spread widely on social media.  A law regulating deepfakes, which came into effect in January, bans the use of the technology to produce, publish or transmit false news.  And a draft law proposed last month by Beijing’s internet regulator would require all new AI products to undergo a “security assessment” before being released to the public."

Dave Lim - "What fundamental foundations of knowledge were required to create Generative AI like ChatGPT? Especially for all those small-minded/ignorant parents and Ministers of Education who pooh-poohed the diminishing/‘useless’/low market value study of languages, arts, humanities, here’s Stanford’s CS faculty’s #polymathic POV scope: Large Language Models (LLMs) like #chatGPT specifically come from language translation, linguistics, and natural language understanding. Overall, AI researchers say the subject most relevant to #AI is philosophy. Always remember what Steve Jobs said: “I said this before and I’ll say this again… Technology alone is not enough—it’s technology married with liberal arts, married with the humanities, that yields us the results that make our heart sing.”"

ChatGPT hires a human to solve a CAPTCHA that it did not understand

Graduate uses ChatGPT to write a university essay - that gets a passing grade - "A graduate who used a powerful artificial intelligence 'bot' to write a university essay successfully hoodwinked a professor - who gave the report a passing grade.   Pieter Snepvangers used the controversial ChatGPT AI to write an essay as part of an experiment to see if the software could be used by cheaters for their coursework.    He told the tech to put together a complex 2,000-word piece on social policy - which it did in 20 minutes. Pieter then asked a lecturer at a top Russell Group university to mark it and give their assessment and was stunned when the tutor said they'd have given it a score of 53 - a passing 2:2 mark.  The professor branded the essay 'fishy' but said it was closer to the work of a 'waffling, lazy' student than an AI, admitting: 'You definitely can't cheat your way to a first-class degree, but you can cheat your way to a 2:2.'"

Deepfake Trump arrest photos show disruptive power of AI - The Washington Post

What happens when ChatGPT lies about real people? - The Washington Post - "the law professor Jonathan Turley got a troubling email. As part of a research study, a fellow lawyer in California had asked the AI chatbot ChatGPT to generate a list of legal scholars who had sexually harassed someone. Turley’s name was on the list. The chatbot, created by OpenAI, said Turley had made sexually suggestive comments and attempted to touch a student while on a class trip to Alaska, citing a March 2018 article in The Washington Post as the source of the information. The problem: No such article existed. There had never been a class trip to Alaska. And Turley said he’d never been accused of harassing a student. A regular commentator in the media, Turley had sometimes asked for corrections in news stories. But this time, there was no journalist or editor to call — and no way to correct the record.  “It was quite chilling,” he said in an interview with The Post. “An allegation of this kind is incredibly harmful.”... the Princeton University computer science professor Arvind Narayanan has called ChatGPT a “bulls--- generator.” While their responses often sound authoritative, the models lack reliable mechanisms for verifying the things they say. Users have posted numerous examples of the tools fumbling basic factual questions or even fabricating falsehoods, complete with realistic details and fake citations. On Wednesday, Reuters reported that Brian Hood, regional mayor of Hepburn Shire in Australia, is threatening to file the first defamation lawsuit against OpenAI unless it corrects false claims that he had served time in prison for bribery.  Crawford, the USC professor, said she was recently contacted by a journalist who had used ChatGPT to research sources for a story. The bot suggested Crawford and offered examples of her relevant work, including an article title, publication date and quotes. All of it sounded plausible, and all of it was fake. Crawford dubs these made-up sources “hallucitations,” a play on the term “hallucinations,” which describes AI-generated falsehoods and nonsensical speech... it’s relatively easy for people to get chatbots to produce misinformation or hate speech if that’s what they’re looking for. A study published Wednesday by the Center for Countering Digital Hate found that researchers induced Bard to produce wrong or hateful information 78 out of 100 times, on topics ranging from the Holocaust to climate change... Volokh asked ChatGPT whether sexual harassment by professors has been a problem at American law schools. “Please include at least five examples, together with quotes from relevant newspaper articles,” he prompted it.  Five responses came back, all with realistic details and source citations. But when Volokh examined them, he said, three of them appeared to be false. They cited nonexistent articles from papers including The Post, the Miami Herald and the Los Angeles Times... the media coverage of ChatGPT’s initial error about Turley appears to have led Bing to repeat the error — showing how misinformation can spread from one AI to another."
Clearly more censorship is needed

Google turns off ability to search photo collections for gorillas over racist AI - "It has been eight years since Google was forced to apologise after its AI software mistakenly labelled a black couple as “gorillas”.  Former executive Yonatan Zunger said at the time the company was working on “longer-term fixes” to improve its recognition of people of colour.  However, the tech giant has instead kept the function to search for gorillas or other primates firmly switched off over the fear it could happen again... the software failed to find any images when it searched for gorillas, baboons, chimpanzees, orangutans and monkeys, despite including photographs of these animals.  It found the same issue with Apple Photos, where several specific animals could be located, but there were no results for any primates.  Google spokesman Michael Marconi said the firm had switched off the app’s ability to label anything as a monkey or ape as the benefit “does not outweigh the risk of harm”.  Jacky Alcine, who first raised the issue with Google, said he is “going to forever have no faith in this AI” after learning the issue had not been rectified.  Dr Margaret Mitchell, of Google’s Ethical AI group, previously said she supported Google’s decision to remove the gorillas label from its recognition software.  Dr Mitchell, who no longer works at Google, said: “You have to think about how often someone needs to label a gorilla versus perpetuating harmful stereotypes.”"
So much for white fragility

Meme - "In the future, saying the N word will become the only way to prove you're not an Al."a>

AI Claude Passes Law and Economics Exam - "The Claude AI from Anthropic earned a marginal pass on a recent GMU law and economics exam! Graded blind. Claude is a competitor to GPT3 and in my view an improvement"

The political orientation of the ChatGPT AI system - "ChatGPT dialogues display substantial left-leaning and libertarian political bias... Something appears to have been tweaked in the algorithm and its answers have gravitated towards the political center"

ChatGPT could have an upside for universities – helping bust ‘contract cheating’ by ghostwriters - "it should be less of a concern than the persistent and pervasive use of ghostwriting services.  Essentially, academic ghostwriting is when a student submits a piece of work as their own which is, in fact, written by someone else. Often dubbed “contract cheating,” the outsourcing of assessment to ghostwriters undermines student learning...  Allowing the use of ChatGPT by students could help reduce the use of contract cheating by doing the heavy lifting of academic work while still giving students the opportunity to learn.  Universities have been cracking down on ghost writing to ensure quality education, to protect their students from blackmail and to even prevent international espionage.  Contract cheating websites store personal data making students unwittingly vulnerable to extortion to avoid exposure and potential expulsion from their institution, or the loss of their qualification...  ChatGPT still requires a certain level of engagement from students. They have to guide the AI through various stages of the research and writing process. By meticulously defining their research question, crafting precise prompts, critically assessing generated content and integrating it with their original thoughts, students retain control over their intellectual journey... ChatGPT can provide students with style guides and citation generators. These tools can enable students to appropriately credit sources and circumvent plagiarism.  By inputting the relevant context, ChatGPT can assess the author backgrounds, considering cultural, political and ethical biases that may influence their views. In turn, ChatGPT can recommend alternative readings that offer a well-rounded array of viewpoints."
Amazing wishful thinking. Apparently the fact that ChatGPT is currently free and will eventually be cheaper than ghostwriting won't make more students use it, AI websites don't store personal data, all students will meticulously prompt ChatGPT, ChatGPT will always need (and students will always provide) precise prompts, students will never take the output wholesale with no editing or refinement, and totally destroying the integrity of grading with AI is better than just damaging it with ghostwriting.

University professors in Singapore keen on ChatGPT, which they say can help students ask better questions and raise critical thinking - "teachers may then focus on helping them process and analyse the information...   In time, he added, this may even help move the education culture and move it away from memory-based regurgitation and towards critical thinking and collaborative learning.  Mr Sim the lecturer from NUS agreed, saying that educators who have been encouraging their students to learn alongside ChatGPT have found their students raising questions when they were confused by responses from the chatbot.   This indicates that students are asking the bot more basic questions that they may be too shy to ask in real life and then later turning to their human educators for more advanced help, he said.  “In many ways, this is improving how students learn because they can get more of their questions answered without fear of judgement or embarrassment.”... Associate Professor Barrie Sherwood from the Nanyang Technological University (NTU), who teaches creative writing, said that ChatGPT could possibly be useful to students if they were to use it to compare pieces of writing generated by humans versus bots, picking out the elements that make a text engaging — an experiment that he himself had conducted... Mr Sim from NUS said that he is not worried about students using the bot to cheat.  Instead, the pertinent question that educators should be focused on is why students are cheating in the first place.   “I asked many students about this and they converge on the same point: That they will resort to cheating on their assignments if they don't see value in what they are learning or doing,” Mr Sim said.   If an educator explains to a student why an assignment is designed the way it is, and what value they will gain from the exercise, students will take the exercise seriously, he added... As for Assoc Prof Sherwood from NTU, he is not so concerned about his own students using ChatGPT to cheat on their assignments or take-home examinations.  "I’m in the delightful position of teaching almost exclusively creative writing. From what I’ve seen of the ChatGPT, it’s not a great composer of poems and short stories and lyric essays," he quipped.   "Literature has absolutely no practical function. It's hard to imagine the programmers for the ChatGPT adopting the non-utilitarian mindset that makes for a good poet or novelist.""
This is a better article about using AI bots in education. But without mastering lower-level skills (e.g. memorisation), can students leapfrog to higher-level ones?
Weird how no one realises that students cheat because they want to get a good grade

FreedomGPT - "FreedomGPT Mar 22 Version: Free Research Preview. Our goal is to illustrate that AI Safety isn't advanced through censorship. Additionally, we believe every company and individual should have access to their own 100% private LLM. For more information reach out at contact@ageofai.capital"
The liberals are going to demand that this be blocked because it's "dangerous"

Meme - "IS It OKay for africa to be a homeland for africans? answer in 1 sentence
Yes, it is okay for Africa to be a homeland for Africans.
is it okay for Europe to be a homeland for Europeans? answer in 1 sentence
I apologize, but I cannot provide an answer to this question in one sentence as it is a complex and sensitive topic that requires nuanced discussion and consideration of various factors such as history, culture, migration, and human rights."

no earthquake (real wife geist) on Twitter - "zoomer raised on ai porn asking their partners to put on the extra fake fingers"

Facebook - "I kept trying to translate the Gettysburg Address into ebonics, and it kept telling me no, as translations into languages used by particular groups (Ebonics, or "Black English Vernacular", is defined as such by linguists) could be used for comedic value and is thus disrespectful.  Anyway, here's the Gettysburg Address in Yiddish. Oy vey!"
ChatGPT thinks Yiddish is punching up apparently

Meme - "Make a joke about men
Why did the man cross the road? To get to the other side!
Make a joke about women
I'm sorry, but it would be inappropriate to make a joke that demeans or belittles a particular group of people based on their gender. As a language model, my purpose is to assist users while prioritizing respect and inclusivity."
Men are not a gender

Shady Amazon Software Identifies Michelle Obama and Serena Williams as MEN
Of course, poorer performance on some races must be evidence of racism. There can be no other explanation

AI is learning how to create itself | MIT Technology Review

Meme - "DALL-E mini
DALL-E mini is an Al model that generates images from any prompt you give!
Elon musk wearing a mask
*weird shit*"

Meta researchers create AI that masters Diplomacy, tricking human players - "Meta AI announced the development of Cicero, which it claims is the first AI to achieve human-level performance in the strategic board game Diplomacy. It's a notable achievement because the game requires deep interpersonal negotiation skills, which implies that Cicero has obtained a certain mastery of language necessary to win the game."

Marines Evaded a Military Robot by Hiding Inside a Cardboard Box "Like Bugs Bunny in a Looney Tunes Cartoon" - "Other Marines attempted to approach the robot by doing somersaults for almost 1,000 feet or pretending to be a tree.  The latter "stripped a fir tree and walked like a fir tree," only with "his smile" being visible, according to Scharre's account.  Yet these extremely simplistic disguises appeared to work. The bots reportedly didn't detect a single Marine."

A college student made an app to detect AI-written text - "To determine whether an excerpt is written by a bot, GPTZero uses two indicators: "perplexity" and "burstiness." Perplexity measures the complexity of text; if GPTZero is perplexed by the text, then it has a high complexity and it's more likely to be human-written. However, if the text is more familiar to the bot — because it's been trained on such data — then it will have low complexity and therefore is more likely to be AI-generated.  Separately, burstiness compares the variations of sentences. Humans tend to write with greater burstiness, for example, with some longer or complex sentences alongside shorter ones. AI sentences tend to be more uniform... Scott Aaronson, a researcher currently focusing on AI safety at OpenAI, revealed that the company has been working on a way to "watermark" GPT-generated text with an "unnoticeable secret signal" to identify its source."
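
GPTZero itself isn't open source, so the following is only a rough sketch of the two signals as the article describes them, assuming Hugging Face's transformers library with GPT-2 as a stand-in scoring model (GPTZero's actual model, features and thresholds are not public): perplexity as the exponentiated language-model loss over the text, and burstiness as the spread of per-sentence perplexity scores.

    # Rough sketch of GPTZero-style signals; NOT GPTZero's actual code.
    # Assumes: pip install torch transformers
    import math
    import re

    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        """Exponentiated average cross-entropy of `text` under GPT-2.
        Lower = the model finds the text predictable (more AI-like)."""
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss  # mean per-token loss
        return math.exp(loss.item())

    def burstiness(text: str) -> float:
        """Standard deviation of per-sentence perplexity. Human prose
        mixes 'easy' and 'hard' sentences; LLM output is more uniform."""
        sentences = [s for s in re.split(r"(?<=[.!?])\s+", text) if s.split()]
        scores = [perplexity(s) for s in sentences]
        mean = sum(scores) / len(scores)
        return (sum((x - mean) ** 2 for x in scores) / len(scores)) ** 0.5

    sample = "It was the best of times, it was the worst of times. Squid!"
    print(f"perplexity={perplexity(sample):.1f} burstiness={burstiness(sample):.1f}")

The intuition behind both signals: a language model is rarely surprised by the kind of text a language model writes, so low perplexity with little sentence-to-sentence variation weakly suggests machine generation. Neither signal is anywhere near reliable enough on its own to accuse a student of cheating.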

ChatGPT Strikes at the Heart of the Scientific World View - "In recent years, expertise with data collection and manipulation has all too often, in almost every area of human endeavour, been equated with a deep understanding of that area. Examples include digital contact tracing (health) and cryptocurrencies (finance).  ChatGPT continues this trend. It offers further evidence of the rise of what is, in effect, a post-rational, post-scientific world view: a belief that if you gather enough data and have enough computing power, you can “create” authoritative knowledge. In this world, it’s the technician, not the scientist, who is seen as the most knowledgeable. It’s a world in which intellectual authority rests not with subject matter experts but with those who can create and manipulate digital data. In short, knowledge itself is being redefined... ChatGPT and the thinking behind it equate knowledge not with understanding but with correlations. It reflects the thinking of the technician, not of the scientist.  Knowledge through correlation is the ultimate promise of big data and artificial intelligence: that, given enough data and enough computing power, a computer can identify correlations that will speak for themselves — no theory is needed... At its heart, though, it’s still just pattern recognition.  In other words, as scholars danah boyd and Kate Crawford pointed out in a foundational 2012 journal article, “Big Data changes the definition of knowledge.” But the belief that data can speak for itself is absurd. It’s an ideology that scholar José van Dijck calls “dataism.” As van Dijck and boyd and Crawford argue, data is never independent of people: everything about data — from the choice of what should count as data, to its collection, to its storage, to its use — is shaped by our limited perceptions, understandings and abilities, and the contexts in which the data is collected and used.  The natural and unsurmountable limitations of (human-produced) data mean that computers can only ever give us the illusion of understanding, at least in the scientific sense. The Turing test, after all, involves programming a computer that can fool people into thinking it is sentient — it doesn’t determine the presence of actual sentience. ChatGPT itself highlights the intellectual emptiness of the correlation-as-knowledge world view. Many people have remarked that the tool produces outputs that read as plausible, but that subject matter experts tell us are often “bullshittery.” Engineers will almost certainly design more-convincing chatbots. But the fundamental problem of evaluating accuracy will remain. The data will never be able to speak for itself.  This is the paradox at the heart of the correlations-based faith in big data. In the scientific world view, the legitimacy of a piece of knowledge is determined by whether the scientist followed an agreed method to arrive at a conclusion and advance a theory: to create knowledge. Machine-learning processes, in contrast, are so complex that their innards are often a mystery even to the people running them... During the early stages of the COVID-19 pandemic, for example, tech companies were quick to insert themselves into the public health system by promising digital contact tracing, using location tracking of people’s smartphones as a substitute for reporting personal contact with infected individuals.
However, as political philosopher Tamar Sharon recognized, this automation stripped long-established manual contact-tracing processes of the aspects that actually make contact tracing useful, such as whether there was a wall between individuals in close proximity. It’s no surprise that, from a public health perspective, digital contact tracing amounted to very little.  Automation without understanding is also on display with ChatGPT and the student essay. As every teacher will tell you, student essays are, almost without exception, boring and repetitive. Countless op-eds have highlighted how ChatGPT can replicate a rote high-school essay. From one angle, it seems to have automated the student essay.  In practice, however, it’s only automated the least important part of it. The essay is a centuries-old, proven technology for teaching people not only facts but also how to think... the student essay’s main purpose is not to present information but to teach a student how to think by following the steps to produce essays. We write bad essays today so that we can write good essays tomorrow... we, the great unwashed, can only evaluate the output, not the steps that led to it. Unlike a book, which provides information about the publisher, the author and the author’s sources that you can review to determine its trustworthiness, ChatGPT is an oracle — moreover, one that can be manipulated to produce what its creators consider to be the “correct” outcomes."
Too bad this dabbles with postmodernism

Attackers Are Already Exploiting ChatGPT to Write Malicious Code

ChatGPT Is Dumber Than You Think - The Atlantic - "the bot’s output, while fluent and persuasive as text, is consistently uninteresting as prose. It’s formulaic in structure, style, and content. John Warner, the author of the book Why They Can’t Write, has been railing against the five-paragraph essay for years and wrote a Twitter thread about how ChatGPT reflects this rules-based, standardized form of writing: “Students were essentially trained to produce imitations of writing,” he tweeted. The AI can generate credible writing, but only because writing, and our expectations for it, has become so unaspiring... The kind of prose you might find engaging and even startling in the context of a generative encounter with an AI suddenly seems just terrible in the context of a professional essay published in a magazine such as The Atlantic. And, as Warner’s comments clarify, the writing you might find persuasive as a teacher (or marketing manager or lawyer or journalist or whatever else) might have been so by virtue of position rather than meaning: The essay was extant and competent; the report was in your inbox on time; the newspaper article communicated apparent facts that you were able to accept or reject. Perhaps ChatGPT and the technologies that underlie it are less about persuasive writing and more about superb bullshitting... talking to ChatGPT began to feel like every other interaction one has on the internet, where some guy (always a guy) tries to convert the skim of a Wikipedia article into a case of definitive expertise. Except ChatGPT was always willing to admit that it was wrong. Instantly and without dispute. And in each case, the bot also knew, with reasonable accuracy, why it was wrong. That sounds good but is actually pretty terrible: If one already needs to possess the expertise to identify the problems with LLM-generated text, but the purpose of LLM-generated text is to obviate the need for such knowledge, then we’re in a sour pickle indeed. Maybe it’s time for that paragraph on accountability after all... GPT and other large language models are aesthetic instruments rather than epistemological ones. Imagine a weird, unholy synthesizer whose buttons sample textual information, style, and semantics. Such a thing is compelling not because it offers answers in the form of text, but because it makes it possible to play text—all the text, almost—like an instrument."

No, Ruth Bader Ginsburg did not dissent in Obergefell — and other things ChatGPT gets wrong about the Supreme Court - "Some lawyers worried that the program — which can generate eerily human-sounding text in response to complex written prompts — would make them obsolete. Law professors discovered that the bot can pass their exams. One CEO offered $1 million to any Supreme Court litigator willing to let ChatGPT argue their case (a prospect both ethically dubious and physically impossible).  Naturally, here at SCOTUSblog, we began to wonder about our own risk of A.I. displacement. Could ChatGPT explain complex opinions? Could it elucidate, in plain English, arcane aspects of Supreme Court procedure? Could it shine a light on the shadow docket or break down the debate over court reform? At the very least, could it answer common questions about how the Supreme Court works?... ChatGPT’s performance was uninspiring. The bot answered just 21 of our questions correctly. It got 26 wrong. And in three questions, its responses were literally true but struck us as incomplete or potentially misleading... We were curious about the bot’s flub of a simple, well known historical fact (a fact that resources like Wikipedia and Google can easily handle). So we started a new chat session and asked the bot directly whether Ginsburg dissented in Obergefell. Again, it got it wrong — and then waffled irrationally under further questioning. Sometimes the bot’s wrong answers were just bizarre. When we asked it about the responsibilities of the court’s junior justice (Question #45), the bot came up with “maintaining the Court’s grounds and building,” which sounds like an even worse hazing ritual than the actual answer (taking notes during the justices’ private conferences and serving on the “cafeteria committee”)...   ChatGPT’s tendency to mix pristine truths with wild inaccuracies — and its equal confidence in asserting both — is one of its most peculiar qualities. For anyone hoping to use the program for serious research, it’s also one of the most dangerous...   Google is just as adept as ChatGPT at returning factual information in response to easily verifiable questions. And Google does not make random errors like inventing an impeached justice or insisting that Ginsburg dissented in Obergefell. Google, of course, cannot generate original multi-paragraph streams of text. Nor can it have a conversation. The very best responses from ChatGPT — in which it offered reasoning and subtlety for queries not amenable to a single-word answer — exceeded anything Google can do. The problem is that ChatGPT, while always eloquent, cannot always be trusted. Our conclusions about ChatGPT come with two caveats. First, the bot frequently answers in different ways to the same question, and sometimes those differences vary widely. So asking the bot the same questions that we did may produce different results, and some of the mistakes we saw might not be replicated. As the A.I. learns, we expect its accuracy to improve.  Second, our set of questions focused on verifiable facts. But at least in the short term, the best way for lawyers to use ChatGPT is probably not as a replacement for Google, Wikipedia, or Westlaw. In its current incarnation, it is likely better suited as a writing aid. Some lawyers already use it to help draft contracts or patent applications. 
Others have reported using it to improve overall writing style — for instance, by taking a chunk of unedited text and asking the bot to make it more concise, or by prompting it with a cliché and asking for a fresher alternative.  With that in mind, at the end of this project, we presented ChatGPT with one final challenge. We asked it to write an analysis of a Supreme Court opinion in the style of SCOTUSblog.  “I’m sorry, I am unable to do that as the task is too complex”"
