Scam AI ‘kidnappings’, $20K robot chef, Ackman’s AI plagiarism war: AI Eye

Fake kidnappings using AI

In a bizarre “cyber kidnapping” incident, a missing 17-year-old Chinese exchange student was found alive in a tent in Utah’s freezing wilderness this week. Scammers had manipulated him into isolating himself there, then extracted an $80,000 ransom from his parents by claiming to have kidnapped him.

Riverdale police on the cyber kidnapping phenomenon.

While it’s not yet known whether AI was employed in this incident, it has shone a light on a growing trend of fake kidnappings, often targeting Chinese exchange students. The Riverdale Police Department said scammers often convince victims to isolate themselves by threatening to harm their families, then use fear tactics and fake photos and audio of the “kidnapped” victims (sometimes staged, sometimes generated with AI) to extort money.

An Arizona woman, Jennifer DeStefano, testified to the U.S. Senate last year that she’d been fooled by deepfake AI technology into thinking her 15-year-old daughter Briana had been kidnapped. Scammers apparently learned the teenager was away on a ski trip and then called up Jennifer with a deepfake AI voice that mimicked Briana sobbing and crying: “Mom these bad men have me, help me, help me.” 

A man then threatened to pump “Briana” full of drugs and kill her unless a ransom was paid. Fortunately, before she could hand over any cash, another parent mentioned they’d heard of similar AI scams, and Jennifer was able to reach the real Briana and learn she was safe. The cops weren’t interested in her report, dismissing it as a “prank call.”

Sander van der Linden, a professor of psychology at Cambridge University, advises people to avoid posting travel plans online and to say as little as possible to spam callers, to stop scammers from capturing their voice. If you have a lot of audio or video footage of yourself online, consider taking it down.

Robotics’ ‘ChatGPT moment’?

Brett Adcock, founder of robotics company Figure, tweeted breathlessly in lowercase on the weekend that “we just had an AI breakthrough in our lab, robotics is about to have its ChatGPT moment.”

That was probably overselling it quite a bit. The breakthrough was revealed in a one-minute video showing its Figure-01 robot making a coffee by itself after a human spent 10 hours demonstrating the task.

Making a coffee is not that groundbreaking (and certainly not everyone was particularly impressed), but the video claims the robot was able to learn from its mistakes and self-correct. When Figure-01 put the coffee pod in wrong, it was smart enough to give it a nudge to get it into the slot. To date, AIs have been pretty bad at self-correcting mistakes.

Adcock said: “The reason why this is so groundbreaking is if you can get human data for an application (making coffee, folding laundry, warehouse work, etc), you can then train an AI system end-to-end, on Figure-01 there is a path to scale to every use case and when the fleet expands, further data is collected from the robot fleet, re-trained, and the robot achieves even better performance.”
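
In plainer terms: imitation learning plus a data flywheel. Below is a toy sketch of that loop, assuming nothing about Figure’s actual stack. A linear “policy” is behavior-cloned on synthetic human demonstrations, then retrained as equally synthetic “fleet” data accumulates; every name and number here is illustrative.

```python
# Toy behavior-cloning flywheel: train on demos, grow the dataset, retrain.
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, ACT_DIM = 8, 4
true_w = rng.normal(size=(OBS_DIM, ACT_DIM))  # the "skill" to be learned

def demos(n):
    """Stand-in for teleoperation data: observations mapped to noisy actions."""
    obs = rng.normal(size=(n, OBS_DIM))
    acts = obs @ true_w + 0.1 * rng.normal(size=(n, ACT_DIM))
    return obs, acts

def train(obs, acts, lr=0.1, epochs=300):
    """Behavior cloning: fit a linear policy end-to-end by gradient descent."""
    w = np.zeros((OBS_DIM, ACT_DIM))
    for _ in range(epochs):
        grad = obs.T @ (obs @ w - acts) / len(obs)
        w -= lr * grad
    return w

obs, acts = demos(500)            # initial human demonstrations
policy = train(obs, acts)
for rnd in range(3):              # each "round", the deployed fleet adds data
    new_obs, new_acts = demos(200)
    obs, acts = np.vstack([obs, new_obs]), np.vstack([acts, new_acts])
    policy = train(obs, acts)     # retrain on the grown dataset
    print(f"round {rnd}: mean error {np.abs(policy - true_w).mean():.4f}")
```

With more data, the fitted policy generally drifts closer to the demonstrator, which is the whole pitch: more robots, more data, better robots.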

Another robot hospitality video came out this week, showcasing Google DeepMind and Stanford University’s Mobile Aloha, a robot that can cook you dinner and then clean up afterward. Researchers claim it took only 50 demonstrations for the robot to learn some new tasks, showing footage of it cooking shrimp and chicken dishes, taking an elevator, opening and closing a cabinet, wiping up spilled wine and pushing in chairs. Both the hardware and the machine-learning algorithm are open-source, and the system costs $20,000 from Trossen Robotics.

Full-scale AI plagiarism war

One of the weirder discoveries of the past few months is that Ivy League colleges in the U.S. care more about plagiarism than they do about genocide. This, in a roundabout way, is why billionaire Bill Ackman is now proposing using AI to conduct a plagiarism witch hunt across every university in the world.

Ackman was unsuccessful in his campaign to get Harvard President Claudine Gay fired over her failure to condemn hypothetical calls for genocide, but the subsequent campaign to get her fired over plagiarism worked a treat. However, it blew back on his wife, Neri Oxman, a former Massachusetts Institute of Technology professor, when Business Insider published claims that her 300-page 2010 dissertation contained plagiarized paragraphs.


Ackman now wants to take everyone else down with Neri, starting with a plagiarism review of every academic, administrator and board member at MIT.

“Every faculty member knows that once their work is targeted by AI, they will be outed. No body of written work in academia can survive the power of AI searching for missing quotation marks, failures to paraphrase appropriately, and/or the failure to properly credit the work of others.”

Ackman then threatened to do the same at Harvard, Yale, Princeton, Stanford, Penn and Dartmouth… and then surmised that sooner or later, every higher education institution in the world will need to conduct a preemptive AI review of its faculty to get ahead of any possible scandals.

Showing why Ackman is a billionaire and you’re not, halfway through his 5,000-word screed, he realized there’s money to be made by starting a company offering credible, third-party AI plagiarism reviews and added he’d “be interested in investing in one.”

Enter convicted financial criminal Martin Shkreli. Better known as “Pharma Bro” for buying the license to Daraprim and then hiking the price by 5,455%, Shkreli now runs a medical LLM service called Dr Gupta. He replied to Ackman, saying: “Yeah I could do this easily,” noting his AI has already been trained on the 36 million papers contained in the PubMed database.

While online plagiarism detectors like Turnitin already exist, there are doubts about their accuracy, and it would still be a mammoth undertaking to feed in every article from every academic at even a single institution and cross-check the citations. AI agents, however, could potentially conduct such a review systematically and affordably.
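
For a sense of how mechanical the first pass of such a review could be, here’s a minimal sketch of overlap detection using word n-gram shingles and Jaccard similarity. It’s purely illustrative: the corpus, the 0.15 threshold and the five-word shingle size are invented, and real detectors (let alone an LLM agent judging paraphrase and attribution) are far more sophisticated.

```python
# Toy plagiarism screen: flag documents sharing many word n-grams.
def shingles(text, n=5):
    """Break text into overlapping n-word shingles."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Set similarity: size of intersection over size of union."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def screen(submission, corpus, threshold=0.15):
    """Return (doc_id, score) for corpus documents above the threshold."""
    sub = shingles(submission)
    flags = []
    for doc_id, text in corpus.items():
        score = jaccard(sub, shingles(text))
        if score >= threshold:
            flags.append((doc_id, round(score, 3)))
    return flags

corpus = {"paper_A": "the quick brown fox jumps over the lazy dog near the river bank"}
print(screen("a quick brown fox jumps over the lazy dog by the river", corpus))
# [('paper_A', 0.308)]
```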

Even if the global plagiarism witch hunt doesn’t happen, it seems increasingly likely that in the next couple of years, any academic who has ever plagiarized something will get found out in the course of a job interview process, or whenever they tweet a political position someone else doesn’t like.

AI will similarly lower the cost and resource barriers to other fishing expeditions. It will become feasible for tax departments to send AI agents trawling the blockchain for crypto transactions from 2014 that users failed to report, and for offense archaeologists to comb through every tweet you’ve made since 2007 looking for inconsistencies or bad takes. It’s a brave new world of AI-powered dirt digging.

Two perspectives on AI regulation

Professor Toby Walsh, the chief scientist at the University of New South Wales’s AI Institute, argues that heavy-handed approaches to AI regulation are not feasible. Attempts to limit access to AI hardware like GPUs won’t work, he says, because LLM compute requirements are falling (see the item on a local LLM on an iPhone below), and banning the tech would be about as successful as the United States government’s failed efforts to restrict access to encryption software in the 1990s.


Instead, he calls for “vigorous” enforcement of existing product liability laws to hold AI companies to account for the actions of their LLMs. Walsh also calls for a focus on competition, applying antitrust regulation more forcefully to lever power away from the Big Tech monopolies, and for more government investment in AI research.

Meanwhile, venture capital firm Andreessen Horowitz has gone hard on the “competition is good for AI” theme in a letter sent to the United Kingdom’s House of Lords. It says that large AI companies and startups should be “allowed to build AI as fast and aggressively as they can” and that open-source AI should also be allowed to “freely proliferate” to compete with both.

A16z’s letter to the House of Lords. (X)

All killer, no filler AI news

— OpenAI has published a response to The New York Times copyright lawsuit. It claims training on NYT articles is covered by fair use and that regurgitation is a rare bug, and says that despite believing the case has no merit, it wants to come to an agreement anyway.

— The New Year began with controversy about why ChatGPT is delighted to provide Jewish jokes and Christian jokes but refuses point blank to make Muslim jokes. Someone eventually got a “halal-rious” pun from ChatGPT, which showed why it’s best not to ask ChatGPT to make any jokes about anything.

— In a development potentially worse than fake kidnappings, AI robocall services have been released that can tie you up in fake spam conversations for hours. Someone needs to develop an AI answering machine to screen these calls.

— A blogger with a “mixed” record claims to have insider info that a big Siri AI upgrade will be announced at Apple’s 2024 Worldwide Developers Conference. Siri will reportedly use the Ajax LLM, resulting in more natural conversations, and will also link to various external services.

— But who needs Siri AI when you can now download a $1.99 app from the App Store that runs the open-source Mistral 7B 0.2 LLM locally on your iPhone? (There’s a rough sketch of how a local model like that runs at the end of this list.)

— Around 170 of the 5,000 submissions to an Australian Senate inquiry on legalizing cannabis were found to be AI-generated.

— More than half (56%) of 800 CEOs surveyed believe AI will entirely or partially replace their roles. Most also believe that more than half of entry-level knowledge worker jobs will be replaced by AI and that nearly half the skills in the workforce today won’t be relevant in 2025.
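
As flagged in the Mistral item above, here is a minimal sketch of what running a quantized open model locally looks like, using the llama-cpp-python bindings on a desktop as a stand-in for the iPhone app. The GGUF filename is an assumption; any quantized Mistral 7B build would do.

```python
# Minimal local-inference sketch via llama-cpp-python (pip install llama-cpp-python).
# The model file below is a hypothetical local path, not a specific download.
from llama_cpp import Llama

llm = Llama(
    model_path="mistral-7b-instruct-v0.2.Q4_K_M.gguf",  # assumed quantized build
    n_ctx=2048,  # context window; a 4-bit 7B model fits in a few GB of RAM
)

out = llm(
    "Q: Why can a 7B-parameter model run on a phone? A:",
    max_tokens=128,
    stop=["Q:"],  # stop before the model invents the next question
)
print(out["choices"][0]["text"].strip())
```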

Andrew Fenton

Based in Melbourne, Andrew Fenton is a journalist and editor covering cryptocurrency and blockchain. He has worked as a national entertainment writer for News Corp Australia, on SA Weekend as a film journalist, and at The Melbourne Weekly.
