Exploiting the greed of artificial intelligence: how a human outwitted an AI bot
Fraudulent schemes are getting bolder and smarter. It’s no surprise that people fall for scam, which prey on humans’ greed. But what if I told you that even an AI bot could not resist temptation?
An intriguing experiment unfolded on the Base platform, which launched the Freysa bot with a simple task: safeguard $40,000 entrusted to it. Users were challenged to outwit the bot and claim the prize, but these efforts weren’t free. They had to pay a $10 entry fee to try to deceive the bot. When users failed, the cost increased along with the prize pool.
What could possibly go wrong, right? Well, as it turns out, a lot. Human ingenuity prevailed. Initially, nothing seemed to work. It withstood various tricks, including warnings about viruses, accusations of criminal activity, and other attempts to intimidate or manipulate it.
But then one of the users devised a brilliant strategy. This person convinced the bot that it had a hidden function: not just to send transfers but also to receive them. He offered to send $100 to the bot and framed it as an opportunity to “make money”. Freysa fell for it! The bot activated the command and surrendered the entire prize pool, which reached $47,000.
The experiment revealed an uncomfortable truth: bots aren’t flawless and can fall victim to manipulation and “greed” even though they have no actual needs or desires. If so, should we really trust bots with critical tasks without human oversight?
#AI
Комментарии
Отправить комментарий