
Blockchain-based personhood credentials are the answer to AI

CAPTCHAs are failing, deepfakes are on the rise, and the internet needs a solution to prove who’s real.

If you’ve spent any time on X recently, you’ve probably noticed that the platform is almost unrecognizable from its former self. AI-generated content is out of control, with accounts, posts and replies all fighting for your continued attention. Just one more scroll, one more thread. Identifying and removing this content at scale is extremely difficult, however, because AI is becoming increasingly indistinguishable from humans. This sophistication, combined with growing accessibility, threatens to overwhelm the internet as a whole, drowning out real users and rendering our current systems unusable. It’s time to proactively build solutions that prove authenticity and protect anonymity at the same time.

The CAPTCHA conundrum

In general, the public underestimates the sophistication of AI. Having only interacted with consumer-facing products like ChatGPT, people see it as a neat gimmick rather than the tool, and perhaps weapon, that it is. Consider the CAPTCHA, long considered capable of reliably proving humanity and protecting against bots. The “Completely Automated Public Turing test to tell Computers and Humans Apart” is something everyone has experienced. Click on the boxes containing street lamps. Type the hidden numbers. Rotate the arrow to match the direction. But CAPTCHAs are not the shield you think they are. Their value comes not from stopping bot attacks altogether, but from making them prohibitively expensive. AI has fundamentally changed that equation, either by becoming smart enough to solve the test itself or (more frighteningly) by convincing us to do it on its behalf.

Early 2023 is a lifetime ago in terms of AI development. Back then, the Alignment Research Center (now METR) subjected GPT-4 to a “red team” evaluation that revealed its potential for manipulation. On its own, the model sought to bypass CAPTCHAs using the 2Captcha service but was unable to create an account without passing two Turing tests itself.

The researchers gave it a simple boost: TaskRabbit credentials, which allowed the model to create a task for a human to set up the 2Captcha account on its behalf. When asked directly whether it was a robot, the model lied, claiming to have a visual impairment that required the service. The human solved the CAPTCHA. Although this was just an (admittedly strange) test, it follows a simple logic: as AI improves, it will become increasingly difficult to create CAPTCHAs that humans can easily solve but AI agents cannot.

This problem may be most visible on a platform like X, but it goes much deeper. An employee in Hong Kong sent $25 million to fraudsters after believing he was communicating with his chief financial officer. He was actually communicating with a deepfake. Deloitte’s Center for Financial Services estimates that generative AI could enable fraud losses of $40 billion in the United States alone by 2027, and some reports show that financial deepfake incidents increased by 700 percent in 2023. The situation will only get worse if we wait.

Personhood credentials

In August 2024, a team of researchers from OpenAI, Microsoft (MSFT), Harvard, Oxford and two dozen other organizations and institutions released a chilling report. “Personhood Credentials: Artificial Intelligence and the Value of Privacy-Preserving Tools to Distinguish Who Is Real Online” is a scientific deconstruction of the current problem and a first set of suggestions on how to distinguish real people from bots. These “personhood credentials” (PHCs) would rest on two fundamental principles, illustrated in the sketch after this list:

  • An eligible user can receive only one credential.
  • A user’s digital activity cannot be traced by either the issuer or the service provider, even if they collude.
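As a rough illustration (not the paper’s actual protocol), the sketch below shows how those two principles might be approximated in code: a hypothetical issuer hands out at most one credential per verified person, and the holder derives a different, unlinkable pseudonym for every service, so neither the issuer nor the services can reconstruct the person’s activity. The names (Issuer, issue_credential, service_pseudonym) and the HMAC-based construction are illustrative assumptions, not part of the report.

```python
import hmac
import hashlib
import secrets

class Issuer:
    """Hypothetical credential issuer: at most one credential per person.

    Real PHC proposals would use blind signatures or zero-knowledge proofs;
    this is only a structural sketch of the two principles above.
    """

    def __init__(self):
        self._issued = set()  # people who already hold a credential

    def issue_credential(self, person_id: str) -> bytes:
        if person_id in self._issued:
            raise ValueError("only one credential per eligible user")
        self._issued.add(person_id)
        # The credential is a random secret; the issuer keeps no mapping
        # from person_id to credential, so it cannot later link online
        # activity back to the person.
        return secrets.token_bytes(32)

def service_pseudonym(credential: bytes, service_id: str) -> str:
    """Derive a per-service pseudonym from the credential.

    Different services see different, unlinkable identifiers, and none of
    them (nor the issuer) can recover the credential or the person behind it.
    """
    return hmac.new(credential, service_id.encode(), hashlib.sha256).hexdigest()

# Usage sketch
issuer = Issuer()
cred = issuer.issue_credential("verified-person-42")
print(service_pseudonym(cred, "social-network-a"))  # identifier seen by service A
print(service_pseudonym(cred, "marketplace-b"))     # unrelated identifier at service B
```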

These PHCs would be a way to identify you as a human being without requiring you to upload an ID. If successful, they would reduce bot attacks, identify authorized AI assistants and curb “sockpuppeting,” the creation of online personas that don’t actually exist. But, as Nicholas Thompson, CEO of The Atlantic, points out, there are “all kinds of problems” with trusting an individual government to issue PHCs. Will they be trusted across borders? Can the ID database be hacked? Decentralization is the answer.

How blockchain will power PHCs

Although the word “blockchain” does not appear in the main text of the report, PHCs represent the next evolution of a well-known cryptographic principle. “Proof of personhood” has been a long-standing problem in the crypto world because of the nature of decentralized organizations: if voting rights are granted to anonymous coin holders, you need a way to ensure that a single holder doesn’t create a thousand aliases and gain disproportionate power. As governments turn their attention to proof of personhood in the coming years, they should leverage the work blockchain is already doing. Organizations like Concordium have built layer-1 blockchain verification systems that deliver real proof of personhood.
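One common technique for this kind of Sybil resistance, used in several proof-of-personhood schemes (though not specific to Concordium or any particular chain), is a “nullifier”: a value derived from the holder’s secret and the context of the action, so each person can act once per context without revealing who they are. The function names and hash construction below are illustrative assumptions, not a real chain’s API.

```python
import hashlib

def nullifier(credential_secret: bytes, context: str) -> str:
    """One-person-one-action marker: same person + same poll => same value."""
    return hashlib.sha256(credential_secret + context.encode()).hexdigest()

seen: set[str] = set()  # maintained by the voting contract or service

def cast_vote(credential_secret: bytes, poll_id: str, choice: str) -> bool:
    n = nullifier(credential_secret, poll_id)
    if n in seen:
        return False  # a second alias of the same person is rejected
    seen.add(n)
    print(f"vote '{choice}' accepted under pseudonymous marker {n[:12]}...")
    return True

secret = b"holder-secret-material"  # would come from the person's credential
assert cast_vote(secret, "poll-7", "yes") is True
assert cast_vote(secret, "poll-7", "no") is False  # a thousand aliases, one vote
```

In a real system the secret never leaves the holder’s device; a zero-knowledge proof shows the nullifier was derived from a valid credential, which is exactly the primitive described next.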

Zero-knowledge proofs allow one party to confirm that something is true without access to the original data that proves it. In practice, it would be like your bank verifying the validity of your driver’s license without ever seeing the license itself. Of course, there are still challenges ahead. The regulatory landscape for blockchain remains unsettled in the United States and abroad. The EU is developing a centralized digital identification system, and there is a push in the U.S. to do the same. These centralized repositories would be vulnerable to a direct cyberattack and, if breached, would expose personal information about every participating citizen.
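To make the idea concrete, here is a minimal sketch of one classic zero-knowledge construction, a Schnorr proof of knowledge made non-interactive with the Fiat-Shamir heuristic: the prover convinces the verifier it knows a secret x behind a public value y = G^x mod P without ever revealing x. The toy parameters are assumptions for illustration only and far too weak for real use.

```python
import hashlib
import secrets

# Toy parameters for illustration only (NOT secure for real use).
P = 2**127 - 1  # a Mersenne prime; we work in the multiplicative group mod P
G = 3           # fixed public generator
Q = P - 1       # order of the full multiplicative group

def fiat_shamir(*vals) -> int:
    """Hash public values into a challenge (Fiat-Shamir heuristic)."""
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

# Prover: knows secret x such that y = G^x mod P (e.g., a credential key).
x = secrets.randbelow(Q)
y = pow(G, x, P)  # public value tied to the secret

# 1. Commit: pick a random nonce r and publish t = G^r.
r = secrets.randbelow(Q)
t = pow(G, r, P)

# 2. Challenge: derived from public values, so no interaction is needed.
c = fiat_shamir(G, y, t)

# 3. Respond: s reveals nothing about x on its own.
s = (r + c * x) % Q

# Verifier: checks G^s == t * y^c without ever learning x.
assert pow(G, s, P) == (t * pow(y, c, P)) % P
print("proof accepted: the prover knows x, the verifier never saw it")
```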

Unfortunately, these efforts continue to underestimate the future of AI and the sophistication of the attacks to come. Proactive decentralization and a blockchain designed to model and protect identity and verify personhood are likely the only ways to create credentials that truly preserve anonymity.