
Proof of Personhood: Safeguarding Identity in the Age of AI and Synthetic Media


As we move into 2024, Generative AI, powered by large language models (LLMs), has become a global force, driving a wave of automation that is changing how we work and shaping the world. AI models like OpenAI’s ChatGPT, Google’s Gemini, and others continue to improve, demonstrating remarkable potential with each update. While Generative AI brings many benefits and could revolutionize various industries, it also poses risks of misuse. In this blog post, we will explore the potential downsides of Generative AI and how they can be addressed. But first, let’s take a quick look back at 2001.

The year is 2001

The internet’s first iteration, known as Web1, lasted from the 1990s until the early 2000s and is “also known as the Static Web. It was a read-only ecosystem, where users consumed content published by a few website owners.” [1] Web1 was a revolutionary technology that allowed a small number of content creators to reach a wide audience of consumers. The internet generated enormous hype at the time, but its business potential was still unclear. Web1’s high degree of decentralization made it difficult to enforce laws and build sustainable business models, and many companies that relied on the internet shut down after the dot-com bubble burst in 2000. Several interesting books explore the challenges and dynamics of this period. One notable example is the 2001 book “Crime and the Internet” by David S. Wall, in which he describes the dangers posed by Web1.

Wall identifies three broad categories of online harm. First, “the Internet has become a vehicle for communications which sustain existing patterns of harmful activity, such as drug trafficking, and also hate speech, bomb-talk, stalking and so on.” Second, “the Internet has created a transnational environment that provides new opportunities for harmful activities that are currently the subject of existing criminal or civil law,” such as fraud. Third, “the nature of the virtual environment, particularly with regard to the way that it distanciates time and space, has engendered entirely new forms of (unbounded) harmful activity such as the unauthorized appropriation of imagery, software tools and music products, etc.” [2]

These challenges underscored the need for a new iteration of the internet, one that would provide a safer environment and introduce sustainable monetization models. Many of these issues were addressed with Web2, which marked a shift toward centralization, concentrating control in a few dominant companies, later exemplified by the FANG giants: Facebook (Meta), Amazon, Netflix, and Google (Alphabet). Web2 is described as “the web as we currently know it. It introduced rich user experiences like social media networks, media sharing, blogs, wikis, images, and appealing aesthetics” [1], an era in which consumers also became content creators. This centralization helped make the web as powerful and popular as it is today, supporting robust business models and attracting over 5.45 billion users worldwide.

The growth of generative AI powered by large language models (LLMs) has brought about a major shift, putting us in a transitional phase similar to the one we experienced in 2001. This shift underscores the need for a new iteration of the internet, one that can tackle challenges that were not even imaginable in 2001 and that Web2 has not fully addressed. In the age of Generative AI, which creates content faster and more easily than ever, Clifford Stoll’s early criticism of the internet in his 1995 Newsweek article “Why the Web Won’t Be Nirvana” feels more relevant than ever. Stoll wrote that “the Internet has become a wasteland of unfiltered data. You don’t know what to ignore and what’s worth reading.” [3] He recognized early on the downsides and challenges the internet would bring to our daily lives, but he could not foresee how major new inventions would amplify these issues and be used to negatively influence large audiences.

Who better to understand the risks than those who work in the AI field every day and are best positioned to recognize them? One of the most powerful tools is ChatGPT from OpenAI, led by CEO Sam Altman. In 2019, Altman also co-founded a company named Tools for Humanity, which is currently working on a Proof of Personhood solution. What does this mean, and what exactly is being referred to? Before we delve into that, let’s explore a few examples of how Generative AI is challenging the modern internet.

AI’s challenge to the modern internet

One of the biggest challenges facing the modern internet is the spread of synthetic media, defined by Europol as “media generated or manipulated using artificial intelligence (AI).” [4] There are various types of synthetic media, and as generative AI improves, it is becoming increasingly difficult, if not impossible, to distinguish between real and fake content.

Deepfakes

One example is video deepfakes, an AI-based technology that creates realistic fake videos of people, making it seem as though events occurred that never did. Europol wrote in its 2022 paper “Law enforcement and the challenge of deepfakes” that deepfakes are “in the original, strict sense, … a type of synthetic media mostly disseminated with malicious intent.” [4] A well-known example is a fabricated video of a Barack Obama speech. Another type is the audio deepfake, where AI can replicate a person’s voice so convincingly that they appear to say things they never actually said; one example is Donald Trump reading the Darth Plagueis Copypasta. Both video and audio deepfakes can significantly influence large audiences and pose one of the greatest threats to the internet.

AI-generated fake images

Another area of synthetic media is AI-generated fake images. A 2023 article from the German magazine Der Spiegel, titled “When machines learn to lie” with the subtitle “… Artificial intelligence creates new realities. What happens when we can no longer distinguish them from the real world?” [5], demonstrates how dangerous they can be. Fittingly, the cover is a collage of four computer-generated fake images of prominent figures of our time: Pope Benedict dancing at a party, Greta Thunberg drinking a beer on an airplane, Angela Merkel in a Hawaiian shirt on the beach, and Donald Trump in an orange prison uniform. AI-generated fake images pose a significant threat on social media platforms, where visuals often dominate over text.

Image-to-video and text-to-video

A cutting-edge area of synthetic media is image-to-video technology, where users upload images with a prompt and AI generates a complete video. AI can also create videos solely from a text description or script. A striking example is an entirely AI-generated love video featuring Kamala Harris and Donald Trump, which demonstrates how quickly AI can blur the line between reality and fiction. This technology raises significant concerns about the potential for misinformation and the erosion of trust in visual media.

Bots and their role in spreading misinformation

Once synthetic media is created, the most effective way to spread it is through bots, which can be used to disseminate misinformation, manipulate public opinion, fuel political polarization, or erode trust. They are prevalent across nearly every social media platform. Bots are dangerous because they can quickly create numerous fake identities and use them to spread synthetic media to large audiences in a very short time; this mass creation of fake identities is known as a Sybil attack.

The internet is becoming more and more filled with synthetic media, and in the coming years, much of what we see online will likely be created by AI and spread by bots instead of humans. This shift makes it harder to trust information and could harm AI, as it might start learning from synthetic media itself, creating a vicious cycle.

To prevent this in the future, it is essential to implement technology that differentiates between human and AI-generated content, while also defending against bot-driven Sybil attacks to limit the spread of AI-generated misinformation. One promising solution to address these challenges is the concept of Proof of Personhood.

Proof of Personhood

John R. Douceur of Microsoft Research wrote in a 2002 paper that “if a single faulty entity can present multiple identities, it can control a substantial fraction of the system,” and that one approach to preventing these so-called Sybil attacks is “to have a trusted agency certify identities” [6]. On today’s internet, identities are usually linked to email addresses and passwords, enabling individuals to hold multiple identities. A possible solution is to decouple identities from email addresses and ensure that each entity has a single verified identity across all internet services. If the entity is human, it would be verified as a unique human identity, with the total number bounded by the global population of around 8.2 billion people. This process is called Proof of Personhood, and the idea is to “link virtual and physical identities in a real-world gathering … while preserving users’ anonymity.” [7] For machines, a unique machine identity could serve as the counterpart to a unique human identity.

A key concern emerges if a centralized authority controls the verification of unique human identities: the service would be restricted to specific geographic regions, serving only those it can verify. This would exclude approximately 850 million people without government-issued IDs and 1.7 billion without access to banking. In underdeveloped areas, such services may not be available at all. As more internet applications require verified identities, millions could be left out, denying them access to vital parts of the web. This contradicts Tim Berners-Lee’s original vision of the internet as a platform connecting people worldwide, regardless of race or location. Additionally, such a system would concentrate power in the hands of a few, allowing them to control and potentially revoke internet access, undermining digital freedom.

With this in mind, Tools for Humanity addresses these challenges with a decentralized Proof of Personhood solution called World ID, aiming to eventually operate independently of geographic borders and single-entity control as its development progresses.

World ID

World ID is described as “a mechanism that establishes an individual’s humanness and uniqueness. It can be thought of as the first and most fundamental building block in establishing digital identity.“ [7] It consists of five fundamental elements.

The first element, Authentication, is crucial for preventing fraudsters from misusing credentials, even if the legitimate user is unaware of the misuse or complicit in it. The second element, Deduplication, ensures that each individual can verify their identity only once, preventing the creation of multiple identities. Recovery, the third element, establishes effective mechanisms to regain access if a user loses their credentials or if they are compromised. The fourth element, Revocation, is essential for removing compromised or malicious credentials. Finally, Expiry involves setting a predefined expiration date for credentials to maintain long-term security.
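To make these five elements concrete, here is a minimal sketch of a credential registry that enforces deduplication at issuance, checks revocation and expiry during authentication, and implements recovery as key rotation. All names and the logic are illustrative assumptions for this post, not Tools for Humanity’s implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Credential:
    holder_commitment: str   # deduplication key: one credential per person
    public_key: str          # used for authentication
    expires_at: datetime     # expiry: credentials age out
    revoked: bool = False    # revocation flag

class CredentialRegistry:
    """Hypothetical registry illustrating the five World ID elements."""

    def __init__(self, validity: timedelta):
        self.validity = validity
        self.by_commitment: dict[str, Credential] = {}

    def issue(self, holder_commitment: str, public_key: str) -> Credential:
        # Deduplication: refuse a second credential for the same person.
        if holder_commitment in self.by_commitment:
            raise ValueError("identity already verified")
        cred = Credential(holder_commitment, public_key,
                          datetime.utcnow() + self.validity)
        self.by_commitment[holder_commitment] = cred
        return cred

    def authenticate(self, holder_commitment: str, public_key: str) -> bool:
        # Authentication: the presented key must match, and the credential
        # must be neither revoked nor expired.
        cred = self.by_commitment.get(holder_commitment)
        return (cred is not None and not cred.revoked
                and cred.public_key == public_key
                and datetime.utcnow() < cred.expires_at)

    def revoke(self, holder_commitment: str) -> None:
        # Revocation: flag a compromised or malicious credential.
        self.by_commitment[holder_commitment].revoked = True

    def recover(self, holder_commitment: str, new_public_key: str) -> None:
        # Recovery: rotate to a new key without re-running deduplication.
        cred = self.by_commitment[holder_commitment]
        cred.public_key = new_public_key
        cred.revoked = False
        cred.expires_at = datetime.utcnow() + self.validity
```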

Verification with biometric technology

The Whitepaper also states that “In a time of increasingly powerful AI, the most reliable way to issue a global proof of personhood is through custom biometric hardware” [7]. The biometric device used in World ID is called the Orb, which is designed to be AI-safe and has two main tasks. First, it confirms that the user is a real, living person and guards against attempts to trick the system. Second, it captures an iris scan to create an iris code, a numerical representation of the key features of the iris. The Orb then checks this code against all other iris codes in a database to make sure no one is verified more than once. Afterwards, the iris images are deleted; the remaining iris code cannot be reversed into an image, which protects privacy. The Orb is also designed so that no raw biometric data can leave the device. Once the Orb process is complete, the user receives a verified World ID. Currently, verification is only available through Orb operators at specific locations, but the goal is to decentralize the Orb’s development, production, and operation so that no single entity controls it. The Orb’s software and hardware are the most crucial components for the success of World ID.
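As background, iris codes are classically compared using a fractional Hamming distance, the fraction of bits on which two codes differ. The toy sketch below illustrates the idea behind such a uniqueness check; the code length, noise level, and threshold are illustrative assumptions, not the Orb’s actual parameters.

```python
import numpy as np

def fractional_hamming_distance(code_a: np.ndarray, code_b: np.ndarray) -> float:
    """Fraction of bits that differ between two binary iris codes."""
    return np.count_nonzero(code_a != code_b) / code_a.size

THRESHOLD = 0.32  # illustrative decision threshold

rng = np.random.default_rng(0)
enrolled = rng.integers(0, 2, size=2048)   # code already in the database
same_eye = enrolled.copy()
same_eye[rng.random(2048) < 0.05] ^= 1     # ~5% bit flips from sensor noise
other_eye = rng.integers(0, 2, size=2048)  # code from a different person

d_same = fractional_hamming_distance(enrolled, same_eye)
d_other = fractional_hamming_distance(enrolled, other_eye)
print(d_same, d_same < THRESHOLD)    # ~0.05, True  -> flagged as duplicate
print(d_other, d_other < THRESHOLD)  # ~0.5,  False -> treated as unique
```

Two scans of the same iris differ only by noise and land far below the threshold, while codes from different irises behave like random bit strings and cluster around 0.5.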

Decentralized and Transparent

A verified World ID is pseudonymous, meaning users do not need to provide personal information during registration, and Zero-Knowledge Proofs ensure it cannot be tracked across applications. To date, “6,580,502 people across all continents have already verified their World ID with an Orb, including more than 1% of the population of Chile, 1% of Argentina’s, and 2% of Portugal’s.” [8] Since Tools for Humanity considers World ID a public good, it is built in a decentralized manner on the Ethereum blockchain, with bridges to Ethereum Layer 2 solutions such as Polygon and Optimism. Most of the World ID project’s source code is accessible on GitHub, and the contracts managing the identities are openly available on blockchain networks:

Project | Source Code
World ID State-Bridge | GitHub Repository
MPC Uniqueness Check | GitHub Repository
World ID Contracts | GitHub Repository
Orb Software | GitHub Repository
World ID Identity Operator Ethereum | Ethereum address
World ID Identity Manager Ethereum | Ethereum address
Bridged World ID Optimism | Optimism address
Bridged World ID Polygon | Polygon address

Examining the identity registration process in the World ID Contracts deployed on Ethereum shows that only an Identity Operator is allowed to register identities. The contract stores multiple identities in a single transaction using a tree structure, reducing both the number of transactions and the associated costs, which would otherwise be far higher if each identity required its own transaction. Restricting access to Identity Operators, however, is a centralized element that remains in place until registration on the blockchain is fully decentralized.

In an example transaction on the Ethereum blockchain, the registerIdentities function receives an array of identities as public keys linked to users’ wallets, which are then stored on-chain. No sensitive information, such as iris codes, biometric data, or personal user details, is included.
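The sketch below illustrates the batching idea: identity commitments become the leaves of a Merkle tree, and a single transaction only needs to publish the updated root for the whole batch. The hash function and names are simplified assumptions; the actual contracts use a SNARK-friendly hash rather than SHA-256.

```python
import hashlib

def h(left: bytes, right: bytes) -> bytes:
    # Stand-in hash; the real contracts use a SNARK-friendly function.
    return hashlib.sha256(left + right).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Compute the root of a binary Merkle tree, padding odd levels."""
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# A batch of identity commitments (public values; no biometric data).
commitments = [f"commitment-{i}".encode() for i in range(8)]

# One transaction registers the whole batch by publishing the new root,
# instead of paying for one transaction per identity.
print(merkle_root(commitments).hex())
```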

The source code also shows that third-party verification is built on the Semaphore library, a zero-knowledge protocol designed to verify identity while preventing a World ID from being tracked across different internet services. Since the iris code is not stored on the blockchain and the identity remains on the user’s phone, a damaged or replaced phone without a backup means the World ID is lost and must be recovered.
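Conceptually, Semaphore prevents cross-service tracking by deriving a different public “nullifier” for each application scope from the same identity secret. The toy sketch below shows why two services cannot correlate these values; in the real protocol the derivation uses the Poseidon hash inside a zero-knowledge proof, so the secret itself is never revealed.

```python
import hashlib

def nullifier(identity_secret: bytes, app_scope: bytes) -> str:
    # Toy stand-in for Semaphore's nullifier derivation: the output is
    # deterministic per (secret, scope) pair but reveals nothing about
    # the secret or about nullifiers under other scopes.
    return hashlib.sha256(identity_secret + app_scope).hexdigest()

secret = b"user-identity-secret"          # never leaves the user's phone

print(nullifier(secret, b"app:discord"))  # value seen by service A
print(nullifier(secret, b"app:reddit"))   # unrelated value seen by service B
```

Because reusing the same scope always yields the same nullifier, an application can still detect duplicate sign-ups by the same person, while different applications see unlinkable values.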

After an identity is registered on Ethereum, the data is bridged to the Layer 2 blockchains Optimism and Polygon, as seen in the World ID State-Bridge contracts. This enables identity verification not only on Ethereum but also on Optimism and Polygon, with the design allowing further blockchains to be added in the future.
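A minimal sketch of the idea behind such a state bridge, under the simplifying assumption that only the latest identity-tree root needs to be relayed: each Layer 2 stores the bridged root, against which membership proofs can then be verified locally.

```python
class L2Verifier:
    """Hypothetical L2-side state holding the latest bridged root."""

    def __init__(self, name: str):
        self.name = name
        self.latest_root = None

    def receive_root(self, root: bytes) -> None:
        self.latest_root = root

def propagate_root(l1_root: bytes, l2s: list["L2Verifier"]) -> None:
    # The bridge forwards only the compact tree root, not the identity
    # set itself; proofs of membership are then checkable on each chain.
    for l2 in l2s:
        l2.receive_root(l1_root)

optimism, polygon = L2Verifier("Optimism"), L2Verifier("Polygon")
propagate_root(b"\x12" * 32, [optimism, polygon])
print(optimism.latest_root == polygon.latest_root)  # True
```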

To address security concerns about potential hacks and exposure of sensitive data from the centralized storage of iris codes used for identity comparison, Tools for Humanity developed the MPC Uniqueness Check library [9]. This process splits each iris code into several unique secret shares, distributed across multiple parties. These parties collaborate to compute results on the shared data without gaining any knowledge of the actual secret information.
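The core trick can be sketched with additive secret sharing: a secret vector is split into random shares that sum back to it, and because a dot product is linear, each party can compute a partial result on its share alone. The toy example below is a simplification of the actual SMPC protocol, whose details are more involved.

```python
import secrets

P = 2**61 - 1  # prime modulus for the arithmetic shares

def share(vector: list[int], n_parties: int) -> list[list[int]]:
    """Split a vector into n additive shares that sum to it mod P."""
    random_shares = [[secrets.randbelow(P) for _ in vector]
                     for _ in range(n_parties - 1)]
    last = [(v - sum(col)) % P
            for v, col in zip(vector, zip(*random_shares))]
    return random_shares + [last]

def partial_dot(query: list[int], share_vec: list[int]) -> int:
    # Each party computes a dot product against its own share only.
    return sum(q * s for q, s in zip(query, share_vec)) % P

iris_code = [1, 0, 1, 1, 0, 0, 1, 0]  # toy secret bit vector
query     = [1, 0, 1, 0, 0, 0, 1, 0]  # new code being checked

shares = share(iris_code, n_parties=3)
# Summing the parties' partial results reconstructs dot(query, iris_code)
# even though no single party ever held the full code.
mpc_result = sum(partial_dot(query, s) for s in shares) % P
plain_result = sum(q * v for q, v in zip(query, iris_code))
print(mpc_result, plain_result)  # both 3
```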

The Orb’s source code is also publicly available, providing detailed insights into its functionality and the structure of its various components. It contains all the code running on the Orb that is needed to capture images and securely transmit them to the World App, the frontend for World ID. While this sounds like a promising development, there are also potential drawbacks to consider.

Potential drawbacks

Since the Orb plays a key role in the identification process, there is a risk that it could be hacked or tampered with, potentially leading to incorrect identity verification. While the Orb’s software is open source, the hardware remains a closed system, making it impossible to verify externally whether sensitive data is handled as intended.

Another concern is the centralization of key elements, including the iris code database and the verification process, which is limited to authorized Orb operators. This increases the risk of the database being hacked and exploited by malicious actors. The algorithm that generates the iris code could also mistakenly reject a new identity by concluding that the person has already been verified. And since World ID is the same identity across all integrated internet services, a bug could allow users to be tracked across platforms, posing a significant privacy risk.

Another potential issue is the transferability of World ID. If someone is deceived into selling or giving away their World ID keys, a fraudster could use the ID for authentication and, by acquiring multiple World IDs, bypass the “one person, one ID” principle. This risk could be reduced by requiring a second biometric factor on the user’s device, such as Face ID on iOS. Additionally, in cases of suspicion, reauthentication at an Orb could be required to confirm that the user still controls their World ID.

Summary

In this blog article, we explored the dangers of Generative AI, such as deepfakes, fake images, and text-to-video content, collectively known as synthetic media, and the risks of spreading them via bots. We also took a brief look back at history, exploring the transition from Web1 to Web2 and the challenges that arose during the Web1 era.

Next, we explored Proof of Personhood, the challenges it addresses, and its implementation through a system called World ID. We examined how biometric verification and decentralized systems could play a crucial role in preventing the misuse of AI-generated content and ensuring trust online. World ID is still in its early development stages, as shown by the centralization of key components like the verification process and the iris code database. However, the Whitepaper details ambitious plans to gradually decentralize these elements, preventing any single entity from exerting excessive control over the system. While there are several drawbacks, the project is aware of them, and most are addressed in the Whitepaper with ongoing work to resolve them.

The World ID App Store offers apps for Discord, Reddit, Shopify, Telegram, and many more services that allow identity verification with World ID [10]. Thanks to its open system, third parties can easily develop verification apps for unique human identities, helping to prevent bots and malicious users from disrupting the system.
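Hypothetically, such a third-party integration boils down to verifying the submitted proof and recording the per-app nullifier so that the same person cannot sign up twice. The sketch below is an illustrative assumption about what such a backend does, not Worldcoin’s actual developer API.

```python
# All names are illustrative; real integrations use Tools for Humanity's
# developer tooling rather than this hand-rolled flow.
verified_nullifiers: set[str] = set()

def handle_signup(proof: dict, verify_zk_proof) -> str:
    # verify_zk_proof stands in for a real verifier that checks the proof
    # against the current on-chain identity root.
    if not verify_zk_proof(proof):
        return "rejected: invalid proof"
    if proof["nullifier_hash"] in verified_nullifiers:
        return "rejected: person already signed up"  # Sybil defence
    verified_nullifiers.add(proof["nullifier_hash"])
    return "account created"

# Toy run with a stubbed verifier that accepts every proof:
proof = {"nullifier_hash": "0xabc...", "proof": "..."}
print(handle_signup(proof, lambda p: True))  # account created
print(handle_signup(proof, lambda p: True))  # rejected: person already signed up
```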

In the coming years, we will see if they can fulfill their promises and whether World ID can become a significant player in the future of the internet. As AI continues to evolve, safeguarding digital identities will be increasingly important to counter misinformation and protect privacy. It will be interesting to see how the challenges AI poses to the internet will be addressed and which concepts will ultimately prevail in shaping the future of the web.

 

[1] Ledger Academy (2023), Web 1.0 Meaning,
online. Available at: https://www.ledger.com [Accessed 10 Sep 2024]

[2] David S. Wall (2001), Crime and the Internet,
book. ISBN 9780415244299

[3] Clifford Stoll (1995), Why the Web Won’t Be Nirvana,
online. Available at: https://www.newsweek.com [Accessed 10 Sep 2024]

[4] Europol (2022), Law enforcement and the challenge of deepfakes,
online. Available at: https://www.europol.europa.eu [Accessed 10 Sep 2024]

[5] DER SPIEGEL 28/2023 (2023), Wenn Maschinen lügen lernen,
online. Available at: https://www.spiegel.de [Accessed 10 Sep 2024]

[6] John R. Douceur (2002), The Sybil Attack,
online. Available at: https://www.microsoft.com [Accessed 10 Sep 2024]

[7] Worldcoin (2024), Worldcoin Whitepaper,
online. Available at: https://whitepaper.worldcoin.org [Accessed 10 Sep 2024]

[8] Worldcoin (2024), Introducing World ID 2.0,
online. Available at: https://worldcoin.org [Accessed 10 Sep 2024]

[9] Worldcoin (2024), Worldcoin Foundation unveils new SMPC system, deletes old iris codes,
online. Available at: https://worldcoin.org [Accessed 10 Sep 2024]

[10] Worldcoin (2024), Worldcoin Apps,
online. Available at: https://worldcoin.org [Accessed 10 Sep 2024]
