ZK solutions emerging from Web3 could mitigate hidden AI risks

Zero-knowledge (ZK) technology has become some of the hottest property in the blockchain sphere over the last few years, powering an entire ecosystem of Layer 2 networks that bring greater scalability, lower fees, and other enhancements to the Ethereum landscape.

So far, however, the application of ZK tech has largely been limited to infrastructural solutions for blockchains. Based on emerging developments in Web3, though, ZK is beginning to move into the arena of user-facing technology, and could even address some of the less obvious risks associated with the increasing prevalence of AI.

AI, and specifically large language models (LLMs), is becoming increasingly embedded in our online experiences. The latest development is the expansion of OpenAI’s ChatGPT capabilities into web search, a move mirrored by Meta. Both efforts are evidently aiming to challenge Google’s dominance of the search segment, where its market share sits at around 90 percent.

Hidden risks

The idea of being served up a single answer to a search query, rather than trawling search results to find the page with the information you need, may seem tempting. However, researchers are warning of the risks. Emily M. Bender, Professor of Linguistics at the University of Washington, posted a lengthy thread on X outlining how the accuracy of LLM search results still leaves plenty to be desired and, perhaps more worryingly, how the use of LLMs could erode “information literacy.” She argues that by doing away with a list of results for the user to make sense of, there’s no chance to consider the reliability or relevance of the answer.

Furthermore, there is still a lack of awareness of the extent to which AI is coming to dominate our online interactions. A 2024 study by AWS researchers found that 57 percent of all web-based text has been generated by AI or put through an AI algorithm, while Pew Research shows that 44 percent of Americans still believe they don’t interact regularly with AI.

This disconnect creates further risks since the numbers indicate that at least some people are interacting with AI unwittingly. The spread of misinformation is one obvious example, but LLMs may also come with cybersecurity risks if a malicious entity tampers with the model or the underlying data used to drive it.

Lifting the lid

ZK technology is still relatively new and AI is certainly advancing at a much faster rate, thanks to the development efforts and investment capital of big tech. However, ZK solutions currently emerging from the Web3 sector are delivering the kind of functionality that could mitigate some of these hidden AI risks.

Effectively, ZK tech offers a way for an entity to prove a fact is true without revealing the underlying data. So, in the case of LLM-driven search responses, imagine a scenario where the LLM could be asked to show the sources it used to reach an answer without revealing proprietary details about the workings of the algorithm.
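
To make this concrete, here is a toy sketch of the underlying idea in Python: a Schnorr-style proof that the prover knows a secret value x behind a public value y = g^x, without ever revealing x. The parameters and framing are illustrative assumptions only; production ZK systems, including anything capable of attesting to an LLM's behavior, use far larger parameters and prove far richer statements.

```python
import hashlib
import secrets

# Toy Schnorr-style zero-knowledge proof of knowledge of a discrete log.
# WARNING: parameters are deliberately tiny for readability and are NOT secure.
p = 2039            # safe prime: p = 2q + 1
q = 1019            # prime order of the subgroup we work in
g = 4               # generator of the order-q subgroup of Z_p*

def challenge(y, t):
    """Fiat-Shamir: derive the challenge by hashing the public transcript."""
    digest = hashlib.sha256(f"{g}|{y}|{t}".encode()).digest()
    return int.from_bytes(digest, "big") % q

def prove(x):
    """Prove knowledge of x such that y = g^x mod p, without revealing x."""
    y = pow(g, x, p)                 # public value derived from the secret
    k = secrets.randbelow(q)         # one-time random nonce
    t = pow(g, k, p)                 # commitment to the nonce
    c = challenge(y, t)
    s = (k + c * x) % q              # response blends nonce and secret
    return y, (t, s)

def verify(y, proof):
    """Accept iff g^s == t * y^c (mod p); the verifier never sees x."""
    t, s = proof
    c = challenge(y, t)
    return pow(g, s, p) == (t * pow(y, c, p)) % p

x = secrets.randbelow(q)             # the hidden "underlying data"
y, proof = prove(x)
print(verify(y, proof))              # True, yet x was never disclosed
```

The same commit-challenge-respond pattern, generalized to arbitrary computation by systems such as zk-SNARKs and zk-STARKs, is what could let a service attest to how an answer was produced without exposing the model or the data behind it.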

Historically, centrally run systems haven’t necessarily played well with blockchains – but this is where ZK proofs could provide the bridge needed to bring the certainty of on-chain security to off-chain transactions.

Most blockchains can only attest to what happens within the confines of their own networks. SXT Chain, developed by the decentralized data and ZK technology platform Space and Time, can witness and secure an array of data types, including data from off-chain databases, into its network, and that data can then be queried through an SQL interface that will be familiar to most developers. One use case for a ZK “proof of SQL”? Auditing and logging the actions of LLMs.
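
As a rough illustration of what such an audit log might look like from the developer’s side, the sketch below uses plain SQL via Python’s built-in sqlite3 module. The table name and columns are invented for this example and are not Space and Time’s actual schema or API; what a proof-of-SQL system adds on top is a cryptographic proof that the returned rows genuinely match the query over data that hasn’t been tampered with.

```python
import sqlite3

# Hypothetical LLM audit log, queried over a standard SQL interface.
# Schema and values are invented for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE llm_audit_log (
        ts          TEXT,   -- when the model answered
        model       TEXT,   -- which model produced the answer
        prompt_hash TEXT,   -- hash of the user prompt
        source_url  TEXT,   -- a source the model relied on
        answer_hash TEXT    -- hash of the answer served
    )
""")
conn.executemany(
    "INSERT INTO llm_audit_log VALUES (?, ?, ?, ?, ?)",
    [
        ("2024-11-01T09:15:00Z", "search-llm-v1", "a1b2", "https://example.com/a", "f9e8"),
        ("2024-11-01T09:15:00Z", "search-llm-v1", "a1b2", "https://example.com/b", "f9e8"),
        ("2024-11-01T09:16:00Z", "search-llm-v1", "c3d4", "https://example.com/c", "77aa"),
    ],
)

# "Which sources did the model rely on for a given answer?"
rows = conn.execute(
    "SELECT source_url FROM llm_audit_log WHERE answer_hash = ? ORDER BY source_url",
    ("f9e8",),
).fetchall()
print([url for (url,) in rows])   # ['https://example.com/a', 'https://example.com/b']
```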

In a similar vein, ZK-based identity solutions are still in their earliest iterations but could help distinguish between humans and bots online. Currently, there are precious few ways we can concretely prove who we are short of sending out copies of our government-issued ID. ZK identity projects such as Holonym aim to equip people with private keys that they can use to verify their humanity without disclosing their identity or any identifying characteristics.
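
One way to picture this is a registry of public keys belonging to verified humans, where a user proves they hold the secret key behind one of the entries without revealing which one. The sketch below implements that idea as a Schnorr OR-proof in Python with deliberately tiny, insecure parameters; the registry, key sizes, and flow are illustrative assumptions rather than how Holonym or any specific project actually works.

```python
import hashlib
import secrets

# Toy "one of these registered humans" proof: the holder of ONE of the
# registered keys proves membership in the registry without revealing
# which key. Parameters are tiny and insecure, for illustration only.
p, q, g = 2039, 1019, 4          # safe prime p = 2q + 1; g generates the order-q subgroup

def H(*parts):
    """Fiat-Shamir challenge derived from the whole public transcript."""
    data = "|".join(str(x) for x in parts).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove_membership(registry, j, x_j):
    """Prove knowledge of the secret key behind registry[j], hiding j."""
    n = len(registry)
    t, c, s = [0] * n, [0] * n, [0] * n
    for i in range(n):               # simulate transcripts for every other key
        if i != j:
            c[i], s[i] = secrets.randbelow(q), secrets.randbelow(q)
            t[i] = (pow(g, s[i], p) * pow(registry[i], (q - c[i]) % q, p)) % p
    k = secrets.randbelow(q)         # real commitment for the key we actually hold
    t[j] = pow(g, k, p)
    c[j] = (H(*registry, *t) - sum(c[i] for i in range(n) if i != j)) % q
    s[j] = (k + c[j] * x_j) % q
    return t, c, s

def verify_membership(registry, proof):
    """Accept iff the challenges add up and every branch verifies."""
    t, c, s = proof
    if sum(c) % q != H(*registry, *t):
        return False
    return all(pow(g, s[i], p) == (t[i] * pow(registry[i], c[i], p)) % p
               for i in range(len(registry)))

# Three registered "humans"; we hold only the second secret key.
secret_keys = [secrets.randbelow(q) for _ in range(3)]
registry = [pow(g, x, p) for x in secret_keys]
proof = prove_membership(registry, 1, secret_keys[1])
print(verify_membership(registry, proof))   # True, without revealing which member we are
```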

Some barriers remain. A key question is whether the pace of development in Web3 can keep up with big tech’s heavily funded advances in AI, and the willingness of big tech to embrace solutions focused on LLM transparency also remains to be seen. However, the existence of such solutions means that the argument for keeping the lid on the AI black box is becoming increasingly difficult to maintain.