
Dr Nicolas Kokkalis LIVE at Consensus 2026 | Pi Network Big Update

PI GCV INDIA

28m 10s · 4,560 words · ~23 min read
Auto-Generated

[0:00] Welcome to the AI track, presented by Pi Network. Please welcome to the stage Daniela Barbosa, Andrew Hughes, Nicolas Kokkalis, and Lia Callen Butler. Woo! Oh, I love it when we've got hecklers down the front. That's true. That's true. If you're going to heckle, we're going to start. Thank you for joining us, everyone. Look, between deepfakes, AI agents, and a wave of social engineering attacks, proving your identity is becoming one of the most important and hardest problems to solve in crypto. Because it's complicated, right? We want assurances that we know who or what we're interacting with, but at the same time, we don't want to have to give up our privacy. So how do we prove humanness without creating a surveillance state? And what I love about this panel is that we've got three people who are approaching this problem from very different angles, which I think reflects reality: there probably isn't a single silver bullet here. We're going to be looking at a spectrum of solutions: cryptographic, social, behavioral, reputational, and even cultural. So, to kick us off, and to ground this conversation and make sure we're all on the same page, I thought we could just explain what we mean when we talk about proving that you're human online today. What does proof of humanity actually mean to you? I know that's a big question, but you must be talking about this a lot. Do you want to kick this off, Daniela? Sure. So, I'm Daniela Barbosa. I'm the Executive Director at the Linux Foundation for LF Decentralized Trust, which is an umbrella of open source projects, standards, and specs around decentralized identity, interoperability, cryptography, and many other categories of what we call decentralized technologies.
Which really touches a lot of what we're going to be talking about today, which is identity: proving not only who you are as a human (I'm a human, I'm online), but also solving trust. How do you trust whether that is an agent or a human? How do you build trust into the models? Accountability: what happens when either the individual, or their agent, or something else goes rogue for a while? And how do you do that with privacy-preserving methods? Meaning that I am not displaying all sorts of private information and PII, but I'm really only verifying the pieces: I'm proving that I'm human, I'm proving that I can be trusted, I'm proving that what I'm doing has accountability, so proof of authority, essentially, to do that. And then I also have the benefit of privacy as part of that. Andrew, Nick, would you like to add to that? Sure. Andrew Hughes, VP of Global Standards at FaceTec. FaceTec is a face biometric company that can detect the difference between a real human being and any kind of fake on the other side of the camera. So we don't get into "can you trust the person as an individual?" We just prove that it's a human being, and then our customers take it from that point. The title of this panel speaks about humanity and the identification data that we all know and love, but we have a very, very tiny piece, proof of human being, so that the trust process can extend from there. We give you that human root of trust to complement the digital and cryptographic root of trust from private keys, PKI, and blockchains. Hi, Nicolas Kokkalis, one of the founders of Pi Network, which is a Layer 1 blockchain protocol, and it has the special property that every single account on the blockchain is already KYC'd.
So, to answer your question, I also want to step back a little and say that it depends on the purpose. In some cases, you need to know exactly who the person is. If someone goes to the bank and wants to withdraw money, you really need to know exactly who the person is. In other cases, you need to know whether the act being conducted is happening by a human or by a bot. And in other cases, you want to know whether there are unique people behind a situation. For example, if you had ratings on an online system, you don't want to attribute a thousand different product reviews to one actor who created a thousand bots.

[5:04] You want to give fairness to everyone's voice. Just one more foundational question. The second part of this title is "not doxing yourself." When we talk about that, I doubt we're saying people necessarily need to be completely anon on the internet; is it more about wanting personal control over your data, having the option of selective disclosure? How do you draw that line in terms of what is not doxing yourself? So, I'll start. I'm assuming everyone's gone through the ceremony of taking a picture of your driver's license front and back, taking a selfie, and getting the green checkmark: you're the person represented by the document. That's right. Where does the data go? Whoever was doing that KYC process, that identity verification process, now has a copy of that data. There are no universal rules for how long they should keep those images of your identification data, or the fact that that face, that timestamp, and that context existed.

[6:21] A recent case, where many, many French people were doxed accidentally: the French ministry that deals with digital identity and the issuance of credentials was hacked, and there was a database breach. And of course, all the audit images that they were required to keep for some period of time were just stolen. So that, to me, is what the doxing part is: you go through a process to gain trustworthiness in a system without any direct knowledge of what happens to the data afterwards, and it's just not protected properly or expunged quickly.

[7:05] That's the doxing problem. Yeah, I will echo that. And I want to say that when people think about not doxing your identity, especially at a crypto conference, their minds immediately go to ZKPs, advanced techniques like ZK-SNARKs, STARKs, and Bulletproofs. These techniques are a little more advanced: you need to mathematically convert your problem into a formula, and then into polynomial computations and other things. But in practice, you can have zero knowledge with existing primitives we all know and understand. Let me give you a very quick example. Imagine, in the traditional world, you want to buy alcohol. You go to the store and hand over your driver's license, and you are essentially showing all of your data for no reason, because all they need to know is whether you're above 21 or not. Instead, they're getting your home address, your date of birth, everything. Now, if you were to do this in a zero-knowledge way, you could have a KYC authority, let's say the Pi KYC authority. That authority can provide you a statement, "you are above 21 years old," and give you a cryptographic signature of that statement, together with, for example, your Face ID data. Then all you need to do is provide that statement, and the recipient can check whether the signature comes from the trusted third party. You've proven that you are above 21 without delivering more information than necessary. And just to quickly follow on, I forgot to say that I work on international standards at the ISO level, specifically the mobile driver's license, the one you can use at some TSA checkpoints at airports. We built that into the credential format, so we have over-21 flags, and that's all you have to send.
We're building in long-tail ZK for future use; that's the next provision. So the kinds of credentials being issued by states, DMVs, and governments: they're very well aware of these issues, and they're taking measures with the available technology to prevent that oversharing of data. But I think, you know, when it comes to digital driver's licenses and credentials,
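Nicolas's over-21 example needs nothing more exotic than an ordinary digital signature over a minimal claim. Here is a sketch of that flow using textbook RSA with famously small parameters; this is purely illustrative and never secure in practice, and the authority name and user identifier are made up:

```python
import hashlib

# Toy textbook-RSA parameters (classic tiny example; illustration only).
p, q = 61, 53
n = p * q                          # modulus: 3233
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent: 2753

def _digest(statement: str) -> int:
    # Reduce a SHA-256 digest into the toy modulus.
    return int.from_bytes(hashlib.sha256(statement.encode()).digest(), "big") % n

def issue_attestation(user_id: str) -> int:
    """The KYC authority signs only the minimal claim: this user is over 21.
    No date of birth, no home address ever leaves the authority."""
    return pow(_digest(f"{user_id}:over_21"), d, n)

def verify_attestation(user_id: str, signature: int) -> bool:
    """Anyone holding the authority's public key (e, n) can check the claim."""
    return pow(signature, e, n) == _digest(f"{user_id}:over_21")

sig = issue_attestation("alice-credential-hash")
print(verify_attestation("alice-credential-hash", sig))            # True
print(verify_attestation("alice-credential-hash", (sig + 1) % n))  # False: tampered
```

The store learns exactly one bit (over 21 or not) plus that a trusted authority vouched for it, which is the selective-disclosure property the over-21 flag in the mobile driver's license standardizes.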

[9:44] There's a whole industry focused on that one thing. If you start thinking about apps and other types of applications online, the technologies and the standards are there. There might be multiple standards, and there might be a lot of arguments about which standard is best, and regulators across the world, depending on the jurisdiction, have different opinions, and a lot of the jurisdictional regulations inform the technology that can be deployed in those regions as well. But ultimately, what we're seeing, because of the fast pace of application development, is that people are not building identity into the core architecture. What they're doing is bolting it on at the end: oh, it's an afterthought, we have to deal with identity. So the techniques, the ZKPs and other privacy-preserving technologies, are there. There are really wonderful implementations around the world that you can take a look at, but people, whether they're in financial services or in client app building, and obviously in the agentic world, are just not putting identity in the architecture from the start. And that, I think, is the biggest problem that is just running away from us. I guess, just as a follow-up question to that: we've got a wealth of knowledge here on digital identity, and there are decades of work that's been done on it. Something that came up in our prep call, if I may say, was a hint of frustration that whenever a new emerging tech comes out, whether it's crypto or AI, it feels like we're reinventing the wheel and starting from scratch on identity. So why is that, and how are you putting those people back in their place? So, it's the problem of non-universal knowledge around every possible thing.

[11:39]So, um, several years ago, sort of 2016, 2017, the emerging blockchain as a real thing community started trying to deploy things, trying to standardize.

And they came up with the great idea of having identity on the blockchain. Until they met some digital identity pros who'd been around for a while, who pointed out that putting identifying information on a blockchain, which is immutable and correlatable in certain ways, ran counter to the principles of non-censorship and independence from authorities. They quickly realized that there was an existing industry, and they were able to shift focus and learn about the technologies of digital identity and enterprise identity. And because we were challenged about our assumptions, we shifted as well. The decentralized identity world sort of came out of that collision of galaxies, which is what always happens, right? The challengers come in, they realize there's something already existing, and the worlds collide and find a way forward, because neither is going away. Yes. I think in practical terms, it makes more sense to store attestations, or logs of whether something happened or not, on the blockchain, rather than the actual KYC data. And that's the approach that is taken. Because in some cases, for regulation, you need to prove that you actually checked the ID of someone. So you can have that attestation stored on the blockchain, publicly accessible at any time, but not the actual ID of the person. Yeah, and it's the same today with agentic AI.
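The "attestation on chain, ID data off chain" pattern Nicolas describes can be sketched as a salted hash commitment: the chain records only an opaque digest proving the KYC check happened, and the opening data stays with the checker. Everything here (the ledger list, the names, the authority ID) is an illustrative stand-in:

```python
import hashlib
import os

chain = []  # stand-in for an append-only public ledger

def record_kyc_event(user_id: str, checked_by: str) -> bytes:
    """Commit on-chain to 'we checked this person's ID' without revealing
    the ID itself; the random salt prevents guessing the user from the hash."""
    salt = os.urandom(16)
    commitment = hashlib.sha256(salt + f"{user_id}|{checked_by}".encode()).digest()
    chain.append(commitment)
    return salt  # kept off-chain; needed to open the commitment later

def prove_kyc_event(user_id: str, checked_by: str, salt: bytes) -> bool:
    """For a regulator: reopen the commitment and match it against the chain."""
    commitment = hashlib.sha256(salt + f"{user_id}|{checked_by}".encode()).digest()
    return commitment in chain

salt = record_kyc_event("alice", "pi-kyc-authority")
print(prove_kyc_event("alice", "pi-kyc-authority", salt))  # True
print(prove_kyc_event("bob", "pi-kyc-authority", salt))    # False: no such check
```

The on-chain record is publicly auditable any time, yet reveals nothing about the person unless the checker deliberately opens the commitment for a regulator.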

[13:34] And we're seeing the same push and pull, but I do think a lot of the lessons learned over the last 10 years in this space, because it's a lot of the same people on both sides, are really helping accelerate the technology and the standards, getting this looked at as actually operational, building it into the core.

[13:54] So, at the Linux Foundation, for the last 10 years, we've had groups of organizations and individuals working on identity. We've had identity-related blockchain projects; we have Trust over IP, where the trust governance frameworks and the community specs are being worked on. Just in September, a new working group was launched, the Trust Graph working group, specifically to work on agentic AI identity opportunities and problems. But I think it's like this regardless of the industry. If you back up 20 or 30 years, before our passports had chips in them, that was also something where the industry had to coincide with the folks working on identity. You were there. I was there. And it only took 10 years for the US border people to actually start checking the chips. Wow. But there's actually another aspect as well, which is forcing the industry to reinvent the wheel in a justified way: the whole agentic AI shift. Before that, it was very easy to make assumptions about who is a bot and who is a human, because humans had very different behavior compared to bots. So you had systems like CAPTCHA that you could safely, more or less, assume only a human could answer before proceeding. But now, with agents becoming more and more humanlike, and with the fact that an agent can be replicated into a million agents running concurrently, it creates new challenges for identity that we need to address as a community. That was also one of our core principles for why we started having fundamental identity KYC verification on the Pi blockchain.

[16:00] Because each individual, for example, can have their own primary identity and can run multiple agents if they want. If the agents are carrying their identity and essentially attributing their behavior to the original user, then that's the realistic good-case scenario. The bad case is when the agents are pretending to be humans themselves. That's why society needs to add some kind of checks and balances for this problem in the future. So, sorry, I'm taking a tangent here. Go for it. I think it's interesting, so you must think it's interesting. What was the original proof-of-human test? Today we're talking about biometrics, attestations on a chain. That didn't exist in the 70s, with mainframes, right? So how did you prove you were human? Password. The password was actually proof of human. Once passwords became not useful anymore, pretty quickly, what was the next proof of human? Your physical, what are they called? I can't remember the name. You know, the token cards with the changing numbers. There are some people here old enough to have had several of these. 2FA. Yeah, 2FA. So first it was knowledge-based: that's how you proved you were human, because only people can remember things, right? A secret. Then it was what you have, a physical token. Until everyone realized that everyone loses physical tokens, and there's synchronization and batteries and all that stuff. So now, in my opinion, working for a biometrics company, obviously the way you prove you're a human is through biometrics and liveness. Until deepfakes come in. No, no, this is the answer. Well, let's go deeper into this, because I think each of you has very interesting use cases to share. Yeah. It's a deceptively simple question, right?
Look, at the core of this, we're just trying to figure out how you reliably link a real human to a digital identity or cryptographic key without exposing massive security or privacy issues. So let's go through each of you, because I think you're all working on it in very different ways. Daniela, the Linux Foundation has been working on identity and trust infra for years and years. Maybe you want to walk us through the kernel maintainer example, and specifically why it matters beyond open source? Yeah. And we're here at Consensus, so there's the conversation around hacks. A couple of months back, there was a social engineering hack, which was actually a team of individuals who would go from conference to conference and get embedded in a community, going to events like this. They got embedded in a community, they started making small contributions to the codebase, and they eventually gained more trust. It was complete social engineering, it took them months and months to do, and we've seen even bigger examples of that as well. A couple of years ago in the Linux kernel, the same thing: someone impersonating a maintainer or a contributor got some code into the kernel that, thank goodness, the kernel maintainers noticed and stopped before it went through. But again, I think it was a three-year process that that social engineering went through. So, how do you do proof of humanhood for our maintainers? For those who don't know, the Linux kernel is, I would say, the most important open source project in the world. There are thousands of maintainers around the world continuously contributing to the kernel, with thousands more just contributing.
So, how do you make sure you know who the maintainers are? Today, and for many, many years, there's been a physical key-signing ceremony, where you meet with other maintainers, you essentially pull out your passport, and you say, "Hey Andrew, I'm Daniela." Actually, IRL. Yeah, okay. And he vouches for me, and he's already been vouched for by three others, and it creates a root of trust, right? It is not scalable in today's digital world, even within the kernel itself, which has thousands of maintainers, and it's certainly not scalable in the DeFi crypto development world at all. So what we've been working on, using these techniques we've been talking about, with verifiable credentials, is how to create a digital way for the maintainers to prove to each other who they are, and then that verifiable credential is carried with them when they do commits. I think this is going to be in September; we'll have a working prototype, and by then we'll be talking to other major open source programs that really need to understand who their maintainers are, for security's sake, but also for regulatory requirements' sake: who actually contributed to a code project. A lot of this work is taking the 10 years of knowledge that our community has built and solving real problems in the digital world, around how human beings have proof of authority to do the thing they're going to do. Andrew, you're on biometrics, as you mentioned. I'm keen to know how you see liveness detection evolving as we see this proliferation of AI-generated identities online.
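The key-signing ceremony Daniela describes is a classic web of trust: you are trusted if a chain of in-person vouches connects you back to someone already trusted. A minimal sketch of just the vouching logic, with illustrative names and an assumed hop limit:

```python
from collections import deque

# Who has vouched for whom, in person (illustrative data).
vouches = {
    "linus": ["daniela", "andrew"],
    "daniela": ["nicolas"],
    "andrew": [],
    "nicolas": [],
}

def is_trusted(person: str, roots: set, max_hops: int = 3) -> bool:
    """Breadth-first search over the vouch graph: trusted if reachable
    from a trusted root within max_hops vouches."""
    frontier = deque((root, 0) for root in roots)
    seen = set(roots)
    while frontier:
        current, hops = frontier.popleft()
        if current == person:
            return True
        if hops < max_hops:
            for vouched in vouches.get(current, []):
                if vouched not in seen:
                    seen.add(vouched)
                    frontier.append((vouched, hops + 1))
    return False

print(is_trusted("nicolas", {"linus"}))  # True: linus -> daniela -> nicolas
print(is_trusted("mallory", {"linus"}))  # False: no vouch chain exists
```

The hop limit models how trust dilutes with each introduction; a verifiable-credential version replaces each graph edge with a signed attestation, which is what makes the scheme scale beyond physical ceremonies.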
So, one of the evolutions of digital identity and digital credentials has been a shift toward using multiple signals across multiple sources to establish identification. Plugging into what Daniela said, we're getting engaged in the first-person credential project, which is what the Linux maintainers are moving toward. We're adding the signal that it's a person to the other signals, and combining them in the verifiable credentials. So instead of doing something monolithic, using a big bureau to do a KYC and then having a static identification of a person that lasts forever, the world is moving toward signals, risk context, and basically verification at important times, moving closer to continuous verification. That's sort of where we fit into this picture, because for face biometrics specifically, you need a device with a camera and some software to check it. This camera has to be physically somewhere. You can virtualize a camera and do it remotely, but that's incredibly risky when you're trusting the human detection to some remote video feed. Right. So that's why we do the work right in the hardware, with cameras and software. Andrew mentioned, like, three things the audience can take away, right? First, there's a fantastic white paper on first-person credentials; it's not a very technical paper, and I think everybody here could certainly get through it and understand it. The second is the concept of personhood credentials: understanding what personhood credentials are. And then topics like verifiable credentials, but also understanding proof of authority, which goes back to, you know, in the Linux kernel, you have authority to send a commit, put up a PR, accept a PR.
But how do you do proof of authority in this digital world? You can prove that a human is a human, but then how do you send your agents out with proof of authority to do things on your behalf? How do you revoke that authority when you change jobs, you change roles, or you just don't want to trust that agent anymore? So those are the kinds of things: first-person credentials, proof of personhood, and proof of authority. Three themes that would be really great for everybody to look into. Dr. Nick, it's your chance now; they talked a long time. Pi Network, of course: you've got the attestation, which is the signal of proof of identity. Yeah, that's right. So, at the end of the day, all these cryptographic techniques and all these credentials are absolutely necessary on the back end. What we need to do on the front end, from the system designer's perspective, is create the user experience and the user interface such that people can actually use those systems. Because at the end of the day, if you have a very sophisticated, foolproof system but people can't use it, or they lose their keys and get completely locked out with no way to recover, then it becomes not as useful as we wanted it to be. People end up not using it, and they fall back to very primitive verifications: "what are the last four digits of your social security number?" So yeah, we are big on user experience. A bit of a disconnect between theory and real-world applications. Yeah. We're going to run out of time, so let's do a quick rapid-fire round. If we're lucky enough to find ourselves back on this stage all together in two years, what progress do you hope the industry has made in terms of proving we're human without doxing ourselves? I'll start.
I will be an AI agent, because I'll not be traveling anymore; it will be too expensive. Sorry, were you speaking? I wasn't, but I will be able to make sure I can prove that the AI agent is actually acting on your behalf. Absolutely. Yeah, and distinguish between Sybil attacks and people actually being able to use agents to empower themselves. You want your agent to be able to access all these resources online and report to those resources whom they represent, so that they can do your work, while at the same time those services are not bombarded by fakes pretending to be human beings. So, I'm going to be a doomsayer. Love it. There is going to be a major hack. There is going to be a major leakage of identity. Your grandmother is going to be using an agent, and it's going to be horrible. Just like in every other industry, this happens all the time, in the crypto industry as well. And from that, we're going to learn, and hopefully build better systems that are really secure and privacy-first. And I hope that happens fast, because the rate at which technology is accelerating today has never been seen before. I'm grateful for everybody in our community who, over the last 10 or 15 years, has been working on this. Let's make sure we look at what we've built and use those primitives to build something better. I really wish it didn't have to be that way, but if we look at history, that is usually how things start to change: something awful has to happen first. So, you heard it here; don't say you weren't warned. Thank you for an amazing panel, guys. I look forward to seeing you in two years to find out how it's all going.
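The delegation-and-revocation problem the panel keeps returning to, sending an agent out with scoped authority and being able to pull it back, can be sketched as a signed, revocable grant. This toy uses an HMAC as a stand-in for a real public-key signature (so in this sketch the verifier shares the principal's key, which a real system would avoid); all names, scopes, and the secret are illustrative:

```python
import hashlib
import hmac
import json

PRINCIPAL_SECRET = b"principal-signing-key"  # stand-in for a real private key
revoked = set()                              # grant IDs the principal has pulled

def grant_authority(agent_id: str, scope: str) -> dict:
    """Principal issues a signed grant: 'this agent may do <scope> for me'."""
    grant = {"agent": agent_id, "scope": scope, "grant_id": f"{agent_id}:{scope}"}
    payload = json.dumps(grant, sort_keys=True).encode()
    grant["sig"] = hmac.new(PRINCIPAL_SECRET, payload, hashlib.sha256).hexdigest()
    return grant

def authority_is_valid(grant: dict) -> bool:
    """Verifier checks the signature and that the grant was not revoked."""
    body = {k: v for k, v in grant.items() if k != "sig"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(PRINCIPAL_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, grant["sig"]) and grant["grant_id"] not in revoked

g = grant_authority("travel-agent-01", "book-flights")
print(authority_is_valid(g))   # True: signed and not revoked
revoked.add(g["grant_id"])     # "I just don't want to trust that agent anymore"
print(authority_is_valid(g))   # False: revoked
```

Revocation here is just membership in a set that verifiers consult, which mirrors how credential revocation lists work: the grant itself never changes, only its standing does.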
