Why Is This Happening?

“Your Face Belongs to Us” with Kashmir Hill: podcast and transcript 

Chris Hayes speaks with author and tech journalist Kashmir Hill about the proliferation of facial recognition technology and the ramifications.

Jan. 26, 2024, 6:27 PM EST
By  Doni Holloway

From unlocking our phones, to scanning our faces to board flights, facial recognition technology has become a ubiquitous part of modern life. And while its implementation can make life easier, what are the ramifications of companies capturing and selling our biometric data? And do we really own our faces? Our guest this week points out that, unregulated, this technological superpower can lead to dystopian, sci-fi-novel-like applications. Kashmir Hill is a tech reporter at The New York Times and author of “Your Face Belongs to Us: A Secretive Startup’s Quest to End Privacy as We Know It,” in which she chronicles the rise of Clearview AI. She joins WITHpod to discuss the growth of this technology, privacy concerns, ways in which our online “dossiers” are linked to our faces and more.

Note: This is a rough transcript — please excuse any typos.

Kashmir Hill: Now with this kind of technology, it is pretty easy to attach a name to a face, either retroactively, like the Sedition Hunters did, or even in real time. And so, I just think it could be pretty haunting if you have to have this assumption as you’re moving through the world every day.

Well, maybe somebody else in this pharmacy is going to take a picture of me and know I’m the person buying hemorrhoid cream. Or you’re out to dinner, you’re having a sensitive conversation, whether it’s work gossip or you’re talking about a friend or something serious that’s happening in your life, and you just assume the people dining around you are strangers.

Well, what if they get interested in what you have to say, take your picture, and now know who you are, understand the context. And that’s just kind of day-to-day life, not even to mention governments having this power to know who people are all the time, or businesses, as we’re already starting to see.

Chris Hayes: Hello, and welcome to “Why Is This Happening?” with me, your host, Chris Hayes.

You know, my friend and colleague, Ryan Reilly, at NBC News has a great book out called “Sedition Hunters” that is about the sprawling investigation into the January 6th rioters and insurrectionists. But specifically about this kind of online community that came together calling themselves Sedition Hunters, who used essentially open-source intelligence, right, just stuff that was out on the internet, images and video, to track down like dozens and dozens, if not hundreds of the folks that had invaded the Capitol, and often in the absence of any FBI progress on these folks.

And there’s all sorts of different ways they were able to track people down, including all sorts of ingenious little details. You know, someone’s got this hoodie with this insignia on it. And it turns out that insignia is for this school or this plumbing business in some town and then you can figure out who it is. But one of the tools they use, and I really learned about it from this book, was facial recognition.

I had no idea, until I read Ryan’s book, how good facial recognition technology had gotten. I mean, it could identify people in the backgrounds of photos, even sort of blurry ones.

And I think this is a case where arguably this is sort of a good use case of facial recognition. These people are in a public place, right? It’s not like an invasion of privacy when you’re in a public place, in the Capitol, to have your face recognized.

You’re committing a crime in front of the nation, often in front of like national television cameras. In lots of cases, people are recording themselves or others are recording other people. And then the facial recognition technology is one of the ways that they’re getting caught.

So you might say, okay, well, that seems like it’s sort of working. But the implications of the fact that everyone’s face everywhere can be captured, manipulated, bought and sold, and used for whatever other purposes any other sort of enterprise might have, is really pretty mind-boggling, and chilling and sort of frankly dystopian.

And there’s a fantastic new book about this, about one facial recognition software company in particular, but the sort of world of facial recognition software from one of my favorite tech reporters, who’s Kashmir Hill. She’s been on “Why Is This Happening?” before.

She’s now a tech reporter at “The New York Times,” though I don’t believe she was the last time we talked to her. So, congrats on that.

And the book is called “Your Face Belongs to Us: A Secretive Startup’s Quest to End Privacy as We Know It.” Kashmir, great to have you on the program.

Kashmir Hill: Thanks for having me back.

Chris Hayes: So Kashmir, how did you first get interested in facial recognition software?

Kashmir Hill: Yes. I mean, I’ve been writing about face recognition for more than a decade. I’ve been writing about privacy for more than a decade. And so I wrote about facial recognition technology back in like, 2010, 2011. But back then, it just didn’t work that well. It was pretty clunky.

There were companies that were dreaming about, let’s put it in billboards so it can look at people’s faces, know who they are, know if they’re a man or a woman, what age they are, and serve them certain ads. Facebook at that time, Google at that time, were starting to roll out facial recognition technology to tag your friends in photos. But yes, that’s when I first started writing about it. It wasn’t that powerful.

And then I kind of came back across it in the fall of 2019 because I got a tip from somebody that this company called Clearview AI had gathered billions of photos of people from the internet, from social media sites like Facebook, Instagram, Venmo, et cetera, and that they had built this really powerful facial recognition tool and that they were selling it to police.

And at first, I was skeptical. I didn’t think it could be true.

Chris Hayes: So let’s, before we get into Clearview and sort of the leaps made, I mean let’s just start with facial recognition as a thing that a person can do, because it’s one of those, I love the experience of taking a normal thing you do, right, that you pay no mind to, and then trying to think of like, well, how would you program a machine to do this, right?

And this is true of like, I remember if you look at robotics, like walking. There’s so much that goes into walking. There’s so many minute calculations our brain is making about how to shift our weight through the gait and when one foot leaves. And when you start having to put that into a machine, like it turns out it’s really, really difficult and complicated to program walking. And it’s taken a while.

Boston Dynamics, everyone has seen those videos of those, like the weird gait of the Boston Dynamics robots when they start. And they get more and more comfortable as time goes on, but that’s a lot of engineering and a lot of work.

Facial recognition is one of those things as well. We have an incredible facility for it. But when you try to break it down, it’s a pretty sophisticated, complicated thing that our brains are doing instantaneously.

Kashmir Hill: Yes, I talked to neuroscientists for the book to understand, well, how exactly do human beings do it? And there actually wasn’t a definitive answer to that. We’re still kind of figuring out how the brain does what it does, but it is incredible. And they say that there have been studies that babies, within hours of being born, are able to recognize their mother’s face in a room, which is incredible because babies —

Chris Hayes: Yes.

Kashmir Hill: — can barely see.

They think it has to do with pattern recognition, a part of our brain called the fusiform gyrus. It may be the same skill that people have to recognize kind of species of birds or types of cars, but it is incredible. And it’s really different from person to person.

Some people are terrible at face recognition. Some people are very good at it. And so there’s this whole range of skill levels at this. And yes, it was interesting in the early days when engineers were trying to figure this out, they didn’t understand how we did it. And so with computers originally, it was very much, like, take a ruler to the face. And what’s the distance —

Chris Hayes: Yes.

Kashmir Hill: — between the nose and the top of the lip or between the eyes? And it was very much not how human beings do it. Like, obviously, we’re not walking around with rulers to identify our colleagues.
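Note: To make the “take a ruler to the face” approach concrete, here is a toy sketch in Python. The landmark coordinates are invented for illustration; real early systems used many more hand-marked measurements.

```python
# Early facial recognition, roughly: represent a face as the pairwise
# distances between hand-marked landmarks, then compare those vectors.
import math
from itertools import combinations

def distance_signature(landmarks):
    """Turn (x, y) landmark points into a vector of pairwise distances."""
    return [math.dist(a, b) for a, b in combinations(landmarks, 2)]

def signature_gap(sig_a, sig_b):
    """Euclidean distance between two signatures; smaller means more alike."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(sig_a, sig_b)))

# Invented landmarks: left eye, right eye, nose tip, mouth corners.
face_1 = [(30, 40), (70, 40), (50, 60), (38, 80), (62, 80)]
face_2 = [(31, 41), (69, 40), (50, 62), (39, 79), (61, 81)]

print(signature_gap(distance_signature(face_1), distance_signature(face_2)))
```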

Chris Hayes: Yes, and it’s funny. I have a sort of interesting personal perspective on facial recognition because, as someone whose face is recognizable to some subset of people in the public, you will see in the moment sometimes a person has the subconscious sort of cognitive perceptual processes of face recognition with you before they have any conscious awareness of who you are.

And this is a very interesting thing to see because you can almost see it in real time that this is a deep, deep subconscious, almost biological, neurological thing that’s happening. Right? Someone sees you and they brighten up because some message has been sent to their brain before they can parse it, like, oh, that’s just a guy that hosts a TV show, or, oh, do we work together? Or are we college roommates? I know you. You’re kin.

There’s like this deep way in which facial recognition is sort of lighting up the whole brain even prior to you being able to process it.

And then in other contexts, we all have this experience, right? Where you recognize someone and you know you know them from somewhere, but you don’t know where, right? Is that my friend’s kid’s dad? Is that someone that I played pickup basketball with?

And yet the durability, intensity, and strength of that facial recognition is so strong that it can endure untethered from any construction of how you know the person.

Kashmir Hill: Yes, I mean, it really is this kind of important way of how we relate as human beings, that there’s a certain number of people that your brain lights up and you see them and —

Chris Hayes: Right.

Kashmir Hill: — they’re part of your social circle.

And I always think it’s funny talking about facial recognition technology with somebody who does have, let’s just say it, you have a famous face. And so some of the threats this potentially poses, you already live in your life. You walk around and people know who you are.

And I think one of the things that might happen is that if this kind of technology gets more widespread, that we all become famous in a way, right? We all become recognizable with all of the benefits or baggage that that brings along, depending on what your kind of reputation is.

Chris Hayes: That’s really a profound point and I think true. I mean, we already see that in a non-facial recognition context in terms of, I wrote an essay about this in “The New Yorker” a few summers ago, about how online life kind of gives a little taste of fame to sort of everyone, because all of a sudden, you have strangers interacting with you about a thing you said, right?

Like, that doesn’t happen in your normal life, really. Like, you have conversations about, like, politics or movies with friends and people that you have symmetrical, bilateral relationships with. All of a sudden, you get online and it’s like, whoa. Like, all these people are responding to me, right? And that doesn’t normally happen. These are strangers talking to me.

And the idea of a stranger recognizing you, or in the case that we’re going to talk about, a government institution, a company, like that is a pretty radical change from just the baseline we’re used to, a pretty radical disruption of, again, facial recognition being part of something that’s symmetrical fundamentally, symmetrical and reciprocal, right? Relationships that you have, and people have with you, now being untethered from that.

Kashmir Hill: Yes, I mean, I think it would just really change how we operate in the world. And you started the program by talking about January 6th and all of these people kind of coming together in a group and very much documenting it. And I think people have the sense when they’re in a big group like that, where everybody is doing something, whether it’s a protest or you’re just out in the park, you just assume that you are anonymous. You’re another face in the crowd.

But now with this kind of technology, it is pretty easy to attach a name to a face, either retroactively, like the Sedition Hunters did, or even in real time. And so, I just think it could be pretty haunting if you have to have this assumption as you’re moving through the world every day.

Well, maybe somebody else in this pharmacy is going to take a picture of me and know —

Chris Hayes: Yes.

Kashmir Hill: — I’m the person buying hemorrhoid cream. Or you’re out to dinner, you’re having a —

Chris Hayes: Right.

Kashmir Hill: — sensitive conversation, whether it’s work gossip or you’re talking about a friend or something serious that’s happening in your life, and you just assume the people dining around you are strangers.

Well, what if they get interested in what you have to say, take your picture, and now know who you are, understand the context. And that’s just kind of day-to-day life, not even to mention governments having this power to know who people —

Chris Hayes: Yeah.

Kashmir Hill: — are all the time, or businesses as we’re already starting to see.

Chris Hayes: So let’s talk about the tech. Again, I’m going to take my run at my naive understanding and then you can correct me.

So going back to some of these things like natural language processing, right? That was very hard for a very long time. And all kinds of different sort of things that come second nature to humans, but actually when you try to build them from the ground up from a programming perspective are incredibly difficult engineering problems.

Generally what’s happened, and we see this with, say, ChatGPT, is that the existence of large language models, or any sort of large training model, basically allows you to use machine learning and huge amounts of data so that you don’t have to hand-construct the measurement, right? We skip over the idea of, like, we’re going to construct some methodology and algorithm that measures eyes and all this stuff.

You’re just feeding it a ton of data and training it, like, yep, that’s right. Wrong. Right. Right. Wrong. Wrong. You do that enough with enough faces over enough time with sophisticated enough algorithms and basically, it just gets good at it.
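Note: A toy sketch of the “right/wrong” feedback loop Hayes describes. In a real system the thing being tuned is millions of neural-network weights; here, purely for illustration, the “model” is a single match threshold and the labeled similarity scores are invented.

```python
# Labeled examples: (similarity score between two photos, same person?).
labeled_pairs = [
    (0.91, True), (0.84, True), (0.77, True),    # "yep, that's right"
    (0.42, False), (0.55, False), (0.61, False), # "wrong"
]

def accuracy(threshold):
    """Fraction of labeled pairs this threshold classifies correctly."""
    hits = sum((score >= threshold) == same for score, same in labeled_pairs)
    return hits / len(labeled_pairs)

# "Training": keep whichever candidate threshold the labels reward most.
best = max((t / 100 for t in range(101)), key=accuracy)
print(f"learned threshold: {best:.2f}, accuracy: {accuracy(best):.0%}")
```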

Kashmir Hill: Yes, I mean, that’s well put.

You know, engineers first started trying to develop facial recognition technology basically in the early 1960s in Silicon Valley, before it was called Silicon Valley, with funding, secret funding, from the CIA. And at that point, it was very much like, let’s just try to get it to measure the distances between things on the face.

And they basically tried to do that for decades, and decades, and decades. And there would be like, incremental progress, but it just wasn’t getting that much better.

And then we had these modern methods come along in AI, neural networks, machine learning, and places like Facebook, for example, that had a whole bunch of photos of people where people had tagged themselves in the photo. And they tagged themselves —

Chris Hayes: Right.

Kashmir Hill: — in a headshot, they tagged themselves at a birthday party, they tagged themselves in a dark bar.

And Facebook had thousands of photos of people in all of these different kinds of lighting conditions, their heads tilted or looking away. And Facebook was able to —

Chris Hayes: Of course.

Kashmir Hill: — feed those thousands of photos to a computer and say, this is all the same person.

And they could do that for thousands of people. And that is how the computers learned: they said, oh, this is the same face in all these different photos. And they probably did look at distances, but they could also look at the actual pixels of the photo. They might be seeing a little freckle on your face or the shape of your ear.

Yes, it’s just with that much data, the computers got better really than humans at recognizing human faces.

Chris Hayes: They got better than humans.

Kashmir Hill: Better than humans, yes.

Chris Hayes: And was Facebook the big breakthrough? Like, was that the first big data set? And because people are doing it themselves, you know, you don’t have to hire some huge training operation, right?

People are saying, like, that’s me. That’s me. The guy sitting in the background of this 30th birthday party photo in a dark bar in New York City at 1 a.m., like hunched over. That’s the side of my jawline there.

And that, times a thousand or 5,000, times hundreds of millions of users, gets you pretty good.

Kashmir Hill: Yes, I mean, Facebook did have one of the big breakthroughs. It came around at this moment we keep talking about in AI. Geoffrey Hinton and his group of researchers, they made this kind of big leap forward in computer vision. And a researcher I talked to at Facebook saw what they did and said, we can do this exact same thing for faces. And so they did that, they developed this algorithm that they called DeepFace and they put their findings out there.

Everyone got really excited. Journalists got really scared, said, this is too powerful, what Facebook has created. Are you going to roll this out on Facebook? Are you going to like, let other people use this?

And Facebook said, oh, don’t worry about it. This is just research. You know, we’re not actually rolling this out on the site. This is theoretical.

And then about a year later, they rolled it out on the site, a version of it. And people started noticing, oh, wow, Facebook got way better at tagging my friends in photos than it used to be.

Chris Hayes: So then where does Clearview come in? Like, are they basically using the same technology? Are they building off that? Is it just a more advanced version of the same engine?

Kashmir Hill: Yes, I mean, this is something that is kind of extraordinary about the artificial intelligence community is that they do love to share their research. And so Facebook —

Chris Hayes: Yes.

Kashmir Hill: — kind of kicked off this AI kind of face extravaganza. And all these different researchers started building on that and they’re sharing their knowledge and they’re putting algorithms out there for doing face detection, doing face recognition.

So there were a lot of people that were offering up their strategies. And this guy, Hoan Ton-That, who would go on to become the founder of Clearview, came along and he was wanting to learn about facial recognition. And he said he just started following machine learning experts on Twitter and going on GitHub and searching for face recognition. And he came across these pretty powerful kind of coding resources that helped him start doing this, kind of processing lots and lots of photos.

Chris Hayes: So tell me how Clearview gets started.

Kashmir Hill: Yes, Clearview’s story, it’s a fascinating one. It’s very strange. I think the heart of Clearview is Hoan Ton-That, who is this guy, grew up in Australia, always really excited about computers, technology. At 19 years old, he dropped out of college and moved across the world to San Francisco to kind of be part of Silicon Valley.

It was 2007, 2008, the iPhone was coming out. He started making Facebook quizzes, iPhone games, just kind of like trying to make it in the tech world. Didn’t have a lot of success.

And then around 2015, 2016, he moves to New York and falls in with a kind of right-leaning Trump crowd and comes up with this idea, according to one person involved in those early days, at the Republican National Convention in 2016, there to support Trump.

And they’re wandering around the convention. There’s so many people there. And they’re talking and saying, wouldn’t it be great if you had an app where you could hold this up to someone’s face and you could kind of know who they are, know if they’re kind of a friend or a foe?

And after that, they started working on this technology to process faces. And at first it wasn’t just facial recognition technology. They were thinking that you could look at somebody’s face and discover other things about them, like how intelligent they might be.

Chris Hayes: Like digital phrenology.

Kashmir Hill: Yes, yes. Yes, or digital physiognomy. But they kind of ended up not going in that direction and instead doing this, basically a simpler computer challenge, which was getting a bunch of photos on the internet and making people’s faces findable.

So you could upload somebody’s face and then find the other places it appeared on the internet.
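Note: Mechanically, “upload a face, find the other places it appeared” is a nearest-neighbor search over numeric face embeddings. Here is a minimal sketch of that search step; the 4-dimensional vectors and page names are invented (real systems use high-dimensional embeddings over billions of rows, and Clearview’s actual code is not public).

```python
import numpy as np

# Hypothetical index: one embedding per scraped photo, plus where it was found.
index_vectors = np.array([
    [0.90, 0.10, 0.30, 0.70],
    [0.20, 0.80, 0.50, 0.10],
    [0.88, 0.12, 0.33, 0.69],
])
index_pages = ["site-a/profile", "site-b/party-photo", "site-c/headshot"]

def top_matches(query, k=2):
    """Rank indexed photos by cosine similarity to the query embedding."""
    q = query / np.linalg.norm(query)
    m = index_vectors / np.linalg.norm(index_vectors, axis=1, keepdims=True)
    scores = m @ q
    best = np.argsort(scores)[::-1][:k]
    return [(index_pages[i], float(scores[i])) for i in best]

# Query with an invented embedding of the uploaded face.
print(top_matches(np.array([0.87, 0.15, 0.30, 0.70])))
```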

Chris Hayes: More of our conversation after this quick break.

(ADVERTISEMENT)

Chris Hayes: One of the early backers of Clearview is the right-wing venture capitalist, Peter Thiel, who was notoriously a Trump supporter in 2016, spoke at the RNC, has been a supporter of all kinds of right-wing causes and political candidates. He’s sort of in the same circle as Elon Musk.

He was a fairly instrumental and early investor, yes?

Kashmir Hill: Yes, so Hoan Ton-That met him at the RNC convention. And once he started working on this and had scraped, I think, Venmo and Tinder, and had built this kind of basic working engine where you could upload somebody and potentially find their Venmo profile, find their Tinder profile, he emailed Thiel and said, would you be interested in investing?

And Thiel ended up being the first investor in what was then called Smart Checker and later became Clearview.

Chris Hayes: And how much has Clearview grown since that initial investment by Thiel and the sort of origins of it in 2016?

Kashmir Hill: So Clearview now has a database, they say, of 40 billion photos, which is quite a lot. I mean, obviously more photos than people who exist on the planet. So that means they have a lot of photos of individuals. And they have got over $30 million in startup funding. They have hundreds or thousands of police departments who are now their customers.

But it has been a rocky path because once the world knew about them, and they were kind of operating in secrecy for a while, there was a lot of pushback to what they had done.

Chris Hayes: Well, I can think of a bunch of reasons, but why was there pushback?

Kashmir Hill: I mean, when I first heard about Clearview AI, there was nothing out there about them. I went to their website, and it was just kind of a blank page that said, “Artificial intelligence for a better world.”

They had an address in Manhattan that was just a few blocks away from “The New York Times.” So when I wasn’t having any success finding, kind of, who was linked to the company or getting them to respond to me, I walked over to the office and the building didn’t exist. It just wasn’t there. It was like a wrong address or a fake address. And it just wasn’t public, what they were doing.

And so I ended up doing this digging, talking to police officers who had used the app and I did this big expose on them in “The New York Times.” And people were very upset at the idea this company they had never heard of had gathered their photos from the internet without their permission. There was pushback from the companies that they had scraped, like Facebook and Google and Venmo. They all sent them cease and desist letters. There were lawsuits. Privacy regulators in Europe launched investigations.

What was fascinating about Clearview AI is that they were kind of the first one to push forward, to cross this boundary, break this taboo. And it was this moment for the world to say, do we want this out there?

And honestly, I think we’re still in that moment.

Chris Hayes: Yes, and the method here, recently there’s been a lot of coverage of this “New York Times” lawsuit against OpenAI, the maker of ChatGPT, and a bunch of other lawsuits along similar lines. It basically goes like this. Look, the way that large language models work and the way that these AI programs work is they need an enormous amount of data. They need to be trained on that data so that they can then essentially find the sort of reliable patterns of prediction that will produce the stuff that they produce, right?

And what “The New York Times” is saying is, hey, your ChatGPT program is built off the backs of tens of thousands of “New York Times” articles and a ton of other copyrighted material.

Artists particularly, I think in some ways, have the strongest case on this because their work has been taken without their consent. It’s on the internet. It’s sort of plausibly in the public domain, I get it. But it then creates a model that can draw a poster in the style of so-and-so. Now you’ve been replaced by your own work, which you never consented to being fed into a model that can now make something that looks like what you do and what you came up with. So that’s the kind of basic framework of these lawsuits.

Here, it’s not a creative enterprise, and in some ways that makes it even more chilling, because it’s your face. Which gets to this question of, like, in a legal sense, do you own your face?

Kashmir Hill: Do you own your face? It’s an interesting question.

I mean, there are right of publicity kind of state laws that say that people can’t use your face, for example, that Cheerios couldn’t come along and put Chris Hayes on the box to advertise it. But yes, it’s not exactly clear that we own our faces.

Chris Hayes: I think we don’t. I think the right of publicity law is like —

Kashmir Hill: Well, it depends.

Chris Hayes: The right of publicity law is so that you can’t, say, as a car dealership, right, you can’t take a picture of Arnold Schwarzenegger and say like, the strongest deals anywhere. You know? It’s like, no, like Arnold Schwarzenegger owns Arnold Schwarzenegger’s likeness. And if he wants to endorse the thing, you got to pay Arnold Schwarzenegger. This is true. So there’s that, right?

But that has to do with famous people who plausibly might be people that could sell their endorsement or their likeness for some monetary amount. So, in that case, like they do own their face.

But it’s just unclear whether non-famous people own their faces in any way.

Kashmir Hill: Yes, and I will say, I’m not a lawyer, but generally, yes. I mean —

Chris Hayes: Right.

Kashmir Hill: — when you’re walking around and people are taking photos and you’re in the background of someone’s photo, you can’t go over and say, hey, delete that. You just captured my face.

Chris Hayes: Right.

(LAUGHTER)

Kashmir Hill: There are some legal jurisdictions where I’d say you do —

Chris Hayes: You come off like a real psycho.

(LAUGHTER)

Kashmir Hill: Though (ph), it might be a good idea in the age of Clearview AI —

Chris Hayes: Yeah.

Kashmir Hill: — and other face recognition search engines, particularly if you’re out doing something you don’t want people to know about.

But one jurisdiction I will just point out is Illinois. And I write about the history of this kind of wild law that they have on the books: Illinois passed this law in 2008 called the Biometric Information Privacy Act, a rare law that moved faster than the technology. And it says that people there do kind of own their biometrics, and if a company wants to use them, it needs to get consent or pay up to $5,000 in fines.

And so there, you do have control of, kind of, your facial print, your facial biometric. And there have been a lot of lawsuits over non-consensual use of people’s prints there, including several against Clearview AI.

Chris Hayes: Yes, so I guess the question is, what happens next? I mean, obviously there are going to be legal challenges, and there’s a real, I think, open question with OpenAI and these other large language models that are producing, my least favorite word on earth, content, drawings or articles, whatever.

There’s a real open question. I think it’s an unresolved question, right? There’s a bunch of lawsuits and we’re going to see what happens.

And in fact, there’s a universe in which it just is a death knell, where they say, no, you have to pay everyone whose data you used to feed your model. And then, I think, it essentially doesn’t pencil out for them to exist. I don’t know if that’s going to happen.

But how has Clearview survived? Like, what is their litigation record here? Clearly, they’re still going. So have they just settled a bunch of these? Have they just gotten favorable rulings? How have they survived?

Kashmir Hill: It’s such an interesting question.

So Clearview has faced a lot of investigations. The wheel of justice moves slowly. So a lot of these lawsuits are ongoing in the United States. One of their —

Chris Hayes: Yep.

Kashmir Hill: — defenses has been that Clearview AI is just like Google, and that they have a First Amendment right to gather information that is public on the internet, these are people’s photos, posted for anyone to see, and to put it in their search engine and make it findable. That just like Google makes you searchable by name, Clearview is making you searchable by face.

And so that’s kind of been the defense that they’ve been using in a lot of their lawsuits. Also that they have decided to only work with law enforcement, and so this is a safety and security issue.

They did settle one of the lawsuits that was brought by the ACLU in Illinois. And as part of the settlement, they agreed that they would only sell this kind of database of 40 billion photos to law enforcement and the government and would not sell it to companies or to individuals, which was something that they originally planned to do, at least in terms of what I saw in my reporting and talking to the early investors.

Chris Hayes: That’s interesting.

Kashmir Hill: And that’s in the U.S. Outside of the U.S., there are all these privacy regulators in Canada, Australia, Europe, who have decided what Clearview AI did was illegal under their privacy laws. That you can’t just gather their citizens’ private information, their faces without consent. And they’ve told Clearview AI, delete our citizens’ faces —

Chris Hayes: Oh wow.

Kashmir Hill: You know, stop selling here. Pay these big fines, EUR 80 million in fines.

And Clearview is just appealing it. And it’s kind of ignoring it for now. And it stopped complying with GDPR, the big European privacy law. And it’s not allowing Europeans to delete their faces from the database anymore, as it used to.

So they really are kind of resisting —

Chris Hayes: They’re just not in compliance?

Kashmir Hill: Resisting complying with Europe. Yes.

Chris Hayes: Wait, okay, but that’s on the database end, but is anyone using the app? Are like German police departments using it? Like, does it function outside the U.S. as a service people can use?

Kashmir Hill: So Clearview was, when I first started looking into them, it was selling internationally. It was doing trials with police departments around the world. And it kind of pulled back, particularly when these privacy regulators started saying that they didn’t like what they were doing. And so now they’re primarily selling in the U.S.

And actually this has been a successful argument for them in Britain. Britain was one of the places that said what they did was illegal and fined them. And Clearview appealed the decision.

And the appeals court ruled in Clearview’s favor and said that actually, Britain’s privacy law doesn’t apply here because Clearview’s only working with law enforcement, and that basically a British privacy regulator shouldn’t be interfering with another sovereign state’s security.

So, I mean, it’s really —

Chris Hayes: Wow.

Kashmir Hill: This is complicated stuff in part because it’s so international and you have all these different legal regimes trying to figure out what to do about this.

Chris Hayes: Okay, so tell me about the product. So I sort of understand the technology and I understand the database. I have a vague sense of how this sort of machine learning works. And again, it’s just sort of brute computing force and some brilliant models put together through neural networks and the like.

But you basically get enough data, you run it through, it gets good at pattern recognition, it learns. Give it a thousand photos of me in different positions and it gets pretty good at figuring out what’s a photo of me; you iterate that over a whole bunch of people. So now they’ve got this database and they’ve got this technology. So who buys it and what do they do with it?

Kashmir Hill: So Clearview, when they first were out there trying to sell this, they were offering it to hotels, to real estate companies that might want to have it in the lobby to like find out who people were when they were coming inside. To grocery stores, you might want to know who shoppers are, prevent shoplifting.

In the early days, it was a product in search of a customer. But the customer that really worked for them was the police. The NYPD, they were the first user of Clearview AI. They were using it on a trial basis for free. And this is part of why Clearview kind of spread around the world: if you had an email address associated with a police department, you could just sign up for a free trial.

And so it was all police sort of telling each other about it, it was being discussed at international conferences, and everyone’s just using this new tool that the public doesn’t know about and that hasn’t been tested. Like, no one knew exactly how accurate the algorithm was at the time. I mean, that part of it was astounding to me.

I talked to this one detective who described using it to, like, get leads on all these cases where he had a face and that’s all he had to go on.

Chris Hayes: Yes, I mean, well, first of all, it’s funny because it reminds me of Facebook, where if you were a college student, you could get Facebook, and that’s how it grew first, among that community, right? It was like, college kids, if you had a dot edu address, you could get Facebook. So this is like, if you have a dot NYPD address or whatever, you can run this search.

I mean, I guess the question is, like, I can see all sorts of use cases. Like, you’ll see it if you walk down the street in New York City and there’s a bodega that’s got a picture up. It’s a wanted poster, with the security camera photo of a person that they say stole something from them.

And yes, sure, if you’re a police officer and you’ve got some security camera footage, there are so many security cameras everywhere, that shows this person burgled or this person assaulted somebody, but you can’t really make it out, feed it into Clearview.

But then I guess the question is like, man, that’s a lot of trust to put in that software. Like, were they making cases on this? Was that holding up in court? Were people challenging that?

Kashmir Hill: Yes, there were a lot of police officers who basically had photos of criminal suspects and that was kind of the only lead they had. And so they would run it through Clearview AI. And then, Clearview basically returns a lot of photos of people they think might be that person.

And police told me you’re never supposed to just arrest somebody based on a facial recognition match, that you’re supposed to be doing more investigating than that. So, you’re supposed to go and see: where was that person at the time of the crime? Is there basically any more evidence linking them to the crime?

And because they would never arrest somebody based on facial recognition, it doesn’t supply probable cause and they don’t necessarily have to mention that’s how they found somebody. So there may be people out there who have been arrested or charged and facial recognition technology was part of it, Clearview match was part of it, but they may not know. It’s not part of the evidence against them.

And so, yes, I mean, I’ve heard about a lot of cases, horrendous cases that have been solved, starting with a tip that they got from Clearview AI. There have also been cases with facial recognition technology where it has gone wrong, where they didn’t do that additional investigating and they arrested somebody basically for the crime of looking like someone else.

Chris Hayes: So that has happened. I mean, that would be my assumption that that has happened.

Obviously, we should note, being arrested for the crime of looking like someone else is as old as policing. So, I just want to be clear here that like that happened before Clearview AI a ton. I don’t want to put all of it on Clearview AI, but obviously the whole point of this technology, I would suspect, is to stop having that happen, right?

Like, eyewitness identification is notoriously, incredibly error prone, and obviously police sketches too, and mistaken identity happens all the time. Witnesses seeing something from far away in the dark. So there’s all kinds of times that people have wrongly identified a person.

That has been happening as long as police have existed and as long as witnesses have existed. Presumably, the point of this would be, like, that you don’t do that anymore. And yet here we are where it has happened with the technology.

Kashmir Hill: And so, I know of six cases where it’s happened, one involving Clearview AI. It is horrible for the people it happens to.

The person with Clearview AI, he lives in Atlanta. His name’s Randal Quran Reid. He was driving to his mother’s house the day after Thanksgiving, gets pulled over by four police cars. They have him step out. They start arresting him. They say he’s wanted for larceny in Jefferson Parish.

He said, where’s Jefferson Parish? And they said, it’s in Louisiana. He said, I’ve never been to Louisiana. And he’s really confused.

They take him to jail. There’s an extradition for him to take him to Louisiana to be prosecuted for this crime. He’s in jail for a week awaiting extradition, trying to figure out like, why have I been arrested?

And eventually, he hires lawyers in Georgia. He hires a lawyer in Louisiana. And the lawyer in Louisiana finds out that the detectives had run facial recognition technology on the surveillance tape. He had come up as a match. He looked a lot like the guy. And when they looked at his Facebook page, he had a lot of friends in New Orleans. And basically based on that, they had issued this warrant for his arrest.

So this was —

Chris Hayes: Wow.

Kashmir Hill: — a terrible investigation. And every time this has happened, it has been a bad investigation. And the police say the problem here is not the technology. We love the technology. It works very well most of the time. This was a human error. They fell prey to automation bias or confirmation bias.

And I don’t know. I mean, I know of six. I’ve heard from defense attorneys there are more cases where people haven’t come forward. But knowing how often police are using facial recognition technology and only knowing about a handful of cases, maybe most of the time it is working really well.

The thing is we don’t know because we don’t have a study set up. We don’t have investigators who are sitting with the police and telling us, how often is this working, how often is it going wrong, et cetera.

Chris Hayes: Yes, I mean, that’s sort of my next question is, and I remember when reading the sort of big blockbuster investigative piece you did for “The Times,” like obviously in all of these other jurisdictions, there tends to be a lot more privacy regulation generally, particularly the European privacy regulation regime is much stricter, much more proactive than the American version.

I guess the question is, it seems like, partly because of how localized and distributed American policing is, that this is happening outside of any kind of systematic study or regulatory oversight. Like, it’s being used by local departments. Maybe it’s working. Maybe it’s a huge net positive. Maybe it’s not. Maybe there are a lot more cases of mistaken identity than the six that you’ve identified.

But like, it just seems like it’s a very powerful tool in the hands of the government to do the sort of most invasive thing a government does, which is lock people up, basically, without any kind of oversight. Do we know how good it is? How reliable it is? Is there any broad regulatory look at this?

Kashmir Hill: Yes. So there’s this federal lab called NIST, or the National Institute of Standards and Technology. And they have been running these kind of accuracy tests on facial recognition algorithms, going back 20 years, basically to 2000.

And there are hundreds of vendors out there and they run these tests regularly and they have shown that facial recognition technology has just advanced so much. Like, it’s basically up to 99 percent accurate when you have good —

Chris Hayes: Wow.

Kashmir Hill: — input, when you have a good photo that you’re putting —

Chris Hayes: Right.

Kashmir Hill: — into the system. It’s not going to work as well in like a grainy surveillance still.

And so, yes, I mean, these vendors can point to that and say, look, this is really accurate. This is really powerful.
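Note: “99 percent accurate” compresses two different error rates that evaluations like NIST’s report, and they trade off against each other as the match threshold moves. A toy sketch with invented trial scores:

```python
# Comparison trials: (match score, were the two photos the same person?).
trials = [(0.95, True), (0.88, True), (0.52, True),     # genuine pairs
          (0.71, False), (0.33, False), (0.18, False)]  # impostor pairs

def error_rates(threshold):
    """False match rate (bad hits) and false non-match rate (misses)."""
    genuine = [s for s, same in trials if same]
    impostor = [s for s, same in trials if not same]
    fmr = sum(s >= threshold for s in impostor) / len(impostor)
    fnmr = sum(s < threshold for s in genuine) / len(genuine)
    return fmr, fnmr

for t in (0.5, 0.6, 0.8):
    fmr, fnmr = error_rates(t)
    print(f"threshold {t}: false matches {fmr:.0%}, missed matches {fnmr:.0%}")
```

A looser threshold catches more true matches but falsely flags more innocent look-alikes, which is the failure mode in the wrongful-arrest cases discussed here.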

But what critics like Deb Raji, who has been advocating a lot for kind of algorithmic transparency and algorithmic audits, say is: that’s under test conditions. They’re working with certain kinds of photos, and it’s not really looking at how does this work for the Detroit Police Department, where there have been —

Chris Hayes: Right.

Kashmir Hill: — three misidentifications using facial recognition technology.

How does it actually work out in the field? And I do really feel strongly that we need to see more of that. Like, let’s just have a field test.

Chris Hayes: So we don’t have field studies.

Kashmir Hill: We don’t have field studies. Some departments are keeping track. Like Detroit Police are actually trying to evaluate this to a certain extent. They report how often they use facial recognition technology to an oversight board. They say when they get matches, when they don’t.

They’ve decided to only use facial recognition technology for serious crimes like sexual assault, murders. They’re not using Clearview AI. They’re relying only on government photos, like criminal mug shots, driver’s license photos. And yet still, this is a place that has had three misidentifications and they’re trying to do it right.

Chris Hayes: We’ll be right back after we take this quick break.

(ADVERTISEMENT)

Chris Hayes: Now, you said at the beginning of this that, first Clearview was sort of a kind of product in search of customers and had sort of ideas about people that might want this power. And what they found was that the sort of the kind of killer app for this was police identification.

But I can think of all kinds of uses. Let’s say I’m an organization that wants to do fundraising, and let’s say I’m AIPAC. We want to boost our small-dollar fundraising. Well, there was just a march in Washington, shortly after October 7th. I think 100,000 people attended.

You could just scan the photos and just probably come up with a whole bunch of names of folks that aren’t donors, but you have a pretty good shot that they’re going to be sympathetic to your cause.

Same with people that are protesting for a ceasefire and you’re a candidate who’s going to run in a Democratic primary on calling for a ceasefire and opposition to Israel’s continued bombardment of Gaza. Well, there was a big protest in your district and you’ve got a hundred people, right, who it turns out they live in your district and they were at the protest. Like, you can reach out to them to support your campaign.

There’s all kinds of ways that you can imagine wanting to use facial recognition in other contexts other than just the police.

Kashmir Hill: Man, it sounds like you have a career ahead of you in fundraising if this media thing doesn’t work out. Yes.

(LAUGHTER)

So yes, that’s a very creative idea and it’s possible right now. Clearview AI is limited in use to police departments, but what they did was, I refer to it in the book as kind of an ethical breakthrough, not a technological one. Other companies can go out there and gather a bunch of photos online and apply these powerful algorithms to them. And they have.

There are these face search engines online that anyone can use, sometimes with a subscription, sometimes free. You can do that. You can take a photo of protesters and find out who they are. And you can use it to fundraise or you can use it to harass them if you don’t agree with them.

Chris Hayes: To dox them, yes, right, exactly.

Kashmir Hill: To dox them, yes. I mean, we’re here. Like, it’s happening. It’s happening right now. We’re already seeing this.

There was kind of a silly, it was a very silly story about this person on TikTok who just enjoyed, anytime there was a viral TikTok featuring somebody on the street, finding out who the person was and identifying them, and kind of showing the different face search engines he used. And just saying, oh, this is this hot dad who was walking by in the background of this TikTok video about the New York air turning orange.

Oh, look, this is who this person is. And if you want to date somebody who’s going to look like that one day, here’s his son’s profile on Instagram.

Chris Hayes: Oh (ph).

Kashmir Hill: Yes, it can be used in fun ways and it can be used in these really scary ways. Like so much technology, it’s that double-edged sword.

Chris Hayes: Doni Holloway, our producer, just reminded me of a particularly egregious example, probably the most egregious example I’d heard, again, at small scale. This isn’t like, used for surveillance-state totalitarian repression, but rather Madison Square Garden had facial recognition software, so people who had criticized the Knicks owner were getting dinged at the entrance because there was, like, a not-welcome-in-Madison-Square-Garden list matched to real-time facial recognition scans.

And there were people who I guess had, I forget if they publicly criticized or just even on social media been like, the Knicks owner sucks. And then they’re not being allowed into MSG.

Kashmir Hill: I could not believe when this happened. I had been working on the book for a couple of years. And when I saw that Madison Square Garden was using facial recognition to punish its enemies, I was like, oh my God. I knew this was going to happen. I thought it was going to happen in 5 or 10 years.

And Madison Square Garden started using facial recognition technology in 2018 for security reasons, to keep out people who were violent there in the past or harassed somebody, or, they say, since they’re on top of Pennsylvania Station, they want to be aware of security threats. But then about a year ago, James Dolan, the owner, decided, wow, this is a great way to keep out people I don’t like, namely lawyers who work at firms that have sued us.

And so they went to the law firm websites and —

Chris Hayes: Oh, that’s what it was.

Kashmir Hill: Yes, they gathered the photos from the lawyers’ own bios on their websites —

Chris Hayes: Oh my God.

Kashmir Hill: — and put thousands of lawyers on this banned list and said, you’re not welcome back here until your firm drops its case.

I actually saw this happen. I went with this woman who worked at a personal injury firm. She was not involved in the case against Madison Square Garden. I bought tickets for a Rangers game. We went to MSG. We put our bags on the conveyor belt. And by the time we picked them up, a security guard had come over to us and he asked for her ID. And then a security manager came over and gave her this little note and kicked her out.

And I was just like, oh my God, there’s thousands of people streaming into this venue. And they were able to just pick her out of the crowd like that. It just shows —

Chris Hayes: Wow.

Kashmir Hill: — how powerful this technology can be. And the way that it could usher in this new era of discrimination, right, where you could be discriminated —

Chris Hayes: Oh yes.

Kashmir Hill: — against for who you work for, that you’re a journalist, that you work for the government, what your political beliefs are. Facial recognition makes it possible for your face to be this key to unlocking everything that’s knowable about you online, kind of all these dossiers that we’ve accumulated over the last 20 years —

Chris Hayes: That’s such a good point.

Kashmir Hill: — can (ph) now just be attached to our face.

Chris Hayes: Yes, like you, right, walking down the street. It’s like, oh, you once said this thing, or you attended this mosque, or just anything about you because so much of our personal data, and then if you can cross-reference it with the face, then you’ve got everything. And then no one’s anonymous in any way in any public space ever again.

And my understanding in China is that they’ve used facial recognition software in incredibly malevolent ways. They’ve used it in erecting this kind of intense surveillance state in Western China, where Uyghurs are under constant surveillance, and they’re tracking exactly who is where, when. I know it was used for COVID quarantine, right? So it’s like, if you sneak out, the Chinese state knows it was you who broke quarantine. And you will get a message saying, like, we saw you breaking quarantine.

So I mean, the obvious sort of authoritarian use of the technology by governments, I think, in the wrong hands, persecuting, crushing dissent, political enemies, religious or racial discrimination. There are so many obviously awful ways that one could use this.

The Dolan thing is instructive because it’s so petty, but it’s like a little bit of a tip of the iceberg about what you could do, even as just a private owner of a venue.

Kashmir Hill: Yes, and it also illustrates once you build this surveillance infrastructure, there’s kind of endless ways to use it. And so, yes, I mean, MSG wanting to keep out threats to the crowd makes sense. That is a reason —

Chris Hayes: Yeah, right.

Kashmir Hill: — to institute facial recognition technology. But then once it’s there, you’re like, oh, well, I could also use it to keep out people I don’t like.

In China, where they have this vast infrastructure now, they’re using it to suppress human rights, also using it just to automatically give people jaywalking tickets, using it to name and shame people who wear pajamas in public in certain cities.

Chris Hayes: Wait, really?

Kashmir Hill: Yes. I mean, Iran has talked about using facial recognition technology to automatically ticket women whose faces are visible, right, that aren’t —

Chris Hayes: Oh my God.

Kashmir Hill: — covered by the hijab.

China actually put facial recognition technology in a public restroom in Beijing because they had a problem with toilet paper thieves. And so it —

Chris Hayes: Oh my God.

Kashmir Hill: — became the case there. You had to look into a camera to unlock a certain amount of toilet paper. And if you needed more, you had to wait another seven minutes until you could unlock some more squares.

(LAUGHTER)

Chris Hayes: Oh my God.

Kashmir Hill: I just (ph) —

Chris Hayes: I mean, it’s funny because, like, I don’t know, it’s like at some level we all have this just overwhelming, almost cellular revulsion, right, at that, I think. I’m speaking for myself here, but I think most of us share it. Maybe that’s just because we’re freedom-loving Americans, or because it so closely tracks an entire canon of dystopian fiction and film.

But like, then there’s some part of me that’s like, I don’t know, I guess that’s a clever use of the technology to solve, like, an actual problem. I guess there are use cases, right? Like, to go back to the MSG one, it’s like, right, let’s say someone has been to MSG four times and gotten kicked out for being drunk or abusive to people around them, right? Using racial slurs, right? Like, yes, you want to keep that person out.

And in the old days, that person’s face might be up at the ticket counter, right? Like, that is an old-school way of dealing with people. And so this automates it and maybe it’s more accurate. And like, that’s not a crazy use of it.

So it’s like I keep, I guess, oscillating in my head between like my initial reaction to all of this is like, oh, no, no, no, no, no. Burn it all down. Don’t do it. Bad, bad, bad, bad, bad. And then I’m like, well, I guess it is useful in certain circumstances.

(LAUGHTER)

Kashmir Hill: No, I mean, I think there’s clearly beneficial uses to facial recognition technology and that we need to find the right balance.

It’s interesting with Madison Square Garden because they’re doing that at all their venues in New York. A Girl Scout troop mom got kicked out of a Rockettes show at Radio City Music Hall because she was a lawyer. They’re doing it at the Beacon Theater.

But MSG also owns a theater in Chicago, and they can’t use facial recognition technology to keep lawyers out because of that law I was talking about earlier. They would need the lawyers’ consent —

Chris Hayes: Right.

Kashmir Hill: — to use their biometrics that way. And there are exceptions in that law for security purposes, but I do think this is something that we could regulate. I don’t think we just have to fully open the doors for it and we get all the good with the bad. Or at least, that’s not the world I personally want.

Chris Hayes: Yes, that’s a great point. Like if you make this baseline point of consent, right? Like, you own your face as a matter of law. You own your biometric data. People can’t utilize it for purposes without your consent.

Like the Illinois law: you can pass a state law in a whole bunch of other states, or you can pass a national law that says this is true nationally. Although I guess my question is, does the whole thing fall apart then, right? In the same way as with these sort of OpenAI cases, I think if OpenAI is told you have to pay every copyright holder whose work you fed into the model, I just don’t think it works.

I guess my question is, can you have a functioning Clearview or a functioning at scale facial recognition software in a world in which people own their own faces?

Kashmir Hill: Probably not. I think the version of that, honestly, that I’ve been thinking about is the version of this that Meta or Facebook could roll out, which the CTO there, Andrew Bosworth, has talked about. He would love to put facial recognition capabilities into their augmented reality glasses so that you won’t be embarrassed when you meet somebody whose name you should remember. That seems to be the killer use case.

Chris Hayes: That would be. That is the killer use case. Oh my God. Obviously, how did I not think of that? Yes, that is the killer use case. Just like Gary in “Veep,” whispering in her ear, like, that’s Jane, she’s got, that’s what I want —

Kashmir Hill: There could be an app for that. And so, the world I could imagine, and this wouldn’t work in some places like Illinois where they would have to get consent to even scan someone’s face, but you could imagine going onto Facebook and saying, okay, I’m okay with my face being recognized and I’m okay with it just being public, like, anyone’s allowed to know who I am. Or maybe only identify me to my friends. Or maybe I don’t want to be recognized at all.

Like, I could kind of imagine one of the tech giants, if they decide to take that step, making it a kind of consent process: yes, I want to be recognized, or I want to be private.

Chris Hayes: So, I mean, obviously you’ve quoted some critics, obviously you’ve written a book about this, and there’s a lot of interest in this. And Clearview is sort of just ignoring a bunch of regulators or appealing decisions. They’re operating here relatively unconstrained outside the state of Illinois. Does it seem like there’s momentum, at least at the national policy level, for some kind of actual reform, some sort of more proactive regulatory stance here in the U.S.?

Kashmir Hill: I mean, as a reporter who has been covering privacy for more than a decade, I’m pretty pessimistic about privacy laws passing at the federal level. And I kind of document in the book the many times we’ve talked about regulating face recognition and it is a bipartisan issue.

Dick Armey teamed up with the ACLU in 2001 after facial recognition was rolled out on a crowd for the first time at the Super Bowl, which was dubbed the Snooper Bowl.

More recently, the late Elijah Cummings held a hearing with Jim Jordan and Mark Meadows where they said, we don’t agree on much, but we agree we need to do something about face recognition. This is too invasive. It could imperil our constitutional rights. And just nothing seems to happen.

So where we see things happening is at the local and state level. There have been a few cities that have banned police use of facial recognition until we fully understand any bias problems and constitutional issues. We’ve seen Illinois pass its Biometric Information Privacy Act. And there’s a handful of states now that have privacy laws that say you can ask a company if they have information on you, and you can ask them to delete it.

So if you live in California, or Colorado, or Connecticut, for example, you can go to Clearview AI and say, hey, I want to see the photos you have of me and then I want you to delete me from your database.

So right now it really is this patchwork where how protected your face is depends on where you live.

Chris Hayes: To me, there’s the regulatory question, and then there’s something more profound: the face as part of the general loss of anonymity, being known to strangers as an increasingly common social experience of 21st-century life. That, to me, is one of the overriding themes of life in the digital age.

And this is in some ways the most intimate part of it, because it connects us to our corporeal form, as opposed to whatever avatar we put up of ourselves on social media or whatever pictures we choose. It’s the actual physical body. It’s the last step in wrapping all of that together, such that the online person and the IRL person are the same person, and not anonymous anywhere.

Kashmir Hill: Yes, I mean, the way it’s trending right now is that police have this ability, and we seem to be comfortable with police having this ability.

But meanwhile, in the background, there are these face search engines where you can do this. And we’re kind of ignoring the fact that they exist, and some people know about them and some people don’t. But for those who know about them, yes, I mean they can know who anyone is around them at any time. And if that becomes widespread, then I think we get what has happened online.

We kind of grappled with this maybe a decade or so ago —

Chris Hayes: Yes.

Kashmir Hill: — this context collapse of you’re not a different person.

Chris Hayes: Yes.

Kashmir Hill: In real life, you’re different with your kids than you are with your boss, than you are with your colleagues, than you are with your friends.

Chris Hayes: Yes.

Kashmir Hill: But online, everything’s collapsed. And I think that same thing could happen in the real world if you’re identifiable all the time.

Like maybe you have an important job, but you’re at a sports event and you’re cursing and supporting your team. And all of a sudden, a person next to you takes a photo and you’re like —

Chris Hayes: Yes.

Kashmir Hill: — wait, you’re a federal judge? Why are you —

Chris Hayes: Yes.

Kashmir Hill: You know, why are you cursing like that? It would mean that, all the time, I mean, it really would be the fulfillment of so many dystopian sci-fi novels, the kind of panopticon where you have to assume, all the time, somebody could be watching me right now. Somebody could be recording me right now. What I’m doing right now could be associated with me forever.

And I just think that would be so chilling. The same kind of fears we have about what we put online, you would just have to feel about everything you do in public.

Chris Hayes: I mean, I got to say, that’s my life. Like —

Kashmir Hill: Yes.

(LAUGHTER)

Chris Hayes: — it’s so funny to hear you say it. Because, like, that really is, that is my life. I’m very lucky that I’m not a person whose level of recognition is such that it’s rendered normal life impossible. There are those people, and I’ve sometimes talked to people like that, or even talked to people who worked for people like that. And it just seems totally crushing. Like, that is not at all (ph). As people know, I take the subway to work back and forth every day. I go grocery shopping, like, all the normal stuff, right?

But it is also the case that like, I just know that I’m not anonymous. I was at my son’s basketball game. Like, to your point about the judge, it’s like, I was getting really into it at a basketball game that went to overtime this weekend, but I had the regulator in my head of like, keep it together, Chris. There’s people here who know who you are.

And yes, that’s a pretty transformational experience. It’s taken me a very, very long time to kind of acclimate and come to some level of comfort with that. But man, is it a disruptive experience, and society-wide, I think, a profoundly negative one for everyone involved.

Kashmir Hill is a tech reporter at “The New York Times” and author of “Your Face Belongs to Us: A Secretive Startup’s Quest to End Privacy as We Know It.”

This is her second time on WITHpod, and you should definitely read everything that she writes.

Kashmir, thanks so much.

Kashmir Hill: Thank you, Chris.

Chris Hayes: Once again, great thanks to Kashmir Hill.

The book is called “Your Face Belongs to Us.” You can also email us with your thoughts and questions about facial recognition technology, or really anything. You can email us at WITHpod@gmail.com, get in touch with us using the hashtag #WITHpod on Twitter, or X, or whatever it’s being called now.

You can follow us on TikTok by searching for WITHpod. You can follow me on Threads @chrislhayes and on Bluesky.

“Why Is This Happening?” is presented by MSNBC and NBC News, produced by Doni Holloway and Brendan O’Melia, engineered by Bob Mallory, and featuring music by Eddie Cooper. Aisha Turner is the executive producer of MSNBC Audio.

You can see more of our work, including links to things we mentioned here, by going to nbcnews.com/whyisthishappening.



