The last time New York Times cybersecurity journalist Nicole Perlroth spoke with Emirati activist Ahmed Mansoor in 2016, his passport had been taken and he had recently been beaten almost to the point of death. “We learned later on that our phone conversation had been tapped, that someone was in his baby monitor, that his wife was being spied on,” Perlroth told CPJ in a phone conversation in early March. “He was really living in a prison.”
Since United Arab Emirates authorities arrested him for his advocacy in 2017, Mansoor has been in another kind of prison, according to Human Rights Watch, which reported in January 2021 that he was being kept in solitary confinement without even a mattress to sleep on. CPJ emailed the UAE embassy in Washington, D.C., to request comment on Mansoor’s case, but received no response.
Perlroth kept Mansoor in mind when researching “This Is How They Tell Me the World Ends,” her book published this year, she told CPJ. It details the little-understood market for zero-day exploits – hacking capabilities that leverage mistakes in the code populating phones and computers around the world. Governments have secretly paid hackers millions of dollars for exploits, hoping to use them before anyone fixes – or takes advantage of – the same mistake, she writes.
The result is that everyone is more vulnerable to being hacked – particularly when the exploits are turned against journalists and activists. Researchers at Citizen Lab dubbed Mansoor the “million dollar dissident” after zero-day exploits were used to infect his iPhone with Pegasus spyware produced by the Israeli firm NSO Group in 2016.
In a statement provided by a representative of Mercury Public Affairs in Washington, D.C., who declined to be named because they were not an authorized NSO spokesperson, the company said: “While our regular detractors rely on unverified claims and their own conclusions, NSO’s technology helps governments save lives in a manner that minimizes threats to the privacy of innocent individuals. […] NSO Group is fully regulated, and has now taken the undisputed lead amongst our industry peers in the protection of and respect for human rights. We have created stringent policies to help ensure that our technology is used only as designed – to investigate serious crime and terrorism – and we comprehensively investigate every credible allegation of misuse that is brought to us.”
CPJ spoke to Perlroth about her reporting, and the implications of the growing trade in exploits and spyware for journalists around the world. The interview has been edited for length and clarity.
Why did you want to take on this project?
I’d been reporting on cyberattacks since 2010, and it seemed like each attack got a little bit worse than the last. But the incentives all seemed to be stacked in favor of further vulnerability. Everyone was buying into this promise of a frictionless society, from Uber on your phone to the chemical controls at a water treatment facility. The business incentives were, “Let’s get the product to market.” Legislation to improve critical infrastructure security was watered down or never passed.
I was looking at the threat landscape for journalists and activists. I was getting my fair share of phishing attacks – who knows whether they were just spammers or some more sophisticated nation-state spyware. So I wanted to call out these patterns, but we didn’t have the transparency or the accessibility for the average person to understand that the cards were stacked against us when it came to security.
The most vivid tangible piece of this was the zero-day market, the fact that governments – including our own – were paying hackers to turn over zero-day exploits, and not to get them fixed, but to leave them open for espionage and surveillance.
That always struck me as a very clear moral hazard. Now that we’re all using the same technology, how do the United States or the other Five Eyes governments [members of a five-nation intelligence-sharing agreement including Australia, Canada, New Zealand, and the U.K.] justify leaving open a zero-day in iPhone iOS software, knowing that it can be and has been found by nation states and this growing industry of click-and-shoot spyware?
What’s the difference between spyware and an exploit?
A vulnerability is an error in code. If I’m a hacker and I find it and have the means, I can develop a program that can use it for other purposes. That’s an exploit.
We call it a zero-day vulnerability because when it’s discovered the manufacturer has had zero days to fix it, and until they do, anyone can exploit it against their customers. A zero-day exploit is the code that takes advantage of that flaw for another purpose, such as spying on your text messages, tracking your location, or turning on the microphone on your cellphone without you knowing it.
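As a minimal, illustrative sketch (not drawn from the interview, and not any specific flaw Perlroth reports on): in C, the kind of coding mistake she describes can be as simple as copying attacker-controlled input into a fixed-size buffer with no length check. An exploit is a program written to trigger a flaw like this for the attacker’s own purposes. The function and variable names below are made up for the example.

```c
#include <stdio.h>
#include <string.h>

/* Illustrative vulnerability: a fixed-size buffer filled with
 * attacker-controlled input and no bounds check. Input longer than
 * 15 characters overwrites adjacent memory -- the kind of mistake
 * an exploit is built to take advantage of. */
static void greet(const char *name) {
    char buffer[16];
    strcpy(buffer, name);           /* no length check: the vulnerability */
    printf("Hello, %s\n", buffer);
}

int main(int argc, char **argv) {
    if (argc > 1) {
        greet(argv[1]);             /* argv[1] is attacker-controlled */
    }
    return 0;
}
```

A fixed version would bound the copy (for example, snprintf(buffer, sizeof(buffer), "%s", name)) – the kind of patch a vendor ships once it learns of the flaw, and exactly what goes unwritten while a zero-day is kept secret and stockpiled.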
Those capabilities are obviously very valuable to a spy. But over the past 10 years groups like Hacking Team and NSO found they could bake those exploits into click-and-shoot spy tools [known as spyware] to give to government agencies. Sometimes they don’t require zero-day exploits, just known flaws that manufacturers hadn’t patched, or that people hadn’t installed software updates for.
There’s been growing [demand], particularly as companies like Apple have added better security to their customers’ mobile communications. Governments have always been worried about encryption and maintaining the access necessary to track criminals.
What kind of regulation do we need in this area to protect journalists and others?
These are dual-use technologies. Some have argued that if you try to control the sale of zero-day exploits across borders, you will be impeding defense.
The argument goes that new vulnerabilities are introduced into code every day. Knowledge of how to exploit them is of use to governments for espionage or battlefield preparation. Zero-day exploits are also used by penetration testing companies to test a company’s security. The other argument is that we do, to a very small extent, have export controls – at least here in the U.S., because of some older controls for encryption. If you do want to sell intrusion technology you have to go to the Department of Commerce and get a license. But that just prevents you from selling to countries we have sanctioned, like Iran, Cuba, and North Korea. There’s a lot of leeway, and as far as I know, very few people have been turned down.
I think in a lot of cases people making that argument have profited off the sale of zero-day exploits and haven’t publicly disclosed it – nor can they, because the market is steeped in classification and non-disclosure agreements. My goal was to blow this wide open so that we’re not delegating the debate to those who have profited from the status quo.
One of the goals of my book was to show where this tradecraft is going – places like the UAE and Saudi Arabia. We all saw what happened with Jamal Khashoggi. [Editor’s note: CPJ recently called on the U.S. to sanction Saudi Crown Prince Mohammed bin Salman after a declassified intelligence report published in February said that he had approved the 2018 murder of Khashoggi, a Washington Post columnist.]
We only know what surfaces when someone gets these messages one after the other and flags them for me or Citizen Lab. But tools are being developed that don’t require you to get a text message, and the companies are not doing any enforcement of their own. NSO even admitted that they can’t walk into a Mexican intelligence agency and take it back; there’s no kill switch. But we could mandate a kill switch to use if there is evidence these tools are being abused in the wild. There are steps between doing nothing and harming defense.
[Editor’s note: In the statement provided to CPJ, NSO Group said it “has made clear many times since then that our software includes a ‘kill switch’ that can shut down the system. This has been used on the occasions in which serious misuse of our products has been verified.”]
In your book, you reference a conference call with NSO in which no one would give you their names. Why should companies operating in this industry engage with the media?
To be fair, that was the first time a spyware dealer had gotten on the phone with me for that period of time, even if no one identified themselves. [Private equity investors] have been trying to improve NSO’s image, but it’s really crisis management; we haven’t seen a lot of transparency from this space. It’s not surprising, because we’re dealing with a product that has to be invisible to work, and their customers are governments that require total secrecy. [Editor’s note: NSO said “the meeting cited by Ms. Perlroth happened five years ago, under an entirely different management structure, and with the involvement of investors who are no longer associated with us. As such, we are not in a position to comment.”]
NSO has said [they] bring in experts, look at human rights indexes, [they] don’t sell to anyone who falls beneath a certain threshold. That’s fine and dandy, but Mexico was caught spying on nutritionists and Americans, India on journalists. In the UAE, Ahmed Mansoor is sitting in solitary confinement without any books. What threshold were you looking at in those cases?
Clearly these tools are very easy to abuse. Maybe you have to bring in human rights monitors who can look at which governments are using them in abusive ways. In the U.S. – because I live here, and we call ourselves human rights respecting – maybe we need to have rules about how technologies developed here are shared with those countries. I don’t think the answer is that we shouldn’t legislate at all.
Can you talk about the times you’ve felt personally vulnerable because of your own reporting?
The New York Times was attacked by China to discover my colleague David Barboza’s sources for stories on corruption in China’s ruling family. That was my first realization that governments were actively hacking journalists, and since then I have been really careful with how I communicate with sources. With some [sources] I don’t communicate online at all, and we don’t bring our devices when we meet.
In [the book] I describe going to Argentina and staying in a small boutique hotel in Buenos Aires. [I came back to my room to find] the safe, which had my burner laptop in it, was open, though my cash was still out on the table. I hadn’t even used the laptop; I’d been using pen and paper. I knew there was probably something on it, so I threw it away. Some of these [spyware] tools are so burrowed into the firmware [computer programs embedded in the hardware] of our devices that it’s the only way to get rid of them.
To be honest, where I have felt the most harm is on Twitter, and I finally quit [in February]. Sometimes there’s a nation-state component to it; I’ve been called out by Russia Today, which put unflattering photos of me on its Instagram account and launched Russian troll armies my way. But a lot of times it’s just the behavior that we have, day by day, given a pass to on these platforms. I can publish something and get viciously trolled – threats, direct messages, everything. Then I’ll watch a male competitor publish something, even with errors, and there’s not a peep. The blowback female journalists and journalists of color get is terrible. I write for the Times and that opens me up to a lot of fair criticism because we hold ourselves to a high standard. But there are days when I’ll find myself in a fetal position thinking, “Today was really abusive.”
How do you decide how much technical language to use when explaining these issues?
In my book I describe Snowden documents showing that the NSA was getting into the sweet spot where Google and Yahoo’s customer traffic was unencrypted between their data centers. I allude to [that as] “hacking.” A lot of people have hammered me on this – they weren’t hacking into servers, they were sniffing unencrypted traffic. I did go back to the publisher to change it next time. But if you’re nit-picking journalists, I think you’re picking the wrong fight.
Governments are hacking into our grid. We’re hacking into theirs. No one’s going to care about terminology when the lights go out because of a cyberattack – it’s going to affect ordinary people. It’s very likely we’ll see something like that in the next 10 years, and it’s important to make people understand that the stakes are high and they have a reason to participate.
What can journalists who are concerned about these issues do?
Think about what you have that someone might want for nefarious reasons – likely your sources and your location data. Don’t click on links, turn on two-factor authentication, sign up for Google’s Advanced Protection, buy a YubiKey, use Signal, and if you have to meet a sensitive source, use pen and paper.
We also need more people on the beat. I am the cybersecurity journalist at The New York Times, but we could have 10. These are huge issues, and it can’t just be up to the tech press. I describe my job as a translator. We need a lot more translators.
[See CPJ’s Digital Safety Kit for more security advice.]