In this Help Net Security interview, Kim Crawley, cybersecurity expert and Professor at the Open Institute of Technology, discusses her upcoming book Digital Safety in a Dangerous World, which will feature her expert advice, as well as insights from other cybersecurity experts, lawyers, and activists, on how to lawfully protect your safety and privacy in challenging times.

What inspired you to write Digital Safety in a Dangerous World, and how does it differ from your previous works?

My data, devices, and digital life matter to me. By pushing GenAI, tech giants are trying to take control. I can think and create for myself, and I won’t rely on a glorified autocomplete built on mass plagiarism. I value my privacy, worry about GenAI’s environmental impact, and reject a future where flawed bots replace real, thinking humans.

I replaced Android 15 with GrapheneOS and Windows 11 with Kubuntu. Though I’ve used Debian-based distros for 15 years, I hadn’t tried them for AAA gaming. Thanks to Proton and the Steam Deck, most of my Windows-only games now run on Linux with little effort. GrapheneOS has also been a smooth switch, and I’m relieved to have more control over GenAI, though I still stay cautious with web apps.

There are a lot of useful books out there with cybersecurity and OPSEC advice that laypeople can use. Ed Snowden’s Permanent Record, other books about Snowden’s revelations, and Chelsea Manning’s README.txt overlap with my book’s subject matter a lot. And I strongly recommend people read those books, because their authors saw a lot of this firsthand, from the inside.

My book is largely about what has been going on since this January, because I can’t explain Snowden and Manning’s discoveries better than they did. I will also go into Boston Dynamics, Clearview AI, Palantir, and similar companies. I’ll provide context on how the majority of important computer technology patents have come out of US military-related R&D since World War II. The history of electronic computing is an avid interest of mine. I wrote about it a bit in my O’Reilly book Hacker Culture: A to Z, and I’ve given talks about it through O’Reilly and through my school, the Open Institute of Technology. I think it’s useful context that most of the computer technologies we use were invented through DARPA and other American government agencies.

My previous books were mainly focused on the needs of enterprises and the cybersecurity practitioners who work for them. The Pentester Blueprint is the best-selling guide to getting a career as a pentester. 8 Steps to Better Security was my advice for businesses that have no security maturity at all and need to develop some. I wrote a manual on vulnerability scanning cloud services. Ultimate Cybersecurity Careers Guide was my first self-published book, and the only reason I self-published it is that the major publishers I’ve worked with weren’t interested in it. And yet I’m constantly messaged on LinkedIn by people who are eager to learn how to enter the cybersecurity field.

I’m self-publishing Digital Safety in a Dangerous World mainly because I will be getting a bit politically controversial, and it’s understandable that a big business that exists to make money by selling books about computer technology doesn’t want such a hot potato with its name on it. Kim Zetter’s Countdown to Zero Day and similar books can be controversial in how they acknowledge matters like American involvement in the development of Stuxnet. I’m not famous or high-status enough for a publisher to want to take the risk on this particular book from me.

The book addresses digital safety amid rising authoritarianism and big tech influence. How do you approach these sensitive topics to make them accessible to readers?

My intended readers are adults, some of whom will be complete tech laypeople, while others may be a bit into tech. The subject matter of my book will be heavy. But it’s a heaviness that ordinary people are learning about from their very scary firsthand experiences, combined with the news they read, especially from sources outside of mainstream media.

I will only assume that a reader has the computer literacy to make posts on TikTok or whatever. Any technological concepts more advanced than that will be defined and described. But I certainly won’t sugarcoat anything about the politically hostile world that we’re now in. There will be lots of practical advice. For instance, I will recommend alternative operating systems, online platforms, and applications. There will be lots of OPSEC tips. I will also provide “harm reduction” recommendations for situations where a reader decides it’s not feasible to leave a social media site or operating system but still needs to reduce their attack surface in whatever ways they can.

You collaborated with various experts, including human rights lawyers and cybersecurity professionals. How did these collaborations shape the content of your book?

They’re crucial. I teach enterprise cybersecurity. I have researched and written about a wide variety of cybersecurity topics, usually from the perspective of helping enterprises. My personal OPSEC and endpoint security is probably a lot better than most laypeople’s. But I don’t deserve to be on any list of top digital privacy experts. I’m absolutely not.

Thankfully, my knowledge and background make it possible for me to speak to actual digital privacy experts in their language and to ask them the right questions. Right now I’m in the process of interviewing all of the people who have agreed to help me. I can’t confirm everyone at this stage. But I can confirm that Wired’s Dell Cameron shared some of his research and sources with me. Fight for the Future’s Evan Greer shared useful information with me earlier this week. And EFF’s Matthew Guariglia is answering my questions with his advanced legal expertise, though of course that’s not legal advice.

My other research sources are largely whitepapers, research reports, books other people have written, and news reporting from here, 404 Media, and Wired.

Were there any surprising findings or insights that emerged during your research that challenged your initial assumptions?

That’s an excellent question! I’m about halfway through the research process right now. There’s always some security hardening that’s within a user’s control, and a lot of vulnerabilities and threats that are completely outside of it. For instance, we’re pretty helpless when a medical insurer takes sensitive data about us whether we like it or not, and its lax attitude toward handling medical data securely then facilitates data breaches that ruin people’s lives.

I started my cybersecurity career in earnest by reporting news about cyberattacks and the discovery of new CVEs and proof-of-concepts. But the line between what a user can control and what they can’t is drawn in places I hadn’t expected, which may be a consequence of how data mining, cloud services, and consumer tech are changing, usually for the worse.

How do you address the balance between advocating for digital privacy and the practical challenges individuals face in implementing security measures?

The best I can do is provide two different approaches to each massive attack surface problem.

The optimal approach means replacing common applications and platforms with more niche ones, learning how to directly encrypt some data (such as through PGP), configuring software in ways that may sometimes be inconvenient, and changing one’s internet usage habits. That’s the more difficult but more effective approach, and I will make it as accessible as I possibly can by providing all the information people need in a way that laypeople can learn.
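
To give a flavor of the kind of direct encryption mentioned above, here is a minimal sketch of encrypting a file to someone’s PGP public key. It uses the third-party python-gnupg wrapper around GnuPG; the recipient address and file names are placeholders, and it assumes GnuPG is installed and the recipient’s public key has already been imported into your keyring.

```python
# Minimal sketch: encrypt a local file to a recipient's PGP public key.
# Requires GnuPG on the system and the python-gnupg package (pip install python-gnupg).
# "friend@example.org", "notes.txt", and "notes.txt.gpg" are placeholder names.
import gnupg

gpg = gnupg.GPG()  # uses the default ~/.gnupg keyring

with open("notes.txt", "rb") as plaintext:
    result = gpg.encrypt_file(
        plaintext,
        recipients=["friend@example.org"],  # key must already be imported and trusted
        output="notes.txt.gpg",             # where the ciphertext is written
        armor=False,                        # binary output; set True for ASCII-armored text
    )

if result.ok:
    print("Encrypted notes.txt -> notes.txt.gpg")
else:
    # result.status explains failures such as a missing or untrusted recipient key
    print("Encryption failed:", result.status)
```

Only someone holding the matching private key can decrypt the resulting file, which is the property that makes this approach worth the extra effort.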

The less optimal approach is to continue using Windows, Mac, Facebook, and so on, but to change settings when possible and make significant changes to how one uses those platforms.

I know very well that I won’t be able to convince everyone to live like RMS (Richard Stallman, famous for using computers in very inconvenient ways due to his obsession with avoiding proprietary code). I can’t even get my romantic partner of seven years to give up Facebook, vanilla Chrome, and his various other really risky habits. Some harm reduction is at least an improvement on the status quo. Maybe some readers will use dangerous platforms with some harm reduction first, and then eventually move on to applications with built-in Tor functionality.

What is the one key message you hope readers take away from Digital Safety in a Dangerous World?

Distrust everything and everyone by default, especially tech and government entities. The cloud is just someone else’s computer, almost always one owned and operated by and on the premises of a tech giant. Commercial VPNs and services like DeleteMe may be helpful in some ways, but you can’t automatically trust a tech company that promises to improve your security posture. For instance, a commercial VPN service used properly will make your data in transit much less accessible to external cyber attackers. But an untrustworthy tech company has the decryption keys to your internet traffic and its related logs, and may not use that access in ethical ways.

We’re only going to be able to survive this by becoming savvier with how we use technology, and by joining forces with each other behind the scenes to look out for each other. We also need to organize. Under no circumstances should that organizing be done on the public internet.

Digital Safety in a Dangerous World: get it on Kickstarter!
