A magazine about programmers, code, and society. Written by humans since 2018.

On Modern Security Culture

Long before Adrian and I started De Programmatica Ipsum, we met at a conference in London where he had invited me to talk on mobile application security. Reflecting on this issue’s topic, I compared that experience with the infosec-focussed events I have attended. I thought of the worst aspects of the infosec culture that I’ve seen. Writing about those would make for a full and interesting article, but wouldn’t be particularly inspiring.

Instead, I’m going to talk about some of the more progressive parts of the infosec world, by introducing the people who informed my view on those parts. If they inspire you as much as they did me, then you will come to think of infosec as something your whole organisation, and your whole community, can come together on.

Window Snyder

Window Snyder is currently Intel’s Chief Software Security Officer. She has a history of transforming software security at computing’s big names: Apple, Mozilla, and Microsoft are all former employers. At Microsoft, she was security lead for Windows XP Service Pack 2 and Windows Server 2003.

Trustworthy Computing

These products were released in the wake of Bill Gates’ Trustworthy Computing memo, which stated:

Today, in the developed world, we do not worry about electricity and water services being available. With telephony, we rely both on its availability and its security for conducting highly confidential business transactions without worrying that information about who we call or what we say will be compromised. Computing falls well short of this, ranging from the individual user who isn’t willing to add a new application because it might destabilize their system, to a corporation that moves slowly to embrace e-business because today’s platforms don’t make the grade.

After that memo, the company stopped development of its products to train development teams on software security and made security and trustworthiness top priorities.

Threat Modelling

Microsoft’s approach to security during that time was shared widely, and became the basis for Secure Development Lifecycle (SDL) processes across the industry. Snyder’s co-authorship, with Frank Swiderski, of Threat Modelling is an important part of this outreach.

I believe this book shows how a modern, self-organising software product team can infuse security into its everyday work. There is still a place for external consultants, penetration testers, and dedicated security teams, but their role is to inform and educate the product team. The product team itself remains ultimately aware of its product’s security model, its risks, and how to address those risks.

If your product team is not sure how to think about their product’s security, get this book and run a workshop. If you and your team do not think security is your problem, then you definitely need to read and internalise the Threat Modelling book.
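A workshop’s output need not be elaborate: a structured list of threats and mitigations is a fine starting point. The sketch below is a hypothetical example, not taken from the book, using the STRIDE taxonomy that Microsoft’s threat-modelling practice popularised; the components and threats shown are invented for illustration.

```python
from dataclasses import dataclass

# STRIDE is a common taxonomy for classifying threats:
# Spoofing, Tampering, Repudiation, Information disclosure,
# Denial of service, Elevation of privilege.
STRIDE = {"S", "T", "R", "I", "D", "E"}

@dataclass
class Threat:
    component: str    # the part of the system under discussion
    category: str     # one STRIDE letter
    description: str  # what could go wrong
    mitigation: str   # how the team plans to address it

    def __post_init__(self):
        if self.category not in STRIDE:
            raise ValueError(f"unknown STRIDE category: {self.category}")

# Hypothetical entries a team might produce in a workshop:
model = [
    Threat("login form", "S", "attacker guesses credentials",
           "rate-limit attempts; require MFA"),
    Threat("session cookie", "I", "cookie leaked over plain HTTP",
           "set Secure and HttpOnly flags; use HTTPS everywhere"),
]

for t in model:
    print(f"[{t.category}] {t.component}: {t.description} -> {t.mitigation}")
```

The point of keeping the record this simple is that the product team, not an external specialist, owns and updates it as the product evolves.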

Katie Moussouris

Katie Moussouris, known online as k8em0, is also a Microsoft alumna; if you still think of Microsoft as anything other than an innovative software engineering house, it is time to revise that opinion. She has advanced the way that organisations including Microsoft and the U.S. Department of Defense interact with external security researchers and testers.

Moussouris was instrumental in introducing a “bug bounty” program at both organisations, and elsewhere. A bug bounty is a prize paid to an external reporter of a security vulnerability. Bug bounties are one incentive designed to support responsible disclosure, another of Moussouris’s successes in the field.

Responsible Disclosure

The actors involved in security vulnerability discovery have different, conflicting objectives.

The product vendor may prefer never to admit that their software was insecure. They would like to silently patch their product (if they intend to patch it), and move on as if nothing had happened.

If an external security expert found the bug, they will want to publish details and get credit for the discovery. They may use that discovery in their marketing materials, as demonstration of their expertise.

Unfortunately, another group that would benefit from public disclosure of security bugs is the attackers, who can use information about the bug to design attacks targeting people who have not yet remediated the problem.

Which leads us to the customers, the people using the software. They want to know about the problems so that they can patch, or change their security policy, or otherwise address the risks associated with the vulnerability.

Responsible disclosure builds a compromise position from this conflict which represents the best trade-offs for all parties except the attackers. The person who discovered the flaw reports it privately to the vendor, and negotiates a timeline to public disclosure including full credit. The vendor gets time to fix the bug and get the fix to customers before public disclosure. Users get told about vulnerabilities, but only after the vendor has issued a patch.

Bug Bounties

Bug bounty programs act as a form of virtue signal for responsible disclosure. It’s important to see that bounties are not a replacement for Threat Modelling and internal security practices. Instead, bug bounties indicate that a vendor acknowledges that external experts may find problems they have missed. A company with a bug bounty program is saying that it supports responsible disclosure, and will reward others who cooperate in its responsible disclosure approach.

A bounty is not paid to just anybody who finds a security bug; rather, the vendor pays those who engage in its responsible disclosure process. The vendor hopes that the value of the bounty stops the researcher from publicising a vulnerability as soon as they discover it. That situation is called a “zero day”, or 0day, because attackers get the opportunity to exploit the problem on day zero of the disclosure-and-patch timeline, before the vendor has a chance to fix it.

Modern Security Culture

Through practices like threat modelling and responsible disclosure, information security has moved from a combative to a collaborative art. We no longer have a security team or an external penetration tester telling us “no” after we’ve built a product. We have DevOps and DevSecOps: a culture in which a software team works together and treats security like any other attribute of its product.

Such collaboration extends beyond the org chart, with multiple parties coordinating their vulnerability disclosures to the benefit of the people using our software. The work of Window Snyder, Katie Moussouris and others enables this cultural shift, helping our whole industry to make computers safer and more helpful.

Cover photo by Chris Ried on Unsplash.
