Security engineering ensures that computer systems remain dependable in the face of malice, error or mischance. This includes the tools, processes and methods needed to design, implement and test new systems, and to adapt existing systems as their environment evolves.
Security in the software world is governed by two things – psychology and economics. It is well understood, for example, that we shouldn’t spend £10 protecting an asset worth only £5. Less well understood is that systems often fail not because of bugs or technical mistakes, but because of misaligned incentives: the people guarding a system are often not the ones who suffer when it fails.
Psychologically, the software industry actively discourages the creation of secure code. The most common refrain is that security is too difficult to get right, so attempting it is a pointless exercise. Another is that security testing is too disruptive to production systems.
The software industry is good at recommending specific ciphers and algorithms, while ignoring the symmetric and public-key protocols built from them. Unfortunately, many attacks today target security protocols rather than, for example, cracking passwords. Protocol attacks include design flaws in which the wrong things are encrypted, or the right things are encrypted in the wrong way. Such flaws are extremely common in practice and can be difficult to spot.
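A classic instance of "the right things encrypted in the wrong way" is ECB mode, where each block is encrypted independently and deterministically, so repeated plaintext blocks produce repeated ciphertext blocks. The sketch below does not use a real cipher at all – a keyed hash stands in for the per-block transform, purely to make the structural leak visible without external libraries:

```python
# Illustrative sketch only: a deterministic per-block transform stands in
# for AES-ECB. It is NOT reversible and NOT real encryption; it exists only
# to demonstrate the property that makes ECB mode leak plaintext structure.
import hashlib


def toy_ecb_encrypt(key: bytes, plaintext: bytes, block: int = 16) -> bytes:
    # Each block is transformed independently and deterministically –
    # exactly the design flaw that makes real ECB mode unsafe.
    out = b""
    for i in range(0, len(plaintext), block):
        chunk = plaintext[i:i + block].ljust(block, b"\0")
        out += hashlib.sha256(key + chunk).digest()[:block]
    return out


msg = b"ATTACK AT DAWN!!" * 2  # two identical 16-byte blocks
ct = toy_ecb_encrypt(b"secret-key", msg)
# The repetition in the plaintext survives into the ciphertext,
# so an eavesdropper learns structure without breaking the cipher:
assert ct[:16] == ct[16:32]
```

The strong cipher is doing its job on every block; the protocol-level choice of mode is what leaks. Authenticated modes with unique nonces (e.g. AES-GCM) avoid this by making every block's ciphertext depend on its position and a fresh random value.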
Other impediments to creating secure systems include agile development methodologies, which discourage up-front design. The typical sprint of two weeks is too short for meaningful security testing, and such testing does not fit into nightly builds or pre-release sanity checks either. Moreover, in scrum the person who prioritises what gets done next is usually the product manager – often the team member with the least knowledge of security.