Testing shows the presence,
not the absence of bugs.
— Edsger W. Dijkstra,
Software Engineering Techniques, 1969
How to develop software that is more secure
It is extremely difficult to create trustworthy software. That difficulty, and the slow rate of production it forces, lead to a much higher price. That means most software will be fairly insecure, because security is a low priority for most producers and consumers of technology. As an example of the difficulty, the NSA asked Praxis High Integrity Systems to develop part of Tokeneer, a secure system. The key statistics were:
- 9,939 lines of code produced, in
- 260 days of work, for a production rate of
- 38 lines of code per day, and with
- 2 defects discovered since delivery.
A lot of the difficulty goes back to the fundamental differences in secure design. Ross Anderson's book Security Engineering: A Guide to Building Dependable Distributed Systems is an excellent explanation of the issues of designing trustworthy systems.
I think a lot of the problem is philosophical. A typical system design is based on what features you want to include. The most important question for most designers is of the form, "Does this do what I want it to?" Unfortunately, they overlook the important question "Does this refuse to do what I want it to not do?" Could there be some way to get the system to do something helpful to the attacker and overlooked by the designer?
Put another way: a design should specify that the system has only a narrowly defined set of functions, precisely those and no more. That means that design analysis and implementation testing need to address failure modes and the handling of unexpected input.
This is an area where Microsoft seems to do an especially poor job. Microsoft products tend to be very good as long as you do exactly what was expected with them. The problem comes when error, misunderstanding, bad luck, or malice presents inputs or situations that were of no interest to, and therefore never considered by, the designers.
Complexity Decreases Security
More complexity means more opportunities for bugs and greater difficulty finding them. Frederick Chang was Director of Research at NSA. In his November 2013 testimony to the Committee on Science, Space and Technology of the U.S. House of Representatives he said:
When it comes to security, complexity is not your friend. Indeed it has been said that complexity is the enemy of security. This is a point that has been made often about cybersecurity in a variety of contexts including technology, coding, and policy. The basic idea is simple: as software systems grow more complex, they will contain more flaws and these flaws will be exploited by cyber adversaries.
I downloaded the source code for the Apollo Guidance Computer, which provided computation and electronic interfaces for guidance, navigation, and control of the Apollo program. There are 363,397 lines of "code", some of which are comment headers inserted by the project that scanned it. Extract the archive, and do this:
$ wc $( find . -name '*.agc' )
The Linux kernel, as of version 4.12, had 21,731,894 lines of code:
$ cd /usr/src
$ wc $( find linux-4.12 -name '*.cc' -o -name '*.c' -o -name '*.h' )
bash: /usr/bin/wc: Argument list too long
$ cat $( find linux-4.12 -name '*.cc' ) | wc
1879 4282 45241
$ cat $( find linux-4.12 -name '*.c' ) | wc
16991032 52312206 460498416
$ cat $( find linux-4.12 -name '*.h' ) | wc
4738983 16958359 195370533
The OpenBSD operating system, kernel plus user-space tools, had 20,154,846 lines of code.
Windows XP had 45 million lines of code.
Automobiles by 2014 had about 100 million lines.
All of Google's software services are estimated to have about 2 billion lines.
Writing More Secure Code
David Wheeler's Secure Programming HOWTO is a great reference for security software development. Much of it applies to programming in general. It's nicely written, both broad and detailed, and a free download. Many other things at his web site are very helpful.
Cross-Site Scripting (XSS) and similar attacks have become the largest category of threat, surpassing buffer overflow attacks. Wikipedia and others have good descriptions of these programming errors and pointers to further resources:
- Cross-Site Scripting (XSS)
- WhiteHat Security's XSS paper
- Cross-Site Request Forgery ([CX]SRF)
- SQL Injection
- Code Injection
- Format String Attacks
- Buffer Overflow Attacks in general
- Stack Overflow
- Heap Overflow
- Aleph One's "Smashing the Stack for Fun and Profit"
- Detailed description of Buffer Overflow and many more
A group of industry experts has defined a list of the 25 most dangerous programming errors, see the lists at the SANS Institute and MITRE.
SAFECode, the Software Assurance Forum for Excellence in Code
SAFECode, the Software Assurance Forum for Excellence in Code, has this mission statement: "SAFECode is dedicated to increasing trust in information and communications technology products and services through the advancement of proven software assurance methods."
SAFECode has released a very good document, Fundamental Practices for Secure Software Development: A Guide to the Most Effective Secure Development Practices in Use Today. To summarize it:
Secure Design Principles
- Design using threat modeling, also known as threat analysis or risk analysis.
- Use Least Privilege.
- Implement sandboxing.
Secure Coding Practices
- Minimize use of unsafe string and buffer functions.
- Validate input and output to mitigate common vulnerabilities.
- Use robust integer operations for dynamic memory allocations and array offsets.
- Use anti-Cross Site Scripting (anti-XSS) libraries.
- Use canonical data formats.
- Avoid string concatenation for dynamic SQL statements.
- Eliminate weak cryptography.
- Use logging and tracing.
- Determine attack surface.
- Use appropriate testing tools.
- Perform fuzz / robustness testing.
- Perform penetration testing.
- Use a current compiler toolset.
- Use static analysis tools.
Secure Development for Cloud Platforms
SAFECode and the Cloud Security Alliance have written a paper, "Practices for Security Development of Cloud Applications".
Robert Seacord's Secure Coding in C and C++ and Vladimir Kushnir's Safe C++ address a vital topic, as most applications and nearly all operating systems are written in those languages. C and C++ are used to create nearly everything, yet they are rather insecure languages. This goes back to the original design philosophy: Kernighan and Ritchie made it the programmer's job to use arrays very carefully. "Nevertheless, C retains the basic philosophy that programmers know what they are doing; it only requires that they state their intentions explicitly."
Bjarne Stroustrup has said, "C makes it easy to shoot yourself in the foot." C permits undesirable operations, and many simple and common programming errors are not detected by the compiler and cause only subtle misbehavior at run time.
The Software Engineering Institute at Carnegie Mellon University released the 2016 edition of the SEI CERT C++ Coding Standard: Rules for Developing Safe, Reliable, and Secure Systems in C++. It has rules for secure coding to help create systems free from undefined behaviors and exploitable vulnerabilities. They have also published a C Coding Standard.
Microsoft posted an article about Security Features in Microsoft Visual C++.
Compilers silently drop code when optimizing, and this can lead to security and stability problems. "Towards Optimization-Safe Systems: Analyzing the Impact of Undefined Behavior" is a paper from the 24th ACM Symposium on Operating Systems Principles in 2013 (also see Xi Wang's other papers). The paper examines optimization-unstable code, which is unexpectedly discarded by compiler optimization due to undefined behavior within the program. They present the following common example of a pointer overflow check found in several code bases. Optimization drops the second check, leaving the code vulnerable to overflow.
char *buf = ...;
char *buf_end = ...;
unsigned int len = ...;
if (buf + len >= buf_end)
    return; /* len too large */
if (buf + len < buf)
    return; /* overflow, buf+len wrapped around */
/* write to buf[0..len-1] */
They present a list of undefined behaviors in C, both within the language itself and in library calls, and a table of the varieties of unstable code optimized away by various commonly used compilers. Their STACK bug-finding tool discovered these vulnerabilities in the Linux kernel; the Python interpreter; the Chrome browser; the QEMU and Xen hypervisors; the Kerberos, OpenSSH, and OpenSSL security tools and libraries; and many other critical systems.
Also see MSC06-C Beware of compiler optimizations at CERT's secure coding site.
The following are from David Wheeler's paper, and they are all free software except as noted.
- Flawfinder scans C/C++ code for common problems.
- RATS (Rough Auditing Tool for Security) scans C/C++, Perl, PHP, and Python code for common problems.
- Splint, Secure Programming Lint, scans C/C++ somewhat as the classic lint tool does, but looking for security problems.
- cqual looks for C bugs and applies stricter typing.
- Valgrind "supervises" the execution of a program, monitoring all memory management and access. See the overall Valgrind project and the specific Valgrind tool.
- ITS4 scans C/C++ code. It's not strictly open-source.
- Also see the Linux section of the OS-specific page for some modifications to the standard C library to mitigate problems caused by bad code.
PHP is a powerful server-side scripting language, but with the power and the execution on the server comes risk.
- A Bite of Python is a good look at important Python security issues.
- PyChecker checks for common Python bugs.
- RATS (Rough Auditing Tool for Security) scans C/C++, Perl, PHP, and Python code for common problems.
Don't run the web server as root. But if creating a web server is suddenly so easy, many will be created by people who haven't heard the basics.
Set the Secure and HttpOnly flags on session cookies. Despite the somewhat confusing names, these limit the cookies to being sent over HTTPS and prevent client-side script access, respectively.
Add security-focused HTTP response headers. As this page explains, the following server settings improve security. They would be good on any web server, not just one based on Node.js, but the helmet middleware in Node.js makes it easy to add them.
X-Content-Security-Policy, set site-wide or page by page. See the W3 Content Security Policy definition for all the details.
Strict-Transport-Security, set to a reasonable timeout.
Access-Control-Allow-Origin, set to control which sites are allowed to bypass the same-origin policy and send cross-origin requests.
Protect against CSRF/XSRF
ActiveX Must Be Disabled
Microsoft's ActiveX is fundamentally insecure and you should disable it. That is, if you are so daring as to use Internet Explorer at all. Really, you should be using any other browser.
Some people complain that their organization has foolishly built business practices around pages that only function using ActiveX.
What's next, requiring all personnel to smoke at least one pack of cigarettes each day?
Windows Source Code
The Kuro5hin site had some interesting comments about the leaked Windows source code.
Writing Your Own Exploit Code
The Metasploit Project is an open-source code library for developing and running exploits.
Back to the main Security Page