Monday, 3 November 2014

Is Open Source Secure?


In the light of the recent Heartbleed and Shellshock vulnerabilities (both of which appear to have been honest coding mistakes rather than anything deliberate), I was approached for comment for an article about the security of open source software. The journalist (Anthony Caruana) wanted to know how code gets approved for inclusion in open source projects, and especially about the potential for the inclusion of "malicious" code. This is a subject I touched on in a previous posting, so it was good to be able to address it more directly.

Here are the questions & my responses:

With such a huge community of developers contributing code to open source applications, what steps are in place to prevent malicious code being injected into widely distributed applications?

I take the term "malicious code" to mean code that has been deliberately added for some nefarious purpose – to introduce a security back-door, capture credentials and identities, or do some other kind of damage.
It's important to differentiate between this kind of deliberate attack and the kind of programming errors that can result in exploitation, such as the OpenSSL Heartbleed bug.
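To make the distinction concrete: Heartbleed boiled down to trusting a length field supplied by the remote peer. The fragment below is my own minimal sketch of that class of bug – it is not the actual OpenSSL code, and the names are invented for illustration:

    #include <string.h>

    /* A minimal sketch (NOT the real OpenSSL source) of the Heartbleed
     * class of bug: trusting a length field supplied by the peer. */
    void handle_heartbeat(const unsigned char *record, size_t record_len,
                          unsigned char *reply)
    {
        /* The first two bytes of the record claim the payload length. */
        size_t claimed_len = (size_t)((record[0] << 8) | record[1]);

        /* BUG: claimed_len is never checked against record_len, so a peer
         * can claim up to 64 KB and be echoed adjacent heap memory. */
        memcpy(reply, record + 2, claimed_len);

        /* The fix is a single bounds check before the memcpy:
         *     if (record_len < 2 || claimed_len > record_len - 2) return;
         */
    }

An honest mistake like this is exactly what it looks like: no concealment, just a missing check that an auditor (or an automated test – more on that below) can find.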
The way that open source projects are managed actively works against the inclusion of malicious code. Firstly, the fact that the source is open and available to be audited by a large audience means that there is a much greater opportunity to find deliberate attacks than there is in closed source. It was source code audits that found the bugs in OpenSSL and Bash. Malicious code would have to be carefully hidden to avoid detection, without leaving any trace that it is being hidden, and obfuscated code immediately attracts attention.
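To give a flavour of what "carefully hidden" means in practice, here is a simplified sketch modelled on the widely reported 2003 attempt to slip a backdoor into the Linux kernel's sys_wait4() via its CVS mirror: a single '=' where '==' belongs. The surrounding names here are invented; it's the mechanism that matters:

    struct task { int uid; };   /* stand-in for the kernel's task structure */

    int wait_options_check(struct task *current, int options,
                           int wclone, int wall)
    {
        /* Looks like a routine validity check, but 'current->uid = 0' is an
         * ASSIGNMENT, not a comparison: when the magic option combination is
         * passed, the caller is silently given root (uid 0). The assignment
         * evaluates to 0, so the && is always false and the "error" branch
         * never fires - and the extra parentheses around it even silence the
         * usual compiler warning about assignment in a condition. */
        if ((options == (wclone | wall)) && (current->uid = 0))
            return -1;          /* would have been -EINVAL in the original */

        return 0;
    }

That attempt was caught in part because the change appeared in the mirror with no corresponding submission history: the audit trail, as much as the code itself, gave it away.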
The second factor is that major open source projects operate in a community where reputation is the critical factor: simply submitting code is not necessarily enough to have it included in the main codeline. The author must be known to, and trusted by, the project maintainer. It's human nature that when a new programmer joins a project, his or her code will be vetted more carefully than that of already-known contributors. Establishing and maintaining a high reputation requires significant effort, which works against the rapid or repeated insertion of malicious code.
In contrast, closed-source code is not open for wide review by a large audience, so inserting malicious code requires nothing more than a disgruntled employee or the instruction of some agency (as has been alleged in the Snowden leaks).


What testing is done with open source software before it is released into the community? 

Testing varies from project to project – in this sense "open source" is not some homogeneous group with identical policies. One of the features of the open source community is the rapid turn-around and release of projects: with a major project there may be several different versions of the software available at once – for example a "stable" release, a "development" release, and a "nightly build". A stable release can be considered to have gone through the most testing; a development release is a version that developers and testers are working on; and a nightly build is a version of the code that has the latest changes integrated & is almost certainly still buggy. Of course, commercial open source organisations like SUSE release only stable versions.
Since this is open source, it is the community itself that does the testing. The community may include commercial organisations like SUSE, but it also includes researchers and hobbyists. Users of open source software have to decide what level of comfort they have with possible bugs when they choose which version to use. One of the biggest contributions non-programmers can make is to try development release software & report back any problems. Again, the potentially wider audience than a closed-source project can reach means that this phase can be faster and more effective than in proprietary software (the "crowd source" effect).
Hobbyists and non-programmers may only be able to investigate to a certain level, but commercial organisations and researchers can perform more involved tests. When SUSE contributes changes – either as new product features or patches – the software goes through our QA team, which tests for functionality, stability, scalability and regressions (including vulnerabilities), as well as doing integration testing with our partners. SUSE makes heavy use of test automation, and a lot of effort goes into maintaining & refining our automated methods. SUSE also relies on individual expertise to review results and find errors.
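To illustrate what one strand of that automation looks like, here is a toy regression test in the same invented setting as my earlier heartbeat sketch. Once the bounds check is in place, a test like this pins it there, so the vulnerability cannot quietly reappear in a later change:

    #include <assert.h>
    #include <stddef.h>

    /* The repaired parser from the earlier sketch: refuses any record whose
     * claimed payload length exceeds the bytes actually received. */
    static int parse_heartbeat(const unsigned char *record, size_t record_len)
    {
        if (record_len < 2)
            return -1;
        size_t claimed_len = (size_t)((record[0] << 8) | record[1]);
        if (claimed_len > record_len - 2)
            return -1;          /* the Heartbleed-class bounds check */
        return 0;               /* safe to echo claimed_len bytes back */
    }

    int main(void)
    {
        /* Regression: a record claiming 0xFFFF payload bytes while carrying
         * none must be rejected, not echoed back. */
        unsigned char evil[2] = { 0xFF, 0xFF };
        assert(parse_heartbeat(evil, sizeof evil) == -1);

        /* A well-formed record (3-byte payload) must still be accepted. */
        unsigned char good[5] = { 0x00, 0x03, 'a', 'b', 'c' };
        assert(parse_heartbeat(good, sizeof good) == 0);

        return 0;
    }

Trivial as it is, this is the shape of the thing: every fixed vulnerability leaves a test behind it, and the automated suite grows along with the codebase.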
At the end of the day, for any software – open source, closed source or embedded – it's a question of confidence. The attractive thing about open source is that there is potential for a much higher degree of confidence than for closed source. It is much harder to hide mistakes, much harder to secretly introduce malicious code, and much more likely that a programmer wanting to make a name for him or herself will discover (and fix) problems when the source is open and available for review. Ultimately, if the users of open source software want to perform rigorous and exhaustive examinations of the code, they can; that option does not even exist for closed source software.


Since I was answering this in the context of my job, and I've already mentioned SUSE a couple of times, I'll point out something else here: as vulnerabilities are detected and patched, keeping up with the changes – or even understanding the extent of your own exposure – can become difficult. This is where management tools and automation can be extremely useful (as discussed in a previous posting). In particular, SUSE Manager can very quickly report on and patch exposures just by typing in a Common Vulnerabilities and Exposures (CVE) identifier. There's a demo of it on YouTube.
