Monthly Archives: April 2016

Whose Code Are You Running?

One of my favorite ways to eat Oreo cookies is to twist the two halves apart, carefully set the filling aside, eat both chocolate halves, and then slowly enjoy the indulgent filling. Without milk, this is by far the best way to savor both parts of the cookie. But with a glass of milk the implementation changes. Not only is it harder to dunk the separate pieces, but the whole cookie soaked in milk tastes much better than the sum of its parts.

In many organizations, Development and Security are two parts of a cookie. Individually they're both incredibly important, but too many miss the combined experience. Often both sides hold fast to a false sense of superiority that would've made Dr. Seuss' Sneetches proud. In fact it's very common for us to witness clashes between the two teams when it comes to penetration tests or discussing flaws.

When I was first introduced to information security, someone told me that developers focus on finding one way to make something work while security folks focus on finding all the possible ways that could break it. I don’t believe that’s fair. At the end of the day I think we’re all focused on making things better; we just come at it from different perspectives. Developers tend to focus on improving the experience of legitimate users while Security has the goal of preventing the experience of illegitimate users. Two parts of the same cookie you might say.

With this dualistic approach in mind, consider the recent fiasco surrounding the kik module name on the popular NPM JavaScript package manager. If you're not familiar with the story, an open source developer, bitter over the loss of a module name (kik) to the company that owns the trademark, decided to unpublish all 273 of his published modules. Most of these were unimportant and unnoticed, but one, left-pad, was used extensively throughout the JavaScript world. NPM reported nearly 2.5 million downloads of left-pad in the month prior to the event.

If you’re not familiar with modern-day JavaScript development, it’s very common for libraries to have dependencies that require dependencies that require dependencies in a seemingly unending cascade of code. Each project is like a giant pyramid that only stands if each stone is placed in exactly the right spot. When the left-pad stone was removed, literally thousands of software projects came crashing down. Companies as large as Facebook, Netflix, and Spotify were affected, all because of a rather simple 11-line bit of missing code.
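To appreciate the scale of what went missing, the entire module was a single small helper function, on the order of this sketch (a paraphrase for illustration, not the exact published source):

```javascript
// A left-pad-style helper: prepend a pad character until the string
// reaches the target length. Roughly the size of the real module.
function leftPad(str, len, ch) {
  str = String(str);
  ch = ch || ' ';
  while (str.length < len) {
    str = ch + str;
  }
  return str;
}

console.log(leftPad('5', 3, '0')); // "005"
```

A few lines that any developer could write in minutes, yet thousands of builds pulled it from a remote registry on every install.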

A plethora of blog posts and forum rants have since come out explaining and justifying the actions of each party to this exciting debacle. Like any good philosophical flame war, both sides of the debate have good points. But there is one angle that hasn't been well covered. This whole situation was predicated on the accepted norm of using third-party code, delivered via package managers, within commercial production applications.

Follow along with me for just a moment. This young man, Azer Koçulu, published open source code that was not only used by Fortune 500 companies, but was hosted by a third party package manager so that those commercial projects depended on the existence and maintenance of the hosted packages.

In this particular case, Mr. Koçulu simply unpublished the modules. But what if instead he had updated his modules to contain malicious code? What if he had used something like the Browser Exploitation Framework, or something worse, to inject actual malware into the pages serving his code? Such changes surely would've been detected quickly, but out of 2.5 million downloads in one month, how many of those sites would've pushed the code to production or used it in development before it was discovered? If combined with a targeted zero-day exploit, imagine the value of a botnet consisting of the developers and clients of some of the largest websites in the world.

Traditionally information security has been boiled down to a triad of key concepts: confidentiality, integrity, and availability. Though that simplistic breakdown is somewhat antiquated, it still provides a reasonable understanding of the goals of security. Infosec professionals are focused on improving these key areas rather than just saying "No" for the sake of some miserly power trip. Unfortunately we don't always do a good job of communicating in those terms.

The missing left-pad incident has demonstrated a situation in which developers across the world are suddenly reconsidering the availability risk of depending on a third-party package manager like NPM for production code. The topics prompted by this event won't go away anytime soon. However, this experience should also cause us to question the impact on our systems' integrity. If we allow third-party code to be updated and integrated into an application without re-vetting the updates, how can we trust the integrity of the application or its data? (Spoiler: we can't.)
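One partial mitigation is controlling when third-party updates actually enter a build. In a hypothetical `package.json` (module names invented for illustration), a caret range silently floats to any compatible release on the next install, while an exact version only changes when a human edits it:

```json
{
  "dependencies": {
    "left-pad": "^1.0.0",
    "some-pinned-lib": "1.2.3"
  }
}
```

Pinning the top level doesn't lock transitive dependencies, though; npm's shrinkwrap feature records the full resolved dependency tree so that later installs reproduce what was originally vetted.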

The issue of malicious third-party code isn't unique to the NPM JavaScript package manager either. A wide range of applications and systems depend on third-party package managers to host and maintain code from a mind-boggling array of open source developers. Take Debian, for example. The Linux distribution has a well-defined process for maintaining standards across package updates, but almost nothing is done to validate the security of the code in those packages. If a maintainer "goes rogue" and introduces malicious code, there are defined processes in place to handle the situation once it's detected, but nothing to catch it before it affects users.

The same could be said for app stores such as Google’s Play and Apple’s App Store. In each case, some degree of security review may take place, but there’s no way they can accurately review every line of each application for security issues. Many of us may be mindful of what applications we install, but we just assume that application updates are supposed to be safe.

Even if we had reason to trust certain developers or modules, we have no way to know that each update actually comes from the original developer. The cyber attack division of a nation-state could easily look for popular open source modules or applications that could be purchased from their owners. In some cases overworked owners may even hand over the reins of a project if someone appears genuinely interested in seeing it maintained. Over time a very powerful arsenal could be built without anyone ever realizing it. When necessary, those packages could be weaponized and targeted to take advantage of our regimented patching schedules. By the time anyone realized what had happened, it would be too late.

I’ll take off the conspiracy theory hat for a bit. I think it’s safe to assume that most developers are not the unknowing pawns of some malicious nation-state. But it’s important to realize that most security vulnerabilities are found by assuming the worst-case rather than the best-case.

So what do we do about it? To a large degree, there's not much we can do. Our entire system has been built, literally from the hardware up, on the idea of trusting others. That's bad enough for hardware, but when we move to software and its collaborative nature, it becomes infeasible to even consider any sort of standardized security testing for every application patch or update, especially in the open source world.

In some cases, such as the NPM JavaScript modules, it may be reasonable to disallow dependencies on third-party hosted code, or to require a code review of every third-party library before use. But that would be difficult and often impractical to sustain.

For application updates from third-party packages, I think we just have to recognize the threat and prepare appropriately. For example, it should be assumed that a sufficiently devoted adversary will eventually compromise a system. While we should take reasonable steps to prevent that from happening, our goal should be to monitor, detect, and respond to the intrusion before serious damage can be done. Building those probabilities into our threat models provides a more practical basis for business risk assessments.

I don’t see third-party package managers, or the associated risks, going away anytime soon.  But we can all do a better job of understanding where our packages come from and who we’re trusting.

Nathan Sweaney is a Senior Security Consultant with Secure Ideas. If you are in need of a penetration test or other security consulting services, you can reach him on Twitter @eternalsecurity, or visit the Secure Ideas – ProfessionallyEvil site for services provided.

Professionally Evil Insights: 2015

Are you interested in knowing which vulnerabilities are the most commonly discovered in penetration tests?  How about which industries are doing the best (or worst) with improving on their security programs?  We pulled together all of our 2014 and 2015 findings, analyzed the results, and came up with some interesting (at least we think so)… Continue Reading