Whose Code Are You Running?

April 14th, 2016

One of my favorite ways to eat Oreo cookies is to twist the two halves apart, carefully set the filling aside, eat both chocolate halves, and then slowly savor the filling. Without milk, this is by far the best way to enjoy both parts of the cookie. But with a glass of milk, the implementation changes. Not only is it harder to dunk the separated pieces, but the whole cookie soaked in milk tastes much better than the sum of its parts.

In many organizations, Development and Security are two parts of a cookie. Individually they’re both incredibly important, but too many miss the incredible combined experience. Often both sides hold fast to a false sense of superiority that would’ve made Dr. Seuss’ Sneetches proud. In fact it’s very common for us to witness clashes between the two teams when it comes to penetration tests or discussing flaws.

When I was first introduced to information security, someone told me that developers focus on finding one way to make something work while security folks focus on finding all the possible ways that could break it. I don’t believe that’s fair. At the end of the day I think we’re all focused on making things better; we just come at it from different perspectives. Developers tend to focus on improving the experience of legitimate users, while Security has the goal of preventing the experience of illegitimate users. Two parts of the same cookie, you might say.

With this dualistic approach in mind, consider the recent fiasco surrounding the unpublished Kik module on the popular NPM JavaScript package manager. If you’re not familiar with the story, an angry open source developer, bitter over the loss of a module name (kik) to the company that owns the trademark, decided to unpublish all of his 273 published modules. Most of these were unimportant and unnoticed, but one, left-pad, was used extensively throughout the JavaScript world. NPM reported nearly 2.5 million downloads of left-pad in the month prior to this event.

If you’re not familiar with modern-day JavaScript development, it’s very common for libraries to have dependencies that require dependencies that require dependencies in a seemingly unending cascade of code. Each project is like a giant pyramid that only stands if each stone is placed in exactly the right spot. When the left-pad stone was removed, literally thousands of software projects came crashing down. Companies as large as Facebook, Netflix, and Spotify were affected, all because of a rather simple 11-line bit of missing code.
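For context, here is roughly what that 11-line module looked like (paraphrased from memory of the published left-pad source; treat it as a sketch rather than an exact copy):

function leftpad (str, len, ch) {
  str = String(str);
  var i = -1;
  if (!ch && ch !== 0) ch = ' ';
  len = len - str.length;
  while (++i < len) {
    str = ch + str;   // prepend the pad character until the target length is reached
  }
  return str;
}

module.exports = leftpad;

That’s it. Trivial to write, yet thousands of projects chose to depend on someone else’s hosted copy of it.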

A plethora of blog posts and forum rants have since come out explaining and justifying the actions of each party to this exciting debacle. Like any good philosophical flame war, both sides of the debate have good points. But there is one angle that hasn’t been well covered: this whole situation was predicated on the accepted norm of using third-party code, delivered via package managers, within commercial production applications.

Follow along with me for just a moment. This young man, Azer Koçulu, published open source code that was not only used by Fortune 500 companies, but was hosted by a third-party package manager, so those commercial projects depended on the continued existence and maintenance of the hosted packages.

In this particular case, Mr. Koçulu simply unpublished the modules. But what if instead he had updated his modules to contain malicious code? What if he had used something like the Browser Exploitation Framework, or something worse that injected actual malware into the pages serving his code? Such changes surely would’ve been detected quickly, but out of 2.5 million downloads in one month, how many of those sites would’ve pushed the code to production or used it in development before it was discovered? If combined with a targeted zero-day exploit, imagine the value of a botnet that consists of developers and clients of some of the largest websites in the world.

Traditionally information security has been boiled down to a triad of key concepts: confidentiality, integrity, and availability. Though that simplistic breakdown is somewhat antiquated, it still serves the purpose of providing a reasonable understanding of the goals of security.  Infosec professionals are focused on improving these key areas rather than just saying “No” for the sake of some miserly power-trip. Unfortunately we don’t always do a good job of communicating in those terms.

The missing left-pad incident has developers across the world suddenly reconsidering the availability impact of depending on a third-party package manager like NPM for production code. The topics prompted by this event won’t go away anytime soon. However, this experience should also cause us to question the impact on our systems’ integrity. If we allow third-party code to be updated and integrated into an application without re-vetting the updates, how can we trust the integrity of the application or its data? (spoiler: we can’t.)
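One partial mitigation is to pin exact dependency versions and lock the whole dependency tree, so nothing changes without a deliberate, reviewable update. A minimal sketch in package.json (module name and version are illustrative):

{
  "dependencies": {
    "left-pad": "1.0.0"
  }
}

Running npm shrinkwrap after installing records the exact version of every transitive dependency in npm-shrinkwrap.json, which future installs will honor. That helps with availability and makes every update an explicit decision, though it still trusts the registry to keep serving the same bits for a given version.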

The issue of malicious third-party code isn’t unique to the NPM JavaScript package manager either. A wide range of applications and systems depend on third-party package managers to host and maintain code from a mind-boggling array of open source developers. Take Debian for example. The Linux distribution has a well-defined process for maintaining standards across package updates, but almost nothing is done to validate the security of the code in those packages. If a maintainer “goes rogue” and introduces malicious code, there are defined processes in place to handle the situation once detected, but nothing to catch it before it affects users.

The same could be said for app stores such as Google’s Play and Apple’s App Store. In each case, some degree of security review may take place, but there’s no way they can accurately review every line of each application for security issues. Many of us may be mindful of which applications we install, but we tend to assume that updates to those applications are safe.

Even if we had reason to trust certain developers or modules, we have no way to know that each update actually comes from the original developer. The cyber attack division of a nation-state could easily look for popular open source modules or applications that can be purchased from their owners. In some cases, overworked owners may even just hand over the reins to a project if someone appears genuinely interested in seeing it maintained. Over time a very powerful arsenal could be built without anyone ever realizing it. When necessary, those packages could be weaponized and targeted, taking advantage of our regimented patching schedules. By the time anyone realized what happened, it would be too late.

I’ll take off the conspiracy theory hat for a bit. I think it’s safe to assume that most developers are not the unknowing pawns of some malicious nation-state. But it’s important to realize that most security vulnerabilities are found by assuming the worst-case rather than the best-case.

So what do we do about it? To a large degree, there’s not much we can do. Our entire system has been built, literally from the hardware up, on the idea of trusting others. That’s bad enough for hardware, but when we move to software, with its collaborative nature, it becomes infeasible to even consider any sort of standardized security testing for every application patch or update, especially in the open source world.

In some cases, such as the NPM JavaScript modules, it may be reasonable to disallow dependencies on third-party hosted code, or to require a code review of all third-party libraries before use. But that would be difficult and often impractical to sustain.

For application updates from third parties, I think we just have to recognize the threat and prepare appropriately. For example, it should be assumed that a sufficiently devoted adversary is going to compromise a system. While we should take reasonable steps to prevent that from happening, our goal should be to monitor, detect, and respond to the intrusion before serious damage can be done. Building those probabilities into our threat models provides a more practical estimate of business risk.

I don’t see third-party package managers, or the associated risks, going away anytime soon.  But we can all do a better job of understanding where our packages come from and who we’re trusting.

Nathan Sweaney is a Senior Security Consultant with Secure Ideas. If you are in need of a penetration test or other security consulting services you can contact him at nathan@secureideas.com, on Twitter @eternalsecurity, or visit the Secure Ideas – ProfessionallyEvil site for services provided.

 

Professionally Evil Insights: 2015

April 5th, 2016

Are you interested in knowing which vulnerabilities are the most commonly discovered in penetration tests?  How about which industries are doing the best (or worst) with improving on their security programs?  We pulled together all of our 2014 and 2015 findings, analyzed the results, and came up with some interesting (at least we think so) insights.  Welcome to our first ever findings report:

Professionally Evil Insights: 2015

Jason Gillam is a Principal Security Consultant with Secure Ideas. If you are in need of a penetration test or other security consulting services you can contact him at jgillam@secureideas.com, on Twitter @JGillam, or visit the Secure Ideas – ProfessionallyEvil site for services provided.

 

Reversing Type 7 Cisco Passwords

March 4th, 2016

While working on a recent pen test, I came across a few Cisco routers sitting on an internal network. The fact that they were using default cisco/cisco credentials made me cry a little inside, but wait, it gets worse… So I’m in the router, reviewing the running config, and I notice something interesting.

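The relevant lines of the running config looked something like this (usernames and stored values recreated for illustration; the Type 7 string is the one we’ll reverse below):

username admin privilege 15 password 7 07212E587D062A0014000E18
username donna privilege 15 secret 5 $1$Xb2.$8t0KqWm3a1cE9rJvYpFh61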

Note that both of these accounts have the same privilege level, but that the passwords are stored differently. This is because the first user was created with a command like this:

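Something along these lines, with the username and password invented for illustration (with “service password-encryption” enabled, IOS stores this as a Type 7 string):

username admin privilege 15 password S3cretPass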

Whereas the second user was created with this command:

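Again the credentials are illustrative; the only difference is the “secret” keyword:

username donna privilege 15 secret S3cretPass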

The difference between these two storage methods (password vs. secret) is the hashing algorithm. Type 7 passwords use a very weak algorithm that can be easily reversed, whereas the “secret” command uses an MD5 hash, which is much more secure. Because of this, it is never a good idea to use Type 7 passwords. This applies to both user accounts and passwords applied to the VTY or console lines.

So now that I’ve found these Type 7 passwords, I need a way to reverse them. There are several different tools and websites that have this capability, but there’s an easier way! I don’t even have to leave the router! Thanks to a nifty little feature called the “key chain”, I can reverse these passwords right here, right now!

1. First, we will enter config mode:

configure terminal

2. Next, we will create our key chain and give it the name of NEW:

key chain NEW

3. We will enter the first key:

key 1

4. Then we enter the key-string, which will include the number 7 for encryption type and the text of the “encrypted” password:

key-string 7 07212E587D062A0014000E18

* At this point, you may add more keys by repeating steps 3 and 4 if you have multiple passwords to reverse. Just make sure to increment the key number (key 2, key 3, etc.).

5. Finally, we do a show command, and voila! Passwords!

do show key chain
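The output will look something like this, with the decoded plaintext in quotes (shown here as a placeholder):

Key-chain NEW:
    key 1 -- text "<decoded password>"
        accept lifetime (always valid) - (always valid) [valid now]
        send lifetime (always valid) - (always valid) [valid now]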

Now that you’ve seen how incredibly easy it is to reverse these types of passwords, please go forth and check your routers! Ensure that all user accounts and enable passwords listed in the running config use the “secret” keyword.

 

Donna Fracarossi is a Security Consultant with Secure Ideas. If you are in need of a penetration test or other security consulting services you can contact her at donna@secureideas.com or visit the Secure Ideas – ProfessionallyEvil site for services provided.

 

We’re Just Like the NSA, and Nothing Like Them

February 11th, 2016

During penetration tests, and especially scoping calls, we often get quizzed about what secret, proprietary techniques we’ll use to gain access to privileged resources. Most folks assume they’re doing “good enough” or at least meeting “industry best practices,” so only the latest, unknown attacks will be successful. The notorious zero-day always seems to take the top spot in many organizations’ threat matrices. Fortunately, that’s not realistic.

Last month Rob Joyce spoke at the Usenix Enigma conference in San Francisco. Rob is the head of the NSA’s Tailored Access Operations (TAO) hacking team. These guys are the American counterpart to China’s APT1, the unit exposed by Mandiant. It’s their job to gain access to the systems of foreign nations. During his talk, Rob discussed how they go about their work, and lessons that businesses should learn from their experiences.

In several ways, their attack process is very similar to those of a good penetration test. Rob talked about the detailed reconnaissance phase in which they seek to know the network “better than the people running it.” Seeking out available information about the people, processes, & technologies in a network should be part of any good pen test. Part of that recon includes scanning the network, mapping out systems & services, looking for openings that may be exploitable.

“A lot of people think the nation states are running on this engine of zero days… There are so many more vectors that are easier, less risky, and more productive than going down that route.” Later he shared that “You know the technologies you intended to use in that network. We know the technologies that are actually in use in that network.” Similarly, in most of our pen tests, the issues that allow us to gain access are rarely fancy new attacks; instead, we chain together common flaws that are often unknown to the organization. Rob shared that one required key to good security, and to defending against any attacker, is to know and understand your network and systems extremely well. He suggested common best practices such as regular scanning, penetration tests, and tabletop red teaming to discuss and familiarize yourself with every aspect of your network.

However, there are a few critical differences between a TAO engagement and a standard penetration test that are important to realize. Understanding these differences is key to scoping out and receiving a good penetration test. An attack by a nation state is an active, live-fire event, not an exercise. Conversely, a penetration test is an exercise designed to educate. So even though many of the techniques outlined and used by Rob’s team are similar, their purpose is not.

The first major difference is the scope of the engagement. Every penetration test has limitations on the scope of the assessment, many of which are very reasonable. For example, targeting the personal email accounts or home networks of employees during an educational exercise is both illegal and unethical. Often, though, scope limitations are introduced for other reasons. Perhaps the business won’t accept the additional cost, or a compliance-driven test is focused only on checking a box on a form for auditors. In most cases, the fewer restrictions placed on an assessment, the more successful it will ultimately be at painting a true picture of risk exposure.

A second key difference between an APT attack and a pen test is the amount of resources afforded for the assessment. Rob spoke of having copies of known software, and experts in those systems who often know more about the technologies than the administrators themselves. Every firm and pen tester has a range of experiences and a network of colleagues that can be leaned on for support when attacking unfamiliar systems, but no private company can rival the checkbook of the U.S. government. For this reason, we strongly recommend against black-box testing, which is both impractical and ineffective. I mentioned previously that a pen test is an educational opportunity that should provide as much benefit as possible to the organization. A black-box test inherently limits that transfer of knowledge. Instead, nearly all of our tests are gray-box tests in which we utilize the existing resources of the organization to some degree in order to better assess security findings and associated risk. If you’re still approaching a pen test as a red-vs-blue contest, you’re doing it wrong.

The final issue is closely related. Rob discussed his team’s persistence (the P in APT, remember?). “We’ll poke and we’ll poke and we’ll wait and we’ll wait and we’ll wait…,” he shared. Eventually a hole will be introduced, maybe just for a short time during a maintenance window or while a vendor is fixing an issue. With nearly unlimited resources they can continue to scan and test and push until something changes. Most pen tests, though, rarely last more than a few weeks. However, a good test can take these issues into account. A good tester can help you consider different variables and potential events and how those may affect deployed security controls. It may not be realistic to wait for an event to occur, but often a simulated situation can be used to explore potential risk. Again this falls back on the importance of an open gray-box test.

Ultimately, it’s important to understand the difference between advanced attackers and a penetration test. There are similarities, but also key differences.  If your testers aren’t able to have that conversation, give us a call.

Rob had a lot more great advice for businesses on how to secure their network. His 35 minute talk is definitely worth a listen. https://www.youtube.com/watch?v=bDJb8WOJYdA

Nathan Sweaney is a Senior Security Consultant with Secure Ideas. If you are in need of a penetration test or other security consulting services you can contact him at nathan@secureideas.com, on Twitter @eternalsecurity, or visit the Secure Ideas – ProfessionallyEvil site for services provided.

 

Red Teaming – Not What You May Have Thought

January 27th, 2016

Lately, I’ve been doing a lot of reading on some less technical topics and I ran across “Red Team: How to Succeed By Thinking Like the Enemy” by Micah Zenko.  If you are like me, you’ve probably thought of the red team as being a penetration test group or some kind of adversary simulation.  You know, red teams are the pretend bad guys and blue teams are the good guys defending against them.  Reading Micah’s book altered my view of what red teaming is.  I learned about a world of non-technical red teaming and how that can be applied to our organizations.

Credit: www.redteamjournal.com

At its core, red teaming is the idea that we are going to look at things critically and with an eye towards an alternative point of view.  Conventional thought may lead you to believe that if you do X, then Y will be the result.  Because everyone believes this to be true, organizations make plans based on it and really don’t challenge it much.  A generic example may be “if we buy advertising, we will make more sales.”  Red teaming would question that point of view and might come up with alternative results by following questions through the process.  Questions like these might be asked:

  • What if buying advertising doesn’t result in enough sales to justify the expense?
  • What if buying ads in this particular format or delivery mechanism turns potential customers off?
  • What if the desired revenue level is never reached?  What impact will that have on our future plans?

In the world of cybersecurity and penetration testing, this could look something like the following:

IT says…

  • This server (or service) isn’t really important so we don’t need to worry about patching it yet.
  • We’ve got a SIEM and it will let us know if anything important is happening.
  • Our firewall rules are effective at preventing an attacker from moving through the network.

The red team asks…

  • What if that unimportant server has an important account logging into it?  What access could that give us?
  • What if that unimportant server has different network restrictions in place and being on this system would give us access to more important systems that I can’t get to right now?
  • When would the SIEM alert you and would it be in enough time?  Would anyone even read the alerts and how would they respond?

The list of What Ifs can go on and on.  In penetration testing, we would then start testing these scenarios to see what would happen.  We want to determine what the impact to the organization could be if an attacker strung several of these What Ifs together, and whether they would be able to get access to the victim’s secret sauce.  And we want to verify that controls are effective and perform as we expect.

I believe there is value in adopting some of the perspectives and techniques that red teaming employs.  We don’t have to do a full penetration test or red team exercise (though these are useful) to start getting some benefit from this discipline.  Simply sitting down with a few other people and questioning the assumptions or understandings that are being used in projects can be very helpful.  The possible results at the end of these scenarios may not actually occur, but a number of the questions and answers encountered during the process can spot risks that we may not have ever noticed.  We can then take steps to minimize the impact of these risks or the likelihood of them ever occurring.  And we can be better prepared to respond if the unlikely suddenly becomes reality, because we have at least thought through it beforehand and kicked around how we could respond.

I’d recommend reading or listening to Micah’s book.  I enjoyed listening to it quite a bit and I learned a lot.  I’d also recommend looking at other resources for red teaming.  I ran into the Red Team Journal as a result of Micah’s book and found some of the things there to be very thought provoking.

Jason Wood is a Principal Security Consultant with Secure Ideas. If you are in need of a penetration test or other security consulting services you can contact him at jason@secureideas.com, on Twitter @Jason_Wood, or visit the Secure Ideas – ProfessionallyEvil site for services provided.

 

Five Outdated Security Excuses

January 12th, 2016

The security industry as a whole has been known to criticize businesses large and small with respect to how they manage security.  Why does it so often seem like an afterthought?  How is it that today we still frequently find that security teams are understaffed (or nonexistent), that business decisions involving sensitive information are made without consideration for security, and that established, well-known best practices (like using strong passwords) are ignored?

This myth-busting post will cover five excuses (in no particular order) for poor security that should no longer be valid:

Excuse #1: SSL is Slow

SSL was very slow and CPU intensive… over a decade ago.  It is true that there was a time when SSL handshakes were required much more frequently and the speed of the average Internet connection was much slower than it is today.  Remember when “SSL accelerator” was a buzzword?  The performance hit from switching over to encrypted communication for virtually all connections is well worth the benefit.  Even high-volume sites that don’t necessarily host confidential information (e.g. Google’s search engine) have switched over to HTTPS for everything.

SSL should not be shunned due to performance concerns.  If your website is slow, the poor performance is most likely due to factors outside of SSL.  There are several studies available online that already prove this, so I won’t go through it here.  I have had recent conversations with some folks who make the point, “but using SSL made you vulnerable to Heartbleed!”  True, but still not a good excuse.  Once Heartbleed was announced, we had patches available in short order.  Fear of vulnerabilities in OpenSSL is not a good reason to communicate over unencrypted channels. And by the way, if by SSL you don’t really mean TLS, then it is time to update your vocabulary and probably patch a few things.

Excuse #2: Patching Might Break Stuff

True, but just because patching might break stuff doesn’t mean we should stop patching.  These days many vendors release patches on a frequent schedule and have thorough regression tests to minimize new issues from upgrading.  With all the vulnerabilities publicized over the past couple of years you would think that patching would be assumed, but unfortunately this is not the case.  And the most common line of excuses we hear from our clients is “we don’t want to risk breaking the release”.

Let’s take a step back and think about this.  If your service breaks, and you announce “service is temporarily offline because a security patch broke it”… as a consumer I will be annoyed, but much less annoyed with the outage than if you announce “your account has been compromised because we decided not to deploy a security patch”.  So when a vendor patch is announced, someone should read it, assess the risk to your services (i.e. Does it address an issue with a feature you use? What would be the impact?), and based on the level of risk decide when to schedule the patch.  But deciding not to patch anything for fear of breaking stuff is a lame excuse.

Excuse #3: We Don’t Have Funding for Security

This is one of the most common excuses we in the industry hear when we perform security reviews of companies. Many smaller companies don’t even dedicate a single full-time resource to security.  It’s the one IT person, who manages everything with a circuit board in it, who is also responsible for security.  And many larger companies might run scans but don’t follow through and remediate issues.

In this age just about every company must dedicate some funding towards security.  If you are handling credit card, healthcare, or personal data of any type, you are probably required either by law or contract to test and remediate security.  This is not optional.  I heard a story over the holidays about a friend of a friend, a small business owner, who ended up with ransomware on his laptop.  This laptop contained all of his business data.  Fortunately it was recovered, but this was a case of no patching, no backups, etc.  There are rudimentary security precautions every business owner should invest time and a little bit of money into.  If your business is large enough that you have a network (even a small one), you should be investing in security scans and remediating issues.

Excuse #4: We Aren’t a Target

Perhaps you feel that your business is too small, or doesn’t deal with information that might be interesting to an attacker.  Perhaps you actually are not subject to PCI or HIPAA and feel that the risk of being attacked is virtually non-existent.

Not true.  If you have an Internet address then you are a target.  Plain and simple.  I have recently set up a number of honeypots for the purpose of analyzing attacks and found that every time I made them available to the Internet they were attacked within a few hours.

Yes, some companies are big obvious targets (e.g. financial, healthcare, government, etc…).  But there are organizations of criminals out there who will attack anything they can find, without any idea of what’s behind the IP address.

Excuse #5: Users Can’t Handle Complex Passwords

Many companies tend to have relatively simple password policies.  For example, it is quite common to find an 8-character limit for passwords, or to only allow certain special characters.  Sometimes the excuse for this is that it is based on some system limitation (e.g. mainframe).  This should be very rare, as there are few modern systems (even mainframes) that still limit passwords to only 8 characters.  But the other excuse we often hear is along the lines of “management doesn’t want to have to deal with passwords that are harder to remember”.

A full discussion on password complexity is a topic for another day, but for now let’s just say that complexity is a factor of:

  • predictability (is it a common password found on a wordlist?)
  • character set (just alpha-numerics or can more printable characters be used?)
  • length

And perhaps surprisingly, length is usually much more of a factor than the character set.  Therefore, a 25-character fragment of lyrics from a favorite song will usually take much longer (i.e. centuries) to brute-force than “fo0B@r1!” (i.e. minutes).  My guess is that a fragment of favorite lyrics (or a poem, or a movie line, or a book passage) would be at least as easy to remember as eight mangled l33t-spe4k characters for most of us.  So yes, users probably can handle complex passwords.  They just don’t really understand what makes a password complex.
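To put rough numbers on that claim, here’s a quick back-of-the-envelope comparison in JavaScript (the guess rate is an assumption for illustration, and a real attack on “fo0B@r1!” would finish even faster by mangling a wordlist instead of brute-forcing):

// Rough brute-force cost: keyspace = charsetSize ^ length
var guessesPerSecond = 1e10;            // assumed offline cracking rate
var secondsPerYear = 3600 * 24 * 365;

function yearsToExhaust(charsetSize, length) {
  return Math.pow(charsetSize, length) / guessesPerSecond / secondsPerYear;
}

console.log(yearsToExhaust(72, 8));   // 8 chars, ~72 printable symbols: ~0.002 years (under a day)
console.log(yearsToExhaust(27, 25));  // 25 chars, lowercase plus space: ~2e18 years

Even with assumptions generous to the attacker, the long passphrase wins by many orders of magnitude.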

Jason Gillam is a Principal Security Consultant with Secure Ideas. If you are in need of a penetration test or other security consulting services you can contact him at jgillam@secureideas.com, on Twitter @JGillam, or visit the Secure Ideas – ProfessionallyEvil site for services provided.

 

Introduction to Metasploit Video

August 17th, 2015

The Metasploit Framework is a key resource for security assessors. Whether your goal is to become a commercial penetration tester, to demonstrate the risk of a vulnerability, or just to identify certain weaknesses in your environment, Metasploit is your tool. Understanding how it works, and how to get started, is the first step.

The Metasploit project was started in 2003 by HD Moore as an open source framework for developing and executing exploits. Its modular design allows developers to focus on the code unique to their objective without having to recreate components like transport methods or payloads.  It has since grown to include thousands of modules for exploitation, post-exploitation attacks, scanning, encoding, and more.

In addition to exploiting known vulnerabilities, Metasploit has the functionality to run port scans, identify systems with default passwords, use credentials or hashes to run commands on remote systems, and much more. You can even set up listeners for capturing user credentials via common protocols like HTTP and SMB to be used in multi-part attacks. And if the functionality you need doesn’t exist, it’s very easy to write your own new modules.
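As a taste, here’s what checking a network for reused SMB credentials looks like in msfconsole (the target range and credentials below are illustrative):

msf > use auxiliary/scanner/smb/smb_login
msf auxiliary(smb_login) > set RHOSTS 192.168.1.0/24
msf auxiliary(smb_login) > set SMBUser administrator
msf auxiliary(smb_login) > set SMBPass Password1
msf auxiliary(smb_login) > run

Nearly every module follows this same pattern: select it with use, configure it with set, and launch it with run (or exploit).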

Before you get to all that though, you have to understand how Metasploit works and get it up and running.  We put together a one-hour webinar to help you get started. Whether you’ve never used Metasploit, or just need a refresher course, this video will walk you through the basic steps of understanding how things work, getting it installed, and exploiting your first vulnerability.

Check it out here:

When you’re ready for the next step, we also have a 2-hour recorded training class designed to help you become more proficient in Metasploit. It offers tips and tricks that we use on engagements. You can purchase that course for $25 here: Recorded Classes




Nathan Sweaney is a Senior Security Consultant with Secure Ideas. If you are in need of a penetration test or other security consulting services you can contact him at nathan@secureideas.com or visit the Secure Ideas – Professionally Evil site for services provided.

 

Introducing Burp Correlator!

August 14th, 2015

This one is for you, web penetration testers!  This new Burp extension is designed to help with efficiency when you are testing a complex application full of parameters, or a series of applications, and just do not have enough time to thoroughly analyze each one.  It analyzes all the parameters in your in-scope traffic and presents them in a table.  But that’s just the start!  In addition to generating some basic statistics, it will intelligently attempt to determine the format of each parameter based on the values seen in the traffic.  Correlator will automatically and recursively base64 and URL decode, check for known hash lengths (e.g. MD5, SHA1, etc…), make note of familiar formats (e.g. 123-45-6789), decode BigIP cookies, and more!  It will also check to see if the value shows up in the response (i.e. was it reflected), and even whether the URL-decoded version was.
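To give a feel for that recursive decoding idea, here’s a minimal JavaScript sketch (illustrative only; it is not the extension’s actual code, which is a Java Burp extension):

// Recursively peel URL-encoding and base64 off a value, then flag known hash lengths.
function analyze(value, depth) {
  depth = depth || 0;
  if (depth > 5) return value;                      // don't decode forever
  try {
    var url = decodeURIComponent(value);
    if (url !== value) return analyze(url, depth + 1);
  } catch (e) { /* not valid URL-encoding; fall through */ }
  if (value.length % 4 === 0 && /^[A-Za-z0-9+/]+={0,2}$/.test(value)) {
    var text = Buffer.from(value, 'base64').toString('utf8');
    if (/^[\x20-\x7E]+$/.test(text)) return analyze(text, depth + 1);
  }
  if (/^[0-9a-f]{32}$/i.test(value)) return value + '  <-- looks like MD5';
  if (/^[0-9a-f]{40}$/i.test(value)) return value + '  <-- looks like SHA1';
  return value;
}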
It is a lot easier to explain how this works with a demonstration, so I made a video:



I’m very hopeful that this extension will make large-scale manual web penetration testing more palatable and significantly more efficient.  But I need help! Please check it out and give me all your feedback so I can make it even better.

Look for the Correlator (beta) download link on http://burpco2.com.



Jason Gillam is a Senior Security Consultant with Secure Ideas. If you are in need of a penetration test or other security consulting services you can contact him at jgillam@secureideas.com, on Twitter @JGillam, or visit the Secure Ideas – ProfessionallyEvil site for services provided.

 

Practical Pentest Advice from PCI

August 4th, 2015

The PCI Security Standards Council released a Penetration Testing Guidance information supplement in March 2015.  This document, while geared towards the Payment Card Industry, provides a lot of valuable advice to the providers of penetration tests and their clients, regardless of industry.  At 40 pages in length the document might seem a bit heavy, so this blog post is meant to summarize the main areas of the document and highlight those sections that will likely be the most interesting or useful to penetration testers.

The meat of the document includes the following main topics: Penetration Testing Components, Tester Qualifications, Methodologies, and Reporting.  In addition to these there is a sort of introductory section which does a side-by-side comparison of Vulnerability Scans to Penetration Tests.  I’ve read several similar comparisons in the past but the Penetration Testing Guidance document offers a nicely organized comparison table that many organizations may find helpful.

Scope is something that is covered ad nauseam throughout the document, but not without just cause. In a PCI-based penetration test it is critical to understand exactly what encompasses the Cardholder Data Environment (CDE).  Although this particular document focuses on scope and segmentation as they relate to the CDE, the same rules can be adapted to most other penetration testing scenarios. This is why it is critical to understand what is important to your client and how they are protecting it.  If it isn’t PCI data then perhaps it is Protected Health Information (PHI), or a production application cluster, or secret formulas (consider a pharmaceutical company, for example).  It is up to the penetration tester to properly understand not just the scope in terms of an IP address range, but also what information or functionality is considered the “crown jewels” and what sort of boundary, such as network segmentation, exists around it.

Another area of interest in this document is the section on Social Engineering.  According to the document, Social Engineering testing isn’t strictly required by PCI DSS.  However, it falls into a sort of gray area because it is recognized as a viable (and quite common) attack vector… which means it should actually be in scope.  My take is that the Penetration Testing Guidance document strongly encourages Social Engineering testing even though it isn’t strictly required.  It also seems to strongly advise that any company not including Social Engineering testing should document why it is being excluded from the penetration test.

The section on Penetration Tester Qualifications is a little thin, but there really isn’t much to say here, especially since PCI DSS doesn’t require any specific certifications.  The document does provide the common-sense advice that the quality of the test will be significantly influenced by the level of experience of the tester and testing organization, and that having some certification indicates a common body of knowledge.

The Penetration Testing Methodologies section is perhaps the most useful section to any penetration testing company.  It covers pre-engagement, engagement, and post-engagement activities.  Included are lists of documentation the client should provide before the test begins, topics to be covered under the Rules of Engagement, and a description of Success Criteria.  In addition this section covers a lot of detail on how past threats should be disclosed to the penetration testing team for consideration.  This includes everything from previous vulnerability scan results, to last year’s penetration test, to actual threats from the wild.  This information seems to be required under PCI but any company with the goal of reducing risk should provide this information to their penetration tester.  I found the Penetration Testing Methodologies section very worthwhile.  If you don’t read anything else in the document, do read this section.


The final section of the document (not including the use cases in the back) covers reporting guidelines.  Included is quite a bit of advice on how to lay out a report, how to cover retesting, and what to do about evidence retention.  I found it a little odd that a conclusion is never mentioned as part of the report, since most penetration testing reports I have seen do include one.  Aside from the report advice, the most interesting part of this section is the Report Evaluation Checklist.  The guide provides this checklist so that entities hiring penetration testers can ensure the report they receive includes everything it should.  It is a good takeaway in that it has the potential of becoming a de facto report quality standard.

So that’s it.  Although I don’t personally agree 100% with every detail, overall I’m impressed with what is covered and find the level of detail just right.  I believe this is generally solid guidance for not just PCI-based penetration tests but guidance that can be adapted to virtually any kind of penetration test. The full document is available here:

https://www.pcisecuritystandards.org/documents/Penetration_Testing_Guidance_March_2015.pdf


Jason Gillam is a Senior Security Consultant with Secure Ideas. If you are in need of a penetration test or other security consulting services you can contact him at jgillam@secureideas.com, on Twitter @JGillam, or visit the Secure Ideas – ProfessionallyEvil site for services provided.

 

Tip: Running BurpSuite on a Mac

May 21st, 2015

Here’s a quick tip I use to save some time when spinning up Burp Suite on a Mac.  I use Burp Suite frequently enough that having an icon in my Dock is warranted. I also like to start Burp Suite with more memory allocated to the JVM than the default.  To accomplish all of this, we will simply create an Automator workflow that runs Burp in a shell script.

I’m going to break this down step-by-step for those who are not familiar with Automator.  Before I get started though I will mention that I originally “borrowed” this tip from James (Jardine) when I saw him using it… and then made my own improvements.

To start, open the Automator app, which is a standard application that should already be installed, and create a new Automator ‘Application’ (the second icon on the ‘type of document’ prompt):

Next add the ‘Run Shell Script’ action, which can be found in the ‘Utilities’ library of actions:

Now you just need to replace the default text ‘cat’ with the right shell script.  I typically run Burp Suite with 4GB of RAM, which means I will run Java with the following options:
java -Xmx4g -jar <jar filename>

What about that filename?  Well that’s easy.  If you have been installing Burp in the default location under /Applications, then it will simply be something like:

/Applications/burpsuite_pro_v1.6.18.jar

…where the version number is that of the latest version you have installed.  All you have to do is modify your Automator script whenever you install a new version of Burp.  But wait a minute… with all the power of Unix running on a modern processor, there must be some way to have your Automator script find the most recent Burp jar file for you, right?  Of course there is!  We will replace the filename with an instruction to list all burpsuite_pro files ordered by modified time and return just the first one.  Now our final command looks like this:

java -Xmx4g -jar "$(ls -t /Applications/burpsuite* | head -n1)"

Save it and test it.  If all is working properly, Burp should start up.  There are a couple of quirks I will mention so that you know they are expected.  First, because the Automator script is calling Burp (which has its own window), you will see both the Automator script icon and the Java icon as active apps in the Dock.  Second, while the Automator script is running you will see a small spinning gear in the menu bar at the top of the screen.  Both of these are normal behavior.

The last step to polishing this solution off will be to change the icon of your new Automator app to one that is more meaningful.  This entails finding an appropriate icon, opening the ‘info’ tab for your app, and pasting it in.  I am not going to walk through those details here since others have already covered the task in detail (e.g. osxdaily shows us here).


Jason Gillam is a Senior Security Consultant with Secure Ideas. If you are in need of a penetration test or other security consulting services you can contact him at jgillam@secureideas.com, on Twitter @JGillam, or visit the Secure Ideas – ProfessionallyEvil site for services provided.