Announcing Tactical Sec Ops: Cloud Edition Online

August 3rd, 2016

2016 is shaping up to be an interesting and exciting time at Secure Ideas. We have always done training in one form or another. Many of you may have first heard of Secure Ideas through the training that we have done for organizations such as SANS, DerbyCon, Blackhat, OWASP, MISTI, Princeton University, Columbia University, and our webcasts. The evolution has continued this year and we are very proud to announce our first full length, on demand training class: Tactical Sec Ops: Cloud Edition!

This course is the same two-day course that we are launching at DerbyCon 6.0, but it will also be available via our online training platform! Check out our training site for details. Tactical Sec Ops: Cloud Edition will be released after DerbyCon, at the end of September 2016!

So what is Tactical Sec Ops: Cloud Edition about? IT has changed quite a bit over the years and there has been a huge push to move systems to cloud computing. Unfortunately, a lot of what we are familiar with in security doesn’t translate very well to this model. Firewalls? Nope, now we have security groups. How about network IDS? Nope, we don’t have access to all our network traffic anymore. Fortunately, not everything is new and unfamiliar, so there’s a lot of common ground to build on!

TSO: CE is designed to jump start your understanding of Amazon’s AWS offerings specifically (and cloud computing in general) and covers how we can secure, monitor and assess our AWS based systems. Here are some of the highlights from the course:

  • Introduction to Cloud Computing and AWS
  • Understanding and Working with Elastic Cloud Computing (EC2)
  • Identity and Access Management (IAM) Basics
  • AWS API Setup and Usage
  • Amazon’s Simple Storage Service (S3)
  • Understanding Amazon Virtual Private Cloud (VPC)
  • Command Line Tools
  • Automation within AWS
  • Configuring AWS for Developers
  • Security Monitoring in AWS
  • Using the AWS WAF
  • Getting Familiar with AWS Inspector

TSO: CE is a hands-on class, so be prepared for exercises that will get you acquainted with AWS’ console and services. Labs will walk you through setting up, tuning and monitoring your cloud environment. If you would like to learn more about the class and get notified when the class becomes available, just complete the form below. We’ll keep you up to date on what’s going on and when registration is available. We are really excited to offer this course and are looking forward to seeing you in class!

Subscribe to receive notifications about Tactical Sec Ops: CE

* indicates required

I want to be notified with news about Tactical Sec Ops: CE


Wireless Attacking EAP-TTLS with Kali 2 and ALFA AWUS051NH

May 16th, 2016

Is your corporate wifi as secure as you think it is?

A common configuration for WPA Enterprise wireless networks is to use PEAP (Protected EAP) or EAP-TTLS (Tunneled Transport Layer Security). Though this configuration solves several issues found in other configurations, it can also have a fatal flaw of its own. If a client is able to connect without properly verifying the authenticity of the access point (through its certificate), then there is an opportunity to capture the username and handshake. This information can be used to attempt to crack the user’s password offline using a password dictionary. To capture the necessary information we will need to set up a rogue access point (AP) that masquerades as the target AP. Setting up the rogue AP can be a bit of a challenge, so this blog post is a walkthrough of a rogue AP setup on Kali 2.

The rogue AP we will use for this purpose is hostapd-wpe (hostapd Wireless Pwnage Edition), a modified version of the open source hostapd daemon and the successor to the earlier FreeRADIUS-WPE. The modifications were developed by Joshua Wright and Brad Antoniewicz.

Installing hostapd-wpe:

To get this solution running on Kali 2, you first need to be sure you are using the rolling sources. At a minimum, you must add the following line to your /etc/apt/sources.list file if it is not already there:

deb http://http.kali.org/kali kali-rolling main contrib non-free

More details about the rolling sources can be found on the Kali docs website. My understanding is that the current recommendation is to move over to rolling repositories exclusively, but that decision is outside of the scope of this post.

If you had to make any changes to your sources, you will want to run an apt-get clean first.

Then to install hostapd-wpe you will want to do:

apt-get update
apt-get install hostapd-wpe

A sample configuration file will be produced at /etc/hostapd-wpe/hostapd-wpe.conf. In most cases this file will require modifications before it can be used. Since this is a rogue AP used for a wireless assessment or pentesting engagement, and not meant to run as a service, the easiest approach is to copy that file into an engagement working directory (giving us a record of the configuration used for the test) and simply edit the copy.

Verify your wireless interface:

Before we start editing the configuration file we should verify the name of our wireless interface. Typically this is wlan0 but if working with multiple cards it may be something else (e.g. wlan1, wlan2, etc…). You can use the iwconfig command to list wireless interfaces. If you don’t see any wireless interface listed, you may need to unplug and replug the USB adapter. This sometimes happens if running Kali inside a virtual machine.

Configuring hostapd-wpe:

The configuration file changes you will need to consider include:

  • interface=wlan0 Set this to your wireless interface, usually wlan0.
  • #driver=wired This is the default setting and must be commented out if using a wireless adapter (which is usually the case).
  • ssid=impersonated_ssid Uncomment and set this value to the SSID that is being impersonated.
  • hw_mode=g Uncomment and set to the value g for 2.4 GHz wireless g or n networks. Other values (e.g. a, b, ad) are possible. Ideally this should match the AP being impersonated.
  • channel=1 Uncomment and set to whichever channel you want to broadcast your AP on. Often this is 1 or 6.
  • wpa=2 This is a bit field: 1 enables WPA, 2 enables WPA2, and 3 enables both. Most modern networks run WPA2, so 2 is the usual choice.
  • wpa_pairwise=TKIP CCMP This may need to be modified to match the AP being impersonated. Removing TKIP from the list is likely appropriate for modern configurations.
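Pulling the settings above together, a minimal configuration for impersonating a 2.4 GHz WPA2-Enterprise network might look like the following sketch. The SSID is a placeholder, and the stock file ships with many more options (including the EAP user and certificate settings the hostapd-wpe package pre-configures) that can typically be left at their defaults:

```
interface=wlan0
ssid=CorpWiFi
hw_mode=g
channel=1
wpa=2
wpa_key_mgmt=WPA-EAP
wpa_pairwise=CCMP
```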

“Unmanage” your wireless adapter:

With the configuration file ready to go there is still one step left before we can turn on the server. By default Kali will probably have the NetworkManager service running, which is going to try to manage the state of our wireless interface for us. This makes it impossible to switch to AP mode, which is what we need. To address this, we can tell the NetworkManager service to ignore our device. This is done by modifying the /etc/NetworkManager/NetworkManager.conf file and adding the following section:

[keyfile]
unmanaged-devices=mac:00:aa:bb:cc:dd:ee

Note that the 00:aa:bb:cc:dd:ee should be the mac address of the ALFA. Multiple addresses can be specified if separated by a semicolon (i.e. ‘;’).

Once this change is made, unplug the USB adapter, wait a few seconds, and plug it back in. The LED on the ALFA should not be lit. Then follow these steps:

  • Type iwconfig to verify that the device and interface were recognized.
  • Type ifconfig to verify that the interface is not up (it should not be listed).
  • Type the following command, replacing wlan0 with your interface: ip link set wlan0 up
  • Type ifconfig to verify that the interface is up (it should now be present). The LED on the device should now be illuminated.

Start the rogue server:

If all of this works, you should be ready to go. Simply type:
hostapd-wpe ./hostapd-wpe.conf

Remember to replace ./hostapd-wpe.conf with the path to your custom configuration file.

You can test connecting to the rogue AP with any wireless device. Login will, of course, fail and you should get a certificate prompt, which is fine since most target users will click through and accept a bad cert. If all this passes you should start seeing stuff show up in the logs. A successfully collected credential will look something like:

mschapv2: Thu May 5 12:48:51 2016
username: test
    challenge: 35:30:33:52:07:d1:8c:78
    response: b8:53:4b:77:a5:a9:b2:ae:53:33:72:88:e4:d0:39:72:48:02:67:9b:b6:09:5f:c7
    jtr NETNTLM: test:$NETNTLM$3530335207d18c78$b8534b77a5a9b2ae53337288e4d039724802679bb6095fc7
wlan0: CTRL-EVENT-EAP-FAILURE 08:74:02:5d:de:86

The values from the challenge and response can then be fed into the asleap tool, also written by Joshua Wright and also present in Kali 2.
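For example, an invocation against the capture above looks something like this (the wordlist path is a common Kali default and is an assumption; check asleap’s usage output on your system for the exact flags):

```
asleap -C 35:30:33:52:07:d1:8c:78 -R b8:53:4b:77:a5:a9:b2:ae:53:33:72:88:e4:d0:39:72:48:02:67:9b:b6:09:5f:c7 -W /usr/share/wordlists/rockyou.txt
```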

Closing Notes:

If you are trying to do something that is not covered in this blog post, I would encourage you to take a look at the latest edition of Hacking Exposed Wireless, by Joshua Wright and Johnny Cache. That is where I found much of the information covered in this blog post.

Jason Gillam is a Principal Security Consultant with Secure Ideas. If you are in need of a penetration test or other security consulting services you can contact him on Twitter @JGillam, or visit the Secure Ideas – ProfessionallyEvil site for services provided.


Whose Code Are You Running?

April 14th, 2016

One of my favorite ways to eat Oreo cookies is to twist the two halves apart, carefully set the filling aside, eat both chocolate halves, and then slowly enjoy the indulgent filling. Without milk, this is by far the best way to fully indulge in both parts of the cookie. But with a glass of milk the implementation changes. Not only is it harder to dunk the separate pieces, but the actual taste of the whole cookie soaked in milk seems much better than the sum of its parts.

In many organizations, Development and Security are two parts of a cookie. Individually they’re both incredibly important, but too many miss the incredible combined experience. Often both sides hold fast to a false sense of superiority that would’ve made Dr. Seuss’ Sneetches proud. In fact it’s very common for us to witness clashes between the two teams when it comes to penetration tests or discussing flaws.

When I was first introduced to information security, someone told me that developers focus on finding one way to make something work while security folks focus on finding all the possible ways that could break it. I don’t believe that’s fair. At the end of the day I think we’re all focused on making things better; we just come at it from different perspectives. Developers tend to focus on improving the experience of legitimate users while Security has the goal of preventing the experience of illegitimate users. Two parts of the same cookie you might say.

With this dualistic approach in mind, consider the recent fiasco surrounding the unpublished Kik module on the popular NPM JavaScript package manager. If you’re not familiar with the story, an angry open source developer, bitter over the loss of a module name (kik) to the company that owns the trademark, decided to unpublish all of his 273 published modules. Most of these were unimportant and unnoticed, but one, left-pad, was used extensively throughout the JavaScript world. NPM reported nearly 2.5 million downloads of left-pad in the month prior to this event.

If you’re not familiar with modern-day JavaScript development, it’s very common for libraries to have dependencies that require dependencies that require dependencies in a seemingly unending cascade of code. Each project is like a giant pyramid that only stands if each stone is placed in exactly the right spot. When the left-pad stone was removed, literally thousands of software projects came crashing down. Companies as large as Facebook, Netflix, and Spotify were affected, all because of a rather simple 11-line bit of missing code.

A plethora of blog posts and forum rants have since come out explaining and justifying the actions of each party to this exciting debacle. Like any good philosophical flame war, both sides of the debates have good points. But there is one angle that hasn’t been well covered. This whole situation was predicated on the accepted norm of using third party code, delivered via package managers, within commercial production applications.

Follow along with me for just a moment. This young man, Azer Koçulu, published open source code that was not only used by Fortune 500 companies, but was hosted by a third party package manager so that those commercial projects depended on the existence and maintenance of the hosted packages.

In this particular case, Mr. Koçulu simply unpublished the modules. But what if instead he had updated his modules to contain malicious code? What if he used something like the Browser Exploitation Framework or something worse that injected actual malware attacks into the pages serving his code? Such changes surely would’ve been detected quickly, but out of 2.5 million downloads in one month, how many of those sites would’ve been pushed to production or used in development before it was discovered? If combined with a targeted zero-day exploit, imagine the value of a bot-net that consists of developers and clients of some of the largest websites in the world.

Traditionally information security has been boiled down to a triad of key concepts: confidentiality, integrity, and availability. Though that simplistic breakdown is somewhat antiquated, it still serves the purpose of providing a reasonable understanding of the goals of security.  Infosec professionals are focused on improving these key areas rather than just saying “No” for the sake of some miserly power-trip. Unfortunately we don’t always do a good job of communicating in those terms.

This missing left-pad incident has demonstrated a situation in which developers across the world are suddenly reconsidering the impact to availability of depending on a third-party package manager like NPM for production code. The topics prompted by this event won’t go away anytime soon. However, this experience should also cause us to question the impact on our systems’ integrity. If we allow third party code to be updated and integrated into an application without re-vetting the updates, how can we trust the integrity of the application or its data? (Spoiler: we can’t.)

The issue of malicious third party code isn’t unique to the NPM JavaScript package manager either. There’s a wide range of applications and systems that depend on third-party package managers to host and maintain code from a mind-boggling array of open source developers. Take Debian for example. The Linux distribution has a very well-defined process for maintaining standards across package updates, but almost nothing is done to validate the security of the code in those packages. If a maintainer “goes rogue” and introduces malicious code, there are defined processes in place to handle the situation once detected, but nothing to catch it before it affects users.

The same could be said for app stores such as Google’s Play and Apple’s App Store. In each case, some degree of security review may take place, but there’s no way they can accurately review every line of each application for security issues. Many of us may be mindful of what applications we install, but we just assume that application updates are supposed to be safe.

Even if we had reason to trust certain developers or modules, we have no way to know that each update actually comes from the original developer. The cyber attack division of a nation-state could easily look for popular open source modules or applications that can be purchased from the owners. In some cases overworked owners may even just hand over the reins to a project if someone appears genuinely interested in seeing it maintained. Over time a very powerful arsenal could be built without anyone ever realizing it. When necessary, those packages could be weaponized and targeted to take advantage of our regimented patching schedules. By the time anyone realized what happened, it would be too late.

I’ll take off the conspiracy theory hat for a bit. I think it’s safe to assume that most developers are not the unknowing pawns of some malicious nation-state. But it’s important to realize that most security vulnerabilities are found by assuming the worst-case rather than the best-case.

So what do we do about it? To a large degree, there’s not much we can do. Our entire system has been built, literally from the hardware up, on the idea of trusting others. That’s bad enough for hardware, but when we move to software and its collaborative nature it becomes ridiculously unfeasible to even consider any sort of standardized security testing for every application patch or update, especially in the open source world.

In some cases, such as the NPM JavaScript modules, it may be reasonable to not allow dependencies on third party code, or to integrate a code review into all third-party libraries before use. But that would be difficult and often impractical to sustain.

For application updates from third-party packages, I think we just have to recognize the threat and prepare appropriately. For example it should be assumed that a sufficiently devoted adversary is going to compromise a system. While we should take reasonable steps to prevent that from happening, our goal should be to monitor, detect, and respond to the intrusion before serious damage can be done. Building those probabilities into our threat models provides a more practical estimate of business risk.

I don’t see third-party package managers, or the associated risks, going away anytime soon.  But we can all do a better job of understanding where our packages come from and who we’re trusting.

Nathan Sweaney is a Senior Security Consultant with Secure Ideas. If you are in need of a penetration test or other security consulting services you can contact him on Twitter @eternalsecurity, or visit the Secure Ideas – ProfessionallyEvil site for services provided.


Professionally Evil Insights: 2015

April 5th, 2016

Are you interested in knowing which vulnerabilities are the most commonly discovered in penetration tests?  How about which industries are doing the best (or worst) with improving on their security programs?  We pulled together all of our 2014 and 2015 findings, analyzed the results, and came up with some interesting (at least we think so) insights.  Welcome to our first ever findings report:

Professionally Evil Insights: 2015

Jason Gillam is a Principal Security Consultant with Secure Ideas. If you are in need of a penetration test or other security consulting services you can contact him on Twitter @JGillam, or visit the Secure Ideas – ProfessionallyEvil site for services provided.


Reversing Type 7 Cisco Passwords

March 4th, 2016

While working on a recent pen test, I came across a few Cisco routers sitting on an internal network. The fact that they were using default cisco/cisco credentials made me cry a little inside, but wait, it gets worse… So I’m in the router, reviewing the running config, and I notice something interesting.

username admin privilege 15 password 7 07212E587D062A0014000E18
username admin2 privilege 15 secret 5 $1$....

Note that both of these accounts have the same privilege level, but that the passwords are stored differently. This is because the first user was created with a command like this:

username admin privilege 15 password NotSoSecret

Whereas, the second user was created with this command:

username admin2 privilege 15 secret NotSoSecret

The difference between these two storage methods (password or secret) is the hashing algorithm. Type 7 passwords use a very weak, reversible encoding, while the “secret” command utilizes a salted MD5 hash (Type 5), which is much more secure. Due to this, it is never a good idea to use Type 7 passwords. This policy applies to both user accounts and passwords applied to the VTY or Console lines.

So now that I’ve found these Type 7 passwords, I need a way to reverse them. There are several different tools and websites that have this capability, but there’s an easier way! I don’t even have to leave the router! Thanks to a nifty little feature called the “key chain”, I can reverse these passwords right here, right now!

1. First, we will enter config mode:

configure terminal

2. Next, we will create our key chain and give it the name of NEW:

key chain NEW

3. We will enter the first key:

key 1

4. Then we enter the key-string, which will include the number 7 for encryption type and the text of the “encrypted” password:

key-string 7 07212E587D062A0014000E18

* At this point, you may add more keys by repeating steps 3 and 4 if you have multiple passwords to reverse. Make sure to increment the key number though (key 2, key 3, etc.).

5. Finally, we do a show command, and voila! Passwords!

do show key chain
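Putting the whole sequence together, a session looks something like the following transcript (the prompts and output formatting are illustrative and vary by IOS version):

```
router# configure terminal
router(config)# key chain NEW
router(config-keychain)# key 1
router(config-keychain-key)# key-string 7 07212E587D062A0014000E18
router(config-keychain-key)# do show key chain
Key-chain NEW:
    key 1 -- text "NotSoSecret"
        accept lifetime (always valid) - (always valid) [valid now]
        send lifetime (always valid) - (always valid) [valid now]
```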

Now that you’ve seen how incredibly easy it is to reverse these types of passwords, please go forth and check your routers! Ensure that all user accounts and enable passwords listed in the running config are preceded by the word “secret”.
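If you ever need to reverse a Type 7 string without access to a router, the algorithm is simple enough to script yourself. Here is a quick shell sketch of the publicly known scheme, run against the same sample string used in the key chain example above (the XOR key below is the well-known constant shared by every IOS build):

```shell
#!/bin/bash
# Decode a Cisco Type 7 string: the first two digits are an offset into
# a fixed, publicly known key; each remaining hex pair is one password
# byte XORed with successive characters of that key.
key='dsfd;kfoA,.iyewrkldJKDHSUBsgvca69834ncxv9873254k;fg87'
enc='07212E587D062A0014000E18'

off=$((10#${enc:0:2}))            # key offset from the first two digits
plain=''
for ((i = 2; i < ${#enc}; i += 2)); do
  byte=$((16#${enc:i:2}))         # next encrypted byte
  kchar=${key:$((off % ${#key})):1}
  kval=$(printf '%d' "'$kchar")   # ASCII value of the key character
  plain+=$(printf "\\$(printf '%03o' $((byte ^ kval)))")
  (( off++ ))
done

echo "$plain"
```

A Vigenère-style XOR against a hard-coded key is the entire “encryption”; this is why Type 7 should be treated as plaintext.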


Donna Fracarossi is a Security Consultant with Secure Ideas. If you are in need of a penetration test or other security consulting services you can contact her or visit the Secure Ideas – ProfessionallyEvil site for services provided.


We’re Just Like the NSA, and Nothing Like Them

February 11th, 2016

During penetration tests, and especially scoping calls, we often get quizzed about what secret, proprietary techniques we’ll use to gain access to privileged resources. Most folks assume they’re doing “good enough” or at least meeting “industry best practices” so only the latest, unknown attacks will be successful. The notorious ZeroDay always seems to take the top spot in many organizations’ threat matrix. Fortunately, that’s not realistic.

Last month Rob Joyce spoke at the Usenix Enigma conference in San Francisco. Rob is the head of the NSA’s Tailored Access Operations (TAO) hacking team. These guys are essentially the American counterpart to groups like China’s APT1, the unit exposed by Mandiant. It’s their job to gain access to the systems of foreign nations. During his talk, Rob discussed how they go about their work, and lessons that businesses should learn from their experiences.

In several ways, their attack process is very similar to those of a good penetration test. Rob talked about the detailed reconnaissance phase in which they seek to know the network “better than the people running it.” Seeking out available information about the people, processes, & technologies in a network should be part of any good pen test. Part of that recon includes scanning the network, mapping out systems & services, looking for openings that may be exploitable.

“A lot of people think the nation states are running on this engine of zero days… There are so many more vectors that are easier, less risky, and more productive than going down that route.” Later he shared that “You know the technologies you intended to use in that network. We know the technologies that are actually in use in that network.” Similarly, in most of our pen tests, the issues that allow us to gain access are rarely a new fancy attack as opposed to chaining together common flaws that are often unknown to the organization. Rob shared that one required key to good security, and to defending against any attacker, is to know and understand your network and systems extremely well. He suggested common best practices such as regular scanning, penetration tests, and table top red teaming to discuss and familiarize yourself with every aspect of your network.

However there are a few critical differences between a TAO engagement and a standard penetration test that are important to realize. And understanding these differences is key to scoping out & receiving a good penetration test.  An attack by a nation state is an active, live-fire event; not an exercise. Conversely, a penetration test is an exercise designed to educate. So even though many of the techniques outlined and used by Rob’s team are similar, their purpose is not.

The first major difference is the scope of the engagement. Every penetration test has limitations on the scope of the assessment, many of which are very reasonable. For example, targeting the personal email accounts or home networks of employees during an educational test is both illegal and unethical. Oftentimes though, scope limitations are introduced for other reasons. Perhaps the business won’t accept the additional cost, or a compliance-driven test is focused only on checking a box on a form for auditors. In most cases, the fewer restrictions placed on an assessment, the more successful it will ultimately be at painting a true picture of risk exposure.

A second key difference between an APT attack and a pen test is the amount of resources afforded for the assessment. Rob spoke of having copies of known software and experts in those systems that often know more about the technologies than the administrators themselves. Every firm and pen tester has a range of experiences and a network of colleagues that can be leaned on for support when attacking unfamiliar systems, but no private company can rival the checkbook of the U.S. government.  For this reason, we strongly recommend against black-box testing which is both impractical and ineffective. I mentioned previously that a pen-test is an educational opportunity that should provide as much benefit as possible to the organization. A black-box test inherently limits that transfer of knowledge. Alternatively nearly all of our tests are gray-box tests in which we utilize the existing resources of the organization to some degree in order to better assess security findings and associated risk. If you’re still approaching a pen test as a red-vs-blue contest, you’re doing it wrong.

The final issue is closely related. Rob discussed his team’s persistence (APT, remember?). “We’ll poke and we’ll poke and we’ll wait and we’ll wait and we’ll wait…,” he shared. Eventually a hole will be introduced, maybe just for a short time during a maintenance window or while a vendor is fixing an issue. With nearly unlimited resources they can continue to scan and test and push until something changes. Most pen tests, though, rarely last more than a few weeks. However, a good test can take these issues into account. A good tester can help you consider different variables and potential events and how those may affect deployed security controls. It may not be realistic to wait for an event to occur, but often a simulated situation can be used to explore potential risk. Again this falls back on the importance of an open gray-box test.

Ultimately, it’s important to understand the difference between advanced attackers and a penetration test. There are similarities, but also key differences.  If your testers aren’t able to have that conversation, give us a call.

Rob had a lot more great advice for businesses on how to secure their network. His 35 minute talk is definitely worth a listen.

Nathan Sweaney is a Senior Security Consultant with Secure Ideas. If you are in need of a penetration test or other security consulting services you can contact him on Twitter @eternalsecurity, or visit the Secure Ideas – ProfessionallyEvil site for services provided.


Red Teaming – Not What You May Have Thought

January 27th, 2016

Lately, I’ve been doing a lot of reading on some less technical topics and I ran across “Red Team: How to Succeed By Thinking Like the Enemy” by Micah Zenko.  If you are like me, you’ve probably thought of the red team as being a penetration test group or some kind of adversary simulation.  You know, red teams are the pretend bad guys and blue teams are the good guys defending against them.  Reading Micah’s book altered my view of what red teaming is.  I learned about a world of non-technical red teaming and how that can be applied to our organizations.



At its core, red teaming is the idea that we are going to look at things critically and with an eye towards an alternative point of view.  Conventional thought may lead you to believe that if you do X, then Y will be the result(s).  Because everyone believes this to be true, organizations make plans based on it and really don’t challenge it much.  A generic example may be “if we buy advertising, we will make more sales.”  Red teaming this would question that point of view and may come up with alternative results based on following questions through the process.  Questions like this may be asked:

  • What if buying advertising doesn’t result in enough sales to justify the expense?
  • What if buying ads in this particular format or delivery mechanism turns potential customers off?
  • What if the desired revenue level is never reached?  What impact will that have on our future plans?

In the world of cybersecurity and penetration testing, this could look something similar to the following:

IT says…

  • This server (or service) isn’t really important so we don’t need to worry about patching it yet.
  • We’ve got a SIEM and it will let us know if anything important is happening.
  • Our firewall rules are effective at preventing an attacker from moving through the network.

The red team asks…

  • What if that unimportant server has an important account logging into it?  What access could that give us?
  • What if that unimportant server has different network restrictions in place and being on this system would give us access to more important systems that I can’t get to right now?
  • When would the SIEM alert you and would it be in enough time?  Would anyone even read the alerts and how would they respond?

The list of What Ifs can go on and on.  In penetration testing, we would then start testing these scenarios to see what would happen.  We want to determine what the impact to the organization could be if an attacker ran a string of these What Ifs together and would they be able to get access to the victim’s secret sauce.  And we want to verify that controls are effective and perform as we expect.

I believe that there is value in taking in some of the perspectives and techniques that red teaming employs.  We don’t have to do a full penetration test or red team exercise (though these are useful) to start getting some benefit from this discipline.  Simply sitting down with a few other people and questioning the assumptions or understandings that are being used in projects can be very helpful.  The possible results at the end of these scenarios may not actually occur, but a number of the questions and answers that will be encountered during the process can spot risks that we may not have ever noticed.  We can then take steps to minimize the impact of these risks or the likelihood of them ever occurring.  We can be better prepared to respond if the unlikely suddenly becomes reality because we have at least thought through it beforehand and kicked around how we could respond.

I’d recommend reading or listening to Micah’s book.  I enjoyed listening to it quite a bit and I learned a lot.  I’d also recommend looking at other resources for red teaming.  I ran into the Red Team Journal as a result of Micah’s book and found some of the things there to be very thought provoking.

Jason Wood is a Principal Security Consultant with Secure Ideas. If you are in need of a penetration test or other security consulting services you can contact him on Twitter @Jason_Wood, or visit the Secure Ideas – ProfessionallyEvil site for services provided.


Five Outdated Security Excuses

January 12th, 2016

The security industry as a whole has been known to criticize businesses large and small with respect to how they manage security.  Why does it so often seem like an afterthought?  How is it that today we still frequently find that security teams are understaffed (or nonexistent), that business decisions involving sensitive information are made without consideration for security, and that established, well-known best practices (like using strong passwords) are ignored?

This myth-busting post will cover five excuses (in no particular order) for poor security that should no longer be valid:

Excuse #1: SSL is Slow

SSL was very slow and CPU intensive… over a decade ago.  It is true that there was a time when SSL handshakes were required much more frequently and the average Internet connection was much slower than it is today.  Remember when “SSL accelerator” was a buzzword?  The performance hit from switching to encrypted communication for virtually all connections is well worth the benefit.  Even high-volume sites that don’t necessarily host confidential information (e.g. Google’s search engine) have switched over to HTTPS for everything.

SSL should not be shunned due to performance concerns.  If your website is slow, the poor performance is most likely due to factors other than SSL.  There are several studies available online that already prove this, so I won’t go through it here.  I have had recent conversations with folks who make the point “but using SSL made you vulnerable to Heartbleed!”  True, but still not a good excuse.  Once Heartbleed was announced, patches were available in short order.  Fear of vulnerabilities in OpenSSL is not a good reason to communicate over unencrypted channels. And by the way, if by SSL you don’t really mean TLS, then it is time to update your vocabulary and probably patch a few things.

Excuse #2: Patching Might Break Stuff

True, but just because patching might break stuff doesn’t mean we should stop patching.  These days many vendors release patches on a frequent schedule and run thorough regression tests to minimize new issues from upgrading.  With all the vulnerabilities publicized over the past couple of years you would think that patching would be a given, but unfortunately this is not the case.  The most common excuse we hear from our clients is “we don’t want to risk breaking the release.”

Let’s take a step back and think about this.  If your service breaks, and you announce “service is temporarily offline because a security patch broke it”… as a consumer I will be annoyed, but much less annoyed with the outage than if you announce “your account has been compromised because we decided not to deploy a security patch”.  So when a vendor patch is announced, someone should read it, assess the risk to your services (i.e. Does it address an issue with a feature you use? What would be the impact?), and based on the level of risk decide when to schedule the patch.  But deciding not to patch anything for fear of breaking stuff is a lame excuse.

Excuse #3: We Don’t Have Funding for Security

This is one of the most common excuses we in the industry hear when we perform security reviews of companies. Many smaller companies don’t dedicate even a single full-time resource to security; it’s the one IT person who manages everything with a circuit board in it who is also responsible for security.  And many larger companies might run scans but don’t follow through and remediate the issues they find.

In this age just about every company must dedicate some funding to security.  If you handle credit card, healthcare, or personal data of any type, you are probably required either by law or by contract to test and remediate security issues.  This is not optional.  I heard a story over the holidays about a friend of a friend, a small business owner, who ended up with ransomware on his laptop.  This laptop contained all of his business data.  Fortunately it was recovered, but this was a case of no patching, no backups, and so on.  There are rudimentary security precautions every business owner should invest time and a little bit of money into.  If your business is large enough to have a network (even a small one), you should be investing in security scans and remediating the issues they find.

Excuse #4: We Aren’t a Target

Perhaps you feel that your business is too small, or doesn’t deal with information that might be interesting to an attacker.  Perhaps you actually are not subject to PCI or HIPAA and feel that the risk of being attacked is virtually non-existent.

Not true.  If you have an Internet address then you are a target.  Plain and simple.  I have recently set up a number of honeypots for the purpose of analyzing attacks and found that every time I made them available to the Internet they were attacked within a few hours.

Yes, some companies are big obvious targets (e.g. financial, healthcare, government, etc…).  But there are organizations of criminals out there who will attack anything they can find, without any idea of what’s behind the IP address.
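The honeypots mentioned above don’t have to be elaborate to make the point.  Here is a minimal, hypothetical sketch in Python (not the actual honeypots I deployed, which emulated real services) that just accepts connections and records who knocked:

```python
import datetime
import socket


def run_honeypot(port, log=None, max_conns=1, host="127.0.0.1"):
    """Minimal TCP 'honeypot': accept connections and record the source.

    A real deployment would bind a public interface and present a
    service banner; this sketch only logs each connection attempt.
    """
    if log is None:
        log = []
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(5)
    try:
        for _ in range(max_conns):
            conn, addr = srv.accept()
            # Record a timestamp and the source IP of the attempt.
            log.append((datetime.datetime.utcnow().isoformat(), addr[0]))
            conn.close()
    finally:
        srv.close()
    return log
```

Bind something like this to a commonly scanned port (22, 23, 3389) on an Internet-facing host and, in my experience, the log starts filling within hours.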

Excuse #5: Users Can’t Handle Complex Passwords

Many companies tend to have relatively simple password policies.  For example, it is quite common to find an 8-character limit for passwords, or to only allow certain special characters.  Sometimes the excuse for this is that it is based on some system limitation (e.g. mainframe).  This should be very rare, as there are few modern systems (even mainframes) that still limit passwords to only 8 characters.  But the other excuse we often hear is along the lines of “management doesn’t want to have to deal with passwords that are harder to remember”.

A full discussion on password complexity is a topic for another day, but for now let’s just say that complexity is a factor of:

  • predictability (is it a common password found on a wordlist?)
  • character set (just alpha-numerics or can more printable characters be used?)
  • length

And perhaps surprisingly, length is usually a much bigger factor than the character set.  A 25-character fragment of the lyrics to a favorite song will usually take far longer (i.e. centuries) to brute-force than “fo0B@r1!” (i.e. minutes against a wordlist with mangling rules).  My guess is that for most of us, a fragment of favorite lyrics (or a poem, movie line, or book passage) would be at least as easy to remember as eight mangled l33t-spe4k characters.  So yes, users probably can handle complex passwords.  They just don’t really understand what makes a password complex.
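To put rough numbers on the length-versus-character-set claim, here is a back-of-the-envelope sketch in Python.  The guess rate of 10^10 per second is an assumed figure for an offline cracking rig, not a measurement, and this models only exhaustive search (a wordlist attack finds mangled dictionary words far faster):

```python
def brute_force_years(length, charset_size, guesses_per_second=1e10):
    # Worst-case time to exhaust the full keyspace at the assumed guess rate.
    keyspace = charset_size ** length
    seconds = keyspace / guesses_per_second
    return seconds / (60 * 60 * 24 * 365)


# 8 characters drawn from all ~95 printable ASCII characters ("fo0B@r1!")
mangled = brute_force_years(8, 95)

# 25-character lyric fragment using only lowercase letters and spaces (27 symbols)
passphrase = brute_force_years(25, 27)
```

Even with a tiny 27-symbol alphabet, the 25-character passphrase’s keyspace works out to something on the order of 10^18 years of worst-case search, while the 8-character password falls in days at the same rate (and in minutes once wordlists and mangling rules are in play).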

Jason Gillam is a Principal Security Consultant with Secure Ideas. If you are in need of a penetration test or other security consulting services, you can reach him on Twitter @JGillam or visit the Secure Ideas – Professionally Evil site for services provided.


Introduction to Metasploit Video

August 17th, 2015

The Metasploit Framework is a key resource for security assessors. Whether your goal is to become a commercial penetration tester, to demonstrate the risk of a vulnerability, or simply to identify weaknesses in your environment, Metasploit is your tool. Understanding how it works, and how to get started, is the first step.

The Metasploit project was started in 2003 by HD Moore as an open source framework for developing and executing exploits. Its modular design allows developers to focus on the code unique to their objective without having to recreate components like transport methods or payloads. It has since grown to include thousands of modules for exploitation, post-exploitation attacks, scanning, encoding, and more.

In addition to exploiting known vulnerabilities, Metasploit can run port scans, identify systems with default passwords, use credentials or hashes to run commands on remote systems, and much more. You can even set up listeners to capture user credentials via common protocols like HTTP and SMB for use in multi-part attacks. And if the functionality you need doesn’t exist, it’s very easy to write your own modules.

Before you get to all that though, you have to understand how Metasploit works and get it up and running.  We put together a one-hour webinar to help you get started. Whether you’ve never used Metasploit, or just need a refresher course, this video will walk you through the basic steps of understanding how things work, getting it installed, and exploiting your first vulnerability.

Check it out here:

When you’re ready for the next step, we also have a 2-hour recorded training class designed to help you become more proficient in Metasploit. It offers tips and tricks that we use on engagements. You can purchase that course for $25 here: Recorded Classes

Nathan Sweaney is a Senior Security Consultant with Secure Ideas. If you are in need of a penetration test or other security consulting services, visit the Secure Ideas – Professionally Evil site for services provided.


Introducing Burp Correlator!

August 14th, 2015

This one is for you, web penetration testers!  This new Burp extension is designed to improve efficiency when you are testing a complex application full of parameters, or a series of applications, and just do not have enough time to thoroughly analyze each one.  It analyzes all the parameters in your in-scope traffic and presents them in a table.  But that’s just the start!  In addition to generating some basic statistics, it will intelligently attempt to determine the format of each parameter based on the values seen in the traffic.  Correlator will automatically and recursively base64- and URL-decode values, check for known hash lengths (e.g. MD5, SHA1, etc…), make note of familiar formats (e.g. 123-45-6789), decode BigIP cookies, and more!  It will also check whether the value shows up in the response (i.e. was it reflected), and even whether the URL-decoded version was.
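To give a feel for the kind of analysis Correlator performs, here is an illustrative sketch in Python (not the extension’s actual implementation) that recursively URL- and base64-decodes a parameter value and flags hex strings matching well-known hash lengths:

```python
import base64
import binascii
import re
from urllib.parse import unquote

# Hex-string lengths that suggest a known hash algorithm.
HASH_LENGTHS = {32: "MD5?", 40: "SHA1?", 64: "SHA-256?"}


def analyze(value, depth=0, notes=None):
    """Recursively decode a parameter value, collecting (text, tag) notes."""
    if notes is None:
        notes = []
    if depth > 5:  # guard against pathological nesting
        return notes
    # Flag values that look like a hex digest of a known length.
    if re.fullmatch(r"[0-9a-fA-F]+", value) and len(value) in HASH_LENGTHS:
        notes.append((value, HASH_LENGTHS[len(value)]))
    # Try URL decoding; recurse if it changed anything.
    decoded = unquote(value)
    if decoded != value:
        notes.append((decoded, "url-decoded"))
        analyze(decoded, depth + 1, notes)
    # Try strict base64 decoding; recurse on printable ASCII results.
    try:
        text = base64.b64decode(value, validate=True).decode("ascii")
        if text != value and text.isprintable():
            notes.append((text, "base64-decoded"))
            analyze(text, depth + 1, notes)
    except (binascii.Error, UnicodeDecodeError, ValueError):
        pass
    return notes
```

Running `analyze` over every observed parameter value is roughly the shape of the format-detection pass; the real extension layers on statistics, BigIP cookie decoding, and reflection checks against responses.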
It is a lot easier to explain how this works with a demonstration, so I made a video:

I’m very hopeful that this extension will make large-scale manual web penetration testing more palatable and significantly more efficient.  But I need help! Please check it out and give me all your feedback so I can make it even better.

Look for the Correlator (beta) download link on

Jason Gillam is a Senior Security Consultant with Secure Ideas. If you are in need of a penetration test or other security consulting services, you can reach him on Twitter @JGillam or visit the Secure Ideas – Professionally Evil site for services provided.