Wednesday, September 10, 2014

CORS Global Policy

I recently noticed an uptick in Cross-Origin Resource Sharing (CORS) findings showing up in automated scanning tools, which would not have been a significant concern except for the fact that the tools were rating them as a relatively "high" severity and very few people I asked about it seemed to have any idea what it was. I love to find obscure quirky behaviors that can be turned into vulnerabilities... but first we need to determine whether it is as bad as the scanning tools would lead us to believe.

So what is CORS?

Simply put: it's a mechanism that creates a contract between a browser and a server permitting an exception to the Same Origin Policy. This is an extremely important aspect of today's application mashup dance party where all sorts of applications are intentionally interacting with each other in dynamic ways which must work around the Same Origin Policy. It is accomplished through request and response headers in such a way that the browser will identify all script-based requests with an Origin header, to which the server will respond with a set of cross-origin restrictions. The browser will only proceed with sending requests that adhere to the restrictions. The whole thing is slightly more complicated than this so if you want more details I recommend giving this a read: http://www.html5rocks.com/en/tutorials/cors/
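As a minimal sketch of that header exchange (the hostnames and paths here are illustrative, not from any real site):

```http
GET /api/data HTTP/1.1
Host: api.example.com
Origin: https://app.example.com

HTTP/1.1 200 OK
Access-Control-Allow-Origin: https://app.example.com
Content-Type: application/json
```

If the Access-Control-Allow-Origin value in the response does not permit the requesting origin, the browser withholds the response from the calling script.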

Now the really interesting part of this from a penetration tester perspective is when we start involving cookies. It is possible to use CORS in what is known as a "with Credentials" mode, which will in fact send cookies during a cross-origin request. By this I mean that if you establish a session cookie with one origin, that cookie will be sent during any CORS requests (from other origins) to that origin. This allows a cross-origin request (e.g. an AJAX call) from one site to interact with another site as if the user was logged in.
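To make this concrete (the hostnames and cookie value here are invented for illustration): suppose a user is logged in to victim.example and then browses a page on attacker.example that fires a credentialed cross-origin request. The exchange would look roughly like:

```http
GET /account HTTP/1.1
Host: victim.example
Origin: https://attacker.example
Cookie: session=abc123

HTTP/1.1 200 OK
Access-Control-Allow-Origin: https://attacker.example
Access-Control-Allow-Credentials: true
```

Note that for credentialed requests the server must echo back a specific origin and send Access-Control-Allow-Credentials: true; browsers will not accept a wildcard origin in this mode.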

So here is the line the scanning tools complain about:

        Access-Control-Allow-Origin: *

 ...and I see why. With a global wildcard policy like this (on any site containing sensitive data), wouldn't that mean a user browsing any attacker-controlled site is at risk of a cross-origin call pulling data off the target site? Researching this did not yield a satisfactory answer so I decided to write some test code and experiment with some scenarios on a few different browsers to see how they reacted. I tested Firefox 31.0, Chrome 37.0, and Safari 7.0.6 and all three of these yielded the same results.

What's the Risk?

In answer to the risk question: cross-origin requests using the "with Credentials" mode appear to fail if a global wildcard origin policy (i.e. Access-Control-Allow-Origin: *) is set.  Such requests do not fail if specific origins are listed.  So is there really a risk here or are the scanning tools overreacting?  As is often the case, the answer isn't as simple as "yes" or "no".  While experimenting with CORS and looking at some of the server configurations out there, I realized that a typical server solution consists of simply adding CORS response headers.  Servers do not automatically perform filtering based on the values of these headers; that is a job left up to applications and the browser.  In the case of the browsers I tested, the browser is doing its job by not exposing the response body for requests that violate the server's CORS restrictions.  But the server is still sending the response body.  So if you examine CORS communication through an intercepting proxy (e.g. Burp), you will see that even though your Javascript can't read the CORS response, it is still sent over the wire unless the application is smart enough to respond based on Origin headers.
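As a sketch of what server-side Origin filtering could look like (the function and allowlist names here are my own, not from any particular framework): the idea is to withhold both the CORS header and the response body for unapproved origins, so sensitive data never crosses the wire at all.

```python
# Hypothetical server-side Origin filtering: check the request's Origin
# header against an explicit allowlist instead of answering every origin
# with a wildcard policy.

ALLOWED_ORIGINS = {"https://app.example.com", "https://partner.example.com"}

def cors_response(origin, body):
    """Return (status, headers, body) for a cross-origin request."""
    if origin in ALLOWED_ORIGINS:
        headers = {
            "Access-Control-Allow-Origin": origin,  # echo the specific origin
            "Access-Control-Allow-Credentials": "true",
        }
        return 200, headers, body
    # Unapproved origin: refuse to send the data rather than trusting the
    # browser to discard a body that was already transmitted.
    return 403, {}, ""

status, headers, body = cors_response("https://evil.example.net", "secret")
```

With this approach, an intercepting proxy would see an empty 403 for a forged Origin header instead of the full response body.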

So what does this mean for penetration testers?  I feel we have not yet reached the end of the story.  So far my testing suggests that it is not common practice (yet) for servers to filter responses based on Origin headers.  At a minimum this means the responses allowed by an overly-permissive CORS policy can be easily captured by a proxy.  So an overly-permissive CORS policy combined with a lack of Origin filtering and cleartext (HTTP) communication sounds to me like a solid recipe for data theft.

Next steps

I would like to see if a CORS request can be sent through a Javascript-based proxy within the same context.  In addition a lot more testing needs to be done on how different web servers handle Origin filtering.

Want to talk about it?

I will be teaching a Web Penetration Testing course at AppSec USA next week and would be happy to discuss this research.

Jason Gillam is a Senior Security Consultant with Secure Ideas. If you are in need of a penetration test or other security consulting services you can contact him at jgillam@secureideas.com, on Twitter @JGillam, or visit the Secure Ideas - ProfessionallyEvil site for services provided.

Tuesday, September 09, 2014

Professionally Evil Courses: Ride Along Penetration Testing

Secure Ideas is excited to announce the latest class in our Professionally Evil Course series: Ride Along Penetration Testing.  This course will be held on October 9th at 2 PM Eastern.

Unlike so many other courses, this is not a typical "here is a tool and how to use it" course.  In this 2 hour course, James Jardine and Kevin Johnson will walk attendees through a realistic web penetration test.  They will target a web application typical to our penetration tests, and explain how they approach it.  Attendees will learn how to identify potential flaws within an application and then how to verify and exploit those flaws.

Secure Ideas will also include access to the application being tested, so attendees can follow along during the class.  This allows the attendees to have hands-on experience with the techniques being taught.

This course is $25 for the two hour session.  To register, visit here.


Kevin Johnson is the CEO of Secure Ideas. If you are in need of a penetration test or other security consulting services you can contact him at kevin@secureideas.com or visit the Secure Ideas - Professionally Evil site for services provided.

Wednesday, September 03, 2014

The ABC's of ASV's & PCI

Secure Ideas prides itself on providing the highest level of service to our customers.  We are tirelessly searching for new tools and methods to use in strengthening security for our clients.  With that in mind, it is with great pleasure that we announce the unveiling of PCIScout.  After a thorough vetting and testing process, Secure Ideas is now an official Approved Scanning Vendor (ASV) for PCI.  PCIScout is operational, and we are excited to offer this new service along with the rest of our Scout tools.

Privacy Matters

Security should be a top priority for any organization that handles sensitive information.  Having strong security measures in place is the first step in protecting against vulnerabilities and attacks on your systems.  That said, keeping sensitive information private is becoming increasingly difficult in this fast-paced, technological age.  Adhering to a specific standard helps keep a fundamental security structure in place; one such standard is PCI.

The Payment Card Industry Data Security Standard (PCI DSS) exists so that companies and businesses that handle payment card data follow a uniform standard.  Keeping payment information safe from attack is a key component of PCI compliance.  To be compliant, an organization must commit to certain testing and auditing performed by contracted, authorized vendors.  If testing finds a company out of compliance, the necessary steps to remediate the issue must be taken immediately.

Finding the Right Vendor

An Approved Scanning Vendor, or ASV, is able to perform these security scans for organizations that need to be PCI compliant.  An ASV must go through rigorous testing to become approved, and all ASVs adhere to a specific protocol as defined by PCI, ensuring a consistent testing environment.  A comprehensive list of approved vendors can be found at the link below.
https://www.pcisecuritystandards.org/approved_companies_providers/approved_scanning_vendors.php (Secure Ideas is now on the list.)

We know that PCI DSS is a standard used by organizations, and that approved scanning vendors are contracted to complete testing to ensure compliance.  Next, we need to understand who needs to follow PCI and how organizations benefit from compliance.  Simply put, any organization that accepts, transmits, or stores any cardholder data must be PCI compliant.  Failing to obtain such status can result in monthly fines from $5,000 to $100,000 and up. (https://www.controlscan.com/support-resources-qa.php)

PCI defines four levels of merchants, 1 through 4.  The number of transactions a company performs per year determines what level an organization falls under.  With each level comes a different set of validation requirements an organization must adhere to in order to stay compliant.  If an organization accepts any cardholder data whatsoever, it must be in compliance.  There is no grace period in which companies can get compliant, so staying proactive is imperative.  PCI also requires quarterly scanning, so every 90 days a passing scan must be submitted.  For a scan to pass, every host must meet the requirements of PCI.  The following link better outlines what constitutes a failing scan. http://www.qualys.com/products/pci/qgpci/pass_fail_criteria/

Is PCI Right For You?

The benefits of being compliant are great, and far outweigh the risks of not following the standard. Customers will begin to rely on an organization if they know their information is secure.  Trust is the greatest asset an organization can gain from being compliant, as it will most likely result in repeat customers, a better reputation in the security community, and the ability to help stop sensitive information from being compromised.

Andrew Kates is a Security Analyst for Secure Ideas. If you are in need of a penetration test, Scout services, or other security consulting services you can contact him at info@secureideas.com or visit the Secure Ideas - Professionally Evil site for services provided.

Tuesday, August 26, 2014

Egress Filtering: Are you authorized to leave?

One of the first concerns with protecting a network is stopping outsiders from being able to enter the internal network.  This makes sense because we believe that the main threat to our network is external by default.  Over the years we have found this is not entirely true: many threats actually come from internal actors.  While I do believe that blocking unwanted external requests is completely necessary for a secure network, we should not overlook how data leaves the network.


There are multiple reasons why data may be leaving the internal network.  Maybe employees are sending data out to clients with non-malicious intent.  On the other hand, maybe an employee is trying to steal data, or an external attacker has gained entry and is now trying to exfiltrate the data for future use.  There are two types of data leaving a network: acceptable and unacceptable.  The first step is to determine what data is acceptable to leave the network.  This includes not only identifying the category of the data, but also how it is transmitted.  What ports does it use?  Is there a specific destination address or originating address for that data?  Once we can start to identify these items, we can then start to determine what cannot be sent out.
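The questions above can be modeled as a simple default-deny policy check.  This is only a toy sketch of the decision logic (real enforcement belongs on a firewall, and every rule below is a made-up example):

```python
# Toy egress policy: permit only traffic matching an explicit rule of
# (port, destination); everything else is denied by default.
ALLOWED_EGRESS = {
    (443, "any"),        # HTTPS to anywhere
    (25, "10.0.0.25"),   # SMTP only via the internal mail relay
}

def egress_allowed(port, destination):
    """Default-deny: allow only traffic matching an explicit rule."""
    return (port, "any") in ALLOWED_EGRESS or \
           (port, destination) in ALLOWED_EGRESS
```

Starting from an explicit allowlist like this forces the organization to enumerate its acceptable traffic instead of discovering the unacceptable traffic after the fact.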

An example of traffic you may want to block is VPN connections.  While a VPN connection creates a secure channel for safely connecting to a network, it also makes it difficult for the company to determine what data is being transmitted over it.  If an employee or attacker is able to create a VPN to an external source and send company confidential information through it, the company may never be aware it happened.  There are lots of other ways to send data to an outside source, so configuring your network to block these avenues is essential.

Egress filtering is the process of filtering the data that is being transmitted out of the network.  Typically, non-standard ports will be blocked as a measure to prevent data from leaving.  Malware or a virus that makes its way into the network could try sending data on a random port.  If that is the case, then blocking all non-essential ports takes away the malicious software's ability to communicate with the outside and possibly helps reduce its impact.  Unfortunately, we are seeing more and more of these malicious tools using common ports because they are known to be available.

Once the company has adopted a solid data classification policy, other constraints like Data Loss Prevention (DLP) can be implemented to help identify data leaving the network.  In addition, network segmentation can also help protect data from being removed from the networks.   We will discuss those topics in future posts.

James Jardine is a Principal Security Consultant at Secure Ideas. If you are in need of a penetration test or other security consulting services you can contact him at james@secureideas.com, @jardinesoftware on twitter or visit the Secure Ideas - Professionally Evil site for services provided.

Wednesday, July 30, 2014

Logging Like a Lumber Jack

Turn on any news outlet or visit any news site and you will most likely see an announcement of yet another data breach.  On the DTR podcast we discuss breaches in the news during almost every episode.  There is a push to put more of an emphasis on identifying and reacting to a breach or security incident rather than just focusing on preventing the event from happening to begin with.  We want to prevent as much as possible; however, the reality is that breaches appear to be inevitable, and if that is the case, we should put some focus on proper identification and reaction.

Logging is a key element to any security program.  It provides the capability to see what is currently happening as well as what has happened in the past.  In the event of a breach, the logs become critical in determining what happened, what data may have been affected, etc.

Current Logging...

Over the years I have found that many developers do incorporate logging into their applications, but it is not for security purposes.  Most of the time the logs are strictly for troubleshooting.  This makes sense because that is where a developer focuses their time once an application has been released.  Without debug logs, it can be really difficult to fix a problem.  The issue here is that while this works for debugging, it doesn't really help from a security standpoint.  In the event of a breach, it is doubtful that a log full of stack traces is going to be the help that is needed to determine what happened.

What Should I Log?

There is no exact formula for what needs to be logged in a system.  There are of course some starting points that are consistent across many platforms.  The following are a few examples of events that should be logged.

  • Failed Authentication Attempts
  • Successful Authentication Attempts
  • Account Lockouts
  • Sensitive Transactions
  • Access of Sensitive Data
  • Application Feature Changes - Logging Disabled/Enabled, System/Application Shutdown
Unfortunately, the hard part is determining the key elements to log.  You have to have a solid understanding of your business and what is important to it.  Take the time to identify the key data the application uses, the key transactions that must be recorded, and why you are logging to begin with.
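As an illustration of the difference between debug logging and security logging, here is a minimal sketch of a dedicated security event log (the event and field names are my own invention, not any standard):

```python
import logging

# A security-specific logger, kept separate from debug/troubleshooting
# output so these events are easy to monitor and retain.
security_log = logging.getLogger("security")
security_log.setLevel(logging.INFO)

def auth_event(username, source_ip, success):
    """Log an authentication attempt (failed or successful) as a
    structured record, and return the record."""
    record = {"event": "auth_attempt", "user": username,
              "ip": source_ip, "success": success}
    # Failed attempts are logged at a higher severity.
    security_log.log(logging.INFO if success else logging.WARNING,
                     "%s", record)
    return record

failed = auth_event("alice", "203.0.113.5", False)
```

Structured records like this are far easier for a monitoring tool to search and alert on than free-form stack traces.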

Don't Forget Monitoring

Logging doesn't do a whole lot of good if there is nothing monitoring it.  All too often we hear of breaches that happened more than a year ago and the Incident Response team shows the evidence from a log file.  If only someone had been looking, it would have been identified much more quickly.  Make sure that you have some method for reviewing the logs regularly.  There are lots of packages available to help with this process.  I know, manually looking at logs all day is not much fun at all.

The monitoring should do more than just look for attack requests; it should be able to identify when a brute force attack is happening so an alert can be sent to a technician.  As you spend more time logging and reviewing those logs, you will be able to identify more events you want to monitor to help secure the site.
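A brute-force detector of the kind described above can be sketched as a sliding-window counter of failed logins per source address (the window and threshold below are arbitrary examples):

```python
from collections import defaultdict, deque
import time

WINDOW_SECONDS = 300   # only consider the last 5 minutes
MAX_FAILURES = 10      # alert past this many failures in the window

failures = defaultdict(deque)  # source IP -> timestamps of failed logins

def record_failure(ip, now=None):
    """Record a failed login; return True if this IP crossed the alert
    threshold and a technician should be notified."""
    now = time.time() if now is None else now
    window = failures[ip]
    window.append(now)
    # Discard failures that have aged out of the window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > MAX_FAILURES
```

A real deployment would feed this from the authentication log and wire the True result into whatever alerting system is in place.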

What To Do?

Develop a plan for any items that could appear in the log that are of concern.  Just like any response plan, you want to know what steps to take in the event of a security alert.  For example, brute force attempts may be met with blocking an IP address.  The identification of a compromised server with malware may mean taking that server offline immediately.  Know what procedures exist so proper reactions can be performed as soon as possible.


James Jardine is a Principal Security Consultant at Secure Ideas. If you are in need of a penetration test or other security consulting services you can contact him at james@secureideas.com or visit the Secure Ideas - Professionally Evil site for services provided.

Tuesday, July 22, 2014

Policy Gap Analysis: Filling the Gaps

In today's world, something never seems to be true unless it is written down, and even then it is a guideline.  In the business world there are policies that define how employees should present themselves as well as how company equipment can be used.  Policies are important because they provide a written definition of the individual's responsibilities.  For example, a company may have a policy regarding the use of social media on company-owned equipment.  Password complexity is another example of a policy that should be found at most businesses.

When starting a new job, oftentimes the company policies are provided for your review and signature, acknowledging that you have received and understand them.  You are expected to be held to these standards; unfortunately, you may only see them once a year, so you have probably forgotten every little detail.  You are encouraged to be very familiar with your company's policies.  Many of these may be similar across organizations, but some may be very specific to that company.

As a security consulting firm, Secure Ideas spends a fair amount of time looking at company policies, providing feedback and gap analysis.  It is recommended to have someone who specializes in the topic review the policies to see if there are any items that just don't work, or that are possibly missing.  For example, if your password policy is still 8 characters with upper, lower, special character, and number requirements, it might be a good idea to update it to a more secure standard.

Performing a gap analysis is important because it makes you aware of the policies that you may need, but are missing.  I haven't been to a company yet that didn't have at least one policy in place.  Oftentimes this starts off slowly and then, if no one is assigned to it, drops off.  Things change rapidly in technology, and policies have to evolve with them.  This change and evolution causes the organization to miss policies that are needed, or to not update existing ones to take into account the changing environment.

For example, the proliferation of mobile devices requires policies to help define how those devices can be used in the enterprise, and with its data.  The release of Google Glass and other wearables also adds some new twists to the idea of mobile data or video camera capabilities.  Social media has become so mainstream that many employees need access from company computers.  How employees present themselves and how much time they spend on it are very important to the business.  Without a policy stating that time spent on social media is limited, can we assume that we can just not work and play on the computer all day?  It is up for interpretation.

When creating policies, you want them to be clear in their meaning.  I know in the legal world you want them vague so they cover more, but at the same time that makes it harder to show an actual violation.  You don't want to leave it up to the judges.  Make sure that you are thinking of all the things that need to be said to an employee regarding DON'Ts with a company computer or on company property, and write them down.  Educate employees on what the policies mean and how they are enforced.

All too often we see companies that have policies that are just assumed and not written down.  This is a bad idea.  You want to have them formalized and distributed so employees understand their rules of engagement with the business.

If you are not sure about the status of your policies, Secure Ideas can perform a gap analysis of your existing policies to help provide recommendations on what could be added to make them more robust.

James Jardine is a Principal Security Consultant at Secure Ideas. If you are in need of a penetration test or other security consulting services you can contact him at james@secureideas.com, @jardinesoftware on Twitter, or visit the Secure Ideas - Professionally Evil site for services provided.

Wednesday, July 09, 2014

New Data Security Breach Laws in Florida

Many organizations are collecting what most would consider personal, non-public information, so it is very important that they protect it.  Almost every state has specific laws around what happens if that information is breached.  Florida just passed a new law that outlines what is considered sensitive information and the thresholds regarding when and what to report to the state.  The full bill can be found at http://laws.flrules.org/2014/189.

Personal Information according to the Florida law is described in summary below:
  • First name or first initial with last name in combination with at least one of:
    • Social Security Number
    • Driver License Number
    • Passport Number
    • Military Number
    • Financial Account Number (Including Credit or Debit Card) in combination with security code, access code, or password to the account
    • Any Medical History
    • Health Insurance Policy Number or Subscriber ID
  • Username  or E-mail Address in combination with:
    • Password
    • Security Question and Answer
Data that is not covered includes items made publicly available by the government, and data that is protected using encryption, is secured, or is modified to be unusable. (Interestingly, they don't define "is secured".)

In the event more than 500 instances of this data are breached, the organization is required to provide notice to the state within 30 days of identifying the breach.  There are also many details about what information needs to be provided in that notice within the text of the new law.  One interesting thing that you can be required to provide is a copy of your policies in place regarding breaches.  I don't know if going with "We don't have any" is going to be the right answer here.  If you don't have solid policies in place and ways to show you are performing them, this might be where you want to get started. 

The law replaces the previous one and is broad in its declaration.  It is imperative that businesses pay attention to the laws that exist for their area to ensure that they are meeting the requirements.  It is bad enough to suffer a breach and all the reputation and monetary damage that it brings.  You don't want to add on the fines or other issues that may be involved by not properly reporting the breach when it occurs. 

James Jardine is a Principal Security Consultant at Secure Ideas. If you are in need of a penetration test or other security consulting services you can contact him at james@secureideas.com or visit the Secure Ideas - Professionally Evil site for services provided.