Tuesday, October 07, 2014

SQLite: the good, the bad, the embedded database

SQLite is an embedded, open-source, lightweight SQL database engine. The C-based library is transactional, self-contained, and highly compact. It’s also fairly easy to implement. It doesn’t require any sort of installation or configuration, and all data is stored locally. This is very different from a standard Oracle or MySQL database, so don’t make the mistake of thinking they are interchangeable.

The compact and efficient nature of SQLite has made it very common in mobile development. The library understands most standard SQL commands, with a few exceptions (such as missing right/full outer joins, limited ALTER TABLE support, and the inability to write to views).  It also has bindings for a large array of popular programming languages and is included by default on most smartphones. SQLite has even expanded into use as an on-disk file format for desktop apps and low-traffic web sites.
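To get a feel for how little setup is involved, here is a minimal sketch using Python's built-in sqlite3 bindings (the file name and schema are just placeholders for illustration):

    import sqlite3

    # Opening a connection creates the single database file if it doesn't
    # already exist; there is no server to install or configure.
    conn = sqlite3.connect("app_data.db")

    conn.execute("CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)")
    conn.execute("INSERT INTO notes (body) VALUES (?)", ("hello, embedded database",))
    conn.commit()

    for row in conn.execute("SELECT id, body FROM notes"):
        print(row)

    conn.close()

The entire database lives in that one file, which is exactly the convenience (and, as we will see, the risk) that sets SQLite apart from a client/server engine.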

With a small footprint, serverless operation, readable source code, and cross-platform capability, there are plenty of advantages. But… we’re tech people; we know the other side of that coin. There’s always a price to pay.

So what are the disadvantages of SQLite? Well, to begin with, the database is stored as a single file (which can be located anywhere in the directory hierarchy). While this may seem convenient, any rogue process can open the database file and overwrite it. SQLite has no way to defend against this, so security must be enforced at the file level. Set permissions carefully, keep files out of the web root, and/or use an .htaccess rule to prevent unauthorized viewing. You’ll also want to sanitize user input, ideally with parameterized queries rather than relying on escaping functions such as PHP’s sqlite_escape_string(), since SQL injection is still an issue.
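Escaping works, but the cleaner pattern in most language bindings is a parameterized query, which binds user input as data so it is never parsed as SQL. A rough sketch in Python (the table and data are hypothetical):

    import sqlite3

    conn = sqlite3.connect(":memory:")  # throwaway in-memory database for the demo
    conn.execute("CREATE TABLE users (name TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice')")

    user_input = "alice' OR '1'='1"  # hostile input from an attacker

    # Vulnerable: concatenating input lets it be interpreted as SQL.
    unsafe = conn.execute("SELECT * FROM users WHERE name = '" + user_input + "'").fetchall()

    # Safer: the ? placeholder binds the value, so the injection attempt fails.
    safe = conn.execute("SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()

    print(unsafe)  # every row -- the injection succeeded
    print(safe)    # empty list -- the literal string matched no user

    conn.close()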

Another security concern is a feature called journaling. When changes are made, SQLite maintains separate “journal” or “WAL” files to facilitate rollbacks. These files are generally temporary and get deleted when the transaction commits or rolls back. However, there are three conditions that can interfere with journal deletion.

1.    If a crash occurs mid-transaction, the -journal or -wal file remains on disk until the database is next used.
2.    Developers have the option to set the journaling mode to PERSIST (which prevents the journal from being deleted).
3.    Developers also have the option to put SQLite into exclusive locking mode (often done for performance). With this option, the rollback journal might be truncated or have its header zeroed (depending on which version you’re using), but the journal itself is not deleted until SQLite exits exclusive locking mode.

These conditions could present serious security concerns if the database is handling sensitive data (even if it is only stored temporarily). So how do we protect it? Well, theoretically, it is possible to turn off journaling at the source level. However, this is not recommended: if journal files are missing when the application crashes, your database will likely be corrupted.
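Short of modifying the source, the journaling behavior can at least be inspected and tuned per connection through PRAGMA statements. A quick sketch in Python (the file name is a placeholder, and turning journaling off entirely carries the same corruption risk described above):

    import sqlite3

    conn = sqlite3.connect("app_data.db")

    # Show the journaling mode currently in effect (DELETE is the default).
    print(conn.execute("PRAGMA journal_mode").fetchone())

    # WAL keeps a -wal file alongside the database; DELETE removes the
    # rollback journal as each transaction completes.
    conn.execute("PRAGMA journal_mode = DELETE")

    # secure_delete overwrites deleted content with zeros, limiting what a
    # rogue process can recover from the file or its journal afterwards.
    conn.execute("PRAGMA secure_delete = ON")

    conn.close()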

In the end, the practical options for sensitive data in SQLite are to avoid storing it at all or to encrypt it before storage. If you prefer not to encrypt the data yourself, SQLCipher is an SQLite extension that performs the encryption for you. The commercial version does have a fee, but the community edition is open source.
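As an illustration of the encrypt-before-storage approach (this is not SQLCipher itself), here is a rough sketch using the third-party cryptography package; key management is the hard part and is deliberately glossed over here:

    import sqlite3
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # in practice, keep this key outside the database
    cipher = Fernet(key)

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE secrets (id INTEGER PRIMARY KEY, payload BLOB)")

    # Encrypt before the value ever touches the database file (or its journal).
    token = cipher.encrypt(b"super secret value")
    conn.execute("INSERT INTO secrets (payload) VALUES (?)", (token,))
    conn.commit()

    stored = conn.execute("SELECT payload FROM secrets").fetchone()[0]
    print(cipher.decrypt(stored))

    conn.close()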

Donna Fracarossi is a Security Analyst at Secure Ideas. If you are in need of a penetration test or other security consulting services you can contact us at info@secureideas.com or visit the Secure Ideas - Professionally Evil site for services provided.

Wednesday, October 01, 2014

Thumb Drives... Can you tell the difference?

During a physical penetration test, it is not uncommon for the tester (attacker) to drop USB thumb drives in the parking lot or someplace within the building.  The hope is that an employee will pick one up and connect it to their computer.  The end goal: malware that makes a connection back to the attacker.

Businesses have some controls in place that try to block this simple attack by preventing USB drives from connecting to the device.  There are two problems that immediately come to mind.  First, how do you know that the device is really a thumb drive?  What if it is something else that attacks your system, like a human interface device (HID)?  The second problem is user awareness: understanding how attackers attempt to trick users into giving them access to the system.

Today, I want to talk about the device itself and the difference between a USB thumb drive and the HID.  As you can see from the picture, they look very similar.   Can you tell which is which?

[Photo: the Secure Ideas thumb drive and the HID device side by side]

The one on the left with the "Secure Ideas" label is in fact a 4GB USB thumb drive.  The one on the right is a HID called a USB Rubber Ducky.  So what is the difference?  The USB device is just a storage device.  You can save and retrieve files from it.  You can even install a bootable operating system on it if you want.  This is what you normally buy when you want to carry files around with you on something small.

The HID device actually has a microSD card installed in it and is not meant for storage, as the picture below shows.

[Photo: the HID device opened up, showing the microSD card inside]

When plugged into a computer, the computer sees it as a human interface device (HID).  What does that mean?  Simply put, the computer thinks it is a keyboard.  The device is programmed by the attacker to send a set of pre-configured commands to the system as keystrokes.  When it is plugged in, it runs those commands, even if USB thumb drives are blocked.  We can't really block HID devices, since most people need a keyboard to work.

The HID device may install some malware or a backdoor that creates a connection out to the attacker to give them access to the system.  You may notice it doing this, as it is not always very subtle (opening a command prompt and writing out a program to compile, for example).

Just the other day, sitting in the office, Kevin asked for a thumb drive to put some files on.  He rustled around the desk and found one to use.  Unfortunately, the one he chose was actually the HID and not the thumb drive.  He was very surprised when he plugged it in and a keyboard setup prompt popped up.  Fortunately it wasn't set up to manipulate a Mac, but it could have been.  These devices do exist, and we need to recognize that any device we receive may be malicious.  Sure, the Professionally Evil thumb drive could be malicious too, but it is easier to block those in a corporate environment.

James Jardine is a Principal Security Consultant at Secure Ideas. If you are in need of a penetration test or other security consulting services you can contact him at james@secureideas.com, @jardinesoftware or visit the Secure Ideas - Professionally Evil site for services provided.

Wednesday, September 10, 2014

CORS Global Policy

I recently noticed an uptick in Cross-Origin Resource Sharing (CORS) findings showing up in automated scanning tools. That would not have been a significant concern, except that the tools were rating this as a relatively "high" severity and very few people I asked about it seemed to have any idea what it was. I love to find obscure, quirky behaviors that can be turned into vulnerabilities... but first we need to determine whether or not it is as bad as the scanning tools would lead us to believe.

So what is CORS?

Simply put: it's a mechanism that creates a contract between a browser and a server, permitting an exception to the Same Origin Policy. This is an extremely important aspect of today's application mashup dance party, where all sorts of applications are intentionally interacting with each other in dynamic ways that must work around the Same Origin Policy. It is accomplished through request and response headers: the browser identifies all script-based requests with an Origin header, to which the server responds with a set of cross-origin restrictions. The browser will only proceed with requests that adhere to those restrictions. The whole thing is slightly more complicated than this, so if you want more details I recommend giving this a read: http://www.html5rocks.com/en/tutorials/cors/

Now the really interesting part of this, from a penetration tester's perspective, is when we start involving cookies. It is possible to use CORS in what is known as a "with Credentials" mode, which will in fact send cookies during a cross-origin request. By this I mean that if you establish a session cookie with one origin, that cookie will be sent during any CORS requests (from other origins) to that origin. This allows a cross-origin request (e.g. an AJAX call) from one site to interact with another site as if the user were logged in.
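To make that concrete, a credentialed cross-origin exchange looks roughly like this (the hostnames and cookie are made up for illustration):

        GET /api/account HTTP/1.1
        Host: bank.example.com
        Origin: https://evil.example.net
        Cookie: session=abc123

        HTTP/1.1 200 OK
        Access-Control-Allow-Origin: https://evil.example.net
        Access-Control-Allow-Credentials: true

The browser only hands the response body to the calling script when the Access-Control-Allow-Origin value matches the requesting origin and, for credentialed requests, Access-Control-Allow-Credentials is true.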

So here is the line the scanning tools complain about:

        Access-Control-Allow-Origin: *

...and I see why. With a global wildcard policy like this (on any site containing sensitive data), wouldn't that mean a user browsing any attacker-controlled site is at risk of a cross-origin call pulling data off the target site? Researching this did not yield a satisfactory answer, so I decided to write some test code and experiment with some scenarios in a few different browsers to see how they reacted. I tested Firefox 31.0, Chrome 37.0, and Safari 7.0.6, and all three yielded the same results.

What's the Risk?

In answer to the risk question: cross-origin requests using the "with Credentials" mode appear to fail if a global wildcard origin policy (i.e. Access-Control-Allow-Origin: *) is set. Such requests do not fail if specific origins are listed. So is there really a risk here, or are the scanning tools overreacting?  As is often the case, the answer isn't as simple as "yes" or "no".  While experimenting with CORS and looking at some of the server configurations out there, I realized that a typical server solution consists of simply adding CORS response headers.  Servers do not automatically perform filtering based on the values of these headers; that job is left up to applications and the browser.  In the case of the browsers I tested, the browser is doing its job by not exposing the response body for requests that violate the server's CORS restrictions.  But the server is still sending the response body.  So if you examine CORS communication through an intercepting proxy (e.g. Burp), you will see that even though your JavaScript can't read the CORS response, it is still sent over the wire unless the application is smart enough to respond based on the Origin header.

So what does this mean for penetration testers?  I feel we have not yet reached the end of the story.  So far my testing suggests that it is not yet common practice for servers to filter responses based on the Origin header.  At a minimum, this means the responses of an overly-permissive CORS policy can be easily captured by a proxy.  An overly-permissive CORS policy combined with a lack of Origin filtering and cleartext (HTTP) communication sounds to me like a solid recipe for data theft.
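To see what server-side Origin filtering could look like, here is a rough sketch using Python's standard http.server module; the allow-list, path, and payload are hypothetical, and a real application would more likely do this in middleware:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    ALLOWED_ORIGINS = {"https://app.example.com"}  # hypothetical trusted origins

    class OriginFilteringHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            origin = self.headers.get("Origin", "")
            if origin and origin not in ALLOWED_ORIGINS:
                # Refuse to send the body at all, rather than trusting the
                # browser to discard it after it has already crossed the wire.
                self.send_response(403)
                self.end_headers()
                return
            body = b'{"secret": "only for trusted origins"}'
            self.send_response(200)
            if origin:
                self.send_header("Access-Control-Allow-Origin", origin)
                self.send_header("Access-Control-Allow-Credentials", "true")
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("127.0.0.1", 8080), OriginFilteringHandler).serve_forever()

With a check like this in place, the sensitive body never leaves the server for an untrusted origin, so an intercepting proxy has nothing to capture.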

Next steps

I would like to see if a CORS request can be sent through a JavaScript-based proxy within the same context.  In addition, a lot more testing needs to be done on how different web servers handle Origin filtering.

Want to talk about it?

I will be teaching a Web Penetration Testing course at  AppSec USA next week and would be happy to discuss this research.

Jason Gillam is a Senior Security Consultant with Secure Ideas. If you are in need of a penetration test or other security consulting services you can contact him at jgillam@secureideas.com, on Twitter @JGillam, or visit the Secure Ideas - Professionally Evil site for services provided.

Tuesday, September 09, 2014

Professionally Evil Courses: Ride Along Penetration Testing

Secure Ideas is excited to announce the latest class in our Professionally Evil Course series: Ride Along Penetration Testing.  This course will be held on October 9th at 2 PM Eastern.

Unlike so many other courses, this is not a typical "here is a tool and how to use it" course.  In this two-hour course, James Jardine and Kevin Johnson will walk attendees through a realistic web penetration test.  They will target a web application typical of our penetration tests and explain how they approach it.  Attendees will learn how to identify potential flaws within an application and then how to verify and exploit those flaws.

Secure Ideas will also include access to the application being tested, so attendees can follow along during the class.  This allows the attendees to have hands-on experience with the techniques being taught.

This course is $25 for the two hour session.  To register, visit here.


Kevin Johnson is the CEO of Secure Ideas. If you are in need of a penetration test or other security consulting services you can contact him at kevin@secureideas.com or visit the Secure Ideas - Professionally Evil site for services provided.

Wednesday, September 03, 2014

The ABCs of ASVs & PCI

Secure Ideas prides itself on providing the highest level of service to our customers. We are tirelessly searching for new tools and methods to use in strengthening security for our clients.  With that in mind, it is with great pleasure that we announce the unveiling of PCIScout.  After undergoing a thorough vetting and testing process, we recently received word that Secure Ideas is now an official Approved Scanning Vendor (ASV) for PCI.  PCIScout is operational, and we are excited to offer this new service alongside the rest of our Scout tools.

Privacy Matters

Security should be a top priority for any organization that handles sensitive information.  Having strong security measures in place is the first step in protecting your systems against vulnerabilities and attacks.  With that said, keeping sensitive information private is becoming increasingly difficult in this fast-paced, technological age.  Adhering to a specific standard helps keep a fundamental security structure in place, and one such standard is PCI.

The Payment Card Industry Data Security Standard (PCI DSS) exists so that companies and businesses handling payment card data follow a uniform set of security requirements.  Keeping payment information safe from attack is a key component of an organization being PCI compliant.  To be compliant, an organization must commit to certain testing and auditing performed by contracted, authorized vendors. If testing finds a company out of compliance, the necessary steps to remediate the issue must be taken immediately.

Finding the Right Vendor

An Approved Scanning Vendor, or ASV, is able to perform these security scans for organizations that need to be PCI compliant.  An ASV must go through rigorous testing to become approved, and all ASVs adhere to a specific protocol as defined by PCI, ensuring a consistent testing environment.  A comprehensive list of approved vendors can be found at the link below.
https://www.pcisecuritystandards.org/approved_companies_providers/approved_scanning_vendors.php (Secure Ideas is now on the list.)

We know that PCI DSS is a standard used by organizations, and that Approved Scanning Vendors are contracted to complete testing to ensure compliance.  Next, we need to understand who needs to follow PCI and how organizations benefit from compliance.  Simply put, any organization that accepts, transmits, or stores any cardholder data must be PCI compliant.  Failing to do so can result in monthly fines ranging from $5,000 to $100,000 or more. (https://www.controlscan.com/support-resources-qa.php)

PCI defines four merchant levels, 1 through 4, determined by the number of transactions a company performs per year.  With each level comes a different set of validation requirements an organization must adhere to in order to stay compliant.  If an organization accepts any cardholder data whatsoever, it must be in compliance.  There is no grace period for getting compliant, so staying proactive is imperative.  Part of PCI requires quarterly scanning, so every 90 days a passing scan must be submitted.  For a scan to pass, every host must meet the requirements of PCI.  The following link better outlines what constitutes a failing scan. http://www.qualys.com/products/pci/qgpci/pass_fail_criteria/

Is PCI Right For You?

The benefits of being compliant are great and far outweigh the risks of not following the standard. Customers will come to rely on an organization if they know their information is secure.  Trust is the greatest asset an organization can gain from being compliant, as it will most likely result in repeat customers, a better reputation in the security community, and fewer opportunities for sensitive information to be compromised.

Andrew Kates is a Security Analyst for Secure Ideas. If you are in need of a penetration test, Scout services, or other security consulting services you can contact him at info@secureideas.com or visit the Secure Ideas - Professionally Evil site for services provided.

Tuesday, August 26, 2014

Egress Filtering: Are you authorized to leave?

One of the first concerns with protecting a network is stopping outsiders from being able to enter the internal network.  This makes sense, because we assume by default that the main threat to our network is external.  Over the years we have found this is not entirely true; many threats actually come from internal actors.  While I do believe that blocking unwanted external requests is completely necessary for a secure network, we should not overlook how data leaves the network.


There are multiple reasons why data may be leaving the internal network.  Maybe employees are sending data out to clients with non-malicious intent.  On the other hand, maybe an employee is trying to steal data, or an external attacker has gained entry and is now trying to exfiltrate the data for future use.  There are two types of data leaving a network: acceptable and unacceptable.  The first step is to determine what data is acceptable to leave the network.  This includes not only identifying the category of the data, but also how it is transmitted.  What ports does it use?  Is there a specific destination address or originating address for that data?  Once we can identify these items, we can then start to determine what cannot be sent out.

One example of traffic you may want to block is VPN connections.  While a VPN connection creates a secure channel for safely connecting to a network, it also makes it difficult for the company to determine what data is being transmitted over it.  If an employee or attacker is able to create a VPN to an external source and send company confidential information through it, the company may never be aware it happened.  There are lots of other ways to send data to an outside source, so configuring your network to block these avenues is essential.

Egress filtering is the process of filtering data that is being transmitted out of the network.  Typically, non-standard ports will be blocked as a measure to prevent data from leaving.  Malware or a virus that makes its way into the network could try sending data on a random port.  If that is the case, then by blocking all non-essential ports, the malicious software loses its ability to communicate to the outside, which helps reduce its impact.  Unfortunately, we are seeing more and more of these malicious tools using common ports because they are known to be open.
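A quick way to measure how tight your egress filtering really is involves attempting outbound connections on a handful of ports from an internal host to a listener you control on the outside.  A rough sketch (the target hostname is a placeholder for your own external listener):

    import socket

    TARGET = "egress-test.example.com"   # placeholder: a host you control outside the network
    PORTS = [22, 25, 53, 80, 443, 3389, 8080, 31337]

    for port in PORTS:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(3)
        try:
            s.connect((TARGET, port))
            print("port %d: outbound connection allowed" % port)
        except OSError:
            print("port %d: blocked or filtered" % port)
        finally:
            s.close()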

Once the company has adopted a solid data classification policy, other controls like Data Loss Prevention (DLP) can be implemented to help identify data leaving the network.  In addition, network segmentation can also help protect data from being removed from the network.  We will discuss those topics in future posts.

James Jardine is a Principal Security Consultant at Secure Ideas. If you are in need of a penetration test or other security consulting services you can contact him at james@secureideas.com, @jardinesoftware on twitter or visit the Secure Ideas - Professionally Evil site for services provided.

Wednesday, July 30, 2014

Logging Like a Lumberjack

Turn on any news outlet or visit any news site and you will most likely see an announcement of yet another data breach.  On the DTR podcast we discuss breaches in the news during almost every episode.  There is a push to put more emphasis on identifying and reacting to a breach or security incident, rather than focusing solely on preventing the event from happening in the first place.  We want to prevent as much as possible, but the reality is that breaches appear to be inevitable; if that is the case, we should put some focus on proper identification and response.

Logging is a key element of any security program.  It provides the capability to see what is currently happening as well as what has happened in the past.  In the event of a breach, the logs become critical in determining what happened, what data may have been affected, and so on.

Current Logging...

Over the years I have found that many developers do incorporate logging into their applications, but it is not for security purposes.  Most of the time the logs are strictly for troubleshooting.  This makes sense, because that is where a developer focuses their time once an application has been released.  Without debug logs, it can be really difficult to fix a problem.  The issue is that while this works for debugging, it doesn't really help from a security standpoint.  In the event of a breach, it is doubtful that a log full of stack traces is going to be the help that is needed to determine what happened.

What Should I Log?

There is no exact formula for what needs to be logged in a system.  There are of course some starting points that are consistent across many platforms.  The following are a few examples of events that should be logged.

  • Failed Authentication Attempts
  • Successful Authentication Attempts
  • Account Lockouts
  • Sensitive Transactions
  • Access of Sensitive Data
  • Application Feature Changes - Logging Disabled/Enabled, System/Application Shutdown
Unfortunately, the hard part is determining the key elements to log.  You have to have a solid understanding of your business and what is important to it.  Take the time to identify the key data the application uses, the key transactions that must be recorded, and why you are logging to begin with.
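As a starting point, a dedicated security log (separate from the debug log) does not take much code.  A minimal sketch in Python, where the event fields are simply my assumptions about what an incident responder would want to see:

    import logging

    # A separate named logger keeps security events out of the debug noise.
    security_log = logging.getLogger("security")
    handler = logging.FileHandler("security.log")
    handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    security_log.addHandler(handler)
    security_log.setLevel(logging.INFO)

    def record_login(username, source_ip, success):
        # Log failures and successes; the successes provide the baseline
        # needed to spot anomalies later.
        outcome = "SUCCESS" if success else "FAILURE"
        security_log.info("auth %s user=%s ip=%s", outcome, username, source_ip)

    record_login("jdoe", "10.0.0.5", False)
    record_login("jdoe", "10.0.0.5", True)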

Don't Forget Monitoring

Logging doesn't do a whole lot of good if there is nothing monitoring it.  All too often we hear of breaches that happened more than a year ago and the Incident Response team shows the evidence from a log file.  If only someone had been looking, it would have been identified much more quickly.  Make sure that you have some method for reviewing the logs regularly.  There are lots of packages available to help with this process.  I know, manually looking at logs all day is not much fun at all.

The monitoring should be more than just looking for attack requests; it should be able to identify when a brute-force attack is happening so an alert can be sent to a technician.  As you spend more time logging and reviewing those logs, you will be able to identify more events you want to monitor to help secure the site.
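A crude example of that kind of monitoring: count authentication failures per source address and alert when a threshold is crossed.  This sketch assumes the log format written by the example above, with a threshold picked arbitrarily for illustration:

    from collections import Counter

    THRESHOLD = 10  # failed attempts before alerting; tune for your environment

    failures = Counter()
    with open("security.log") as log:
        for line in log:
            if "auth FAILURE" in line:
                # pull the ip=... field out of the line written by record_login()
                ip = [f for f in line.split() if f.startswith("ip=")][0][3:]
                failures[ip] += 1

    for ip, count in failures.items():
        if count >= THRESHOLD:
            print("ALERT: possible brute force from %s (%d failures)" % (ip, count))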

What To Do?

Develop a plan for any items that could appear in the log that are of concern.  Just like any response plan, you want to know what steps to take in the event of a security alert.  For example, brute force attempts may be met with blocking an IP address.  The identification of a compromised server with malware may mean taking that server offline immediately.  Know what procedures exist so proper reactions can be performed as soon as possible.


James Jardine is a Principal Security Consultant at Secure Ideas. If you are in need of a penetration test or other security consulting services you can contact him at james@secureideas.com or visit the Secure Ideas - Professionally Evil site for services provided.