PCI Security Standards Council’s opposing views on scope reduction technologies

[Image: Enigma rotors]

Many organizations seek ways to reduce their PCI DSS scope. While there are many different methods of reducing scope, I want to focus specifically on tokenization and encryption of the primary account number (PAN).

For the purposes of this post, we will assume that the encryption and tokenization methods used meet the appropriate requirements to protect the PAN, and that unauthorized attempts to reverse a token or encrypted value would be computationally infeasible. Additionally, I want to note that this post is not arguing the technical strengths or differences between tokenization and encryption; it simply points out the council’s documented view on PCI DSS scope as it relates to the key and vault management aspects of these technologies.

Definitions

From the PCI Security Standards Glossary of Terms: Encryption is the process of converting information into an unintelligible form except to holders of a specific cryptographic key.
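To make that definition concrete, here is a minimal Python sketch using the third-party `cryptography` package’s AESGCM primitive. The inline key generation and the well-known test PAN are illustrative assumptions only; a real deployment would manage keys through a formal key-management process.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Generate a 256-bit key; in practice this would come from a formal
# key-management process, not be created inline.
key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

pan = b"4111111111111111"   # well-known test PAN, not a real card number
nonce = os.urandom(12)      # must be unique per encryption under the same key

ciphertext = aesgcm.encrypt(nonce, pan, None)    # unintelligible without the key
recovered = aesgcm.decrypt(nonce, ciphertext, None)
assert recovered == pan
```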

From the PCI DSS Tokenization Guidelines information supplement, tokenization is defined as a process by which the primary account number (PAN) is replaced with a surrogate value called a token. De-tokenization is the reverse process of redeeming a token for its associated PAN value.
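In the same spirit, here is a toy sketch of the token/PAN relationship. The `TokenVault` class and its methods are hypothetical names for illustration; a production vault would add collision handling, access controls, and a hardened, PCI-scoped environment.

```python
import secrets

class TokenVault:
    """Toy token vault mapping random surrogate tokens to PANs."""

    def __init__(self):
        self._vault = {}  # token -> PAN; the vault itself remains sensitive

    def tokenize(self, pan: str) -> str:
        # Random surrogate with no mathematical relationship to the PAN
        token = secrets.token_hex(8)
        self._vault[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        # De-tokenization: redeem the token for its associated PAN
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")
assert vault.detokenize(token) == "4111111111111111"
```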

For both tokenization and encryption, the overall principle is the same: valuable data is altered into non-valuable data through a given mechanism. For encryption, that mechanism is the encryption process and the keys. For tokenization, it is the tokenization process and the relationship between the token and the PAN. Additionally, de-tokenization and decryption are similar in that each provides encrypted or tokenized data to a mechanism in order to receive the PAN in return.
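One way to see that structural parallel: both mechanisms reduce to a function that exchanges a PAN for a non-valuable surrogate and a function that redeems the surrogate for the PAN. The `round_trip` helper below is purely illustrative.

```python
from typing import Callable

# Both mechanisms reduce to the same pair of operations: exchange a PAN
# for a non-valuable surrogate, then redeem the surrogate for the PAN.
ProtectFn = Callable[[str], str]   # encrypt, or tokenize
RecoverFn = Callable[[str], str]   # decrypt, or de-tokenize

def round_trip(protect: ProtectFn, recover: RecoverFn, pan: str) -> bool:
    surrogate = protect(pan)           # the surrogate has no value on its own
    return recover(surrogate) == pan   # the mechanism returns the PAN
```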

The PCI DSS Tokenization Guidelines (August 2011) note that tokenization can be used as a method of reducing the scope of PCI DSS in an organization. There are obviously many considerations for de-scoping using tokenization, and they are adequately covered in that document. But it is important to note that the document does not seem to link de-scoping with who manages the tokenization process, vault, or tables.

The topic of de-scoping using encryption is covered by PCI DSS FAQ article 2086, which notes that scope reduction can be achieved only if the merchant or service provider does not have access to the encryption keys.

If we follow this logic through to its conclusion, the council seems to be saying that tokenization can be used as a method for scope reduction without restriction on who manages the tokenization mechanism, but encryption can be used for scope reduction only if the merchant or service provider has no access to the keying mechanism.

Here we have two nearly identical fundamental approaches to protecting data. Each produces nonsensical data that can be exchanged for valid data through a particular mechanism. Depending on the details of the architecture, each is capable of adequately securing cardholder data. However, the council has attached additional rules to scope reduction using encryption that it has not attached to tokenization.

First, I do not believe that the council should dictate the ability to de-scope based on a potential management issue. It is feasible to architect effective key management while maintaining clear segregation of duties within the same organization or entity. It should not be the council’s place to prohibit a solution based on the predetermination that an organization cannot maintain effective segregation of duties or other risk-mitigation strategies. An enterprise encryption strategy is not an off-the-shelf widget that I just plug in and turn on. There are many potential elements that can make up a solution (one-time use, long-term storage, tokenization hybrids, etc.). It should be left to the QSA to determine whether the implemented solution meets the intent of the requirements.
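As one example of what such an architecture can look like, PCI DSS already requires split knowledge and dual control for manual clear-text key-management operations. The sketch below (illustrative function names, simplified XOR-based component scheme) shows why a single organization can hold all the pieces while ensuring that no single person can reconstruct a key.

```python
import os

# Each custodian independently generates and guards one component; the
# working key exists only when both components are combined, so no single
# person within the organization can reconstruct it alone.
def generate_component(key_len: int = 32) -> bytes:
    return os.urandom(key_len)

def combine_components(component_a: bytes, component_b: bytes) -> bytes:
    # The actual key is the XOR of the components; either component alone
    # reveals nothing about the resulting key.
    return bytes(a ^ b for a, b in zip(component_a, component_b))

# Custodian A and custodian B never see each other's component.
component_a = generate_component()
component_b = generate_component()
key = combine_components(component_a, component_b)
```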

But more important for the purposes of this discussion is the inconsistency between the council’s views of two nearly identical fundamental approaches to protecting sensitive data and their accompanying keying systems. Why is the council’s position on these two scenarios different? I hope the council will address this ambiguity in upcoming FAQs or supplemental documentation, as the currently available information is contradictory.