To enrich the vulnerability data managed through the Merge.io platform, we have integrated a local copy of the National Vulnerability Database. The combined data set contains more than 2 million records across the resource and affected-software categories and is updated daily from the NIST feeds.
With this data we can correlate imported vulnerabilities against the NVD to supplement and extend the vulnerability information managed by the product. This allows us to continue building wider support for vulnerability scanning tools while providing additional, up-to-date information for resolving vulnerabilities.
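As a rough illustration of what this correlation looks like, the sketch below looks up an imported finding's CVE ID in a local NVD copy and merges in the extra detail. The database file, table name, and field names here are assumptions for the example, not the actual Merge.io schema:

```python
import sqlite3

def enrich_finding(finding, nvd_db_path="nvd_local.db"):
    """Supplement an imported scan finding with data from a local NVD copy.

    `finding` is a dict with at least a 'cve_id' key. The database file,
    the 'nvd_entries' table, and its columns are illustrative only.
    """
    conn = sqlite3.connect(nvd_db_path)
    row = conn.execute(
        "SELECT description, cvss_score, affected_software "
        "FROM nvd_entries WHERE cve_id = ?",
        (finding["cve_id"],),
    ).fetchone()
    conn.close()

    if row:
        # Merge NVD detail in without overwriting anything the
        # scanner itself already reported.
        finding.setdefault("description", row[0])
        finding.setdefault("cvss_score", row[1])
        finding["affected_software"] = row[2]
    return finding
```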
Converting mounds of vulnerability scan data into operational action is a challenge many organizations face. A major part of that challenge is systematically boiling down the volume of vulnerability data to the vulnerabilities you actually need to fix.
This is especially crucial when dealing with internal scans of larger networks. Internal vulnerability scanning tends to be tricky because, in most cases, very little access control is implemented between the scan engine and the target systems. With little access control in place, vulnerability scanners produce a lot of vulnerability data to analyze. The question remains: how do you make sense of the volumes of data that are generated?
We begin with a policy and a process. Have you established remediation thresholds for ranked vulnerabilities, and has management signed off on them? If not, start your journey here. Build an organizational policy that establishes a base set of conditions determining which vulnerabilities must be remediated versus which are acceptable in a given environment. As an example, a policy/process may state that for internal scans you remediate (per the defined remediation thresholds) any vulnerability that meets the following criteria:
- CVSS score of 5.5 or greater
- SQL injection
- Cross site scripting
Since these are internal scans, there will be certain vulnerabilities that we will choose not to remediate. This could be due to environmental variables, compensating controls, etc. Examples of these may include:
- Self signed certificates
- Denial of service vulnerabilities
These are obviously just examples, but each organization generally has vulnerabilities it wants to remediate and vulnerabilities it doesn't, based on acceptable levels of risk. So how do you apply this logic in an automated fashion to your vulnerability data? The reality is that most companies I deal with are still using Excel (the duct tape and baling wire of the business world) to accomplish this. And even with spreadsheets, the process is highly manual.
This is where Merge.io can help. With Merge.io you can create custom risk profiles that determine which vulnerabilities from the imported data sets to remediate and which to exclude as accepted risk. Just create the profile using the risk profile builder and apply it to a scan import. All vulnerabilities that match your include criteria will be added to the Merge.io workflow system, and vulnerabilities that match the exclude filter will be excluded from workflow tracking.
Risk profile filters can be built that examine a vulnerability's CVSS score, risk rating, title, description, port, service, operating system, and more.
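To make the idea concrete, here is a minimal sketch of what such include/exclude logic might look like in code, using the example policy above. The field names and matching rules are assumptions for illustration, not Merge.io's actual risk profile format:

```python
# Example include/exclude rules mirroring the policy above.
# Field names ('cvss', 'title') are illustrative, not Merge.io's schema.
INCLUDE_TITLES = ("sql injection", "cross site scripting", "cross-site scripting")
EXCLUDE_TITLES = ("self signed certificate", "self-signed certificate",
                  "denial of service")

def should_remediate(vuln):
    """Apply the example risk profile to one imported vulnerability.

    Exclusions are checked first so an accepted risk (e.g. a DoS finding
    on an internal host) never enters the remediation workflow, even if
    it also matches an include rule.
    """
    title = vuln.get("title", "").lower()
    if any(t in title for t in EXCLUDE_TITLES):
        return False
    if vuln.get("cvss", 0.0) >= 5.5:
        return True
    return any(t in title for t in INCLUDE_TITLES)

def apply_profile(scan_findings):
    """Split an imported scan into workflow items and accepted risks."""
    remediate = [v for v in scan_findings if should_remediate(v)]
    accepted = [v for v in scan_findings if not should_remediate(v)]
    return remediate, accepted
```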
These filters are available for use with any scan imported into Merge.io. They can be built to match your internal remediation standards and they can be added to over time as the organization changes. This provides an automated, repeatable approach to managing the volume of data that vulnerability scans can create.
Merge.io has a built-in filter for PCI that matches the requirements laid out by the PCI ASV Program Guide Version 2.0. Additional compliance filters are coming soon!
Be sure to check out Automated Scan Validations with Merge.io.
PCI Requirement 11.2 requires merchants and service providers to conduct internal and external vulnerability scanning of their infrastructure. The requirement further states that scanning activities should be repeated until clean scans are achieved.
The devil is indeed in the details when it comes to operationalizing what seems to be a simple concept. Companies struggle to achieve clean scans when they are scanning many different assets running on multiple platforms with different patching cycles.
Many companies perform manual file comparisons in spreadsheets, comparing scan files quarter over quarter. This manual comparison process can become exceedingly complex and time consuming.
To stop the vulnerability management “whack-a-mole,” we have incorporated an automatic validations feature into the Merge.io platform. This validation process allows you to automatically compare scan data against closed vulnerabilities to prove that the vulnerabilities have been remediated on the target systems.
Once a project is created and baseline scan data has been imported, the Merge.io platform provides a mechanism for importing continuous validation scan data into that project. For each validation scan file imported, Merge.io analyzes the data in the file, specifically looking for vulnerabilities that are in the “Closed-Approved” state on the Merge.io platform. If a vulnerability is not present in the validation scan file, Merge.io marks it as “Validated,” noting the uploaded scan file it was compared against. If the vulnerability persists in the validation scan file, Merge.io re-opens it and assigns it back to the engineer who closed it.
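Under the hood, the comparison boils down to a set-membership test on each closed finding. Here is a simplified sketch of the idea; the state names follow the description above, while the key choice and field names are assumptions for illustration:

```python
def validate_closed_vulns(closed_vulns, validation_scan, scan_file_name):
    """Compare "Closed-Approved" vulnerabilities against a validation scan.

    Findings are keyed on (host, check id); that key choice, and the
    field names, are assumptions for this sketch.
    """
    still_present = {(f["host"], f["check_id"]) for f in validation_scan}

    for vuln in closed_vulns:
        if (vuln["host"], vuln["check_id"]) in still_present:
            # The finding reappeared: re-open it and assign it back
            # to the engineer who closed it.
            vuln["state"] = "Open"
            vuln["assigned_to"] = vuln["closed_by"]
        else:
            # Absent from the validation scan: remediation is proven.
            vuln["state"] = "Validated"
            vuln["validated_against"] = scan_file_name
    return closed_vulns
```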
This closed-loop approach to vulnerability lifecycle management allows organizations not only to track vulnerability state all the way through the vulnerability management process, but also to have assurance that each vulnerability has been proven not to persist.
The introduction of the attestation was intended to reduce errors through a more formal process of engagement between the merchant and the ASV.
But exactly what is the ASV attesting to?
The ASV Program Guide version 1.2 requires that the ASV provide the merchant or service provider with an Attestation of Scan Compliance, attesting that the scan results meet PCI DSS Requirement 11.2 and the PCI DSS ASV Program Guide.
That’s not how I read it…
There seem to be some inconsistencies among ASVs as to what exactly they are attesting to as part of the required ASV Attestation of Scan Compliance. In working with different ASVs over the past years, I have come across three different methodologies for what an ASV is attesting to:
- There should be NO vulnerabilities on a given host at the time of attestation
- All identified vulnerabilities have been addressed within 30 days of when they were identified
- All identified vulnerabilities must have been addressed within 90 days of when they were identified
As an example, Qualys (who supplies their scan tool to a large portion of the ASV industry) has implemented a rule within their platform that fails a given host if that host has not been scanned within the last 30 days. This helps them support their methodology of attestation: vulnerabilities should be addressed within 30 days of when they were identified.
The differences in these methodologies have a huge impact on organizations with a large ASV scanning scope. It is very difficult for a large organization, running many different operating systems with different patch/release/test cycles, to achieve an attestation under these types of requirements.
This is not necessarily an issue with patch timing; it's an issue with the alignment of different patch cycles and how that alignment is reflected on a particular scan. Let's take an example: an ASV scan scope of 100 system components contains the following platforms:
- Microsoft Windows
- Redhat Linux
- Cisco IOS
- Juniper OS
- F5 Networks

Each of these platforms has its own patch release and testing cycle, so on any given scan date at least one of them is likely to be mid-cycle, making a clean point-in-time scan across all 100 components very difficult to achieve.
This approach also presents some inconsistencies with PCI DSS Requirement 6.1, which, for example, allows a risk-based approach that prioritizes less critical security patches for implementation beyond 30 days.
Where’s the sweet spot?
The ASV scanning element of the PCI requirements should validate that the merchant or service provider has a working patch and configuration management program in place by providing proof that vulnerabilities are being addressed in accordance with PCI DSS 6.1 and 11.2. While the stricter attestation requirements of some ASVs reduce risk for the ASV, they can also create operational hurdles for larger organizations.
At this point in time, PCI DSS 11.2 provides no further clarification other than that clean scans are required on a quarterly basis. If the merchant can prove to the ASV that identified vulnerabilities have been resolved within a 90-day period, I believe that meets what is currently documented in 11.2.
The time frame within which vulnerabilities must be remediated should be based on risk to the organization while meeting the minimum compliance requirements. It should be up to the merchant or service provider to establish these time frames according to their business risk. Unfortunately, in some circumstances this timing is being established based on risk to the ASV and what they will attest to, not risk to the merchant and their operating environment.
Hopefully the highly anticipated next version of the PCI Program Guide will bring some much needed clarification in this space.