Vulnerability scanning is not new. In fact, its origins can be traced back to the late 1990s, which is a lifetime in terms of technology.
It was a simpler time; in the year 2000 there were just 1,020 published vulnerabilities in the newly created Common Vulnerabilities and Exposures (CVE) system, launched in late 1999 by the MITRE Corporation, an operator of federally funded research and development centers.
Such a small number of vulnerabilities meant that the process for detection and remediation could be largely manual, with scanning software providing a report that was then checked for accuracy by someone in the IT department of an organization. Once reviewed and approved, the report would be passed on to system and network administrators to remediate the issues found, and a follow-up scan would be scheduled.
The year 2000 saw an average of 100 new vulnerabilities every month, and the process remained manageable and largely unchanged for years. However, the rate of vulnerability discovery and disclosure grew steadily and soon outpaced the traditional report-and-remediate process. By the end of 2005, CVEs were being published at a rate of 400 a month, and the total eclipsed 40,000 published vulnerabilities by the end of the decade.
It was clear a change was needed, both in the approach and in the formalization of the process. This led to the concept of Vulnerability Management (VM), defined as the “cyclical practice of identifying, classifying, prioritizing, remediating, and mitigating” software vulnerabilities. An increase in cyberattacks, along with industry compliance regulations such as the Payment Card Industry’s Data Security Standard (PCI DSS), first published in 2004, drove wider adoption of vulnerability management as both a requirement and a best practice.
Traditional vulnerability scanners require configuration of scan policies and scheduling, with most practitioners preferring after-hours scans due to the impact on the network. Intense or thorough scans can bring production servers and networks to their knees, slowing or halting critical traffic and processes, especially those sensitive to network congestion, such as Voice over IP (VoIP) phones and remote desktop connections. After-hours scanning presents its own problem: an increasingly mobile workforce takes critical devices home at the close of business, leaving company-owned laptops, tablets, and phones out of scheduled scans.
The output of these scans—the report—grows larger as more vulnerabilities are published and corresponding plugins are added to the scan engines. Without tweaking the scan profiles being run or carefully parsing the results, the sheer size of the report often slows the remediation process and leads to vulnerabilities not being patched or addressed in a timely fashion. It also makes the annual or quarterly vulnerability scan a process dreaded by all involved.
At IGI, our cybersecurity services team was regularly performing vulnerability scans as a service for our customers but was acutely aware of the shortcomings and gaps left by a one-time scan, and of how much remediation work it left our customers to complete. Often, this meant finding the same vulnerabilities in the same systems on the next scheduled or follow-up scan, due to the volume of results and the lack of prioritization.
In 2016, we set out to change how we approached vulnerability management by altering our scanning techniques and methods. This resulted in the creation of Nodeware, IGI’s own vulnerability management platform. It began as an in-house tool, but the market’s need for its approach to VM soon became apparent, and IGI began offering Nodeware to its existing channel of security service providers.
Nodeware shifts the paradigm in several ways. The first is continuous operation: from the moment Nodeware is connected to a network until it is removed, it collects actionable data about the assets on that network. This is made possible by careful tuning of scan timing and by treating each asset individually. By scanning only a few assets at any given time and using modern scan techniques, Nodeware keeps network utilization far lower than traditional scans: under 5% at peak and typically below 2% in real-world scenarios.
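Nodeware's internal scheduler is not public, but the general idea of bounding network impact by capping how many assets are scanned concurrently can be sketched as follows. The `scan_asset` coroutine and the concurrency limit here are illustrative assumptions, not Nodeware's actual implementation:

```python
import asyncio

MAX_CONCURRENT_SCANS = 3  # illustrative cap to keep bandwidth use low


async def scan_asset(ip: str) -> dict:
    # Placeholder for a real per-asset scan (port probe, service detection).
    await asyncio.sleep(0.01)  # simulate scan I/O
    return {"ip": ip, "status": "scanned"}


async def scan_network(assets: list[str]) -> list[dict]:
    # A semaphore limits how many assets are in flight at once, so the
    # scanner never floods the network the way a full-sweep scan can.
    sem = asyncio.Semaphore(MAX_CONCURRENT_SCANS)

    async def throttled(ip: str) -> dict:
        async with sem:
            return await scan_asset(ip)

    return await asyncio.gather(*(throttled(ip) for ip in assets))


results = asyncio.run(scan_network([f"10.0.0.{i}" for i in range(1, 9)]))
```

Spreading the same total scan work over a longer window at low concurrency is what allows a scanner to run continuously rather than in a disruptive after-hours burst.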
Modern network inventories change constantly; by handling assets individually, Nodeware returns first results sooner and optimizes scan run time to capture transient devices more effectively than full-network scanners. It also maintains an up-to-the-minute device inventory with online status, current operating system, and service discovery.
Threat intelligence is delivered incrementally: security content is updated prior to each asset scan, pulling only changes and additions. This avoids the time-consuming manual syncs found in traditional products and guarantees that each asset is automatically scanned for the latest vulnerabilities.
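A delta sync like the one described above can be sketched with timestamps: pull only feed entries published after the last sync, rather than re-downloading the whole plugin set. The plugin store and feed format here are hypothetical, purely for illustration:

```python
from datetime import datetime, timezone

# Hypothetical local plugin store: plugin id -> (definition, last_updated).
local_plugins = {
    "cve-2020-0001": ("check-v1", datetime(2020, 1, 1, tzinfo=timezone.utc)),
}

# Hypothetical remote feed: (plugin id, definition, published timestamp).
feed = [
    ("cve-2020-0001", "check-v2", datetime(2020, 6, 1, tzinfo=timezone.utc)),
    ("cve-2020-0002", "check-v1", datetime(2020, 6, 2, tzinfo=timezone.utc)),
]


def delta_sync(local: dict, feed: list, last_sync: datetime) -> int:
    """Apply only entries newer than the last sync instead of the full feed."""
    pulled = 0
    for pid, definition, published in feed:
        if published > last_sync:
            local[pid] = (definition, published)
            pulled += 1
    return pulled


pulled = delta_sync(local_plugins, feed,
                    datetime(2020, 3, 1, tzinfo=timezone.utc))
# pulled == 2: one updated plugin and one brand-new one
```

Because each asset scan is preceded by a sync of this kind, no asset is ever checked against a stale plugin set.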
Assets are scanned using an intelligent queue that prioritizes new, never-before-seen devices and high-risk devices over known, high-availability resources such as servers. The result is a real-time view of the assets and risks on the network, prioritized with a risk score that highlights the most vulnerable or at-risk devices and allows remediation efforts to be focused where they matter most.
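The intelligent queue can be illustrated with a standard priority queue. The priority tiers below are an assumption for the sake of the example, not Nodeware's actual scheduling policy:

```python
import heapq
import itertools

# Lower number = scanned sooner. These tiers are illustrative only.
PRIORITY = {"new": 0, "high_risk": 1, "known": 2, "high_availability": 3}

counter = itertools.count()  # tie-breaker preserves insertion order
queue: list = []


def enqueue(asset: str, category: str) -> None:
    heapq.heappush(queue, (PRIORITY[category], next(counter), asset))


def next_asset() -> str:
    return heapq.heappop(queue)[2]


enqueue("file-server", "high_availability")
enqueue("printer-7", "known")
enqueue("unknown-laptop", "new")        # never-before-seen device
enqueue("legacy-xp-box", "high_risk")

order = [next_asset() for _ in range(4)]
# order == ["unknown-laptop", "legacy-xp-box", "printer-7", "file-server"]
```

Regardless of arrival order, unknown and high-risk devices surface first, while sensitive high-availability resources are deferred to minimize disruption.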
Verifying remediation is simple as well: users can run instant rescans to confirm that issues have been addressed. On-demand and scheduled reports can show trends and improvement, as well as adherence to an organization's security policies. Even organizations that require third-party audits for regulatory compliance benefit from fewer surprises during those scans and fewer costly rescans.
Four years and more than 45 million vulnerability scans later, Nodeware gives our partners and customers real-time visibility into their networks and the risks associated with the assets on them. We continue to deliver a modern, cloud-based platform that ensures every user has access to the latest vulnerability and threat intelligence and product improvements.