Verified Security Rules
Reduce attack surface by ensuring endpoints abide by rules
What is an endpoint?
An endpoint is hardware or software that contains a runtime of some sort responsible for managing applications.
A classic endpoint example is a laptop. This device consists of hardware, an operating system, and applications that sit on top. There are innumerable variations of this device, as different hardware can be paired with different operating systems and different applications can be installed on top of any combination, each configured differently.
What is consistent about an endpoint - whether a laptop or a traffic light - is that it has a surface area. A MacBook Air has a particular surface area allowing the user to install arbitrary software, compile code, or manipulate network traffic or memory space. Conversely, an iPad has a smaller surface area, as these actions are not allowed.
Most endpoints are made up of 8-10 components that describe their surface area, such as:
- Process Management.
- I/O Device Management.
- File Management.
- Network Management.
- Main Memory Management.
- Secondary Storage Management.
- Security Management.
- Command Interpreter System.
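For tooling that reasons about surface area, the component list above could be modeled as a simple enumeration. This is an illustrative sketch, not an official taxonomy; the type and member names are assumptions:

```python
from enum import Enum

class SurfaceAreaComponent(Enum):
    """Illustrative enumeration of the endpoint components listed above."""
    PROCESS_MANAGEMENT = "process management"
    IO_DEVICE_MANAGEMENT = "I/O device management"
    FILE_MANAGEMENT = "file management"
    NETWORK_MANAGEMENT = "network management"
    MAIN_MEMORY_MANAGEMENT = "main memory management"
    SECONDARY_STORAGE_MANAGEMENT = "secondary storage management"
    SECURITY_MANAGEMENT = "security management"
    COMMAND_INTERPRETER_SYSTEM = "command interpreter system"

# An endpoint's surface area can then be described as the subset of these
# components that a user (or attacker) can actually reach on the device.
```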
Rules were developed to cover these components from a computer security perspective.
What is a rule?
A rule is a statement that, if true, confines the surface area of an endpoint, resulting in a more secure device.
Rules are specific to the endpoint's operating system, the component ultimately responsible for protecting the device from software-based risk. For a rule to hold, the OS must either protect the device itself or have a third-party solution hooked in.
Every OS should abide by a set of rules that describe how to limit the surface area. These rules should always be in effect - or the resulting risk is accepted. Rules must be true in all cases, regardless of context. Understanding which rules are being followed versus which are being broken is critical intelligence for anyone responsible for IT and/or security management.
Because rules apply at the operating-system level, they naturally cover categories of systems, not individual ones. For example, rules will be similar across workstations but quite different on a container or mobile device.
Example rule: Malicious files should be quarantined
If a macOS rule states that malicious files should be quarantined and I have no antivirus on my MacBook, I’m accepting the associated risks. In this case, the risks are any actions that can follow a malicious file being dropped on my hard drive.
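As a concrete illustration, quarantine behavior is commonly probed with the industry-standard EICAR test file, which mainstream antivirus products recognize as "malicious" while it is actually harmless. The sketch below is a rough, hypothetical probe - the file path, wait time, and function name are arbitrary choices, not a real product's test:

```python
import os
import tempfile
import time

# The EICAR string is assembled from two fragments so this source file
# itself does not trip signature-based scanners.
EICAR = r"X5O!P%@AP[4\PZX54(P^)7CC)7}$" + "EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"

def quarantine_rule_enforced(wait_seconds: float = 5.0) -> bool:
    """Drop the EICAR test file and report whether it was quarantined/removed."""
    path = os.path.join(tempfile.gettempdir(), "eicar_probe.com")
    with open(path, "w") as f:
        f.write(EICAR)
    time.sleep(wait_seconds)   # give the antivirus a chance to react
    survived = os.path.exists(path)
    if survived:
        os.remove(path)        # clean up after ourselves
    return not survived        # True means the rule held on this endpoint
```

On a machine with no antivirus installed, the file survives and the probe returns False - exactly the accepted-risk scenario described above.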
For each rule, tests prove whether it is being enforced (or not). A test should not be all-encompassing but instead verify a specific implementation of the rule. This singular focus requires multiple tests to sit under each rule. The benefit of this approach is that new tests can be produced rapidly, as dictated by threat intelligence or vulnerability analysis.
While each test targets a specific attack exposure related to reducing surface area, to qualify as a test it must be provably tied to an attack in the wild. Furthermore, this proof must be grounded in either an example CVE or an ATT&CK technique/procedure that takes advantage of a positive test result. This means tests will always use known malicious or exploitable code in their implementation.
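Putting the pieces together, each rule carries several narrowly scoped tests, each grounded in a known attack reference. The shape below is hypothetical - the class names, test names, and the mapping of tests to techniques are assumptions, though T1204.002 (User Execution: Malicious File) and T1566.001 (Phishing: Spearphishing Attachment) are real ATT&CK technique IDs:

```python
from dataclasses import dataclass, field

@dataclass
class Test:
    """A narrowly scoped check grounded in a known attack."""
    name: str
    grounding: str  # a CVE ID or an ATT&CK technique/procedure ID

@dataclass
class Rule:
    """A statement that, if true, confines an endpoint's surface area."""
    statement: str
    tests: list[Test] = field(default_factory=list)

    def enforced(self, results: dict[str, bool]) -> bool:
        # The rule holds only if every one of its tests passes.
        return all(results.get(t.name, False) for t in self.tests)

quarantine = Rule(
    "Malicious files should be quarantined",
    tests=[
        Test("eicar-dropped-to-disk", "T1204.002"),   # User Execution: Malicious File
        Test("macro-laden-attachment", "T1566.001"),  # Phishing: Spearphishing Attachment
    ],
)
```

Because each test is singular, a new one (say, driven by a fresh CVE) can be appended to the rule's list without touching the others.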
The test validation process - a combination of peer review and programmatic testing through the Prelude Compute service - ensures that each test is safe to run.