White Paper
making network security secure
Risk Based Correlation
vs. Rule Based Correlation
OpenService, Inc., 100 Nickerson Road, Suite 100, Marlborough, MA 01752
800.892.3646 508.597.5300 info@openservice.com www.openservice.com
Contents
1.0. About OpenService, Inc.
2.0. Accuracy
3.0. Total Cost of Ownership
4.0. Efficiency
5.0. Event Order and Timing
6.0. Conclusions
7.0. Finite-State Engine
1.0. About OpenService, Inc.
OpenService, Inc. (Open) helps global enterprises and government organizations turn deployed
security systems into effective enterprise protection. OpenService offers integrated security
information management and network fault correlation applications that intelligently link events
from multiple sources to accurately pull the threat signal from the event noise using real-time
root cause analysis.
Founded in the early 1990s as an IT consultancy, OpenService developed the expertise and
products to collect, manage, and correlate large amounts of real-time data from disparate
sources. The company is well funded, with a growing track record of successful security
information management implementations; customers include Sonnenschein et al., Ace Hardware,
Raytheon, and Visa. OpenService led the enterprise security information management market with
public customer success stories during the first half of 2004, a testament to our values,
approach, and technology. Investors include Advent International, one of the world's leading
venture capital firms, which led an $8 million Series C round in November 2003.
Unlike security information management toolkits that can be expensive and time-consuming
to deploy and maintain, OpenService's software applications deploy in days, not months, and
provide a blended view of security and network metrics to effectively manage threats and meet
legislative compliance standards. Our security event management and network fault correlation
technologies are based on proven software solutions that have stood the test of time in major
corporations. OpenService’s track record of innovation shows how these trusted technologies
deliver the confidence that enterprise network security managers seek.
• Eight patents already granted on Security Threat Manager (STM) components.
• First Security Information Management vendor to be certified as "Nokia OK".
• Only vendor to deliver multiple published customer successes in 2004.
• First security event correlation product that detects threats before they become exploits.
• First SIM / SEM vendor to provide business security intelligence capabilities.
• First SIM product to deliver security operations business performance metrics.
Our continued innovation and leadership extends to relationships with leading enterprise IT
vendors such as Check Point, Hewlett-Packard, Micromuse and Akamai. For more information
visit OpenService online at www.openservice.com or email us at info@openservice.com.
2.0. Accuracy
Certain known exploits can be detected with certainty, but in general no system can provide
perfect intrusion detection. Merely examining n events over some period of time cannot
conclusively determine that a device has been exploited. Underlying IDS systems, even when
tuned, are notorious for reporting false positives. How, then, can a rule system that relies
exclusively on these types of inputs to make decisions be accurate in its assessments?
The risk-based approach relies on the preponderance of evidence across an enterprise when
making an assessment. Numerous factors are considered in the process, including the type of
event, the topological location of the event, and various attacker and target characteristics,
any of which may increase or decrease the impact a single event has on the overall risk score
of a device. Unlike a rules engine, the risk-based approach does not rely on fuzzy inference,
but on an educated and accurate assessment of the situation across an enterprise.
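As an illustrative sketch only (the event types, weights, and multipliers below are assumptions for the example, not OpenService's actual algorithm), a risk-based score might blend these factors like this:

```python
# Hypothetical risk scoring: a base weight per event type, scaled up or
# down by target value, attacker history, and topological location.
EVENT_WEIGHTS = {"port_scan": 5, "login_failure": 10, "buffer_overflow": 40}

def event_score(event_type, target_value, attacker_known, inside_perimeter):
    """Score one event in its own context."""
    score = EVENT_WEIGHTS.get(event_type, 1)
    score *= target_value            # e.g. 2.0 for a high-value server
    if attacker_known:
        score *= 1.5                 # a repeat offender raises the impact
    if inside_perimeter:
        score *= 2.0                 # topological location matters
    return score

def device_risk(events):
    """Preponderance of evidence: accumulate scores across all events."""
    return sum(event_score(*e) for e in events)

events = [
    ("port_scan", 2.0, False, False),
    ("login_failure", 2.0, True, True),
    ("buffer_overflow", 2.0, True, True),
]
print(device_risk(events))  # compare against a risk threshold to alarm
```

No single event here is conclusive, but the accumulated score across the enterprise's evidence is what gets compared to the threshold.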
3.0. Total Cost of Ownership
According to CERT, roughly 4,000 new vulnerabilities are discovered every year: more than 10
per day, including weekends. Many of these vulnerabilities include multiple attack vectors
and, therefore, require multiple rules to detect. Writing loose, generic rules will likely
lead to many false positives, while writing tight, concise rules (if it is even possible for a
given vector) is extremely time consuming, given the volume. Additionally, the rules engine
owner must make a substantial investment in developing expertise in the rules entry system.
Easy-to-use, GUI-based systems tend to be limited in the flexibility of rule creation, while
those with embedded scripting language processors require the security staff to spend
countless hours developing code rather than mitigating risks. The system becomes only as
effective as the creativity of the rule writer.
Risk-based systems focus mainly on the assets and their position in the network topology. As
new threats emerge, the assets remain constant and no system tuning or additional programming
is required. Instead, signature updates are received by the system so that new threats can be
incorporated into risk calculations. The algorithms themselves were developed over a period of
months by subject matter experts and have remained unchanged since their inception. The rules
system requires continual maintenance, while the risk algorithms have stood the test of time.
4.0. Efficiency
Many rules engines implement a variant of the Rete algorithm for rules processing, which
repeatedly applies a series of "if-then" conditionals against a data set. This algorithm,
while effective for expert systems, is not as efficient for the characteristics of security
event processing. The Rete algorithm calls for a memory of recently tested data sets to be
maintained so that they may be skipped on future iterations of the rule set if the data set
they represent has not changed. Unfortunately, the characteristics of an active network do not
cleanly fit this model, as high-value targets generally remain under constant assault. With
more targets constantly under monitoring, the expected efficiencies are not realized. To
mitigate this problem, constraints are applied to the system, such as dropping partially
matched rules over time or keeping the data sets on a slower, secondary storage medium (i.e.,
a database), reducing the effectiveness of the system.
Furthermore, it is recognized that static implementations of data processing algorithms, such
as the risk-based system, are better able to optimize both speed and memory consumption than
rules-based implementations.
Risk-Based Correlation - Unconstrained by Sliding Windows
The first event initiates a Correlation Instance. The instance immediately calculates a Risk
Score for this first event, compares that score to a Risk Threshold, and issues an alarm if
the threshold is crossed. A single alert sounds and rises in priority as events increase, so
the user is not overwhelmed with alerts. The illustration shows how the Alarm Priority changes
over time.
Rules-Based Correlation - Limited by a Sliding Window
The company presets the number of events and the detection window size. This example shows a
rule of 5 events occurring within a 20-second sliding window. A single alarm sounds for every
rule that is met, so the user can find himself inundated with alarms, not knowing which to
check first.
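The contrast can be sketched in code. The 5-event / 20-second rule is taken from the example above; the event spacing, weights, and threshold are assumptions for illustration:

```python
# Contrast: a sliding-window rule misses slow attacks entirely, while a
# risk score accumulates and eventually crosses its threshold.
from collections import deque

class SlidingWindowRule:
    """Rule: alarm when `count` events land inside a `window`-second span."""
    def __init__(self, count=5, window=20.0):
        self.count, self.window = count, window
        self.times = deque()

    def on_event(self, t):
        self.times.append(t)
        while self.times and t - self.times[0] > self.window:
            self.times.popleft()         # events slide out of the window
        return len(self.times) >= self.count

class RiskInstance:
    """Correlation instance: score only grows; no time bound."""
    def __init__(self, threshold=50):
        self.score, self.threshold = 0, threshold

    def on_event(self, weight):
        self.score += weight             # priority rises with each event
        return self.score >= self.threshold

# Six events, 6 seconds apart: never 5 inside any 20-second window,
# so the rule stays silent, while the risk score crosses its threshold.
rule = SlidingWindowRule()
print([rule.on_event(t) for t in (0, 6, 12, 18, 24, 30)])
risk = RiskInstance()
print([risk.on_event(15) for _ in range(6)])
```

An attacker pacing events just outside the window evades the rule indefinitely; the risk instance is indifferent to pacing.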
5.0. Event Order and Timing
To remain efficient, rule-based systems must be sensitive to the timing and ordering of
events. This problem becomes particularly difficult in a distributed environment, as events
arrive at various times due to network latency and scheduling issues. Now consider the
opportunity for evasion this gives an attacker who introduces a slight variation in the attack
vector, events generated out of order, or a timing delay. How can you assume the attack will
follow a set script during an exploit? If the script is reduced to a single guaranteed
recognizable event, then there is no correlation at all and the system is effectively reduced
to an IDS. The rules-based system becomes a slave to its own rules.
As already mentioned, in a risk-based system each event is considered in its own context as a
score for that event is determined. The score is the same whether the event comes before or
after another event or happens to be delayed for some reason. The risk-based system aggregates
data from across the enterprise to develop a complete picture of the risk associated with a
device, and the importance of precise timing and ordering of events in these algorithms is
therefore reduced.
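This order-independence can be shown with a small sketch (the per-event scores are made-up values): because the device risk is an accumulation of independently computed scores, every arrival order gives the same total.

```python
# Sketch: a sum of per-event scores is commutative, so reordering or
# delaying events cannot change the final risk assessment.
import itertools

event_scores = [12, 5, 40, 8]  # hypothetical per-event risk contributions

totals = {sum(order) for order in itertools.permutations(event_scores)}
print(totals)  # a single value: arrival order does not matter
```

A rules engine that encodes "A then B within t seconds" has no such property; the same four events shuffled or delayed can silently fail to match.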
6.0. Conclusions
If rules-based processing is so inferior, why is it so popular? Most people can easily
conceive of a simple rule to detect some condition and perform some action, while developing
and optimizing a risk algorithm is not trivial. However, managing a rules-based system does
not stop at developing a few rules; it involves managing and maintaining hundreds of rules,
combinations of rules, and a variety of actions associated with them.
7.0. Finite-State Engine
As an added benefit, using a finite-state engine in conjunction with the risk algorithms
enhances their effectiveness. A rule is time-bound by nature: a combination of events matching
some criteria within some period of time. This can lead to false negatives when the criteria
for the rule are met, but not within the time window (the sliding window). Additionally, rules
processing mostly takes place on events that have already been inserted into a database. Using
the database for correlation is inherently inefficient, as the database must process
continuous inserts while at the same time trying to process the rules queries. With
finite-state, in-memory processing there is no time-bound "sliding window" constraint, nor is
the inefficiency of a database method a factor.
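A minimal sketch of the idea, with names and the threshold assumed for illustration: each device's correlation state lives in memory and advances as events stream in, with no database round trip and no window expiry.

```python
# Hypothetical finite-state, in-memory correlation: one state instance per
# device, advanced directly on each incoming event.
RISK_THRESHOLD = 100

class DeviceState:
    """Created on a device's first event, then advanced in memory."""
    def __init__(self):
        self.risk = 0
        self.alarmed = False

    def advance(self, weight):
        self.risk += weight                  # no sliding-window expiry
        if not self.alarmed and self.risk >= RISK_THRESHOLD:
            self.alarmed = True              # alarm once per instance
            return "ALARM"
        return None

states = {}  # device -> DeviceState, all held in memory

def on_event(device, weight):
    return states.setdefault(device, DeviceState()).advance(weight)

for device, weight in [("web01", 30), ("db02", 10), ("web01", 30), ("web01", 45)]:
    if on_event(device, weight):
        print("alarm on", device)
```

Compare this to re-running a rules query against a database that is simultaneously absorbing every event insert: the state machine touches only the one instance the event belongs to.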