Prior to joining Exabeam, I was a private contractor attached to a special operations team, conducting surveillance and providing behavioral analytics to the team. Delivering analytics via drone surveillance did not have much in common with modern security information and event management (SIEM) platforms or the level of analytics they provide. At that time, SIEMs generally relied on static rules and could not offer the flexibility and context I had while conducting surveillance.
So, what do I mean by that? In the surveillance world, I was assigned an asset (a person, building, vehicle, or geolocation), and my responsibility was to monitor everything associated with that asset to establish normal behavior and recognize when things deviated from normal patterns. Some examples: How many people does person X associate with each day? How many different compounds do they visit? What clothing do they wear, what vehicle do they drive, what paths do they walk, what are their driving patterns?
If any one of these things deviated from normal, it wasn’t cause for alarm unless that one thing was significant enough to warrant escalation to the team. What did warrant escalation and heightened observation was broad deviation across several key aspects of an asset, and those deviations were never expected to arrive in any recognizable order. If an asset was seen driving a different vehicle, following different walking paths, visiting new compounds, or dressing differently, in any combination, we would direct resources toward that asset.
The problem with non-linear behavior
In real-world analytics like this, you don’t expect changes in behavior to occur in any particular order (A happened, then B happened, then C), and you certainly wouldn’t withhold an alarm or notification just because the changes happened out of sequence.
Unfortunately, this is exactly how most legacy SIEMs operate. Analysts are somehow expected to predict the exact order of events by which an environment will be compromised. This would be equivalent to me providing surveillance to the team but not warning them of a potentially dangerous situation because a person’s behavior didn’t change in an exact, expected order.
Stitching together a timeline from non-linear events
SIEMs must evolve to baseline all user and machine behavior and build profiles for an environment. Attacks can then be identified by anomalous behavior, regardless of the order in which it occurs. For example, if a profile deviates far enough from normal, an alarm is triggered once a user-defined risk threshold is crossed.
Some SIEMs only create baselines around authentication data, which can leave gaps in the attack chain and leave an organization susceptible to threats that fly under the radar. Some solutions do baseline a wide range of fields to provide a holistic profile of “normal” across things like badge access, email activity, database activity, asset logon and access, account creation and management, user agent strings, web traffic and more. When a profile begins to deviate across these fields in any order, risk is applied to the session. Once enough risk has been applied, the user or asset will be flagged for the security team to investigate.
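To make the idea concrete, here is a minimal sketch of that kind of scoring, assuming a hypothetical set of monitored fields, illustrative weights, and a user-defined threshold; none of this reflects any particular vendor's engine.

```python
# A minimal sketch (not any product's actual scoring engine) of how risk
# might accumulate across unrelated anomalies within a single session.
# Field names, weights, and the threshold below are illustrative assumptions.

ANOMALY_WEIGHTS = {
    "badge_access": 15,       # badged into an unusual building
    "email_activity": 10,     # unusual volume or recipients
    "database_activity": 25,  # queried a database this user never touches
    "asset_logon": 20,        # logged on to a new or critical asset
    "account_creation": 30,   # created a local or domain account
    "user_agent": 10,         # new user agent string
    "web_traffic": 10,        # unusual destinations or volume
}

RISK_THRESHOLD = 60  # user-defined; tuned to the organization's tolerance


def score_session(anomalous_fields):
    """Sum risk for every field that deviates from its baseline,
    regardless of the order in which the deviations occurred."""
    return sum(ANOMALY_WEIGHTS.get(field, 0) for field in anomalous_fields)


def should_flag(anomalous_fields):
    return score_session(anomalous_fields) >= RISK_THRESHOLD


# Example: several modest deviations, in no particular order, add up to a flag.
session = ["user_agent", "asset_logon", "database_activity", "email_activity"]
print(score_session(session), should_flag(session))  # 65 True
```

The point of the sketch is that no single deviation has to cross the threshold on its own, and the order of the deviations never enters the calculation.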
Importance of understanding non-linear events in a military setting
On a given day of surveillance, I would be asked to create a timeline of an asset’s activities. These timelines would be compared against previous activities to see whether there was deviation from standard behavior. Telling someone that Asset A entered compound B at a specific time lacked context unless you knew from previous timelines when Asset A typically entered compound B, so creating these timelines was crucial to providing context.
The same goes for creating timelines for users and machines in an organization. If User X logged onto the network at 3 a.m. and I triggered an alarm for that behavior alone, it would lack context unless I could also show what time this user typically logs onto the network.
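As a rough illustration, that baseline question ("is 3 a.m. unusual for this user?") comes down to comparing one event against the user's own history. The function, thresholds, and sample history below are all hypothetical.

```python
# A minimal sketch, assuming we have each user's historical logon hours.
# It answers: is a 3 a.m. logon unusual *for this particular user*?

from collections import Counter


def is_unusual_hour(history_hours, hour, min_observations=20, rare_fraction=0.02):
    """Flag an hour as unusual if this user has rarely (or never) logged on
    at that hour across their observed history. Thresholds are illustrative."""
    if len(history_hours) < min_observations:
        return False  # not enough history to call anything "normal" yet
    counts = Counter(history_hours)
    fraction = counts.get(hour, 0) / len(history_hours)
    return fraction < rare_fraction


# User X: months of 8 a.m. - 6 p.m. logons, then a logon at 3 a.m.
history = [8, 9, 9, 10, 17, 18] * 20   # 120 prior logon hours
print(is_unusual_hour(history, 3))     # True  - 3 a.m. deviates from this user's norm
print(is_unusual_hour(history, 9))     # False - 9 a.m. is routine for them
```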
Typical SIEM alert rules would be laid out like this:
User X logged in after hours: Fire Alarm A
User X logged in after hours and hit a critical server: Fire Alarm B
User X logged in after hours and hit a critical server and created a new local account on the asset: Fire Alarm C
These are the types of rules analysts have traditionally had to create, predicting in advance the exact pattern by which an environment would be compromised.
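For illustration, here is roughly what that order-dependent logic looks like in code; the event names and rule chain are made up and do not reflect any product's actual rule syntax. If the same three behaviors occur out of the predicted order, only the first alarm ever fires.

```python
# A sketch of a traditional, order-dependent correlation rule: each alarm
# only fires if the events arrive in exactly the sequence the analyst predicted.
# Event names and the rule chain are hypothetical.

RULE_CHAIN = ["after_hours_login", "critical_server_access", "local_account_created"]


def match_static_rules(events):
    """Return which alarm (A, B, or C) fires, walking the chain strictly in order."""
    depth = 0
    for event in events:
        if depth < len(RULE_CHAIN) and event == RULE_CHAIN[depth]:
            depth += 1
    return ["no alarm", "Alarm A", "Alarm B", "Alarm C"][depth]


# Same three suspicious events, observed out of the expected order:
print(match_static_rules(
    ["local_account_created", "critical_server_access", "after_hours_login"]
))  # "Alarm A" - only the first rule matches; the rest of the chain never fires
```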
With behavioral analytics, the baselines tell us whether it is normal for that user to log in after hours, hit critical servers, and create local accounts. And if the deviation from normal is great enough, it triggers one alert for the analyst to investigate, not numerous alerts they have to piece together.
Also, with behavioral analytics, you can get away from time-consuming tasks such as maintaining lengthy whitelists of users you would typically expect to perform these tasks. As those users take new roles in the organization or leave the company, the lists need to be updated; with behavioral analytics, the data models tell you whether the operations being performed are normal.
This type and level of profile building is what was expected of me when I conducted surveillance as a contractor for our military units, and this level of comprehensive intelligence should be what we expect from the SIEMs designed to protect our organizations. The detail, clarity, flexibility, and insight into an organization’s users and machines can and should mimic the human intelligence provided by military contractors, keeping our companies and our customers safe from cyberattacks.