Detecting Malicious Activities in Enterprise Applications
Rogue insiders and external attackers have become a growing concern in enterprise business applications. External attackers leverage stolen credentials to impersonate insiders and connect to applications, while insiders themselves are insufficiently monitored in SaaS and home-grown applications. This creates a risk that employees and admins will misuse their access and engage in malicious activities.
The market-wide shift from on-prem to SaaS technologies for business-critical functions, such as finance, HR, and operations, has extended the attack surface for malicious activities in applications, creating a greater market need for solutions that address Application Detection and Response (ADR) in a way that scales across many different applications. Even a perfect deployment, with no application layer vulnerabilities, is still exposed to impersonators and malicious insiders. Core business applications today are often poorly monitored, and as a result, misuse, abuse, and malicious activities in business applications are discovered only after complaints from the victims. Examples include a bank teller skimming cash, a customer service agent at an insurance company modifying a policy to add themselves as a beneficiary, or a salesperson who downloads a report of all customers to take along to a competitor. Detecting these breaches usually means manually sifting through large volumes of log data from multiple sources once a suspicion arises.
This makes ADR a massive pain point for enterprises, particularly for their core business applications (often home-grown, custom-built applications).
This white paper provides an overview of the cybersecurity detection landscape, zooms into a gap identified by RevealSecurity in the application layer, and outlines a solution to this gap. It is intended for CISOs, Information Security Officers, Risk Security Officers, and SOC Managers who are looking to augment cyber defenses with an accurate detection solution focusing on the business application layer.
A Gap in the Detection Landscape
Cybersecurity detection solutions nowadays are mostly focused on malicious activities at the access, network infrastructure and operating system layers. Accordingly, a wide range of solutions is available for users, networks, and devices, such as NDR on the network layer, EDR on the device layer and UEBA and CASB for the user/access layer. These detection solutions for users, networks and devices are based on two main technologies:
- Rules and patterns that define illegal or malicious behavior.
- Statistical volumetric/frequency methods, based on averages and standard deviations of activities, such as the number of logins, number of emails, etc. These methods are often referred to as User and Entity Behavior Analytics (UEBA): they set baselines for the average, standard deviation, median, and other statistical metrics, and then detect abnormal values against these baselines.
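To make the second approach concrete, here is a minimal sketch of this kind of statistical baselining (illustrative only; function and variable names are ours, not any vendor's implementation): the baseline is the mean and standard deviation of a per-user daily count, and a day is flagged when its z-score exceeds a threshold.

```python
from statistics import mean, stdev

def flag_anomalies(daily_counts, threshold=2.0):
    """UEBA-style volumetric baselining (illustrative sketch).

    Baseline = mean and standard deviation of historical daily counts;
    a day is flagged as anomalous if its z-score exceeds the threshold.
    """
    mu = mean(daily_counts)
    sigma = stdev(daily_counts)
    if sigma == 0:  # no variation, nothing to flag
        return []
    return [i for i, count in enumerate(daily_counts)
            if abs(count - mu) / sigma > threshold]

# Hypothetical daily login counts for one user; the spike on the
# last day is the only value flagged against the baseline.
logins = [12, 15, 11, 14, 13, 12, 90]
print(flag_anomalies(logins))  # → [6]
```

Note how a single statistic per activity type is all this model sees, which is exactly the limitation the next sections discuss.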
Rules and UEBA have been effective due to major commonalities in the network, device, and user access layers: the market by and large uses a limited set of network protocols and a handful of operating systems.
However, when it comes to the application layer, UEBA has failed due to the vast dissimilarities between applications. Models have therefore been developed only for a limited set of application layer scenarios, such as in the financial sector (credit card fraud, anti-money laundering, etc.). As a result, bespoke rules written for a specific application continue to be the most common detection solution for applications.
The First Generation of Detection Solutions - Rules
Rules were the first generation of cybersecurity detection technology, but they only detect known patterns, while attackers constantly exploit loopholes, leading to false negatives. Rules also require expensive experts to develop and maintain. Because each application is different, one must be intimately familiar with each application’s business logic, log formats, usage patterns, etc., in order to write and manage rules for detecting application breaches. Despite this continuous, high level of maintenance, rules still generate many false positives when applied across an entire population, because rules tuned for most users are often wrong for the rest.
Consequently, rule-based detection solutions are notoriously problematic: they generate numerous false positives and false negatives, and they don’t scale across many applications.
The Second Generation of Detection Solutions - UEBA
Over a decade ago, the security market adopted statistical analysis to augment rule-based solutions in an attempt to provide more accurate detection for the infrastructure and access layers. However, User and Entity Behavior Analytics (UEBA) failed to deliver on its promise of dramatically increased accuracy and fewer false positive alerts, due to a fundamentally mistaken assumption: that user behavior can be characterized by statistical quantities, such as the average daily number of activities. This assumption is built into UEBA, which characterizes a user by averages of individual activities. In reality, people don’t have “average behaviors,” and it is therefore futile to try to characterize human behavior with quantities such as the average, standard deviation, or median of a single activity.
As an example of non-average behavior, meet David, a personal banking account manager at a major bank. As part of his normal daily activities, David has a variety of different professional working profiles:
- He may be called by a customer to perform a bank transfer on her behalf, either externally, between branches, or between accounts at the same branch.
- At other times, he may assist a customer with the buying and selling of various stocks.
- On a monthly basis, David generates a status report of all customers under his responsibility and emails it to his manager.
Computing an average of the daily activities in David’s workday would be meaningless. We should focus instead on learning David’s multiple typical activity profiles.
The Third Generation of Detection Solutions - Accurate Detection with User Journeys
The main criterion for success in a detection solution is accuracy, which is dictated by the number of false positives and the number of false negatives. The diagram below illustrates the evolution of three generations of detection solutions. The first generation is based on rules, while the second generation is based on frequency/volumetric analysis. As explained above, both failed to provide the accuracy required for successful cybersecurity detection.
This contemporary third generation of solutions instead uses sequences of activities, i.e., journeys, to contextualize activity and improve detection accuracy. Cisco’s NetFlow was the first to do this, at the network layer, analyzing sequences of packets instead of individual packets. NetFlow demonstrated that sequences of packets enable higher accuracy by providing enriched context. In recent years there have also been initial attempts to apply this idea of sequence-based detection to infrastructure and access security.
RevealSecurity’s innovation is based on implementing this concept of sequence-based detection on the application layer, to analyze user journeys in applications and to detect abnormal user journeys highly accurately. By a user journey we refer to the sequence of activities performed by the user in any application, be it a SaaS application, a custom-built application, or a cloud application or service.
Analysis of user journeys can accurately detect impostors, as it is very difficult to imitate a user’s normal journey in an application. It will also accurately detect insiders looking to misuse or abuse an application as they would then deviate from their normal user journey profiles.
As an example, think of a bank with many rooms, including a vault room holding precious articles: cash, gold, jewelry, documents, etc. The bank of course has a main entrance, and the vault has its own door, which people go through to deposit or withdraw precious goods.
People walk through the front door, entering and leaving the bank. They may walk in and out of the vault and perform various activities in that room itself.
Our goal is to find misuse and theft in the vault. However, just monitoring the vault’s door and actions doesn’t provide enough information for accurate detection, as most of the people involved are performing legitimate actions there.
Analyzing the path people take from the moment they enter through the front door of the bank, as they pass throughout the hallways and rooms – to, in and from the vault – enables us to learn which journeys are normal and expected. These normal journeys provide our base for detection. We find malicious journeys by comparing each user journey to their learned normal journeys, because malicious users are likely to use a journey that is different from normal: maybe their journey in the bank is longer because they don’t know where they’re going; or maybe they just quickly go in and out as fast as possible, to avoid raising suspicion.
The accurate detection of malicious behavior via analysis of user journeys is based on the underlying assumption that an abnormal session is characterized by a journey which isn’t similar to the user’s typical journeys in an application. Thus, by learning typical journeys and creating normative journey profiles, we can accurately detect abnormal journeys, which are highly correlated to malicious activities.
User journeys are fundamental to RevealSecurity’s solution, TrackerIQ, so let’s define them before we continue. A sequence of application layer activities typically constitutes a user session, and we denote this sequence a user journey in the application. Thus, a user journey portrays what a user has done within an application session.
TrackerIQ Application Layer User Journey Analytics
While User Behavior Analytics is about a single baseline for each activity and an analysis of each activity on its own, User Journey Analytics looks at sequences of activities and learns for each user the complete set of typical user journeys in an application. This enables TrackerIQ to achieve extremely accurate detection.
The implementation of these concepts to detect malicious insiders and imposters from information in the application logs raises two major obstacles:
- First, users don’t have a single route, or an “average” route. Each user has many typical activity journeys in each application, in addition to multiple journeys across applications. To find anomalies, the TrackerIQ detection solution must be able to automatically learn all these multiple typical journeys.
- The second obstacle is that of ubiquity, which is described below.
A Ubiquitous Model
Each application has a bespoke set of activities and log formats. To be able to apply TrackerIQ to any application layer log, TrackerIQ must be agnostic to the meaning of the application activities and application log records.
To accomplish that, as TrackerIQ analyzes the user journey in an application session, it uses sequence characteristics that can be extracted from any sequence of log events, irrespective of the application logic:
- The set of activities, each denoted by numeric codes
- The order in which activities were performed in the session
- The time intervals between activities during the session
These three characteristics are agnostic to the meaning of activities and can therefore be applied to any application session, and even to sessions across applications.
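A minimal sketch of this application-agnostic representation (the field and function names below are illustrative, not TrackerIQ's actual schema): a session is reduced to its ordered activity codes and the time gaps between them, with no reference to what the codes mean.

```python
from dataclasses import dataclass, field

@dataclass
class Journey:
    """Application-agnostic view of one user session.

    Captures only the three characteristics described above: the set of
    activity codes, the order they were performed in, and the intervals
    between consecutive activities.
    """
    activities: list = field(default_factory=list)  # codes, in order
    intervals: list = field(default_factory=list)   # seconds between activities

def build_journey(events):
    """events: list of (timestamp_seconds, activity_code), sorted by time."""
    codes = [code for _, code in events]
    gaps = [events[i + 1][0] - events[i][0] for i in range(len(events) - 1)]
    return Journey(activities=codes, intervals=gaps)

# A hypothetical five-activity session; each activity is just a number.
session = [(0, 17), (4, 23), (9, 23), (40, 5), (41, 17)]
j = build_journey(session)
print(j.activities)  # → [17, 23, 23, 5, 17]
print(j.intervals)   # → [4, 5, 31, 1]
```

Because nothing here depends on what code 17 or 23 means, the same representation works for any application's log.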
The accompanying figure illustrates the three characteristics of a user journey based on five activities, each denoted by a number (from the model’s perspective, an activity is simply a numeric code).
Learning Multiple Typical User Journeys
As explained above, TrackerIQ learns all common user journeys for accurate detection. It often learns many user journeys, especially when TrackerIQ is analyzing the journeys of a cohort of users.
To learn these user journeys, TrackerIQ uses clustering technology (described below), which groups similar data points together (in our case, user sessions). TrackerIQ applies clustering to group similar sessions together and then builds a typical user journey profile from each such group of similar sessions. This process runs continuously as new log data becomes available.
Once typical journey profiles have been learned for a user, TrackerIQ starts checking every new session, to see if it is similar to one of the typical user journeys learned for this user. An anomaly is detected when the current user journey is not similar to any of the user journey profiles learned for the user.
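One way to make "similar to a typical journey" concrete is a normalized edit distance between activity-code sequences. This is a generic stand-in for RevealSecurity's proprietary similarity measure, and the threshold value is arbitrary, but it captures the logic: a session is anomalous only if it is far from every learned profile.

```python
def edit_distance(a, b):
    """Levenshtein distance between two activity-code sequences."""
    m, n = len(a), len(b)
    dp = list(range(n + 1))  # rolling single-row DP table
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,        # deletion
                        dp[j - 1] + 1,    # insertion
                        prev + (a[i - 1] != b[j - 1]))  # substitution
            prev = cur
    return dp[n]

def is_anomalous(session, profiles, max_dissimilarity=0.3):
    """Anomalous = not close to ANY of the user's learned journey profiles."""
    for profile in profiles:
        d = edit_distance(session, profile) / max(len(session), len(profile))
        if d <= max_dissimilarity:
            return False  # matches at least one typical journey
    return True

profiles = [[1, 2, 3, 4], [1, 5, 5, 6]]       # learned typical journeys
print(is_anomalous([1, 2, 3, 4], profiles))   # → False (matches a profile)
print(is_anomalous([9, 9, 8, 7], profiles))   # → True  (matches nothing)
```

The key design point is the "not similar to any profile" rule: a user with five distinct typical journeys is not penalized for switching between them.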
To detect scenarios in which users behave differently than their peer group, TrackerIQ also compares a user journey against typical user journeys learned for the cohort of users to which the user belongs.
The TrackerIQ Clustering Engine
There are a variety of clustering engines available on the market, but they were deemed unsuitable for TrackerIQ’s user journey clustering: they all suffer from at least a 10% margin of error, which is unacceptable when clustering results drive cybersecurity anomaly detection. Classical clustering also requires presetting the number of clusters in advance, which isn’t practical for typical user journeys in applications. Most importantly, classical clustering engines require the cleansing of outliers, as outliers degrade clustering results; yet these outliers are exactly what we are looking for in anomaly detection.
RevealSecurity developed a unique clustering engine tailored for sequence clustering. Our clustering engine doesn’t require prior knowledge as to how many clusters to generate. It is also extremely accurate, while still almost linear in the number of data points it clusters. The engine detects outliers – removing them from the data set to enhance clustering accuracy, while also identifying these outliers as anomalies. Thus, the same clustering engine that generates groups of similar user journeys, also detects abnormal user journeys, and reports them as anomalies in historical data used for learning normative user journeys.
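The two properties described here, no preset cluster count and outliers surfaced as anomalies rather than discarded, can be illustrated with a simplified, DBSCAN-flavored sketch (no cluster expansion, toy Jaccard distance over activity sets). This is a generic illustration of the idea, not RevealSecurity's engine.

```python
def jaccard(a, b):
    """Distance between the activity SETS of two sessions (0 = identical)."""
    sa, sb = set(a), set(b)
    return 1 - len(sa & sb) / len(sa | sb)

def cluster_sequences(seqs, dist, eps=0.5, min_pts=2):
    """Density-style clustering sketch: the number of clusters emerges
    from the data, and sessions that join no cluster get label -1,
    i.e., they are reported as outliers/anomalies instead of discarded."""
    labels = [None] * len(seqs)
    cluster = 0
    for i in range(len(seqs)):
        if labels[i] is not None:
            continue
        neighbors = [j for j in range(len(seqs))
                     if j != i and dist(seqs[i], seqs[j]) <= eps]
        if len(neighbors) + 1 < min_pts:
            labels[i] = -1  # outlier: surfaced as an anomaly
            continue
        labels[i] = cluster
        for j in neighbors:
            if labels[j] in (None, -1):
                labels[j] = cluster
        cluster += 1
    return labels

# Three similar sessions form one cluster; the odd one out is an outlier.
labels = cluster_sequences([[1, 2, 3], [1, 2, 3], [1, 2, 4], [9, 9]], jaccard)
print(labels)  # → [0, 0, 0, -1]
```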
Information security professionals look for anomalies which impact their business, so not every anomaly is “interesting.” Anomalies that contain sensitive activities from a business perspective are of course more interesting. Ranking anomalies based on their sensitive activities enables TrackerIQ to alert analysts only about anomalies they care about.
For example: Daniella uses a mobile banking application but has never shown an interest in the stock market. In the current session Daniella uses her mobile banking application to look at stocks. This is an anomaly, but not an interesting anomaly from a business perspective. Daniella also rarely transfers funds with her mobile banking application. When she transfers funds three times in a session, this generates an anomaly that is very interesting from a business perspective.
TrackerIQ enables the enterprise to provide a set of sensitive activities for its applications (with sensitivity expressed on a scale from 0 to 10). For SaaS applications supported by TrackerIQ, the sensitivities are already included out-of-the-box, and only need to be changed if the enterprise has different needs.
A sensitivity score is computed for each user journey (i.e. session) based on this list of sensitive activities. The final risk score calculated for anomalous user journeys combines the user journey anomaly score with its sensitivity score. Thus, anomalies with high sensitivity are ranked higher than anomalies with low sensitivity. This enables analysts to focus on anomalies that are meaningful from a business perspective.
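The scoring described above can be sketched as follows. The 0-to-10 sensitivity scale comes from the paper; the max-based aggregation and the weighted-average combination rule are illustrative choices, not TrackerIQ's actual formula, and the activity names are hypothetical.

```python
def sensitivity_score(journey, sensitivity, scale=10.0):
    """Highest sensitivity (0-10) among the session's activities,
    normalized to [0, 1]. Max-aggregation is an illustrative choice."""
    return max((sensitivity.get(a, 0) for a in journey), default=0) / scale

def risk_score(anomaly_score, sens_score, weight=0.5):
    """Combine anomaly and sensitivity scores into a final risk score.
    The weighted average is a hypothetical combination rule."""
    return weight * anomaly_score + (1 - weight) * sens_score

# Hypothetical per-activity sensitivities for a banking application.
sensitivity = {"view_stocks": 1, "transfer_funds": 9}

# Two equally anomalous sessions (anomaly score 0.8); the one touching
# a sensitive activity is ranked much higher.
low = risk_score(0.8, sensitivity_score(["view_stocks"], sensitivity))
high = risk_score(0.8, sensitivity_score(["transfer_funds"] * 3, sensitivity))
print(low, high)  # → 0.45 0.85
```

This matches the Daniella example: both sessions are anomalies, but only the fund-transfer session surfaces near the top of the analyst's queue.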
TrackerIQ provides analysts with a set of tools for user journey investigation. This enables a quick decision as to whether a further deep dive is required for each of the anomalies detected.
Log content is comprised of codes that are rarely human-readable. TrackerIQ, however, can translate these codes to present the information to the analyst in a human-readable form, enabling quick and effective investigation. For home-grown, custom-built applications, customers provide a translation table via a simple text file.
TrackerIQ is a detection and investigation engine that augments any log repository. It is composed of several modules:
- The first module is a log collection module that retrieves application log records from almost any type of log repository – whether a database, a SIEM such as Splunk, files, or APIs (used mainly for SaaS applications). The module extracts and transforms these log records into its internal model.
- The second module is the user journey builder, which builds user journeys (i.e., sessions) out of the transformed log records. As previously explained, each user journey (or session) is a sequence of application layer activities performed by the user during a user session in the business application.
- The third module is the user journey profiler, which generates a set of typical/normative user journey profiles for each user (or cohort of users).
- The fourth module is the user journey scoring module, which checks each user journey according to its resemblance to the user’s typical journeys (learned by the user journey profiler) to detect abnormal user journeys. When an anomaly is detected, its score is computed from a combination of its anomaly score and its sensitivity score.
In addition, an easy-to-use GUI is provided to manage and configure TrackerIQ, and to investigate detected anomalies.
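The flow through these modules can be sketched end to end as follows. The helper names, the session-grouping logic, and the toy exact-match scoring are all illustrative stand-ins for the collection, journey-building, and scoring modules described above.

```python
from collections import defaultdict

def build_sessions(events):
    """Journey-builder sketch (module 2): group (user, session_id,
    activity) records into per-session activity sequences."""
    sessions = defaultdict(list)
    for user, session_id, activity in events:
        sessions[(user, session_id)].append(activity)
    return sessions

def score_session(activities, profiles):
    """Scoring sketch (module 4): 1.0 if the sequence matches no learned
    profile, else 0.0 (a toy stand-in for a real similarity measure)."""
    return 0.0 if activities in profiles else 1.0

# Already-collected, already-transformed log records (module 1's output).
events = [
    ("david", "s1", "login"), ("david", "s1", "transfer"),
    ("david", "s2", "login"), ("david", "s2", "export_all"),
]
# Typical journeys learned by the profiler (module 3, not shown).
profiles = [["login", "transfer"]]

for (user, session_id), activities in build_sessions(events).items():
    print(user, session_id, score_session(activities, profiles))
```

Session s1 matches a learned journey and scores 0.0, while s2 (a bulk export David has never done) scores 1.0 and would then be ranked by its sensitivity, as described above.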
RevealSecurity detects abuse, misuse and malice at the application layer, from insiders as well as imposters. As explained above, TrackerIQ has unique differentiators that provide a strong value proposition: