
Should We Care About the Theft of One Dollar?

February 1, 2023

Imagine that the head of security at one of the nation’s leading financial institutions receives a call from their team because $500,000 has gone missing. After long hours of analyzing transactions, the team traces the missing money to an employee who had also stolen $1 six months earlier.

The employee in question had made several $1 transactions to their own account on the company’s claim settlement portal. Once the employee realized that no one was scrutinizing these transactions, they grew bolder and began embezzling more significant sums. Eventually, greed caught up with them when they tried to send $500,000, at which point the security team detected the incident and swung into action.

This is a real-life example from an insurance company.

Insider threat: What you can’t detect makes you vulnerable

Many of today’s threats to financial institutions worldwide come not only from external attackers, but from within, or from external actors using stolen credentials of authenticated users. As a result, financial institutions are tightening their security to watch for potential misuse or abuse by employees and contractors using their SaaS and custom-built applications.

Cybersecurity technology solutions enable the detection of malicious activities on networks, operating systems, and devices. Malicious activity and fraud are primarily detected by two methods:

  • Rule- and signature-based detection, which identifies potential malicious behavior through rules and known-bad indicators.
  • Statistical, volumetric frequency methods, also known as User and Entity Behavior Analytics (UEBA).


These solutions have proven effective at the network, endpoint, and access layers. But at the application layer, these methods of detection and response fall short. Assessing abnormal user behavior against average daily activity does not deliver accurate results, because there is no such thing as ‘average’ behavior.
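To make the limitation concrete, here is a minimal sketch of a volumetric, threshold-style check of the kind UEBA tools often rely on. The data, names, and multiplier are hypothetical, and this illustrates the general approach rather than any specific product’s logic; the point is that a handful of $1 self-credits barely moves a daily total, so a baseline built on volume never fires.

from statistics import mean

def volumetric_alerts(daily_totals, multiplier=3.0):
    """Flag users whose latest daily total exceeds `multiplier` x their historical average."""
    flagged = []
    for user, totals in daily_totals.items():
        history, latest = totals[:-1], totals[-1]
        if history and latest > multiplier * mean(history):
            flagged.append(user)
    return flagged

# Hypothetical claim-settlement totals: the last day includes an extra $1 self-credit.
daily_totals = {"claims_manager": [12_400.0, 9_800.0, 11_050.0, 11_051.0]}
print(volumetric_alerts(daily_totals))  # -> []  (the $1 anomaly goes unnoticed)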

Let’s take, for instance, a manager at an insurance company: some of her days are spent settling claims and transferring money to client accounts. On other days she prepares reports, and towards the end of the quarter she spends a few days preparing a presentation of her department’s activity. The manager doesn’t have an ‘average’ daily behavior; she does different things all the time.

So, how can we detect intentional misuse from within? We must construct user journeys across business applications and learn the typical usage patterns of internal and external users.

User journey analytics for insider threat detection

User journey analytics does not look at a single activity from a single user. Instead, it analyzes sequences of activities performed by a given user and forms a set of journey profiles that this user follows in an application. As users perform multiple actions in different sequences and at different time intervals, the method learns what constitutes a ‘typical’ user journey for each user. When an employee performs a sequence of actions that falls outside these normative user journeys, the method flags the changed journey as an ‘outlier.’
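As a rough illustration of the concept, and not RevealSecurity’s actual algorithm, the sketch below learns a user’s known journeys as ordered lists of actions and flags a new session that is not sufficiently similar to any of them. The action names, sample journeys, and similarity threshold are all hypothetical.

from difflib import SequenceMatcher

def journey_similarity(a, b):
    """Similarity (0..1) between two journeys, based on matching activity subsequences."""
    return SequenceMatcher(None, a, b).ratio()

def is_outlier(new_journey, known_journeys, threshold=0.8):
    """A journey is an outlier if it resembles none of the user's learned journeys."""
    return all(journey_similarity(new_journey, k) < threshold for k in known_journeys)

# Journeys the claims manager typically performs in the settlement portal (hypothetical).
learned = [
    ["login", "open_claim", "review_documents", "approve_claim", "transfer_to_client", "logout"],
    ["login", "open_claim", "request_documents", "logout"],
    ["login", "run_report", "export_report", "logout"],
]

typical = ["login", "open_claim", "review_documents", "approve_claim", "transfer_to_client", "logout"]
suspicious = ["login", "open_claim", "approve_claim", "transfer_to_own_account", "logout"]

print(is_outlier(typical, learned))     # -> False: matches a learned journey
print(is_outlier(suspicious, learned))  # -> True: flagged for review

In practice, such profiles would be learned at scale, per user and per application, but the core idea of profiling sequences of actions rather than single events is the same.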

Learning user journeys at scale to prevent threats

Let’s return to the example we started with. By deploying user journey analytics, the insurance company would have seen instances of anomalous behavior when the employee credited $1 to their own account. This anomaly would have raised an alert about potential malicious activity, narrowing the focus to the employee in question and enabling timely intervention.

This post was originally published in VentureBeat on Jan 28, 2023.
