Identifying the threat behaviors associated with an alert allows you to conclude whether a threat is present. If you understand how an attacker could benefit from the reported action, it will be easier to assess whether there is malicious intent.
A great tool to help you identify threat behavior is the MITRE ATT&CK framework.
It is common practice for security alert use cases to have an ATT&CK mapping. If you see an identifier such as T1078, that is the corresponding ATT&CK technique (in this case, Valid Accounts). Take time to familiarize yourself with that technique, how it has been used in the past, and for what purpose.
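As a minimal sketch, technique identifiers can be pulled out of an alert description with a simple pattern match. The alert text and the small technique mapping below are hypothetical stand-ins for a real ATT&CK knowledge base:

```python
import re

# Hypothetical subset of ATT&CK technique IDs and names for illustration.
TECHNIQUES = {
    "T1078": "Valid Accounts",
    "T1059": "Command and Scripting Interpreter",
}

def extract_techniques(alert_text):
    """Return (id, name) pairs for ATT&CK IDs (e.g. T1078 or T1078.004) in text."""
    ids = re.findall(r"\bT\d{4}(?:\.\d{3})?\b", alert_text)
    return [(t, TECHNIQUES.get(t, "unknown")) for t in ids]

print(extract_techniques("Suspicious login flagged; mapped to T1078."))
# → [('T1078', 'Valid Accounts')]
```

A real implementation would query the full ATT&CK dataset rather than a hard-coded dictionary, but the lookup step is the same.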
In many cases, the entities associated with an alert will be recorded in the alert body.
Key entities to identify include:

- Users and accounts
- Hosts and devices
- IP addresses
- Processes and files (including file hashes)
- External domains and URLs
Once you have identified the entities involved with an alert, it's time to figure out what they are. Often, you won't have specific or deep knowledge of every entity involved, but you can begin to bucket them into categorical groups based on their unique attributes. This grouping will help you determine whether this set of entities should be involved in the reported activity.
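The bucketing step can be sketched as a classifier that sorts an entity into a coarse category based on its shape. The categories below are illustrative, not an exhaustive taxonomy:

```python
import ipaddress
import re

def classify_entity(value):
    """Bucket an alert entity into a coarse category based on its attributes."""
    try:
        ipaddress.ip_address(value)  # accepts both IPv4 and IPv6 literals
        return "ip_address"
    except ValueError:
        pass
    if re.fullmatch(r"[0-9a-fA-F]{64}", value):  # 64 hex chars → SHA-256
        return "file_hash_sha256"
    if "@" in value:
        return "email_address"
    return "hostname_or_username"  # fallback bucket for everything else

print(classify_entity("10.0.0.5"))          # → ip_address
print(classify_entity("jdoe@example.com"))  # → email_address
```

In practice each bucket would then drive a different enrichment path (directory lookup for users, asset inventory for hosts, and so on).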
VERIS (the Vocabulary for Event Recording and Incident Sharing), a taxonomy for describing cyber events in categorical terms, is a good resource for understanding which entity classifications are relevant in a security context.
Many organizations have databases or systems, such as a CMDB or a directory service, that hold specific information about internal entities. Often, you will need to translate an identifier into a real entity name. In most cases, there is a one-to-one mapping, such as a user ID to a username to an email address.
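That translation chain can be sketched as a simple lookup. The in-memory dictionary below is a hypothetical stand-in for a CMDB or directory service query:

```python
# Hypothetical stand-in for a directory service; real lookups would hit
# LDAP, a CMDB API, or similar.
USER_DIRECTORY = {
    "u12345": {"username": "jdoe", "email": "jdoe@example.com"},
}

def resolve_identity(user_id, directory=USER_DIRECTORY):
    """Translate a raw user ID into the (username, email) pair it maps to.

    Returns None for an unknown ID rather than guessing.
    """
    record = directory.get(user_id)
    if record is None:
        return None
    return (record["username"], record["email"])

print(resolve_identity("u12345"))  # → ('jdoe', 'jdoe@example.com')
```

Returning None for unknown IDs keeps the "one-to-one or nothing" property explicit, which matters when an identifier unexpectedly fails to resolve.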
Some information, such as an IP address, can be dynamically assigned to one or more entities. Be skeptical of information about IP addresses and instead identify the underlying entity that has been assigned that address. In particular, the geolocation and blocklist status of an IP address are very unreliable and should be verified by other means when possible.
A great place to start when looking for related activity is to search each entity against other recent security alerts. Identifying other related alerts can help you figure out if other suspicious actions have occurred and can help you deconflict your investigation with others that your colleagues might be running. Also, related security alerts can sometimes help you identify additional entities that might relate to the alert you are investigating.
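Searching each entity against recent alerts can be sketched as a filter over an alert store. The alert records and time window below are hypothetical; a real version would query your SIEM:

```python
from datetime import datetime, timedelta

# Hypothetical recent-alert store; a real one would be a SIEM search.
RECENT_ALERTS = [
    {"name": "Impossible travel", "entities": {"jdoe", "10.0.0.5"},
     "time": datetime(2024, 1, 2, 9, 30)},
    {"name": "Malware detected", "entities": {"host-77"},
     "time": datetime(2024, 1, 1, 8, 0)},
]

def related_alerts(entities, alerts=RECENT_ALERTS, window=timedelta(days=7),
                   now=datetime(2024, 1, 2, 12, 0)):
    """Return names of recent alerts sharing at least one entity with this one."""
    cutoff = now - window
    return [a["name"] for a in alerts
            if a["time"] >= cutoff and a["entities"] & set(entities)]

print(related_alerts({"jdoe"}))  # → ['Impossible travel']
```

The set intersection also surfaces additional entities attached to the matched alerts, which is exactly the pivot described above.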
It is also helpful to search for other alerts of the same type to understand what similar activity has been detected in the past and how others have treated prior investigations of a similar type. Typically, you can search for the alert name to find similar alerts. Remember that every investigation is unique, and you shouldn't rely solely on conclusions from past events.
The best way to start is to have a set of hypotheses, some covering cases where this alert represents normal business activity and others covering cases where this alert represents a threat to your organization.
Based on what you know so far, if you can't think of a reason this activity could be a threat, then you probably know enough to mark it as a false positive.
Similarly, if you don't know any cases where this alert should be considered a false positive, you should escalate the case to a senior analyst.
When you have a set of hypotheses, compile a list of questions to answer until you have enough conviction to make a final decision about the alert.
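One way to keep this organized is to track each hypothesis with its open questions, so you can see at a glance what still blocks a decision. The structure and example content below are illustrative:

```python
# Hypothetical hypothesis tracker for a single alert; the labels and
# questions are illustrative, not a prescribed schema.
hypotheses = [
    {"label": "Benign: user traveling for work", "threat": False,
     "open_questions": ["Is the user on approved travel?"]},
    {"label": "Credential theft from a new location", "threat": True,
     "open_questions": ["Did the login use a known device?",
                        "Was MFA satisfied?"]},
]

def remaining_questions(hyps):
    """Flatten the open questions still blocking a final decision."""
    return [q for h in hyps for q in h["open_questions"]]

for q in remaining_questions(hypotheses):
    print(q)
```

When the list of remaining questions is empty, every hypothesis has been confirmed or ruled out and you are ready to make the call.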
For example, if a user has downloaded proprietary data while signed in from a new location, you might need to identify if the user is expected to be in that new location. As a next step, you might decide to send a message to the user's manager to inquire if this user is currently on travel.
With experience, and some additional research, you will start to identify more reasons why alerts may fire on non-threat behavior. Check below for threat-family-specific playbooks that have more detail about threat scenarios and false positive conditions.