August 2, 2023
Optimize your SIEM implementation by avoiding redundant analysis and focusing on the highest-value log data first.
Deciding which logs to analyze is an important step in the process of SIEM implementation. Every organization must answer this question based on its own network infrastructure, security posture, and risk profile.
Your organization’s approach to log collection and analysis will have a significant impact on the success and sustainability of its SIEM implementation initiative.
To generate lasting value for the security team, your SIEM solution must be cost-effective to operate.
Why Not Just Send Everything to Your SIEM?
At first glance, the log capture problem appears to have a simple solution. It’s easy to assume that the best way to guarantee security and visibility is to capture and analyze all log data from throughout the entire organization.
This is rarely the case.
While it’s true that SIEM platform performance relies on capturing and analyzing high volumes of data, that doesn’t mean more data is always better. At a certain point, the cost of sending every single bit of data to your SIEM for analysis outweighs the security benefits.
These additional costs can be significant. The higher your SIEM platform’s events-per-second (EPS) and gigabyte-per-day (GB/day) log volume, the more the organization must pay to maintain it. These metrics can increase substantially as the organization expands, creating a barrier to growth.
How Do You Choose Which Logs to Analyze?
Once security leaders understand that they don’t have to analyze every single log, choosing what logs to send to the SIEM becomes a complex issue. It’s especially challenging for organizations that don’t have deep visibility into their network infrastructure.
This is the problem that observability platforms like Cribl are designed to solve. Without clear visibility into what specific logs contain, security leaders can’t be confident that those logs are safe to discard. Beyond a few simple cases where network assets generate redundant logs, there is considerable uncertainty about what optimal log management looks like.
For example, consider the difference in volume and value for the following types of logs:
- Command history is a relatively low-value, low-volume type of log data for real-time monitoring. It’s more useful for post-incident analysis.
- Windows event logs from Active Directory and individual workstations can be valuable, but their value depends on context. You need to filter for the most important event IDs.
- DNS logs don’t take up a great deal of storage volume but carry much more value from a security perspective.
- NetFlow logs are high-volume, and their real-time monitoring value is better realized in a dedicated network anomaly detection tool than in the SIEM.
- Proxy logs provide a high volume of high-value data for SIEM analysis.
Security leaders should treat these logs differently based on the value they represent to security incident investigations and the volume-related costs of analyzing that type of log.
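To make this concrete, here is a minimal sketch of a pre-SIEM filter that forwards only high-value Windows security events and archives the rest. The specific event IDs and the `route_event` helper are illustrative assumptions, not a prescribed allowlist; your own policy should reflect your incident response playbooks.

```python
# Hypothetical pre-SIEM routing filter (illustrative only).
# Forward high-value Windows security events to the SIEM in real time;
# keep everything else in cheaper long-term storage for later analysis.
# Example event IDs: 4625 = failed logon, 4672 = special privileges
# assigned to a new logon, 4720 = user account created.

HIGH_VALUE_EVENT_IDS = {4625, 4672, 4720}

def route_event(event: dict) -> str:
    """Return 'siem' if the event warrants real-time analysis,
    or 'archive' if it should only be retained for investigations."""
    if event.get("event_id") in HIGH_VALUE_EVENT_IDS:
        return "siem"
    return "archive"

# Example: a failed logon goes to the SIEM; routine process
# creation (4688) is archived instead of driving up EPS costs.
events = [
    {"event_id": 4625, "host": "dc01"},
    {"event_id": 4688, "host": "ws-42"},
]
routed = [(e["host"], route_event(e)) for e in events]
```

The point of the sketch is the separation of concerns: the decision of what to analyze happens before the SIEM ever bills you for the volume.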
Keep Log Management and SIEM Analysis Separate
Your SIEM platform needs in-depth visibility into every corner of your organization. That doesn’t mean it needs to capture every data point from every asset all the time. Effective log management helps determine which logs are important enough to analyze, and which ones can be safely left out.
It’s now very common for SIEM vendors to include log management as part of the SIEM service package. As a result, many security leaders see log management and SIEM performance as essentially the same thing.
But when it comes to choosing which logs to analyze and which ones to discard, it’s important to maintain a distinction between the two. Think of log management and SIEM performance as two separate services that your organization uses to improve its security posture.
- Yes, your organization should collect logs from every device and asset under its control, but
- No, that doesn’t necessarily mean sending every single log to your SIEM. It does mean you need to observe and control the flow of log data throughout your network.
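The "collect everything, forward selectively" pattern above can be sketched as a simple routing policy. The source names and per-source decisions below are illustrative assumptions drawn from the log types discussed earlier, not a recommended configuration:

```python
# Illustrative routing policy: every record lands in central log
# storage; only selected sources are also streamed to the SIEM.
# Source names and per-source decisions are example assumptions.

LOG_SOURCES = {
    "proxy":       {"send_to_siem": True},   # high-value, SIEM-worthy
    "dns":         {"send_to_siem": True},   # low volume, high value
    "netflow":     {"send_to_siem": False},  # better in anomaly detection tooling
    "cmd_history": {"send_to_siem": False},  # post-incident analysis only
}

def pipeline(record: dict) -> list[str]:
    """Return the destinations for a log record: central storage
    always, plus the SIEM when the source's policy allows it."""
    destinations = ["log_store"]  # collect from every device and asset
    policy = LOG_SOURCES.get(record["source"], {"send_to_siem": False})
    if policy["send_to_siem"]:
        destinations.append("siem")
    return destinations
```

Unknown sources default to storage-only, so nothing is lost while the security team decides whether that data belongs in real-time analysis.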
Centralize and Organize Log Data with Dedicated Log Management
Security leaders who treat log management as its own technological solution gain the ability to prioritize certain log sources before deploying a SIEM platform. This grants visibility into the organization’s security posture and risk profile before it spends significant resources connecting everything to its SIEM.
It’s impossible to tell exactly which security events the organization’s future incident response playbooks will rely on. The security team doesn’t know in advance what data will be relevant to their investigations.
Having access to this data in a centralized repository enables security teams to proactively adjust SIEM performance over time. It gives analysts a chance to troubleshoot logging issues, conduct post-incident analysis, and engage in proactive threat hunting.
Castra provides remote log management services to organizations that want to gain insight into how their day-to-day operations impact their security posture. Gain visibility and context into activities occurring on your organization’s network with our help.