Integrating Keeper Security Event Logs with Wazuh SIEM


In a previous note I covered deploying Keeper PAM for managing privileged access. Today, we’ll take that a step further by integrating Keeper’s audit logs with Wazuh, creating a unified monitoring setup that provides real-time visibility into vault access, credential usage, and administrative changes across the entire environment.

In this guide, I’ll show you how I set up this integration using syslog-ng as a secure intermediary, and how I created custom detection rules that actually make sense for real-world security operations.

What You’ll Need

To follow along with this integration, make sure you have these components ready:

  • Keeper Enterprise account with Advanced Reporting & Alerts Module (ARAM) enabled
  • Wazuh 4.x or newer (I’m running a 4.13.0 cluster for redundancy and high availability, but a single instance works too)
  • A separate Linux VM for syslog-ng, placed in a DMZ
  • SSL/TLS certificates for secure syslog transmission
  • Network connectivity between Keeper, the syslog-ng VM, and the Wazuh managers
  • Administrator access to the Keeper Admin Console
  • Ability to open firewall ports for syslog traffic

A note on the architecture

In my setup, I deliberately placed syslog-ng on a separate VM rather than having Keeper push logs straight to the Wazuh Manager. Why? Because this VM will be exposed to the internet to receive logs from Keeper’s cloud service. Keeping it separate, on a minimal, heavily monitored and restricted VM, adds an extra layer of security: if the syslog receiver is ever compromised, my Wazuh SIEM remains protected behind another network boundary. Think of it as a security buffer zone, a DMZ specifically for log ingestion.

This approach follows the principle of defense in depth: even if an attacker compromises the internet-facing syslog-ng VM, they still need to breach the internal network to reach Wazuh, and they can’t tamper with historical logs already stored in Wazuh’s immutable archives.
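To give an idea of what “minimal and restricted” means in practice, the host firewall on that VM can be locked down to just the syslog listener plus management access. Here is a sketch using ufw with placeholder addresses you would adapt; it is not the exact configuration from my setup:

# deny all inbound traffic by default
sudo ufw default deny incoming
# allow Keeper's TLS syslog traffic in (ideally restricted further to Keeper's published source IPs)
sudo ufw allow 6514/tcp
# allow SSH only from the internal management network (placeholder subnet)
sudo ufw allow from 10.x.x.x/16 to any port 22 proto tcp
sudo ufw enable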

Understanding the Data Flow

Before we start configuring things, let’s understand how data flows through our setup:

Keeper to Wazuh data flow

  1. Keeper pushes audit events in near real-time via encrypted syslog on port 6514
  2. syslog-ng receives and processes the connection
  3. syslog-ng forwards the payload over an unencrypted connection to the Nginx load balancer, which passes it to one of the available Wazuh Managers
  4. Wazuh Manager applies decoders and rules to generate alerts
  5. Wazuh Indexer stores the data for searching and reporting
  6. Wazuh Dashboard provides visualization and investigation capabilities

This architecture gives us security (TLS encryption and an isolated receiver), reliability (load balancing), and flexibility.

Security Consideration: The connection between syslog-ng and Wazuh is unencrypted (a limitation of Wazuh’s syslog listener), but this is an acceptable risk in my setup because both systems sit on an internal, isolated network. If your environment requires encryption along the entire path, consider:

  • implementing IPSec for network-layer encryption on the management network
  • deploying Wazuh agents with built-in encryption instead of direct syslog forwarding

Configuring the message path

Step 1: Forwarding port 6514 through the internal network

In my case, I am using a MikroTik router running RouterOS, so the following applies to RouterOS firewalls:

/ip firewall nat
add action=dst-nat \
chain=dstnat \
dst-port=6514 \
protocol=tcp \
to-addresses=10.x.x.x to-ports=6514

This command adds a rule that forwards port 6514 from the firewall to the virtual machine hosting syslog-ng.

Additionally, you should restrict the port to Keeper’s specific IPs listed here: Keeper Firewall Config

Note: this step is specific to my environment, and you should check how you can forward ports based on your environment.
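If your edge device happens to be a Linux box rather than RouterOS, a roughly equivalent DNAT rule with iptables would look like the sketch below (the 10.x.x.x target is a placeholder, and IP forwarding must already be enabled on the host):

# forward inbound TCP 6514 to the syslog-ng VM (placeholder address)
sudo iptables -t nat -A PREROUTING -p tcp --dport 6514 -j DNAT --to-destination 10.x.x.x:6514
# allow the forwarded traffic through the FORWARD chain
sudo iptables -A FORWARD -p tcp -d 10.x.x.x --dport 6514 -j ACCEPT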

Step 2: Configuring Syslog-NG

In this step, you should already have a VM created and the syslog-ng package installed.

Open your favorite text editor and add the configuration below to /etc/syslog-ng/conf.d/01-keeper.conf

options {
    threaded(yes);
    chain_hostnames(no);
    stats_freq(0);
    mark_freq(0);
};

source s_tcp {
    network(
        port(6514)
        transport("tls")
        tls(
            key-file("/etc/syslog-ng/certs/keeper.root-security.eu.key")
            cert-file("/etc/syslog-ng/certs/keeper.root-security.eu.pem")
            peer-verify(no)
        )
        flags(no-parse)
        keep-alive(yes)
    );
};

destination d_wazuh {
    network(
        "wazuh.root-security.eu"
        port(514)
        transport("tcp")
    );
};

log {
    source(s_tcp);
    destination(d_wazuh);
};

NOTE: update the key-file and cert-file paths to match your certificate, and the destination hostname (wazuh.root-security.eu) to match your Wazuh environment.

This config tells syslog-ng to listen on port 6514 for TLS connections and, for every message received, forward it to wazuh.root-security.eu on port 514 over plain TCP.
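Before restarting, it’s worth checking that the file parses cleanly; syslog-ng has a built-in syntax check:

# validate the configuration without (re)starting the service
sudo syslog-ng --syntax-only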

Now restart the service:

systemctl restart syslog-ng

Important: Self-signed certificates will not work with Keeper’s Syslog Push feature. You need a certificate from a trusted Certificate Authority. Let’s Encrypt certificates should work fine since they’re trusted on most systems, though I use a commercial cert in my setup.
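With the service restarted, you can confirm the TLS listener is up and presenting the certificate you expect before configuring Keeper. A quick local check (the hostname matches my certificate; substitute your own):

# perform a TLS handshake against the listener and print the certificate it serves
openssl s_client -connect localhost:6514 -servername keeper.root-security.eu < /dev/null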

Step 3: Set up the Wazuh load balancer (optional)

I am using Nginx to balance requests between Wazuh managers. If you are using a single Wazuh manager instance you can skip this step.

Adapt this configuration to best match your environment and add it to your Nginx configuration for Wazuh (typically /etc/nginx/conf.d/default.conf):

stream {
    # ... (agent and cluster upstreams config omitted)
    upstream wzmsyslog {
        hash $remote_addr consistent;
        server 10.x.x.1:514;
        server 10.x.x.2:514;
    }

    server {
        listen 514;
        proxy_pass wzmsyslog;
    }
}

The full Nginx config is available on GitHub.
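One thing to check before reloading: the stream block requires Nginx’s stream module, which on some distributions ships as a dynamic module you have to load explicitly. A quick way to see whether it was compiled in:

# look for the stream module in the build flags (e.g. --with-stream or --with-stream=dynamic)
nginx -V 2>&1 | grep -o 'with-stream[^ ]*'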

After updating, test and reload Nginx:

sudo nginx -t
sudo systemctl reload nginx
sudo netstat -tlnp | grep :514

Step 4: Configure Wazuh to receive syslog messages

Now that the network path for the syslog messages is configured, let’s configure Wazuh to receive them. Log in to your Wazuh dashboard, go to Server Management > Settings, and click Edit configuration to open the XML config file. At the bottom, add the following block, adapting it to your environment:

<ossec_config>
  <remote>
    <connection>syslog</connection>
    <port>514</port>
    <protocol>tcp</protocol>
    <allowed-ips>10.x.x.x/16</allowed-ips>
    <local_ip>0.0.0.0</local_ip>
  </remote>
</ossec_config>

Once added, click Save and restart the Wazuh manager. If you are running a cluster, add this block to every manager so that any of them can process the messages.
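After the restart, you can double-check on each manager that the syslog listener actually came up (a quick sketch using standard tooling; wazuh-remoted is the process that should own the socket):

# confirm something is listening on TCP 514 on the manager
sudo ss -tlnp | grep ':514'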

Step 5: Configuring the Keeper Admin Console

Keeper ships with connectors for many security solutions; to integrate it with Wazuh, we will use the Syslog Push connector.

Keeper log connectors

Start by logging into the Keeper Admin Console and go to Reporting & Alerts. At the top, select the External Logging tab, then click the Setup button for Syslog Push. This opens a popup where you need to enter your domain and port.

Keeper syslog popup

If the connection can be established, the popup will close and you should start receiving logs.

Step 6: Verify the pipeline

Before moving on to decoders and rules, let’s verify that logs are flowing correctly through the entire pipeline.

1. Check syslog-ng is receiving connections:

# On the syslog-ng VM, monitor incoming connections
sudo tail -f /var/log/messages | grep syslog-ng
# Check that syslog-ng is listening on port 6514
sudo netstat -tnlp | grep 6514
# If you don't see connections, check whether packets are arriving at all
sudo tcpdump -i eth0 port 6514
# From another machine, use netcat to test connectivity to the port
nc 10.x.x.x 6514

2. Verify syslog-ng is forwarding to Wazuh:

# Check syslog-ng statistics
sudo syslog-ng-ctl stats
# Look for destination "d_wazuh" statistics

3. Confirm Wazuh is receiving the logs: open your Wazuh dashboard, go to Explore > Discover, and search for keeper in the wazuh-archives-* index to see the logs coming in.

Wazuh dashboard logs

*Some fields have been obfuscated; they are not missing :)
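If you don’t want to wait for a real Keeper event, you can also push a synthetic message through the internal leg of the pipeline and watch it land in wazuh-archives-*. A sketch using the util-linux logger (the hostname and payload are placeholders based on my setup, and the sending host must fall within the allowed-ips range configured on the managers):

# send a fake Keeper-style event over TCP 514 to the load balancer / manager
logger --tcp --server wazuh.root-security.eu --port 514 'keeper - - - {"audit_event":"pipeline_test","username":"test@example.com","category":"test"}'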

If you don’t see any entries:

  • check that firewall rules allow traffic on the required ports
  • review syslog-ng logs: sudo journalctl -u syslog-ng -f
  • ensure certificates are valid. Debugging this, I’ve seen all sorts of weird errors; the most frequent problem I’ve encountered is that Keeper silently drops the syslog connection without displaying any error message, even though it initially confirmed the connection was established when I set up the domain and port (see the check below).
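For the certificate issues in particular, it helps to see exactly what Keeper sees from the outside. From any external host (substitute your own hostname), verify that the full chain is served and that verification succeeds:

# print the certificate chain and the verification result (look for "Verify return code: 0 (ok)")
openssl s_client -connect keeper.root-security.eu:6514 -servername keeper.root-security.eu -showcerts < /dev/null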

Creating decoders and rules

Now that we are receiving the syslog messages, we can see that Keeper uses JSON for the message payload. This makes it easy to create a single decoder that handles a variable number of fields.

Open the Wazuh dashboard, go to Server Management > Decoders, and click Add new decoders file. Give the file a name (I use the format XXXX-keeper.xml) and add the following content:

0010-keeper.xml
<!-- Parent decoder -->
<decoder name="keeper_audit">
  <prematch>keeper - - -</prematch>
</decoder>

<!-- JSON fields decoder -->
<decoder name="keeper_audit_fields">
  <parent>keeper_audit</parent>
  <prematch>keeper - - -</prematch>
  <plugin_decoder offset="after_parent">JSON_Decoder</plugin_decoder>
</decoder>

<!-- Event timestamp decoder -->
<decoder name="keeper_audit_fields">
  <parent>keeper_audit</parent>
  <regex type="pcre2">.*(\d{4}\-\d{2}\-\d{2}T\d{2}\:\d{2}\:\d{2}\.\d{3}Z)</regex>
  <order>event_timestamp</order>
</decoder>

This decodes the JSON payload of the syslog message and additionally extracts the event timestamp, which is not part of the JSON payload Keeper sends.

The two decoders sharing the same name is intentional: this naming convention lets Wazuh merge the output of both decoders into a single, unified result.
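You can test the decoders on a manager with wazuh-logtest before writing any rules, by pasting an event copied from the wazuh-archives-* index. The sample line in the comment below is fabricated and only illustrates the shape the decoders above expect; use a real event from your own archives:

# interactive decoder/rule test on a Wazuh manager
sudo /var/ossec/bin/wazuh-logtest
# at the prompt, paste an event, e.g. (fabricated sample):
#   keeper - - - {"audit_event":"login_failure","username":"user@example.com","category":"login","remote_address":"203.0.113.10"} 2025-01-01T10:00:00.000Z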

With the decoders in place, let’s add a few rules by going to Server Management > Rules. Click Add new rules file, give it a name (I used keeper_logs.xml), and add the following content:

keeper_logs.xml
<group name="keeper">
  <!-- Alert when a failed login is seen -->
  <rule id="119011" level="10">
    <decoded_as>keeper_audit</decoded_as>
    <field name="audit_event">login_failure</field>
    <field name="category">login</field>
    <description>Keeper user [$(username)] has failed to login via [$(channel)] using [$(client_version)] with [$(result_code)] from [$(remote_address)]</description>
  </rule>

  <!-- Alert when multiple failed logins are seen -->
  <rule id="119012" level="14" frequency="3" timeframe="300">
    <if_matched_sid>119011</if_matched_sid>
    <description>Keeper user [$(username)] has failed multiple times to login via [$(channel)] using [$(client_version)] with [$(result_code)] from [$(remote_address)]</description>
  </rule>
</group>

This creates two alerts: one for a single failed login, and one triggered when multiple failed logins are seen within a short window, which might indicate that a credential attack is underway.
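To confirm the rules fire end to end, you can trigger a deliberately failed Keeper login while watching the alert stream on a manager (the path below is the Wazuh default):

# watch for the two custom rule IDs in the manager's alert output
sudo tail -f /var/ossec/logs/alerts/alerts.json | grep -E '1190(11|12)'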

Now you can monitor and create alerts for more than 100 event types. This gives a lot of flexibility in monitoring and alerting the security team based on the organization’s interests.

Additional Useful Rules

Beyond authentication failures, here are some additional rules I’ve found valuable in production:

keeper_logs.xml
<!-- Alert on administrative actions -->
<rule id="119020" level="8">
  <decoded_as>keeper_audit</decoded_as>
  <field name="category">account|security|policy|managed_company|msp</field>
  <description>Keeper administrative action: [$(audit_event)] by [$(username)]</description>
  <group>keeper,administrative,</group>
</rule>

<!-- Alert on external sharing -->
<rule id="119030" level="9">
  <decoded_as>keeper_audit</decoded_as>
  <field name="audit_event">ext_share_added|ext_share_removed</field>
  <description>User [$(username)] performed [$(audit_event)] on a One-Time Share link to 'Record UID:$(app_uid)'</description>
  <group>keeper,sharing,external,</group>
</rule>

<!-- Alert on vault record deletion -->
<rule id="119040" level="12">
  <decoded_as>keeper_audit</decoded_as>
  <field name="audit_event">record_delete|deleted_folder</field>
  <description>Keeper CRITICAL: [$(username)] deleted [$(audit_event)]</description>
  <group>keeper,deletion,critical,</group>
</rule>

Each of these rules addresses a specific security concern:

  • 119020: tracks all administrative changes for audit trails
  • 119030: flags external sharing which could indicate data exfiltration
  • 119040: critical alert for deletion events (potential sabotage or cover-up)

Next steps

With the integration in place and operational, here are some ideas for your security team:

Expand the detection rules: Beyond failed logins, consider creating rules for:

  • Vault record sharing events (both internal and external)
  • Administrative changes (user additions, role modifications, policy changes)
  • Privileged session activities from KeeperPAM
  • Credential rotation events from Secrets Manager
  • After-hours access to critical vaults

Create Custom Dashboards: Build Wazuh dashboards that give you at-a-glance visibility into:

  • Failed authentication attempts by user and source IP
  • After-hours access attempts
  • Administrative actions timeline
  • Geographic distribution of access (if using GeoIP enrichment)

Implement Automated Responses: Configure Wazuh active response to take action on critical events:

  • Email notifications for high-severity alerts
  • Perform administrative actions using Keeper Commander
  • Automated ticket creation in your ITSM platform

Correlate with Other Security Data:

  • Cross-reference Keeper credential access with authentication logs from target systems
  • Correlate PAM session starts with EDR process execution data
  • Link administrative changes to change management tickets

I’ve implemented some of these myself, and they have helped a lot in improving my understanding of how Keeper is used across the organization.

Conclusion

Integrating Keeper’s audit logs with Wazuh has transformed how I monitor privileged access. What was once scattered across Keeper’s Admin Console and periodic export files is now a real-time security monitoring system that alerts me to potential threats before they escalate.

The architecture I’ve shared, using syslog-ng as an internet-facing intermediary with TLS encryption, load-balanced Wazuh managers, and custom detection rules, provides both security and reliability. Yes, it’s an extra VM to manage, but the security isolation it provides is worth the operational overhead.

The two rules I’ve included are just a starting point. Keeper logs over 100 event types, and you should build rules that align with your organization’s risk profile. Monitoring failed logins for credential attacks, tracking external sharing for data exfiltration, and alerting on administrative changes for audit trails are just a few examples to try.

If you’ve been following my Keeper PAM PoC, this monitoring layer completes the privileged access security stack: secure storage, managed access, and comprehensive visibility. Together, I believe these form the foundation of a mature PAM program.

I’d love to hear about your experiences integrating Keeper with Wazuh. What challenges did you face? What additional rules have you found valuable?

Drop your thoughts in the comments or reach out directly.