Yes, it is possible to change the host field in the Logstash output with a ruby filter without much hassle:
ruby {
  # On Logstash 5.x and later, use the event API (event.get/event.set);
  # older versions used the event['field'] syntax instead.
  code => "
    event.set('host', event.get('message').split(' ')[3])
  "
}
Here I assume that in the syslog server's logs the host is the fourth field, with whitespace as the separator.
While running Logstash with a file input against your logs on a CIFS share will work, I don't think it'll work very well. I haven't used Logstash directly like that, but in my experience using Logstash-Forwarder to watch log files over an SSHFS mount, it doesn't deal well with file rotations or reboots of either end.
As for not being sure how to deal with your multi-line exceptions with Filebeat, I don't think you need to worry about it. Filebeat just takes lines from the files you want to ship and fires them across the network. It adds a few fields, but they don't change the overall picture: Filebeat is a very basic log shipper.
This means you can just run your multi-line filter in Logstash on the collecting server, just as you would if you ran Logstash on the app servers directly.
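A minimal sketch of such a filter for the common Java-stacktrace case, assuming continuation lines begin with whitespace (adjust the pattern to your log format):

```
filter {
  multiline {
    # Any line starting with whitespace belongs to the previous line,
    # which joins a Java stacktrace onto its exception line.
    pattern => "^\s"
    what => "previous"
  }
}
```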
Now, depending on your log volume, you might find that you need to increase the number of workers for LS to handle grokking your data effectively.
What I do to handle such things is very similar to your option 2, but instead of just having two LS instances ("Broker" and a "Parser"), I have 3.
+-------------------+
+-------------------+|
+-------------------+||
| App Servers |||
| +----------+ ||+
| | FileBeat | |+
+----+----------+---+
/
/
/
+----------------/----------------------------------------+
| / Collecting Server |
| +----------/-+ +---------------------+ +------------+ |
| | Logstash | | Logstash | | Logstash | |
| | Broker | |Multi-line Pre-Parser| | Parser | |
| +------------+ +---^-----------------+ +-----^---V--+ |
| | | | | | |
| | | Redis | | | |
| V +---------------------V------+ | | |
| +-------> DB0 | DB1 + --->+ | |
| +----------------------------+ / |
+-------------------------------------------------/-------+
/
/
/
+-------------------+ /
+-------------------+|/
+-------------------+||
| ElasticSearch ||+
| Cluster |+
+-------------------+
All the Pre-Parser instance does is transform multi-line log entries into a single line so that the Parser can do its job properly. Even then, I check the type and tags to see if there's even a possibility that the line(s) will be multi-line, so the overhead is minimal. I'm easily able to push 1000 events a second through it (barely hitting 20% CPU), and that's on a system that is an ELK stack-in-a-box; with dedicated nodes for LS and ES, it should be easy to go faster.
Why not just crank up the workers on the Parser instance? Well, this stems from the fact that the multiline filter in LS doesn't support multiple workers.
From the multiline filter documentation:

This filter will collapse multiline messages from a single source into one Logstash event. The original goal of this filter was to allow joining of multi-line messages from files into a single event. For example - joining java exception and stacktrace messages into a single event.

Note: This filter will not work with multiple worker threads (-w 2) on the Logstash command line.
Best Answer
Try running tcpdump to actually confirm you have traffic coming from your pfSense device; a capture will show you the packet header and payload.
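For example, something like this; the interface name, source address, and syslog port are assumptions, so adjust them to your setup:

```shell
# Capture syslog traffic arriving from the pfSense box.
# -n: no DNS resolution, -vv: verbose headers, -X: dump payload in hex/ASCII.
# eth0, 192.168.1.1 and udp port 514 are assumptions for this example.
tcpdump -i eth0 -n -vv -X host 192.168.1.1 and udp port 514
```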
Try also checking that the ossec-remoted process is listening for incoming traffic.

In addition, as another option that I personally like, you can use (on the Wazuh server) the Rsyslog daemon to collect the Syslog data and dump it into a file.
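One way to verify that ossec-remoted is listening, assuming the default syslog port of 514/udp (match it to the port configured for your syslog connection in ossec.conf):

```shell
# List listening UDP sockets with the owning process (root needed for -p),
# then filter for the expected syslog port.
ss -lnpu | grep 514
```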
Then you can configure the Wazuh server's logcollector component to read that log file, so it is also processed by the Wazuh analysis engine.
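A minimal sketch of that logcollector configuration in ossec.conf, assuming Rsyslog writes to the hypothetical path /var/log/pfsense.log:

```xml
<ossec_config>
  <localfile>
    <log_format>syslog</log_format>
    <location>/var/log/pfsense.log</location>
  </localfile>
</ossec_config>
```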
A good tool to check whether Rsyslog is writing to the file and whether the ossec-logcollector component is reading it is lsof (for example, lsof /var/log/pfsense.log, assuming that is where the logs land).

To use Rsyslog you will need to configure it to listen for remote data, plus a rule to write the logs to the file. An example of such a rule would be:
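A sketch of such an Rsyslog configuration; the source IP, port, and file path here are assumptions to adapt:

```
# Hypothetical /etc/rsyslog.d/pfsense.conf -- adjust IP, port, and path.
# Listen for remote syslog over UDP.
$ModLoad imudp
$UDPServerRun 514
# Write everything arriving from the pfSense device to its own file,
# then stop processing so it does not also land in the general logs.
if $fromhost-ip == '192.168.1.1' then /var/log/pfsense.log
& stop
```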
If you go this way, to avoid conflict, remember to disable
ossec-remote sysl