Note: This all applies to Linux and free software, as that's what I mostly use, but on Windows you should be fine with a syslog client to send the logs to a Linux syslog server.
Logging to an SQL server:
With only ~30 machines, you should be fine with pretty much any centralised syslog-alike and an SQL backend. I use syslog-ng and MySQL on Linux for this very thing.
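As a rough sketch of what the syslog-ng side looks like (the source, credentials, and column names here are placeholders, not my exact config — adapt them to your own schema), an SQL destination is along these lines:

```
# Requires syslog-ng built with SQL (libdbi) support.
source s_net {
    udp(ip(0.0.0.0) port(514));        # receive from the ~30 clients
};

destination d_mysql {
    sql(type(mysql)
        host("localhost") username("syslog") password("secret")
        database("logs")
        table("messages_${YEAR}${MONTH}")   # one table per month
        columns("datetime", "host", "program", "msg")
        values("${YEAR}-${MONTH}-${DAY} ${HOUR}:${MIN}:${SEC}",
               "${HOST}", "${PROGRAM}", "${MESSAGE}"));
};

log { source(s_net); destination(d_mysql); };
```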
Pretty frontends for graphing are the main problem -- there are a lot of hacked-up front-ends which will grab items from the logs and show how many hits, alerts, etc., but I've not found anything integrated and clean. Admittedly this is the main thing you're looking for... (If I find anything good, I'll update this section!)
Alerting: I use SEC on a Linux server to spot bad things happening in the logs and alert me via various methods. It's incredibly flexible and not as clicky as Splunk. There's a nice tutorial here which walks through a lot of the possible features.
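To give a taste of what an SEC rule looks like (the pattern and the alert script path are made up for illustration, not from my setup), a rule that fires after repeated SSH failures might be:

```
# Fire when 5 failed SSH logins arrive from one source IP within 60 seconds.
type=SingleWithThreshold
ptype=RegExp
pattern=sshd\[\d+\]: Failed password for \S+ from ([\d.]+)
desc=Repeated SSH login failures from $1
action=shellcmd /usr/local/bin/alert.sh "SSH brute force from $1"
window=60
thresh=5
```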
I also use Nagios for graphs of various stats and some alerting which I don't get from the logs (such as when services are down, etc.). This can be easily customized to add graphs of anything you like. I have added graphs of items such as the number of hits made to an http server, by having the agent use the check_logfiles plugin to count the number of hits in the logs (it saves the position it gets up to for each check period).
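For reference, a check_logfiles configuration is just a small Perl snippet like the following (the tag, logfile path, and pattern here are examples, not my exact setup):

```perl
# check_logfiles reads this Perl config; it remembers its offset in the
# log between runs, so each check only counts lines added since last time.
@searches = (
  {
    tag              => 'http_hits',
    logfile          => '/var/log/apache2/access.log',
    criticalpatterns => ['GET|POST'],   # every request counts as a "hit"
  },
);
```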
Overall, it depends on how much your time will cost to set this up. There are many options you can use, but they aren't as integrated as Splunk and will probably require more effort to get doing what you want. The Nagios graphs are straightforward to set up, but they don't give you historical data from before you added the graph, whereas with Splunk (and presumably other front-ends) you can look back at past logs and graph things you've only just thought of looking at.
Note also that the SQL database format and indexing will have a huge effect on query speed, so your idea of fulltext indexing should tremendously speed up searches. I'm not sure whether MySQL or PostgreSQL will do something similar.
Edit: MySQL will do fulltext indexing, but prior to MySQL 5.6 only on MyISAM tables; in 5.6, support was added for InnoDB.
Edit: PostgreSQL can do full text search, of course: http://www.postgresql.org/docs/9.0/static/textsearch.html
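To make the idea concrete (the table and column names are illustrative only), fulltext indexing in the two engines looks roughly like this:

```sql
-- MySQL: FULLTEXT index (MyISAM, or InnoDB from 5.6 on)
ALTER TABLE logs ADD FULLTEXT INDEX ft_msg (message);
SELECT * FROM logs WHERE MATCH(message) AGAINST('kernel panic');

-- PostgreSQL: GIN index over a tsvector expression
CREATE INDEX logs_msg_fts ON logs USING gin(to_tsvector('english', message));
SELECT * FROM logs
 WHERE to_tsvector('english', message) @@ to_tsquery('kernel & panic');
```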
This is just an example and should guide you in the right direction. I haven't actually tried it before posting.
In master.cf, comment out the line containing bounce in the first column. Then create new lines with something like

bounce    unix  -       n       n       -       -       pipe
  flags=DRhu user=vmail:vmail argv=/usr/local/bin/deliver -f ${sender} -d ${recipient}
This will pipe all bounced mail to the script /usr/local/bin/deliver with some parameters. The script is executed as user "vmail" and group "vmail" (change accordingly). The script then needs to dump stdin to a (random) file and location. You can probably use existing tools like procmail.
It will not block outgoing mail, but it will "catch" the bounces you are after.
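A minimal sketch of such a deliver script (the spool directory handling and the X-Deliver-Args header are my invention for illustration, not a standard):

```shell
#!/bin/sh
# Hypothetical stand-in for the "deliver" command above: dump the bounced
# message from stdin into a uniquely named file, recording the -f/-d
# arguments Postfix passed. SPOOL defaults to a temp dir here for testing;
# a real deployment would use a fixed directory owned by vmail.
SPOOL=${SPOOL:-$(mktemp -d)}
umask 077

deliver() {
    f=$(mktemp "$SPOOL/bounce.XXXXXX") || return 75  # EX_TEMPFAIL: Postfix retries
    {
        printf 'X-Deliver-Args: %s\n' "$*"   # record sender/recipient args
        cat                                  # the message body from stdin
    } > "$f"
    printf '%s\n' "$f"                       # report where it landed
}

# Demo: feed a fake bounce through it.
printf 'Subject: bounce\n\nundeliverable\n' |
    deliver -f sender@example.com -d rcpt@example.com
```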
Best Answer
Logstash and Graylog are probably good tools to solve your problem, but you could also take a look at rsyslogd. You can use it to specify log templates, selectors, and filters, and take different actions based on them. For example, when a log line matches the first filter regexp, it can trigger an insert into an SQL DB or a document addition in any kind of index, based on your template and your output module. A second template can then trigger an update of the SQL row or the indexed document. Although this is not a ready-to-use solution, it is quite simple to set up and can make searches a lot easier.
If you're interested, take a look at the pages below:
http://www.rsyslog.com/doc/rsyslog_conf_filter.html
http://www.rsyslog.com/doc/rainerscript.html
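As a sketch of what this looks like in rsyslog's RainerScript (the filter, database name, and credentials are placeholders; SystemEvents is the table name from the schema shipped with the rsyslog MySQL module):

```
# Load the MySQL output module (from the rsyslog-mysql package).
module(load="ommysql")

# SQL template; option.sql escapes quotes inside the message.
template(name="sshInsert" type="string" option.sql="on"
         string="INSERT INTO SystemEvents (Message, FromHost)
                 VALUES ('%msg%','%hostname%')")

# Filter: only failed SSH logins get inserted into the database.
if ($programname == 'sshd' and $msg contains 'Failed password') then {
    action(type="ommysql" server="localhost" db="Syslog"
           uid="rsyslog" pwd="secret" template="sshInsert")
}
```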