Note: this all concerns Linux and free software, since that's what I mostly use, but you should be fine running a syslog client on Windows to ship the logs to a Linux syslog server.
Logging to an SQL server:
With only ~30 machines, you should be fine with pretty much any centralised syslog-alike and an SQL backend. I use syslog-ng and MySQL on Linux for this very thing.
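For reference, a minimal sketch of what that looks like in syslog-ng (this assumes a build with the `sql()` destination module available; the database, table and column names here are just examples, not my real config):

```
# Receive logs from the network and write them into MySQL
source s_net { udp(ip(0.0.0.0) port(514)); };

destination d_mysql {
    sql(
        type(mysql)
        host("localhost") username("syslog") password("secret")
        database("logs") table("messages")
        columns("datetime", "host", "program", "message")
        values("${R_DATE}", "${HOST}", "${PROGRAM}", "${MESSAGE}")
        indexes("datetime", "host")
    );
};

log { source(s_net); destination(d_mysql); };
```

The `indexes()` option matters a lot for query speed later, as noted below.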
Pretty front-ends for graphing are the main problem -- there are a lot of hacked-up front-ends that will grab items from the logs and show hit counts, alerts and so on, but I've not found anything integrated and clean. Admittedly that's the main thing you're looking for... (If I find anything good, I'll update this section!)
Alerting: I use SEC on a Linux server to spot bad things happening in the logs and alert me via various methods. It's incredibly flexible, though not as clicky as Splunk. There's a nice tutorial here which walks through many of its features.
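To give a flavour of SEC's rule format, here's a hedged example (the pattern and mail command are illustrative, not from my real config) that mails root when one source racks up five SSH failures inside a minute:

```
# Fire the action when the same pattern matches 5 times within 60 seconds
type=SingleWithThreshold
ptype=RegExp
pattern=sshd\[\d+\]: Failed password for (\S+) from (\S+)
desc=SSH auth failures for $1 from $2
action=pipe '%s' /usr/bin/mail -s 'Possible SSH brute force' root
window=60
thresh=5
```

The `$1`/`$2` backreferences from the pattern carry into `desc`, so each user/source pair is counted separately.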
I also use Nagios for graphs of various stats and for some alerting that I don't get from the logs (such as when services are down). It can easily be customized to graph anything you like. For example, I graph the number of hits made to an HTTP server by having the agent use the check_logfiles plugin to count hits in the logs (it saves the position it reached after each check period).
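A rough sketch of the check_logfiles side (the tag, log path and pattern are examples, not my actual setup; the plugin remembers its file offset between runs and reports match counts as performance data, which Nagios can then graph):

```perl
# check_logfiles.cfg -- count HTTP hits since the last check
@searches = (
  {
    tag             => 'http_hits',
    logfile         => '/var/log/apache2/access.log',
    # every line matching this counts as one "hit"
    warningpatterns => ['GET|POST'],
  },
);
```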
Overall, it depends on how much your time to set this up will cost. There are many options you can use, but they aren't as integrated as Splunk and will probably take more effort to get doing what you want. The Nagios graphs are straightforward to set up, but they don't give you historical data from before you added the graph; with Splunk (and presumably other front-ends) you can look back at past logs and graph things you've only just thought of looking at.
Note also that the SQL database's schema and indexing will have a huge effect on query speed, so your idea of fulltext indexing should speed searches up tremendously. I'm not sure whether MySQL or PostgreSQL offer something similar.
Edit: MySQL will do fulltext indexing, but prior to MySQL 5.6 only on MyISAM tables; in 5.6, support was added for InnoDB as well.
Edit: PostgreSQL can do full text search, of course: http://www.postgresql.org/docs/9.0/static/textsearch.html
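For illustration, the relevant incantations in both engines (the table and column names are made up; on MySQL the FULLTEXT index needs a MyISAM table, or InnoDB from 5.6 onward):

```sql
-- MySQL: fulltext index, then MATCH ... AGAINST to search
ALTER TABLE logs ADD FULLTEXT INDEX ft_message (message);
SELECT * FROM logs WHERE MATCH(message) AGAINST('segfault');

-- PostgreSQL: GIN index over a tsvector, then @@ to query it
CREATE INDEX logs_message_fts ON logs
    USING gin (to_tsvector('english', message));
SELECT * FROM logs
    WHERE to_tsvector('english', message) @@ to_tsquery('segfault');
```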
We use free Splunk together with OSSEC at several customers and it's perfectly usable. It does, of course, have some limitations compared to the non-free version:
- 500MB indexed per day (with two or three peaks over the limit allowed per month): if you don't generate that much data, this won't affect you.
- Authentication: free Splunk doesn't have it. We put Apache with HTTP basic auth in front of it to work around this. It's not a perfect solution, but it's good enough. If you'll be the only user, you can just bind it to localhost.
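Roughly what that Apache front-end looks like (the hostname, paths and htpasswd file are examples; it needs mod_proxy, mod_proxy_http and mod_auth_basic, with Splunk web listening on its default port 8000 on localhost):

```apache
<VirtualHost *:80>
    ServerName splunk.example.com

    # Require a login before anything reaches Splunk
    <Location />
        AuthType Basic
        AuthName "Splunk"
        AuthUserFile /etc/apache2/splunk.htpasswd
        Require valid-user
    </Location>

    # Pass authenticated requests through to Splunk web
    ProxyPass        / http://127.0.0.1:8000/
    ProxyPassReverse / http://127.0.0.1:8000/
</VirtualHost>
```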
- Multiple users: free Splunk only has one user, so you don't get personalized dashboards or per-user customization. Again, if everyone is looking at the same things and doesn't care about sharing, or you're the only user, this is no problem.
Overall, free Splunk (particularly version 4) is a complete product in its own right and can be used in production without worries, unless you happen to need the added features of the non-free version.
Best Answer
Install the logcheck package. It scans the logs once an hour and emails you anything it doesn't consider normal -- essentially, anything that entered the logs in the last hour that it doesn't have a rule for ignoring. There are also attack rules that match things which should never be in the log. The email subject line varies depending on why things were picked up.
I generally build a local ignore file as I discover things I consider normal but which don't have existing ignore rules.
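Local rules are just extended regexps, one per line, dropped into /etc/logcheck/ignore.d.server/. For example (the filename and the postfix line it silences are illustrative):

```
# /etc/logcheck/ignore.d.server/local
# silence routine postfix disconnect lines
^\w{3} [ :[:digit:]]{11} [._[:alnum:]-]+ postfix/smtpd\[[[:digit:]]+\]: disconnect from
```

The leading date/hostname pattern follows the convention used by the stock logcheck rules.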
The various syslog alternatives all support consolidation on a central server, so you can forward logs to a single box. I haven't been in the habit of doing it, though; the only system I forward logs from is my OpenWRT firewall.
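Forwarding is usually a one-liner on each client. With rsyslog, for instance (the server name is a placeholder):

```
# /etc/rsyslog.d/forward.conf on each client
*.*  @logserver.example.com:514      # one @ = UDP
#*.* @@logserver.example.com:514     # two @@ = TCP, if you want reliability
```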
EDIT: I do use Splunk at work to search log files, although if I know which particular log I'm looking for, I'm more likely to use less. It does have alerting capabilities, but we don't use them. I expect they alert on a match against a known record, which can lead to a lot of false negatives when new problems appear that have no alert rule. I prefer the false positives I get from logcheck. Splunk may deliver alerts more promptly, though.
I do get timely alerts from fail2ban when something triggers it. It also maintains blacklist entries for the originating source addresses.
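For completeness, a hedged sketch of the sort of jail that produces those alerts (the jail name, log path and mail destination are examples; fail2ban ships equivalent stock jails you can just enable):

```ini
# /etc/fail2ban/jail.local
[ssh-iptables]
enabled  = true
filter   = sshd
action   = iptables[name=SSH, port=ssh, protocol=tcp]
           sendmail-whois[name=SSH, dest=root]
logpath  = /var/log/auth.log
maxretry = 5
bantime  = 3600
```

The iptables action does the blacklisting; sendmail-whois is what makes the alerts timely.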