Once you have installed the server and its PHP frontend, log into the frontend. The dashboard shows a short status summary. Make sure it says "Zabbix server is running: yes" before doing anything with agents.
Then, when all of that works as planned, start configuring the agents. Begin with the agent on the Zabbix server itself:
After you have installed the agent, it needs to be configured appropriately, which is done in its configuration file. On Linux/Unix operating systems it is located at "/etc/zabbix/zabbix_agentd.conf"; on Windows it is "c:\zabbix_agentd.conf" by default.
There are two settings in this file that are really important: 'Server' and 'Hostname'.
The 'Server' setting must be set to the IP address or FQDN of your Zabbix server.
The 'Hostname' setting can be set to anything you like, but it is preferable to choose a lowercase name without spaces or unusual symbols. A good convention is to use the hostname of the machine with your company name or site address as a suffix. Say you have a server called workhorse and your site is example.com; you would then put Hostname=workhorse.example.com in the configuration file.
Note that the value you choose for 'Hostname' in the configuration file does not need to match the machine's actual hostname.
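For illustration, the relevant lines of the agent configuration file would then look like this (192.0.2.10 is a placeholder; use your own server's IP or FQDN):

```
# /etc/zabbix/zabbix_agentd.conf -- illustrative values only
Server=192.0.2.10                 # IP or FQDN of your Zabbix server
Hostname=workhorse.example.com    # the name you will also enter in the frontend
```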
Next, go into the PHP frontend, add a host, and set its IP address or DNS name correctly. Also set the host name field to the exact value you chose in the agent configuration file. After saving, restart the agent and all should be well :)
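The exact restart command depends on your distribution and package; on a systemd-based Linux system it is usually something like the following (the service may be called zabbix-agentd instead, depending on the package):

```
# make the agent re-read its configuration file
systemctl restart zabbix-agent
```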
Good luck!
This is a slightly older question, but the problem presented here is based on a misconception about how and when failover works in clusters, especially two-node clusters.
The gist is: you cannot do failover testing by disabling communication between the two nodes. Doing so will result in exactly what you are seeing: a split-brain scenario with additional, mutual STONITH. If you want to test the fencing capabilities, a simple `killall -9 corosync` on the active node will do. Other ways are `crm node fence` or `stonith_admin -F`.
From the not quite complete description of your cluster (where is the output of `crm configure show` and `cat /etc/corosync/corosync.conf`?) it seems you are using the 10.10.10.xx addresses for messaging, i.e. Corosync/cluster communication. The 172.10.10.xx addresses are your regular/service network addresses, and you would access a given node, for example via SSH, by its 172.10.10.xx address. DNS also seems to resolve a node hostname like `node1` to 172.10.10.1.
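If that reading is correct, the totem section of your corosync.conf should contain something along these lines. This is a hypothetical excerpt based on the assumed network layout, since the actual file was not posted:

```
# /etc/corosync/corosync.conf -- hypothetical excerpt
totem {
    version: 2
    interface {
        ringnumber: 0
        bindnetaddr: 10.10.10.0   # cluster messaging on the 10.10.10.xx network
        mcastaddr: 239.255.42.1
        mcastport: 5405
    }
}
```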
You have STONITH configured to use SSH, which is not a very good idea in itself, but you are probably just testing. I haven't used it myself, but I assume the SSH STONITH agent logs into the other node and issues a shutdown command, like `ssh root@node2 "shutdown -h now"` or something equivalent.
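For reference, an SSH-based fencing setup of this kind is typically configured with the external/ssh agent, roughly like this in the crm shell (a sketch; your resource names and host list will differ):

```
# crm configure -- sketch of an external/ssh fencing resource
primitive st-ssh stonith:external/ssh \
    params hostlist="node1 node2"
clone fencing-clone st-ssh
property stonith-enabled="true"
```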
Now, what happens when you cut cluster communication between the nodes? Each node no longer sees the other as alive and well, because there is no more communication between them. Thus each node assumes it is the only survivor of some unfortunate event and tries to become (or remain) the active or primary node. This is the classic and dreaded split-brain scenario.
Part of becoming active is making sure the other, presumably failed node is down for good, which is where STONITH comes in. Keep in mind that both nodes are now playing the same game: trying to become (or stay) active and take over all cluster resources, as well as shooting the other node in the head.
You can probably guess what happens now. `node1` does `ssh root@node2 "shutdown -h now"` and `node2` does `ssh root@node1 "shutdown -h now"`. This doesn't use the cluster communication network 10.10.10.xx but the service network 172.10.10.xx. Since both nodes are in fact alive and well, they have no problem issuing commands or receiving SSH connections, so both nodes shoot each other at the same time. This kills both nodes.
If you don't use STONITH, a split-brain can have even worse consequences, especially with DRBD, where you can end up with both nodes becoming Primary. Data corruption is then likely, and the split-brain must be resolved manually.
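To give an idea of what resolving it manually means in the DRBD case: you declare one node the split-brain victim and discard its changes, roughly like this (a sketch using DRBD 8.4-style commands; r0 is a placeholder resource name):

```
# on the node whose changes you sacrifice (the split-brain victim)
drbdadm secondary r0
drbdadm connect --discard-my-data r0

# on the surviving node, if it is in StandAlone state
drbdadm connect r0
```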
I recommend reading the material on http://www.hastexo.com/resources/hints-and-kinks which is written and maintained by the guys who contributed (and still contribute) a large chunk of what we today call "the Linux HA stack".
TL;DR: If you are cutting cluster communication between your nodes in order to test your fencing setup, you are doing it wrong. Use `killall -9 corosync`, `crm node fence`, or `stonith_admin -F` instead. Cutting cluster communication will only result in a split-brain scenario, which can and will lead to data corruption.
Best Answer
I found the issue. It was a very elusive typo in `/etc/zabbix/web/zabbix.conf.php`: I got one character wrong in the server name!
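For anyone hitting the same wall: the frontend's connection to the server is defined in that file, roughly like the excerpt below, so it is worth checking the server name there character by character (the hostname shown is a placeholder):

```
// /etc/zabbix/web/zabbix.conf.php -- illustrative excerpt
$ZBX_SERVER      = 'zabbix-server.example.com'; // one wrong character here breaks the frontend
$ZBX_SERVER_PORT = '10051';
```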