MySQL my.cnf variables not updating after Ansible run


I'm creating a simple Ansible playbook for my project that installs MySQL on an Ubuntu VM.

As part of this setup I'm creating a custom my.cnf file at /etc/my.cnf, and this is what it looks like after the Jinja2 template has been rendered:

[client]
port   = 3306
socket = /var/run/mysqld/mysqld.sock

[mysqld_safe]
socket           = /var/run/mysqld/mysqld.sock
log_error        = /var/log/mysql/mysql_error.log
pid-file         = /var/run/mysqld/mysqld.pid
general_log      = on
general_log_file = /var/log/mysql/mysql.log

[mysqld]
bind-address     = 127.0.0.1
datadir          = /var/lib/mysql
pid-file         = /var/run/mysqld/mysqld.pid
log_error        = /var/log/mysql/mysql_error.log
general_log      = on
general_log_file = /var/log/mysql/mysql.log
socket           = /var/run/mysqld/mysqld.sock
user             = root
port             = 3306

# Disabling Symlinks is recommended for security purposes #
symbolic-links=0

Next, since I'm running Ubuntu, I call:

service: name=mysql state=started enabled=yes
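The relevant tasks look roughly like this (the template and task names here are illustrative, not my exact files):

```yaml
# Sketch of the tasks in question (names are illustrative)
- name: deploy custom my.cnf
  template: src=my.cnf.j2 dest=/etc/my.cnf owner=root group=root mode=0644

- name: ensure mysql is enabled and started
  service: name=mysql state=started enabled=yes
```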

Everything appears correct, but when I check the variables with

mysqld --verbose --help

I find that they are wrong: general_log is reported as FALSE even though I set it to on, and symbolic-links as TRUE even though I set it to 0 in this .cnf file. Running SHOW VARIABLES from the mysql client gives the same results.

So I've checked that the file exists at /etc/my.cnf and that it is loaded, as mysql --verbose --help reports:

Default options are read from the following files in the given order:
/etc/my.cnf /etc/mysql/my.cnf /usr/etc/my.cnf ~/.my.cnf
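To rule out a missing or shadowed file, one quick check is to list which of those candidate files actually exist on the VM (a small sketch; run it on the target host):

```shell
# Check each config path mysqld searches, in order
for f in /etc/my.cnf /etc/mysql/my.cnf /usr/etc/my.cnf ~/.my.cnf; do
  if [ -f "$f" ]; then
    echo "FOUND:   $f"
  else
    echo "missing: $f"
  fi
done
```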

Might it be a user permissions issue? The my.cnf file belongs to the root user, and I wonder whether that is the cause.

What I really need is help debugging what might be going wrong, as I'm relatively new to this kind of low-level MySQL configuration.

Thank you in advance

Best Answer

You're on Ubuntu, which automatically starts services after package installation. So by the time your task asks for the service to be enabled and started, it already is, and the task reports no change. The same applies on every subsequent playbook run while the service is up: state=started is a no-op for a running service, so mysqld never restarts and never re-reads your updated my.cnf.

What you need to do is to set up a handler that will restart the service. For instance:

$ cat roles/mysql/handlers/main.yml
---
- name: restart mysql
  service: name=mysql state=restarted

Then be sure to add notify: restart mysql to any task that changes the configuration.
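For instance, a task shaped like this (the template name is illustrative) triggers the restart only when the rendered file actually changes:

```yaml
# Template task that notifies the handler on change
- name: install my.cnf
  template: src=my.cnf.j2 dest=/etc/my.cnf owner=root group=root mode=0644
  notify: restart mysql
```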