Cassandra 2.1 system schema missing

cassandra

I have a six-node cluster running Cassandra 2.1.6. Yesterday I tried to drop a column family and received the message "Column family ID mismatch".

I tried running nodetool repair, but after the repair was complete I got the same message. I then tried selecting from the column family, but got the message "Column family not found".

I ran the following query to get a list of all column families in my schema:
select columnfamily_name from system.schema_columnfamilies where keyspace_name = 'xxx';
At this point I received the message
"Keyspace 'system' not found."

I tried the command describe keyspaces and, sure enough, system was not in the list of keyspaces.

I then tried nodetool resetlocalschema on one of the nodes missing the system keyspace, and when that failed to resolve the problem I tried nodetool rebuild, but got the same messages after the rebuild was complete.
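
For reference, the commands were along these lines (the host is a placeholder):

nodetool -h <node_address> resetlocalschema
nodetool -h <node_address> rebuild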

I tried stopping the nodes that were missing the system keyspace and restarting them. Once the restart was complete, the system keyspace was back and I was able to execute the above query successfully. However, the table I had tried to drop previously was not listed, so I tried to recreate it and once again received the "Column family ID mismatch" message.

Finally, I shut down the cluster and restarted it… and everything works as expected.

My questions are:
How/why did the system keyspace disappear?
What happened to the data being inserted into my column families while the system keyspace was missing from two of the six nodes? (my application didn't seem to have any problems)
Is there a way I can detect problems like this automatically or do I have to manually check up on my keyspaces each day?
Is there a way to fix the missing system keyspace and/or the Column family ID mismatch without restarting the entire cluster?

EDIT
As per Jim Meyers' suggestion, I queried the cf_id on each node of the cluster and confirmed that all nodes return the same value.

select cf_id from system.schema_columnfamilies where columnfamily_name = 'customer' allow filtering;

cf_id
--------------------------------------
cbb51b40-2b75-11e5-a578-798867d9971f

I then ran ls on my data directory and can see that there are multiple directories for a few of my tables:
customer-72bc62d0ff7611e4a5b53386c3f1c9f9
customer-cbb51b402b7511e5a578798867d9971f
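
For reference, the listing came from something like the following, assuming the default data directory location:

ls /var/lib/cassandra/data/xxx/ | grep customer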

My application dynamically creates tables at run time (always using IF NOT EXISTS). It seems likely that the application issued the same CREATE TABLE statement to separate nodes at the same time, resulting in the schema mismatch. Since I restarted the cluster, everything seems to be working fine.
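
To illustrate, the statements the application issues at run time look roughly like this (the columns are simplified for the example); the collision happens when two clients send the same statement to different coordinators at nearly the same moment:

create table if not exists xxx.customer (
    id uuid primary key,
    name text
);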

Is it safe to delete the extra directory?
i.e. customer-72bc62d0ff7611e4a5b53386c3f1c9f9

Best Answer

1. The cause of this problem is a CREATE TABLE statement collision. Do not generate tables dynamically from multiple clients, even with IF NOT EXISTS. The first thing you need to do is fix your code so that this does not happen. Create your tables manually from cqlsh, allowing time for the schema to settle, and always wait for schema agreement when modifying the schema.
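
For example, a one-off manual creation from cqlsh, followed by a check that every node reports a single schema version before issuing any further DDL (the table definition is illustrative):

cqlsh -e "create table if not exists xxx.customer (id uuid primary key, name text);"
nodetool describecluster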

2. Here's the fix:

1) Change your code to not automatically re-create tables (even with IF NOT EXISTS).

2) Run a rolling restart to ensure the schema matches across nodes, then run nodetool describecluster on each node and check that there is only one schema version.
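
The output should list exactly one entry under "Schema versions", along these lines (the version UUID and addresses are illustrative):

nodetool describecluster

Cluster Information:
    Name: ...
    Schema versions:
        86afa796-d883-3932-aa73-6b017cef0d19: [10.0.0.1, 10.0.0.2, 10.0.0.3, 10.0.0.4, 10.0.0.5, 10.0.0.6]

Two or more entries under "Schema versions" means the nodes still disagree about the schema.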

ON EACH NODE:

3) Check your filesystem and see if you have two directories for the table in question in the data directory.
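
A quick way to spot tables that have more than one directory, assuming the default data directory (adjust the path and keyspace name to your install):

ls -d /var/lib/cassandra/data/<keyspace>/*-* | sed 's/-[0-9a-f]\{32\}$//' | sort | uniq -d

Any table name printed here has duplicate directories.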

IF THERE ARE TWO OR MORE DIRECTORIES:

4) Identify from system.schema_columnfamilies which cf_id is the "new" one (currently in use):

cqlsh -e "select * from system.schema_columnfamilies" | grep <table_name>

5) Move the data from the "old" one to the "new" one and remove the old directory. 
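
A sketch of step 5 using the directory names from the question and the default data directory (adjust paths to your install, and check for sstable file name collisions between the two directories before moving anything):

# "old" = the cf_id no longer listed in system.schema_columnfamilies
OLD=/var/lib/cassandra/data/xxx/customer-72bc62d0ff7611e4a5b53386c3f1c9f9
NEW=/var/lib/cassandra/data/xxx/customer-cbb51b402b7511e5a578798867d9971f
mv "$OLD"/* "$NEW"/
rmdir "$OLD"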

6) If there are multiple "old" directories, repeat step 5 for each of them.

7) Run nodetool refresh for the affected keyspace and table.
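
For example, for the table from this question:

nodetool refresh xxx customer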

IF THERE IS ONLY ONE DIRECTORY:

No further action is needed.

Futures

Schema collisions will continue to be an issue until CASSANDRA-9424 is resolved.

Here's an example of it occurring on Jira and being closed as not a problem: CASSANDRA-8387
