MySQL cluster – a wonderful idea. It’s really nice: you have a front end, and multiple redundant back ends, and all the data is loaded into memory, so things go very fast, even queries that inefficiently scan vast swaths of data, all executing in parallel with failsafe redundancy. Not only this, but you get to use words like “cluster” and “database” in one sentence, and it’s all very cool until you hit a limit:
- You run out of space.
- You have too many rows.
- You have too many columns.
- Too many concurrent queries – but they don’t block, they die.
- Old operations that interfere with new operations.
- Your query is too big.
- Your back end dies and you didn’t monitor it, and then they all die.
- You find that one of the weird restrictions actually matters – like you can’t address both ndbcluster and myisam tables in a trigger, or you can’t do certain types of query that, um, well, are bigly big or something.
So, in the shorter term, you migrate archive data away from the ndbcluster tables and into MyISAM tables. (Why not innodb? Because you need to make copies of these things, and tying up your front end to suck out innodb tables is not much fun.)
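The move itself can be as simple as materialising the old rows into a MyISAM table. A sketch, with hypothetical table and column names:

```sql
-- Hypothetical names: copy cold rows out of the cluster into a MyISAM
-- table, then delete them from the ndbcluster table once the copy
-- checks out.
CREATE TABLE archive_2012 ENGINE=MyISAM
  AS SELECT * FROM live_events WHERE created < '2013-01-01';
```

The attraction is that a MyISAM table is just a handful of files on disk, so a copy can be taken at the filesystem level once the table is flushed and locked, rather than with a long SELECT tying up the front end.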
So today, for the longer term, we’re migrating back to plain InnoDB on some fair to amazing hardware. This means setting up a slave of a live system, and bringing it into sync – 150 GB of data. This has proved interesting:
- Row-level replication doesn’t work quite right with triggers and clusters, at least between a 5.1.51-ndb-7.1.10-cluster-gpl-log master and a 5.6.11 MySQL Community Server (GPL) slave. Some things simply do not happen — certain UPDATE statements don’t get replicated. Switching to STATEMENT replication seems to make the replication happen, but of course, replaying statements only keeps the two copies the same if they were in sync to begin with.
- The mysqldump of one of the tables made on the cluster could not be loaded, because of multiple entries sharing the same unique key. It was a less than unique unique key. I suppose I did run mysqldump with --skip-lock-tables, so maybe I deserved this.
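Switching the binlog format, as mentioned above, is a one-line change on the master. A sketch, with the caveat that the global setting only applies to sessions opened after the change:

```sql
-- Requires the SUPER privilege; existing connections keep their old
-- session-level binlog_format until they reconnect.
SET GLOBAL binlog_format = 'STATEMENT';

-- Confirm the change took:
SHOW GLOBAL VARIABLES LIKE 'binlog_format';
```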
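The not-so-unique unique key can at least be found before the load blows up. A sketch, with hypothetical table and column names:

```sql
-- Hypothetical names: list values that appear more than once in a
-- column the schema claims is unique.
SELECT supposedly_unique_col, COUNT(*) AS n
FROM dumped_table
GROUP BY supposedly_unique_col
HAVING n > 1;
```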
Anyhow, this brought us to reducing the size of the tables so that we could actually work with them. That has been fun, and this is where we ran into problems with a mass DELETE statement failing with some odd error that means “I’m busy with the stuff you said, so I can’t do more right now, sorry.”
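One common workaround (not necessarily what we ended up doing) is to delete in bounded chunks, so each transaction stays within the cluster’s per-transaction operation limits. Table name, cutoff, and batch size here are hypothetical:

```sql
-- Delete in chunks of 10,000 rows; repeat the statement until it
-- reports 0 rows affected.
DELETE FROM big_table
WHERE created < '2012-01-01'
LIMIT 10000;
```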
So there was this idea:
Why don’t we generate the SQL statements using SQL!
Now this is truly the daftest idea that was had all day. And fortunately, MySQL seemed to agree:
mysql> select eval('select 1');
ERROR 1305 (42000): FUNCTION eval does not exist
What a relief! Sanity prevailed! A language which had the foresight not to implement that one thing that makes script languages eval.
Or so we thought, until this gem, which is eval for mysql:
mysql> set @string='select 1';
Query OK, 0 rows affected (0.00 sec)

mysql> prepare eval from @string;
Query OK, 0 rows affected (0.01 sec)
Statement prepared

mysql> execute eval;
+---+
| 1 |
+---+
| 1 |
+---+
1 row in set (0.00 sec)
So that’s eval for MySQL. It’s ugly. Fortunately, nobody did this. Equally fortunate is that it takes two statements to do this, which seriously limits the opportunities for SQL injection from applications.
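For completeness, here is the pattern doing what we wanted it for in the first place: generating a statement with SQL and then executing it. The table name is hypothetical, and DEALLOCATE cleans up the prepared statement afterwards:

```sql
-- Build a DELETE statement as a string, then PREPARE/EXECUTE it.
SET @string = CONCAT('DELETE FROM big_table WHERE id < ',
                     (SELECT MIN(id) + 10000 FROM big_table),
                     ' LIMIT 10000');
PREPARE eval FROM @string;
EXECUTE eval;
DEALLOCATE PREPARE eval;
```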