
In 2012 MySQL had several flavors of replication, each with its own very serious pitfalls that could introduce corruption or loss of data. I saw enough MySQL replication issues in those days that I wouldn't want to use it.

Sure, it was easy to get a proof of concept working. But when you tried to break it by turning off the network and/or machines, shit broke in ways that were not recoverable. I'm guessing most people who set up MySQL replication didn't actually verify that it worked well when SHTF.
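For anyone who wants to run that kind of failure test themselves, a rough sketch of the check on the replica after pulling the plug might look like this (assuming classic async replication; mydb.orders is a made-up table name, and MySQL 8.0.22+ renames the statement to SHOW REPLICA STATUS):

    -- on the replica: did the I/O and SQL threads come back, and with what errors?
    SHOW SLAVE STATUS\G
    -- key fields: Slave_IO_Running, Slave_SQL_Running, Seconds_Behind_Master, Last_SQL_Error

    -- then compare actual data, not just thread state: run on both primary and
    -- replica and diff the results (or use pt-table-checksum against live traffic)
    CHECKSUM TABLE mydb.orders;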



Maybe that was true in 2012 (maybe it was related to MyISAM), but by ~2015, with InnoDB, MySQL replication was rock solid.


It was not related to MyISAM.

How did you verify that it was rock solid? And which of the variants did you use?
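For reference, the variants mostly come down to binlog format plus durability settings, roughly along these lines (shown as runtime variables for brevity; normally they would live in my.cnf, and semi-sync requires the separate plugin):

    -- statement-based vs row-based vs mixed replication
    SET GLOBAL binlog_format = 'ROW';               -- ROW avoids non-deterministic statements
    -- durability of the binlog and redo log on the primary
    SET GLOBAL sync_binlog = 1;
    SET GLOBAL innodb_flush_log_at_trx_commit = 1;
    -- semi-synchronous replication narrows the window for silent loss on
    -- primary failure; plain async gives no such guarantee
    -- SET GLOBAL rpl_semi_sync_master_enabled = 1;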


Many of the largest US tech companies were successfully using MySQL replication in 2012 without frequent major issues.

source: direct personal experience.


> pitfalls that could introduce corruption or loss of data

Sometimes, repairing broken data is easier than, say, upgrading a god damn hot DB.

MVCC is overrated. Not every row in a busy MySQL table is your transactional wallet balance. But to upgrade a DB you have to deal with every field, every row, every table, while the data keeps changing, which is a real headache.

Fixing a range of broken data, however, can be done by a junior developer. If you rely on an RDBMS as your single source of truth, you are probably fucked anyway.

btw I do hate DDL changes in MySQL.
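For what it's worth, MySQL 5.6+ can do some DDL online if you ask it to fail fast instead of silently copying the table; the table and column names below are made up:

    -- errors out immediately if InnoDB can't do the change in place,
    -- instead of taking a long table-copying lock
    ALTER TABLE orders
      ADD COLUMN note VARCHAR(255),
      ALGORITHM=INPLACE, LOCK=NONE;

    -- for changes that can't run in place, pt-online-schema-change or gh-ost
    -- rebuild the table in the background via triggers / binlog tailing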



