It is possible for the RDB file to become corrupt if the underlying storage has problems. Redis ships with a utility, redis-check-dump (renamed redis-check-rdb in Redis 3.2 and later), to validate a dump file, and you can use it to check a newly written dump for consistency.
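As a small illustration of what the checker guards against: a valid RDB file always begins with the ASCII magic string "REDIS" followed by a four-digit version number. A minimal sketch of that header sanity check (the file paths are made up for the example, and this is no substitute for running redis-check-dump itself, which also verifies the payload and checksum):

```shell
# A valid RDB dump starts with the header "REDIS" plus a 4-digit
# version, e.g. "REDIS0006". The files below are stand-ins for real dumps.
printf 'REDIS0006...' > /tmp/dump_ok.rdb
printf 'not-a-dump'   > /tmp/dump_bad.rdb

check_rdb_magic() {
  # Succeed only if the first five bytes are the RDB magic string.
  [ "$(head -c 5 "$1")" = "REDIS" ]
}

check_rdb_magic /tmp/dump_ok.rdb  && echo "header looks sane"
check_rdb_magic /tmp/dump_bad.rdb || echo "header invalid - run redis-check-dump"
```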
If the RDB file is corrupt, Redis will fail to start and report a somewhat cryptic error. There is a pull request to run the check program automatically on startup:

https://github.com/antirez/redis/pull/1744

but it has not been merged (yet).
The dump file is written by a forked background process, which means it cannot contain a 100% up-to-date copy of the keyspace. To get that, you need to use the AOF file and set it to fsync after every write (which has performance implications).
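Concretely, the every-write setup corresponds to two directives in redis.conf (a sketch; "everysec" is the default policy and is usually a better durability/throughput trade-off):

```
# redis.conf -- append-only file with an fsync after every write.
# Safest against data loss, but every write waits on the disk.
appendonly yes
appendfsync always     # alternatives: everysec (default), no
```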
Redis' architecture allows you to build many different solutions to this. For example, you could turn on AOF writing on the master with fsync set to "no", then create two slaves: one that only creates RDB files every 10 minutes, and another that uses AOF with "always" or "everysec" fsync. This gives you built-in redundancy on the master as long as its disk never fails; if it does, you can take the AOF file from the second slave and use it to bring the master back. If that fails too, you can fall back to the RDB slave, though in that case you might lose the data written since the last 10-minute dump.
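That layout can be sketched as three redis.conf fragments, one per server (the 192.168.0.10 address is a placeholder for your master; on Redis 5 and later the directive is `replicaof` rather than `slaveof`):

```
# --- master's redis.conf: AOF on, let the OS decide when to fsync ---
appendonly yes
appendfsync no

# --- slave A's redis.conf: RDB snapshot only ---
# save <seconds> <changes>: dump if at least 1 key changed in 600s
slaveof 192.168.0.10 6379
save 600 1
appendonly no

# --- slave B's redis.conf: AOF, fsync at most once per second ---
slaveof 192.168.0.10 6379
appendonly yes
appendfsync everysec
```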
This flexibility is part of what makes Redis so powerful: you can choose the level of redundancy to use based on the data you are storing.
Alternatively, you can go with a hosted Redis service and let them worry about the details.
Best Answer
The bug described in https://code.google.com/archive/p/redis/issues/525 no longer exists. While there's no documented way to get advance warning that a Pub/Sub client is not keeping up, Redis will kill connections from slow clients to protect itself from running out of memory, and will report this in its log file. Per the docs: