This sounds suspiciously like you're trying to reinvent output caching. I definitely wouldn't agree that partial page caching is hard to do in MVC; it's just that you need to use partial Views with their own Controller actions, rather than having the View itself call its sub-Views, which can be slightly counter-intuitive. (So in ASP.NET MVC you'd want to use Html.RenderAction rather than Html.RenderPartial.) There's a name for this particular pattern that is currently escaping my recollection.
I would suggest that the main flaw with your design is that Views will have to know things about the architecture of the site. They'll have to know where to get their data from AND how to get it, where to cache it, when to cache it, when not to cache it, and so on.
Realistically you should be trying to separate layers away from knowledge of other layers, as the less knowledge each layer has of another the easier it is to change a layer (i.e. switch DB, add a transparent data caching layer, move a DB call across to a web service, etc.).
I would suggest that if you're going to implement such an idea, and there isn't a native output caching system in your MVC-framework-of-choice, you add the caching layer at the Controller actions. Unless you've got some VERY heavyweight Views that need to do large amounts of recursive rendering of Models (something that's very rare), the actual HTML generation time is minuscule compared to DB calls and client network latency, so take a more pragmatic approach and cache where you need to cache, i.e. in the application layer.
If you really need custom output caching then you probably want to just slip in a caching layer above your Controller actions with either a wrapping class (if you're using a dynamic language) or a different implementation of an interface (if in a static language) that can hijack the calls to the Action and react accordingly.
As for caching that reacts to DB changes, you'd be better off with a caching layer that takes into account both loading and saving in your repository class, with each save call flushing the cache (or part of the cached set) and each load gracefully degrading from cache to DB when needed (i.e. use the cache when the cached data is available). That way you keep the database-driven behaviour close to the database.
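A minimal sketch of that repository shape, assuming a dict-like backing store in place of a real database (class and method names are illustrative, not a real library):

```python
class CachedUserRepository:
    """Repository whose save() flushes the cached entry and whose
    load() degrades gracefully from cache to the backing store."""

    def __init__(self, db):
        self._db = db        # any dict-like backing store stands in for the DB
        self._cache = {}

    def load(self, user_id):
        if user_id in self._cache:
            return self._cache[user_id]   # cache hit: no DB round-trip
        user = self._db[user_id]          # cache miss: fall back to the DB
        self._cache[user_id] = user
        return user

    def save(self, user_id, user):
        self._db[user_id] = user
        self._cache.pop(user_id, None)    # flush the now-stale cache entry
```

Because invalidation lives inside the repository, callers never need to know a cache exists, which keeps the layers ignorant of each other as described above.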
My recommendation is to look at your usage profile and your requirements for the cache.
I can see no reason why you would leave stale data in memcached. I think you have picked the right approach, i.e. update the DB.
In any case, you're going to need a wrapper on your DB update (which you've done). Your code to update the User in the DB and in RAM should also push to memcached, OR expire the key in memcached.
For example: if your users normally do an update once per session as part of log-off, there's not much point updating the data in cache (e.g. a high-score total) - you should expire it straight away.
If, however, they are going to update the data (e.g. current game state) and then 0.2 seconds later an immediate PHP page hit is going to request it, you'd want it fresh in the cache.
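Both branches of that decision can live in the same update wrapper. A hedged sketch in Python (the `FakeCache` class is a stand-in for a memcached client's set/delete/get calls; `update_user` and the `read_soon` flag are hypothetical names, not a real API):

```python
class FakeCache:
    """Stand-in for a memcached client exposing set/delete/get."""
    def __init__(self):
        self._data = {}

    def set(self, key, value):
        self._data[key] = value

    def delete(self, key):
        self._data.pop(key, None)

    def get(self, key):
        return self._data.get(key)

def update_user(db, cache, user_id, fields, read_soon):
    db[user_id] = fields          # the DB is always the source of truth
    key = f"user:{user_id}"
    if read_soon:
        cache.set(key, fields)    # push: the imminent page hit reads it fresh
    else:
        cache.delete(key)         # expire: don't keep data that won't be re-read
```

The `read_soon` decision is exactly the usage-profile question above: push for data that is about to be requested, expire for data written on the way out.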
One question is whether the cache itself is really a requirement that should be tested by QA. Caching improves performance, so they could test the difference in performance to ensure it meets some requirement.
But it's a good idea to have some testing around caching, whoever is responsible for it. We used performance counters; if your cache system exposes them, they're straightforward to check. If there's any way to get a hit/miss count from the cache itself, that's another option.
Your approach works nicely too. If any of these checks are wrapped in automated tests that verify the results, no one has to dig through logs to find answers.