I think versioning is probably the best argument. When you have an existing operation contract like
Person[] GetPersons(int countryId);
that you later want to enhance, e.g. with another filter:
Person[] GetPersons(int countryId, int age);
You would have to write a new operation contract with a new name, since operation names must be unique. Or you would keep the name and publish a new v2 of your service, with the old v1 still around for backwards compatibility.
If you wrap the parameters into an object, you can always extend it with optional data members, and all your existing clients will be unaffected as long as you reuse the same operation contract.
However, I would also urge you to name your request objects appropriately. Even if one just wraps an int, starting out with a name like IntMessage
or something similar does you no favors when you extend it later. Name it e.g. PersonFilter
right from the start, which forces you to think a little about what this service call expects semantically as a parameter, and therefore about what it's supposed to do. Maybe (and that's a very vague maybe) that will help you develop the right services and maintain a decently sized API.
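To make the idea concrete, here is a minimal sketch of the wrapped-parameter pattern. The PersonFilter and GetPersons names come from the answer above; the Age member, IPersonService interface, and Person type are hypothetical placeholders for illustration:

```csharp
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class PersonFilter
{
    [DataMember(IsRequired = true)]
    public int CountryId { get; set; }

    // Added in "v2"; marked optional, so old clients that never
    // send it keep working against the same operation contract.
    [DataMember(IsRequired = false)]
    public int? Age { get; set; }
}

[DataContract]
public class Person
{
    [DataMember]
    public string Name { get; set; }
}

[ServiceContract]
public interface IPersonService
{
    // The operation name and signature survive the v1 -> v2 change,
    // because only the PersonFilter contract grows.
    [OperationContract]
    Person[] GetPersons(PersonFilter filter);
}
```

The key point is that extending PersonFilter with optional members does not change the operation signature, so no new operation name or service version is needed.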
It allows common data and behavior to be isolated to a base class
That's something to be cautious with. Inheritance and data contracts don't go together that well. It does work, but you have to declare every subtype that could go over the wire; otherwise the data contract serializer fails, complaining about unknown types.
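A short sketch of what that declaration looks like, using hypothetical Person/Employee types for illustration (the mechanism is the [KnownType] attribute from System.Runtime.Serialization):

```csharp
using System.Runtime.Serialization;

// Base contract holding the shared data. Without the [KnownType]
// declaration, the DataContractSerializer throws when an Employee
// instance arrives where a Person is expected.
[DataContract]
[KnownType(typeof(Employee))]
public class Person
{
    [DataMember]
    public string Name { get; set; }
}

[DataContract]
public class Employee : Person
{
    [DataMember]
    public string Department { get; set; }
}
```

Every new subtype means touching the base contract (or service configuration) again, which is part of why inheritance in data contracts tends to be more trouble than it's worth.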
But what you could do (though you still probably shouldn't; I'm undecided on this) is reuse the same messages among different services. If you put the data contracts in a separate assembly, you can share it between client and services, and you don't need to convert between types when you call different services that expect essentially the same message. E.g. you create a PersonFilter, submit it to one service to get a filtered list of persons, and then submit it to another service, working with the same objects on the client. I can't find a good real-world example for this, though, and the danger is always that an extension to the data contract is not general enough for all the services that use it.
Overall, apart from versioning, I can't really find the killer reason for doing it that way either.
Nothing. You're still free to use WCF where it is most suitable, or at your own discretion.
ASP.NET MVC has supported a RESTful communication style since its inception, and many people use it as a thin veneer for RESTful services. That doesn't automatically make WCF obsolete, or make ASP.NET MVC the One Tool to Rule Them All™.
This is why carpenters and other craftsmen don't just have one type of hammer. They have several different types, each optimized for a particular type of hammering.
To help you decide which to use, listen to this Hanselman podcast:
This is not your father's WCF - All about the WebAPI with Glenn Block
How does WCF fit into a world of Web 2.0 lightweight APIs? What's the
WCF WebAPI and how does it compare to services in ASP.NET MVC?
I haven't personally looked at it yet, but it wouldn't surprise me if the Web API you refer to in ASP.NET MVC 4, and the new WebAPI in WCF, turn out to be the same thing. Phil Haack is probably using WCF to implement WebAPI internally in ASP.NET MVC 4, or they both resolve to the same internal mechanism.
See Also
http://wcf.codeplex.com/wikipage?title=WCF%20HTTP
SignalR has a JavaScript API. SignalR helps build asynchronous scalable web applications with real-time persistent long-running connections. Scott Hanselman wrote a great blog post about this.
If that's not your speed, you may be looking for something more like WCF Support for jQuery, which seems to have sprung up from the old WCF-RIA jQuery client. It looks like this will be part of ASP.NET MVC 4. The wiki page for that project seems to indicate that it's flexible enough to plug into WebSockets.
I'm a little concerned about WebSockets in the near term. There are some questions about just how Internet infrastructure (particularly in corporate environments, where proxy servers are common) will handle WebSockets. I believe SignalR has strategies to help mitigate this by negotiating fallback transports when WebSockets don't work. Scott Hanselman has another blog post which makes some good points in this direction.