If you are doing an AJAX-heavy application, then one pro of REST is that you have the option of using JSON as your data-interchange format instead of XML. JSON requires less markup than XML, so it would speed up your application, since you would be sending less data over the wire.
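To give a sense of the difference, here is the same (made-up) record in both formats; the JSON version carries noticeably less markup:

```xml
<invitation>
  <host>Alice</host>
  <date>2024-06-01</date>
</invitation>
```

```json
{"host": "Alice", "date": "2024-06-01"}
```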
It also seems that REST is overtaking SOAP for web services, and it's always a good thing to use a more widespread technology when you bring on new developers, or if you want other developers to consume your data.
The pro for SOAP is that it already has structure built into it, but I don't see why you can't build your own structure into your REST solution.
EDIT: When I did telecom programming, our phone switch only supported SOAP, not REST. I found that SOAP required a bunch of boilerplate that just got in my way when I didn't need it.
Where I work, the decision was taken to conceal the use of protobuf. We don't distribute the .proto
files between applications, but rather, any application that exposes a protobuf interface exports a client library which can talk to it.
I have only worked on one of these protobuf-exposing applications, but in that, each protobuf message corresponds to some concept in the domain. For each concept, there is a normal Java interface. There is then a converter class, which can take an instance of an implementation and construct an appropriate message object, and take a message object and construct an instance of an implementation of the interface (as it happens, usually a simple anonymous or local class defined inside the converter). The protobuf-generated message classes and converters together form a library which is used by both the application and the client library; the client library adds a small amount of code for setting up connections and sending and receiving messages.
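As a rough illustration, a domain concept expressed as a protobuf message might look like this in the `.proto` file (the message and field names here are made up for the example, not the real schema):

```proto
// Illustrative only: one domain concept as a protobuf message.
message PartyInvitation {
  optional string host = 1;  // optional fields get hasHost()/hasDate()
  optional string date = 2;  // presence accessors in the generated class
}
```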
Client applications then import the client library, and provide implementations of any interfaces they wish to send. Indeed, both sides do the latter thing.
To clarify, that means that if you have a request-response cycle where the client is sending a party invitation, and the server is responding with an RSVP, then the things involved are:
- a `PartyInvitation` message, written in the `.proto` file
- a `PartyInvitationMessage` class, generated by protoc
- a `PartyInvitation` interface, defined in the shared library
- `ActualPartyInvitation`, a concrete implementation of `PartyInvitation` defined by the client app (not actually called that!)
- `StubPartyInvitation`, a simple implementation of `PartyInvitation` defined by the shared library
- `PartyInvitationConverter`, which can convert a `PartyInvitation` to a `PartyInvitationMessage`, and a `PartyInvitationMessage` to a `StubPartyInvitation`
- an `RSVP` message, written in the `.proto` file
- an `RSVPMessage` class, generated by protoc
- an `RSVP` interface, defined in the shared library
- `ActualRSVP`, a concrete implementation of `RSVP` defined by the server app (also not actually called that!)
- `StubRSVP`, a simple implementation of `RSVP` defined by the shared library
- `RSVPConverter`, which can convert an `RSVP` to an `RSVPMessage`, and an `RSVPMessage` to a `StubRSVP`
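Sketched in Java, the invitation side of that pattern might look roughly like this. `PartyInvitationMessage` is written out by hand here as a stand-in for the protoc-generated class, so the sketch is self-contained; the real generated class would have a builder API:

```java
// The domain-level interface, defined in the shared library.
interface PartyInvitation {
    String getHost();
    String getDate();
}

// Simple implementation used on the receiving side of the connection.
class StubPartyInvitation implements PartyInvitation {
    private final String host;
    private final String date;
    StubPartyInvitation(String host, String date) {
        this.host = host;
        this.date = date;
    }
    public String getHost() { return host; }
    public String getDate() { return date; }
}

// Hand-written stand-in for the class protoc would generate.
class PartyInvitationMessage {
    final String host;
    final String date;
    PartyInvitationMessage(String host, String date) {
        this.host = host;
        this.date = date;
    }
}

// The converter: domain object to message, and message to stub.
class PartyInvitationConverter {
    static PartyInvitationMessage toMessage(PartyInvitation invitation) {
        return new PartyInvitationMessage(invitation.getHost(), invitation.getDate());
    }
    static PartyInvitation fromMessage(PartyInvitationMessage message) {
        return new StubPartyInvitation(message.host, message.date);
    }
}
```

The point of the pattern is that only the converter and the shared library ever see the generated message class; everything else works against the plain `PartyInvitation` interface.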
The reason we have separate actual and stub implementations is that the actual implementations are generally JPA-mapped entity classes; the server either creates and persists them, or queries them up from the database, then hands them off to the protobuf layer to be transmitted. It wasn't felt that it was appropriate to be creating instances of those classes on the receiving side of the connection, because they wouldn't be tied to a persistence context. Furthermore, the entities often contain rather more data than is transmitted over the wire, so it wouldn't even be possible to create intact objects on the receiving side. I am not entirely convinced that this was the right move, because it has left us with one more class per message than we would otherwise have.
Indeed, I am not entirely convinced that using protobuf at all was a good idea; if we'd stuck with plain old RMI and serialization, we wouldn't have had to create nearly as many objects. In many cases, we could just have marked our entity classes as serializable and got on with it.
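For comparison, the plain-serialization route amounts to little more than this (the `Invitation` entity here is hypothetical; RMI does the byte shuffling for you, but this shows the mechanism):

```java
import java.io.*;

// Hypothetical entity: just mark it Serializable and send it as-is,
// with no generated message classes or converters.
class Invitation implements Serializable {
    private static final long serialVersionUID = 1L;
    final String host;
    Invitation(String host) { this.host = host; }
}

class SerializationDemo {
    // Serialize an object to bytes (roughly what RMI does under the hood)...
    static byte[] toBytes(Invitation invitation) {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(invitation);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return bos.toByteArray();
    }

    // ...and reconstruct the same object on the receiving side.
    static Invitation fromBytes(byte[] bytes) {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return (Invitation) in.readObject();
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }
}
```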
Now, having said all that, I have a friend who works at Google, on a codebase that makes heavy use of protobuf for communication between modules. They take a completely different approach: they don't wrap the generated message classes at all, and enthusiastically pass them deep(ish) into their code. This is seen as a good thing, because it's a simple way of keeping interfaces flexible. There is no scaffolding code to keep in sync when messages evolve, and the generated classes provide the hasFoo() methods that receiving code needs to detect the presence or absence of fields that have been added over time. Bear in mind, though, that people who work at Google tend to be (a) rather clever and (b) a bit nuts.
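A sketch of that style, with a hand-written stand-in for a generated message class so it runs on its own: receiving code simply checks hasFoo() before reading a field that newer senders may populate:

```java
// Hand-written stand-in mimicking the presence API that protoc generates
// for optional fields; a real generated class works the same way.
class UserMessage {
    private final String name;      // always set in this sketch
    private final String nickname;  // optional field, may be absent
    UserMessage(String name, String nickname) {
        this.name = name;
        this.nickname = nickname;
    }
    String getName() { return name; }
    boolean hasNickname() { return nickname != null; }
    String getNickname() { return nickname; }
}

class Greeter {
    // Receiving code written before "nickname" existed can still handle
    // messages from newer senders by checking presence first.
    static String greet(UserMessage message) {
        if (message.hasNickname()) {
            return "Hi, " + message.getNickname() + "!";
        }
        return "Hello, " + message.getName() + ".";
    }
}
```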
Best Answer
Does the business value of implementing them exceed the cost?
If you implement them, you need to change not just your server, but all clients (although you can support both formats and only change clients as needed). That will take time and testing, which is a direct cost. And don't underestimate the time taken to really understand protocol buffers (especially the reasons to make a field required or optional), and the time taken to integrate the protobuf compiler into your build process.
So does the value exceed that? Are you faced with a choice of "our bandwidth costs are X% of our revenues and we can't support that"? Or even "we need to spend $20,000 to add servers to support JSON"?
Unless you have a pressing business need, your "pros" aren't really pros, just premature optimization.