Are REST and HATEOAS a good architecture for web services?


If I understand correctly, REST was formalized by Roy Fielding as a descriptive model of the architecture of the web. AFAIK Fielding didn't claim REST was any good; he was just describing the de facto architecture of the web. By that point the web had already proven to be an enormously successful distributed hypertext system, which kind of validates REST as a successful architecture for the domain of distributed hypermedia primarily navigated and consumed by humans.

REST web services were created by applying the REST architecture to APIs. But is there actually any reason to think REST is a desirable architecture for this domain? More specifically, is there any evidence that says HATEOAS is a beneficial design principle for machine-to-machine communication?

My concern is that HATEOAS makes sense for hypermedia because there are a few well-known content types (HTML, images, video, etc.) and the client knows how to consume them. But for APIs the content types are very specific and can only be consumed in a meaningful way if the client has been specifically programmed to consume them. Returning a URL to the client does not in itself make the client able to consume the indicated resource.
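To make the concern concrete, here is a rough sketch; the order resource and the "cancel" link relation are invented for illustration:

    # Hypothetical HAL-style response from an order API; the resource
    # shape and the "cancel" link relation are made up.
    response = {
        "id": 42,
        "status": "processing",
        "_links": {
            "self":   {"href": "/orders/42"},
            "cancel": {"href": "/orders/42/cancel"},
        },
    }

    # A generic client can locate the "cancel" link by its relation
    # name, but only a client that was programmed with the semantics
    # of "cancel" knows what following that link actually does. The
    # URL itself carries no such knowledge.
    cancel_url = response["_links"]["cancel"]["href"]
    print(cancel_url)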

Best Answer

AFAIK Fielding didn't claim REST was any good; he was just describing the de facto architecture of the web.

That undersells it a bit, I would think. REST is, after all, an enumeration of the architectural style that Fielding was using as chief architect of the HTTP/1.1 spec.

But is there actually any reason to think REST is a desirable architecture for this domain? Is there any evidence that says HATEOAS is a beneficial design principle for machine-to-machine communication?

"It depends". HATEOAS is part of the uniform interface constraint of REST.

By applying the software engineering principle of generality to the component interface, the overall system architecture is simplified and the visibility of interactions is improved. Implementations are decoupled from the services they provide, which encourages independent evolvability. The trade-off, though, is that a uniform interface degrades efficiency, since information is transferred in a standardized form rather than one which is specific to an application's needs. The REST interface is designed to be efficient for large-grain hypermedia data transfer, optimizing for the common case of the Web, but resulting in an interface that is not optimal for other forms of architectural interaction.
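To illustrate the trade-off Fielding describes, compare a standardized hypermedia form with an application-specific one; both payloads below are invented:

    # Standardized hypermedia form: more verbose, but any client that
    # understands the media type and the link relations can navigate it.
    uniform = {
        "status": "processing",
        "_links": {
            "self":    {"href": "/orders/42"},
            "payment": {"href": "/orders/42/payment"},
        },
    }

    # Application-specific form: compact and efficient on the wire, but
    # usable only by a client built against this exact, private contract.
    specific = [42, 1, 7001]  # order id, status code, payment id

    print(uniform, specific)

The uniform form pays in bytes and generality; the bespoke form pays in coupling.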

So let's think for a moment about what this means. When I'm having trouble with my wireless router, I can communicate with it using the same browser that I use to submit answers to stackexchange. In particular, it doesn't matter what browser I'm using, or whether my browser is a few updates behind (or ahead) of what the router is expecting. It doesn't matter that the engineering organization that wrote the browser is completely independent of the organization that created the router interface.

It just works.

It's not, of course, universal. Fielding, in 2008, wrote:

That doesn’t mean that I think everyone should design their own systems according to the REST architectural style. REST is intended for long-lived network-based applications that span multiple organizations. If you don’t see a need for the constraints, then don’t use them.

The constraints that form the REST architectural style were chosen for the properties that they induce; if those properties aren't valuable to your use case, then you should absolutely be considering dropping the corresponding constraints.

Where machine-to-machine communication gets difficult is that you've lost the human being's ability to fuzzy-match the semantics provided by the representations. Clients can get by with knowing just the media types, but we normally have a human being looking at the semantic cues to derive meaning.

schema.org is one part of an effort to create a machine-readable vocabulary; the machine agent uses the client to find the semantic hints and applies its own understanding of their meaning to choose the correct actions to take.
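As a rough sketch of what that can look like (the product data is made up), here is a representation annotated with the schema.org vocabulary in its JSON-LD serialization, and an agent that keys off the shared vocabulary rather than an API-specific schema:

    import json

    # A representation annotated with the shared schema.org vocabulary.
    # Because the vocabulary is shared, an agent that knows schema.org,
    # rather than this particular API, can extract meaning from it.
    document = json.loads("""
    {
      "@context": "https://schema.org",
      "@type": "Product",
      "name": "Wireless Router",
      "offers": {"@type": "Offer", "price": "49.99", "priceCurrency": "USD"}
    }
    """)

    # The agent acts on the shared vocabulary, not a private contract.
    if document.get("@type") == "Product":
        offer = document.get("offers", {})
        print(document["name"], offer.get("price"), offer.get("priceCurrency"))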

But it's work; you need to invest in developing machine-friendly representations of your resources, and in ensuring that those representations remain forward and backward compatible, so that clients can be developed independently.
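One common way to do that work on the consuming side is a "tolerant reader"; here is a minimal sketch, with invented field names, assuming a JSON order representation:

    def read_order(representation: dict) -> dict:
        """Extract only the fields this client understands.

        Unknown fields are ignored (forward compatibility) and missing
        optional fields get defaults (backward compatibility), so the
        server can evolve the representation without breaking this client.
        """
        return {
            "id": representation["id"],                         # required
            "status": representation.get("status", "unknown"),  # optional
        }

    # Works against an older, minimal representation...
    print(read_order({"id": 42}))
    # ...and against a newer one that has grown fields we don't know about.
    print(read_order({"id": 42, "status": "shipped", "eta": "2024-06-01"}))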

When a single organization controls both the client and the server, the benefits of this independence are a lot smaller, in which case the constraint may not be an appropriate architectural choice.
