There is no rule, either in the HTTP specifications or the unofficial rules of REST, that says that a `PUT` must use the same schema/model as its corresponding `GET`.
It's nice if they're similar, but it's not unusual for `PUT` to do things slightly differently. For example, I've seen a lot of APIs that include some kind of ID in the content returned by a `GET`, for convenience. But with a `PUT`, that ID is determined exclusively by the URI and has no meaning in the content; any ID found in the body will be silently ignored.
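A minimal sketch of that behavior (the handler name, the in-memory store, and the field names are hypothetical, not from any particular framework): the ID comes from the URI, and any `id` in the body is dropped before the entity is stored.

```python
# Hypothetical PUT handler: the resource ID is taken from the URI,
# so a client-supplied "id" in the body is silently discarded.
STORE = {}

def put_item(uri_id, body):
    # Drop any "id" the client sent in the content; the URI is authoritative.
    clean = {k: v for k, v in body.items() if k != "id"}
    clean["id"] = uri_id
    STORE[uri_id] = clean
    return clean

stored = put_item(42, {"id": 999, "name": "widget"})
print(stored["id"])  # → 42, regardless of the "id" in the body
```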
REST, and the web in general, is heavily tied to the robustness principle: "Be conservative in what you do [send], be liberal in what you accept." If you agree philosophically with this, then the solution is obvious: ignore any invalid data in `PUT` requests. That applies both to immutable data, as in your example, and to actual nonsense, e.g. unknown fields.
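One common way to implement this liberal acceptance is a whitelist of writable fields: anything else in the body, whether read-only or unknown, is simply dropped. A sketch, with an assumed two-field schema:

```python
# Assumed schema: only these fields are writable via PUT.
WRITABLE = {"name", "description"}

def apply_put(current, body):
    """Full replacement of the writable portion of the entity.
    Read-only and unrecognized fields in the body are silently ignored."""
    updated = dict(current)
    for key, value in body.items():
        if key in WRITABLE:
            updated[key] = value
    return updated
```

Note that this stays idempotent: sending the same body twice yields the same stored entity.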
`PATCH` is potentially another option, but you shouldn't implement `PATCH` unless you're actually going to support partial updates. `PATCH` means "update only the specific attributes I include in the content"; it does not mean "replace the entire entity, but exclude some specific fields." What you're actually talking about is not really a partial update; it's a full update, idempotent and all, it's just that part of the resource is read-only.
A nice thing to do if you choose this option would be to send back a 200 (OK) with the actual updated entity in the response, so that clients can clearly see that the read-only fields were not updated.
There are certainly some people who think the other way - that it should be an error to attempt to update a read-only portion of a resource. There is some justification for this, primarily on the basis that you would definitely return an error if the entire resource was read-only and the user tried to update it. It definitely goes against the robustness principle, but you might consider it to be more "self-documenting" for users of your API.
There are two conventions for this, both of which correspond to your original ideas, but I'll expand on them. The first is to prohibit the read-only fields from appearing in the content, and return an HTTP 400 (Bad Request) if they do. APIs of this sort should also return an HTTP 400 if there are any other unrecognized/unusable fields. The second is to require the read-only fields to be identical to the current content, and return a 409 (Conflict) if the values do not match.
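The two strict conventions can be sketched as a pair of validators (the field names and error-body shape are illustrative assumptions, not a standard):

```python
# Assumed schema for illustration.
WRITABLE = {"name", "description"}
READ_ONLY = {"id", "created_at"}

def validate_strict_400(body):
    """Convention 1: reject any field that is not writable."""
    bad = set(body) - WRITABLE
    if bad:
        return 400, {"error": "non-writable fields in content",
                     "fields": sorted(bad)}
    return None, None

def validate_equality_409(body, current):
    """Convention 2: read-only fields may appear, but must match the
    current state of the resource."""
    mismatched = [k for k in READ_ONLY
                  if k in body and body[k] != current.get(k)]
    if mismatched:
        return 409, {"error": "read-only fields do not match current values",
                     "fields": mismatched}
    return None, None
```

In both cases the error body names the offending fields, which matters for the point about diagnostics below.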
I really dislike the equality check with 409 because it invariably requires the client to do a `GET` to retrieve the current data before it can do a `PUT`. That's just not nice, and it will probably lead to poor performance for somebody, somewhere. I also really dislike 403 (Forbidden) for this, as it implies that the entire resource is protected, not just a part of it. So my opinion is: if you absolutely must validate instead of following the robustness principle, validate all of your requests and return a 400 for any that have extra or non-writable fields.
Make sure your 400/409/whatever includes information about what the specific problem is and how to fix it.
Both of these approaches are valid, but I prefer the former, in keeping with the robustness principle. If you've ever worked with a large REST API, you'll appreciate the value of backward compatibility. If you ever decide to remove an existing field or make it read-only, the change is backward compatible if the server simply ignores those fields, and old clients will keep working. If you do strict validation on the content, however, the change is no longer backward compatible, and old clients will break. Ignoring unknown fields generally means less work for both the maintainer of an API and its clients.
Best Answer
When you add complexity, the code will run slower. Introducing a REST service where it isn't required will slow execution down, because the system is doing more work per request.
Abstracting the database is good practice. If you're worried about speed you could look into caching the data in memory so that the database doesn't need to be touched to handle the request.
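A minimal sketch of that kind of in-memory caching (the TTL, key shape, and `fetch_from_db` callback are all assumptions for illustration): reads are served from a local dict when a fresh entry exists, and only fall through to the database on a miss or expiry.

```python
import time

CACHE = {}
TTL_SECONDS = 30.0  # assumed freshness window

def get_record(key, fetch_from_db):
    """Return a record, consulting the database only on a cache miss
    or when the cached entry has expired."""
    entry = CACHE.get(key)
    if entry is not None:
        value, stored_at = entry
        if time.monotonic() - stored_at < TTL_SECONDS:
            return value  # cache hit: no database round trip
    value = fetch_from_db(key)
    CACHE[key] = (value, time.monotonic())
    return value
```

The trade-off, of course, is staleness: cached reads can lag behind writes by up to the TTL, which is only acceptable for some data.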
Before optimizing for performance, though, I'd look into what problem you're trying to solve and the architecture you're using; I'm struggling to think of a situation where the realistic options would be direct database access versus REST.