We have a web app designed with highly modular, small components (in this case AngularJS directives, but they could just as easily be WebComponents, ReactJS components, or any other technology). Components often make asynchronous REST API calls, on initialization or on user interaction. This design causes many API calls per page (sometimes 20+). Is there any problem with this design? Some are suggesting we condense the API calls into larger client-side services that act as singletons, so 10 API calls might be reduced to 1, even though a page may only use a portion of that data. Are there any red flags or problems with either design? Which should be preferred?
Rest – Too many REST API calls on a page
api, async, rest
Related Solutions
Is it appropriate to mix some sort of action call with a resource URI (e.g.
/collection/123?action=resendEmail
)? Would it be better to specify the action and pass the resource id to it (e.g. /collection/resendEmail?id=123
)? Is this the wrong way to be going about it? Traditionally (at least with HTTP) the action being performed is the request method (GET, POST, PUT, DELETE), but those don't really allow for custom actions with a resource.
I'd rather model that in a different way, with a collection of resources representing the emails that are to be sent; the sending will be processed by the internals of the service in due course, at which point the corresponding resource will be removed. (Or the user could DELETE the resource early, causing a canceling of the request to do the send.)
Whatever you do, don't put verbs in the resource name! That's the noun (and the query part is the set of adjectives). Nouning verbs weirds REST!
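To make the answer above concrete, here is a minimal sketch (all names, such as `Outbox`, are invented for illustration, not from the answer) of treating pending email sends as resources: POST creates a pending-send resource, and the client can DELETE it before the worker picks it up.

```python
# Sketch: "send this email" modeled as a resource collection rather than a verb.
import uuid

class Outbox:
    def __init__(self):
        self.pending = {}  # id -> email payload awaiting send

    def post(self, email):
        """POST /outbox -- enqueue an email; returns the new resource id."""
        rid = str(uuid.uuid4())
        self.pending[rid] = email
        return rid

    def delete(self, rid):
        """DELETE /outbox/{id} -- cancel a send that hasn't happened yet."""
        return self.pending.pop(rid, None) is not None

outbox = Outbox()
rid = outbox.post({"to": "user@example.com", "subject": "Welcome"})
assert rid in outbox.pending   # resource exists until the service sends it
assert outbox.delete(rid)      # client cancels early: resource removed
assert not outbox.delete(rid)  # a second DELETE finds nothing
```

In a real service a background worker would drain `pending`, removing each resource as the corresponding mail goes out.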
I use the querystring portion of the URL to filter the set of resources returned when querying a collection (e.g.
/collection?someField=someval
). Within my API controller I then determine what kind of comparison to perform with that field and value. I've found this really doesn't work. I need a way to allow the API user to specify the type of comparison they want to perform. The best idea I've come up with so far is to allow the API user to specify it as an appendage to the field name (e.g.
/collection?someField:gte=someval
- to indicate that it should return resources where someField is greater than or equal to (>=) whatever someval
is. Is this a good idea? A bad idea? If so, why? Is there a better way to allow the user to specify the type of comparison to perform with the given field and value?
I'd rather specify a general filter clause and have that as an optional query parameter on any request to fetch the contents of the collection. The client can then specify exactly how to restrict the set returned, in whatever way you desire. I'd also worry a bit about the discoverability of the filter/query language; the richer you make it, the harder it is for arbitrary clients to discover. An alternative approach which, at least theoretically, deals with that discoverability issue is to allow making restriction sub-resources of the collection, which clients obtain by POSTing a document describing the restriction to the collection resource. It's still a slight abuse, but at least it's one you can clearly make discoverable!
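For what the question's appendage scheme might look like server-side, here is a hypothetical parser (the operator set, `gte`, `lte`, and so on, is an assumption, not a standard):

```python
# Hypothetical parser for the field:op=value convention from the question.
ALLOWED_OPS = {"eq", "ne", "lt", "lte", "gt", "gte"}

def parse_filter(key, value):
    """Split a query key like 'someField:gte' into (field, op, value).

    A bare key defaults to equality, mirroring /collection?someField=someval.
    """
    field, _, op = key.partition(":")
    if not op:
        op = "eq"
    if op not in ALLOWED_OPS:
        raise ValueError(f"unsupported comparison: {op}")
    return field, op, value

assert parse_filter("someField:gte", "10") == ("someField", "gte", "10")
assert parse_filter("someField", "x") == ("someField", "eq", "x")
```

Whitelisting the operators is what keeps the scheme discoverable and safe: anything outside the documented set is rejected rather than silently treated as part of the field name.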
This sort of discoverability is one of the things that I find least strong with REST.
I often see URI's that look something like
/person/123/dogs
to get the person's dogs. I have generally avoided something like that because, in the end, I figure that by creating a URI like that you are actually just accessing a dogs collection filtered by a specific person ID. It would be equivalent to /dogs?person=123
. Is there ever really a good reason for a REST URI to be more than two levels deep (/collection/resource_id
)?
When the nested collection is truly a sub-feature of the outer collection's member entities, it is reasonable to structure them as a sub-resource. By “sub-feature” I mean something like a UML composition relation, where destroying the outer resource naturally means destroying the inner collection.
Other types of collection can be modeled as an HTTP redirect; thus /person/123/dogs
can indeed be responded to by doing a 307 that redirects to /dogs?person=123
. In this case, the collection isn't actually UML composition, but rather UML aggregation. The difference matters!
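A sketch of the redirect idea, assuming hypothetical route handlers (the route and function names are illustrative):

```python
# Aggregation (a person's dogs exist independently of the person), so the
# nested URI redirects to the canonical filtered collection with a 307.
def get_person_dogs(person_id):
    """Handle GET /person/{person_id}/dogs by redirecting."""
    return 307, {"Location": f"/dogs?person={person_id}"}

status, headers = get_person_dogs(123)
assert status == 307
assert headers["Location"] == "/dogs?person=123"
```

A truly composed sub-collection, by contrast, would be served directly at the nested URI rather than redirected.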
Option 1 (multiple async calls) is the best choice because:
- each individual call is its own entity, so you can retry individually if something happens to fail. In the monolithic 'one-call' architecture, if one thing fails you have to do the entire call again
- the server side code will be simpler: again, modularity, meaning that different developers can work on different API resources
- In a typical MVC pattern it doesn't make sense to have one API call load multiple separate resources; for example, if you make a request to /products to get a list of products to show on a page, and you also want to display a list of locations where popular products are sold, you have two separate resources: Product and Location. Though they are displayed on the same page, you can't logically make a call to /products and have it also return locations as well
- your logging/utilization reports will be simpler in the modular approach. If you make a request to /products and you're also loading locations, your log files are going to be really confusing
- if you have a problem with a particular resource, the one-call approach will cause the entire page to break, and it won't be obvious to your users what broke - and this means it will take longer for your team to fix the issue; however, in the modular approach if one thing breaks, it will be very obvious what broke and you can fix it faster. It also won't ruin the rest of the page (unless things are too closely coupled...)
- it will be easier to make changes in general if things are separated; if you have 5 resources being loaded by one API call, it will be harder to figure out how to not break things when you want to change something
The whole point is that resources are separate, and in a REST API returning many separate resources from a single API path doesn't make sense, even if you're "saving connections to the server". By the way, using parameters to conditionally load (different) resources is not RESTful.
All that said, the only logical option is to make multiple async requests to separate resources: take the modular approach!
PS - Don't prematurely optimize away "connections to the server", especially when HTTP connections are incredibly low overhead and you're on a LAN. That kind of thinking instead of choosing the simpler design right off the bat is going to get you into trouble later.
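The retry point in the first bullet can be illustrated with a toy sketch (Python's asyncio standing in for the client's async HTTP layer; `fetch` here is a fake, not a real HTTP call):

```python
# With separate requests, only the failing resource is retried, not the
# whole page load.
import asyncio

async def fetch(resource, fail_once=frozenset()):
    # Simulate a transient failure on the first attempt for some resources.
    if resource in fail_once:
        fail_once.discard(resource)
        raise ConnectionError(resource)
    return f"data for {resource}"

async def fetch_with_retry(resource, attempts=2, **kw):
    for i in range(attempts):
        try:
            return await fetch(resource, **kw)
        except ConnectionError:
            if i == attempts - 1:
                raise

async def load_page():
    flaky = {"/locations"}  # this resource fails on its first attempt
    # All component requests run concurrently; /locations is retried on
    # its own, while /products is fetched exactly once.
    return await asyncio.gather(
        fetch_with_retry("/products", fail_once=flaky),
        fetch_with_retry("/locations", fail_once=flaky),
    )

results = asyncio.run(load_page())
assert results == ["data for /products", "data for /locations"]
```

In the monolithic one-call design, that single transient failure would have forced the client to re-request everything.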
Related Topics
- REST API vs Direct DB Calls – Best Practices for Desktop Applications
- Rest – By the book REST vs Too Many Requests
- REST API Calls with Side Effects on Other Resources
- REST API: How to Update Many-to-Many Relationships
- REST API Design – Multiple Calls vs Single Call
- REST API – Automating REST API Call Chaining
Best Answer
There shouldn't be. The fact that each request is small and async means you can greatly speed up your web app, rather than having to wait for a single large request to complete, which blocks everything.
Just make sure your JavaScript is properly asynchronous and can do other work while requests are in flight, and you will end up with a much better app than if you had one massive request that fetched everything.
After all, browsers are designed to load many URLs in tandem; even a typical standard web page may make tens if not hundreds of requests for images, CSS, JavaScript, iframes, etc.
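As a rough illustration of why many small concurrent requests needn't be slow (Python asyncio standing in for the browser here, with `asyncio.sleep` faking network latency):

```python
# Many small concurrent requests finish in roughly the time of the slowest
# one, not the sum of all of them.
import asyncio, time

async def fake_request(i):
    await asyncio.sleep(0.05)  # pretend each call takes ~50 ms of network time
    return i

async def main():
    start = time.monotonic()
    results = await asyncio.gather(*(fake_request(i) for i in range(20)))
    return results, time.monotonic() - start

results, elapsed = asyncio.run(main())
assert results == list(range(20))
assert elapsed < 0.05 * 20  # far less than 20 sequential round trips
```

The 20+ calls in the question behave the same way in a browser: as long as they are issued concurrently, the page is gated by the slowest response, not by the count.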