JSON/HTTP is a really good decoupling mechanism, and I'll throw out a couple of suggestions that will make it even more loosely coupled.
The rapid industry adoption of JSON/HTTP interfaces speaks well of how useful people find that model.
- Enforce a MUST IGNORE rule.
That is, when parsing the JSON (client or server), the app MUST IGNORE any fields it doesn't recognize.
XML went in with the idea that the app MUST UNDERSTAND every field or else the document was invalid. But that created versioning problems, because almost any change meant clients needed to upgrade every time the server did. Even adding an informational field broke the spec. With MUST IGNORE, the server can add new fields at any time, as long as it doesn't remove fields or change the meaning of existing ones (see below). Existing clients can just ignore the new fields. Rather, they MUST IGNORE the new fields.
A search on MUST IGNORE and MUST UNDERSTAND will reveal lots of good articles that talk about that.
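As an illustration, here's a minimal MUST IGNORE parser in Python (the payload and field names are made up for the example):

```python
import json

# Fields this client version knows about; anything else is ignored.
KNOWN_FIELDS = {"id", "amount", "currency"}

def parse_order(payload):
    """Parse an order document, silently ignoring unrecognized fields."""
    data = json.loads(payload)
    return {k: v for k, v in data.items() if k in KNOWN_FIELDS}

# A newer server added "loyalty_points"; this older client still works.
order = parse_order('{"id": 1, "amount": 9.99, "currency": "USD", "loyalty_points": 120}')
```

The old client keeps functioning against the new server because the unknown field simply never reaches its logic.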
- Minimize breaking changes.
A "breaking change" is a change that will break existing clients: removing a field the clients use, or changing the meaning of a field (e.g. changing an amount field from dollars to yen). In short, something that invalidates a client's assumptions about the data it's currently using.
With a breaking change, every client needs to make a change to support the new semantics or stop relying on missing fields. So don't do that unless it's necessary.
The next logical step gets kind of contentious -- but in the extreme you would never make a breaking change. That is, have full backward-compatibility for every release. That may or may not be realistic, and it may require carrying along baggage from early versions, but it will spare a lot of churn for the clients.
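To make the distinction concrete, here's a sketch of an additive, non-breaking change on the server side (the field names are hypothetical):

```python
import json

def render_invoice_v2(invoice):
    """Serve a new 'amount_cents' field while keeping the old 'amount'
    field with its original meaning, so existing clients keep working.
    This is an additive change, not a breaking one."""
    body = {
        "id": invoice["id"],
        "amount": invoice["amount"],                     # legacy field, unchanged
        "amount_cents": round(invoice["amount"] * 100),  # new field for new clients
    }
    return json.dumps(body)
```

Old clients (which MUST IGNORE `amount_cents`) see the same data as before; new clients can prefer the new field. Redefining `amount` itself, by contrast, would be the breaking version of this change.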
OAuth 2 is a really good bet for a well-thought-out, standardized security protocol. You could sit down and design something simpler, depending on what compromises are OK. But OAuth is a good, fleshed-out protocol that has undergone years of industry scrutiny, so they've had lots of time to work out the kinks. And standard libraries are readily available for both client and server. I used an OAuth plugin for Django on one project and it worked out really well.
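For reference, this is roughly what an OAuth 2 client-credentials token request looks like on the wire (per RFC 6749 §4.4; the endpoint URL and credentials below are placeholders -- in practice you'd let a library build and send this):

```python
from urllib.parse import urlencode

# Sketch of the token request a client-credentials flow would send.
# URL and credentials are made-up placeholders.
token_request = {
    "method": "POST",
    "url": "https://auth.example.com/oauth/token",
    "headers": {"Content-Type": "application/x-www-form-urlencoded"},
    "body": urlencode({
        "grant_type": "client_credentials",
        "client_id": "my-client-id",
        "client_secret": "my-client-secret",
    }),
}
```

The response is a JSON document containing an `access_token`, which the client then sends on API calls in an `Authorization: Bearer ...` header.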
Because of the ubiquity of JSON parsers, maintaining a single API regardless of client will make life a lot easier. Sometimes it doesn't work out -- sometimes a client can only understand XML or some proprietary protocol -- but starting simple and adding complexity only when needed makes life easier.
I have answered this question on StackOverflow as well - I place my answer here for easy reference...
The PRG pattern alone will not prevent this: the POST action takes time (which is usually the case), and the user can submit the form again (via click or browser refresh) before it completes, which will cause the PRG pattern to "fail".
Note that malicious users can also bypass all of your client-side measures by firing multiple HTTP POSTs in quick succession.
A solution to all of the above is to check for duplicate submissions on the server side against your anti-forgery token.
If you make use of a hidden anti-forgery token in your form (as you should), you can cache the anti-forgery token on first submit, then remove the token from the cache when required or expire the cached entry after a set amount of time.
You will then be able to check each request against the cache to see whether that specific form has already been submitted, and reject it if it has.
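A minimal sketch of that server-side check in Python, using an in-process dict as the cache (in a real deployment this cache would be shared, e.g. Redis or memcached, and the names here are my own):

```python
import time

# Tokens we've already accepted, mapped to the time we first saw them.
_seen_tokens = {}
TOKEN_TTL_SECONDS = 300  # expire cached entries after 5 minutes

def accept_submission(antiforgery_token, now=None):
    """Return True the first time a token is seen; False on duplicates."""
    now = time.time() if now is None else now
    # Drop expired entries so the cache doesn't grow without bound.
    for tok, ts in list(_seen_tokens.items()):
        if now - ts > TOKEN_TTL_SECONDS:
            del _seen_tokens[tok]
    if antiforgery_token in _seen_tokens:
        return False  # duplicate POST within the TTL window: reject
    _seen_tokens[antiforgery_token] = now
    return True
```

The request handler would call this before doing any work; a `False` result means respond with an error (or redirect to the already-created resource) instead of processing the form again.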
So if I understand correctly, the difference between the two solutions is really just in serving the first page if the external API is down.
Both solutions will fail subsequent requests if the external API remains down, so really there isn't a significant difference between them from the error handling and availability perspective.
Personally, I wouldn't design an app with such total dependency on the external API. I'd decouple the requests to my app from the requests to the external API somewhat, in the sense that I'd always provide a response, even if that just means informing the user that the external API's service is unavailable.
Something along these lines (where the long-lasting tasks would be your accesses to the external API): https://stackoverflow.com/questions/35030799/asynchronous-requests-in-appengine
Clarification: I consider the external API accesses long-running because you have no control over how long it takes to get a response (if you get one at all) for the app to process and use in responding to the original client request. If the whole thing takes longer than 60 seconds (the GAE request deadline), the request handler will be killed.
You could keep the synchronous calls to the external API as you intended (the app's "normal mode", for when the external API responds in a timely manner), but add a safety timeout that lets the app unblock, fall back to the "offline mode" I described in that Q&A, maybe record service-status changes, and still respond to the client's request accordingly, all within those 60 seconds.
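A rough sketch of that timeout-plus-fallback idea in Python (`fetch` stands in for your actual external-API call, and the response shape is made up for illustration):

```python
import socket

def handle_request(fetch, timeout_seconds=10):
    """Try the external API with a safety timeout; on failure, degrade
    gracefully instead of letting the whole request die."""
    try:
        data = fetch(timeout=timeout_seconds)
        return {"status": "ok", "data": data}
    except (socket.timeout, TimeoutError, ConnectionError):
        # External API is down or too slow: report degraded service
        # rather than failing the client's request outright.
        return {
            "status": "degraded",
            "data": None,
            "message": "external service unavailable, try again later",
        }
```

With a 10-second timeout you stay well inside the 60-second deadline, and the "degraded" branch is where you'd record the status change and serve whatever offline response makes sense.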