Yes you should.
It not only makes your backend reusable, but also allows for more security and better design. If you write your backend as part of a single system, you're creating a monolithic design that's never easy to extend, replace or enhance.
One area where this is popular at the moment is microservices, where the backend is split into many little (or even large) services that each provide an API that the client system consumes. If you imagine using many 3rd-party sources of data in your application, you realise you might be doing this already.
One other benefit is that the construction and maintenance of each service can be handed off to a different team; they can add features to it that do not affect any other team producing product. Only when they are done and release their service do you then start to add features to your product to consume them. This can make development much smoother (though potentially slower overall, you tend to get better quality and more understandable code).
Edit: OK, I see your problem. You think of the API as a remote library. It's not. Think of the service as more of a data-providing service. You call the service to get data and then perform operations on that data locally. To determine if a user is logged on, you would call "GetUser" and then look at the 'logged on' value, for example (YMMV with that example, of course).
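As a minimal sketch of that idea, assuming a hypothetical "GetUser" endpoint and response shape (none of these names come from a real API), the client fetches the data and inspects it locally:

```typescript
// Hypothetical response shape for a "GetUser" call; an assumption for illustration.
interface User {
  id: string;
  loggedOn: boolean;
}

// The fetcher is injected so the client can be tested without a network.
type Fetcher = (url: string) => Promise<User>;

// Call the service to get the data, then perform the check locally.
async function isUserLoggedOn(userId: string, fetchUser: Fetcher): Promise<boolean> {
  const user = await fetchUser(`/api/users/${userId}`);
  return user.loggedOn;
}
```

The point is that the service only provides data; the "is this user logged on?" decision happens in your own code.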
Your example of bulk user creation is just making excuses - there is no difference here; whatever you could have done in a monolithic system can still be done in a service architecture (e.g. you would have passed an array of users to bulk-create, or a single one to create; you can still do exactly the same with services).
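To make that concrete, here is a sketch of a service handler that accepts either one user or an array of users; `NewUser` and `createUsers` are made-up names, not a real API:

```typescript
// Hypothetical payload shape; an assumption for illustration.
interface NewUser {
  name: string;
  email: string;
}

// A single user is normalised into a one-element array, so single and
// bulk creation share exactly one code path - same as in a monolith.
function createUsers(payload: NewUser | NewUser[]): number {
  const users = Array.isArray(payload) ? payload : [payload];
  // ...persist each user via whatever storage the service owns...
  return users.length; // number of users created
}
```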
MVC is already based around the concept of isolated services; it's just that MVC frameworks bundle them into a single project. That doesn't mean you lose anything except the bundled helpers your framework gives you. Use a different framework and you'd have to use different helpers anyway. Or, in this case, roll your own (or add them directly using a library).
Debugging is easy too - you can thoroughly test the API in isolation, so you don't need to debug into it (and you can still debug end-to-end; Visual Studio can attach to several processes simultaneously).
The extra work implementing security is a good thing. Currently, if you bundle all the code into your website and a hacker gains access to it, they also gain access to everything, DB included. If you split it into an API, the hacker can do very little with your code unless they also hack the API layer too - which will be incredibly difficult for them. (Ever wondered how attackers obtain vast lists of a website's users or credit-card details? It's because they hacked the OS or the web server, and it had a direct connection to the DB, where they could run "select * from users" with ease.)
I'll say that I have seen many websites (and client-server applications) written like this. When I worked in the financial services industry, nobody would ever write a website all-in-one, partly because it's too much of a security risk, and partly because much development is pretty GUIs over stable (i.e. legacy) back-end data-processing systems. It's easy to expose the DP system as a website using a service-style architecture.
2nd Edit: Some links on the subject (for the OP):
Note that, when talking about these in the context of a website, the web server should be considered the presentation layer, because it is the client that calls the other tiers, and also because it constructs the UI views that are sent to the browser for rendering.

It's a big subject, and there are many ways to design your application - data-centric or domain-centric (I typically consider domain-centric to be 'purer', but YMMV) - but it all comes down to sticking a logic tier between your client and your DB. It's a little like MVC if you consider the middle, API, tier to be equivalent to your Model, except the model is not a simple wrapper for the DB; it's richer and can do much more (e.g. aggregate data from 2 data sources, post-process the data to fit the API, cache the data, etc.).
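A tiny sketch of that "richer than a DB wrapper" middle tier, assuming two hypothetical data sources and an in-memory cache (all names here are illustrative, not from any real framework):

```typescript
// A data source is just something that returns data for an id.
type Source<T> = (id: string) => T;

interface Profile { name: string; }
interface Orders { count: number; }
interface CustomerView { name: string; orderCount: number; }

// The API tier aggregates two sources, post-processes the result to fit
// the API's own shape, and caches it - more than a simple DB wrapper.
function makeCustomerApi(profiles: Source<Profile>, orders: Source<Orders>) {
  const cache = new Map<string, CustomerView>();
  return {
    getCustomer(id: string): CustomerView {
      const hit = cache.get(id);
      if (hit) return hit;
      const view = { name: profiles(id).name, orderCount: orders(id).count };
      cache.set(id, view);
      return view;
    },
  };
}
```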
These checks invariably fail. Try to avoid tying your application to specific software, browsers or versions; code to standards. As you appear to be Microsoft-based, you may need to code around issues with different Internet Explorer versions, but try to keep these workarounds to a minimum.
From a security standpoint, you don't want to prevent users from upgrading to a supported version. You especially don't want to force them to remain on an insecure version.
I have run into numerous issues with built-in checks. Two significant version checks (both from vendors that should know better) I have had to deal with are:
- A virus scanner Java console that was pinned to a specific patch level of Java. It failed whenever Java was updated (for security fixes).
- A program that was configured to run on only 4 versions of Internet Explorer, stopping at IE8. It works fine on IE9 if you spoof the User Agent.
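The alternative to both of those failures is feature detection: ask whether the environment can do what you need, not which version it is. A minimal sketch, where the plain object stands in for the browser's `navigator`:

```typescript
// Feature detection instead of a version check: probe for the capability.
// The host object is a stand-in for `navigator`/`window` so this runs anywhere.
function hasFeature(host: Record<string, unknown>, feature: string): boolean {
  return typeof host[feature] !== "undefined";
}
```

A check like this keeps working when the browser updates, because it never mentions a version number.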
If the device has GPS and the browser supports the Geolocation API, you can use that.
If not, I would use a cookie and have the site ask the user to select the location, either from a dropdown of known locations or by entering a postcode, etc.
Have JavaScript check the cookie and prompt for the location if it's missing. Prevent the user from using functionality without entering a location.
Display the location semi-prominently so that, if a wrong location is selected, it can be noticed and corrected.
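The cookie flow above can be sketched like this; the cookie store and prompt are injected so the logic runs outside a browser, and the names are illustrative:

```typescript
// Minimal cookie-store interface; in a browser this would wrap document.cookie.
interface CookieStore {
  get(name: string): string | undefined;
  set(name: string, value: string): void;
}

// Return the stored location, or ask the user (via the supplied prompt
// function) and persist the answer before returning it.
function ensureLocation(cookies: CookieStore, promptUser: () => string): string {
  let location = cookies.get("location");
  if (!location) {
    location = promptUser();
    cookies.set("location", location);
  }
  return location;
}
```

Call `ensureLocation` before any location-dependent feature; the prompt only fires when the cookie is missing.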
In theory the host name or IP on a network you control should be enough to identify the location. In theory.
The problem you have is that there is an extra link between the network setup and your application, and the network setup isn't part of the application.
These problems are out of your control and hard to fix.
If the app includes a setup or login step where you can get a human to enter the information, you can still have problems, but at least the app is self-contained.
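For the network-you-control case, the lookup is no more than a table from IP prefix to location, with "unknown" as the signal to fall back to asking a human; the prefixes and names below are made up for illustration:

```typescript
// Hypothetical mapping from IP prefixes on a controlled network to locations.
const locationsByPrefix: Record<string, string> = {
  "10.1.": "London office",
  "10.2.": "Leeds office",
};

// Returns the matching location, or undefined - the cue to prompt the user.
function locationFromIp(ip: string): string | undefined {
  const prefix = Object.keys(locationsByPrefix).find(p => ip.startsWith(p));
  return prefix ? locationsByPrefix[prefix] : undefined;
}
```

The fragility the answer describes lives in the table itself: it silently goes stale whenever the network team renumbers a subnet.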