I really like the first approach in general.
- it's simple to understand and implement
- it's secure (to my knowledge)
- it's a fairly common approach which I've seen used in the past
One thing I don't see mentioned about the first approach that you should keep in mind: the timestamp used to hash the token needs a TTL that's exceedingly short (on the order of a second), so you can verify the message wasn't sent with the same timestamp and token as a message from 12 hours earlier. A replayed message like that would calculate as legitimate even though it isn't.
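The freshness check above can be sketched like this (a minimal illustration, not a production design; `SECRET` and `MAX_AGE_SECONDS` are assumed names):

```python
# The client sends the timestamp it used plus HMAC(secret, timestamp);
# the server recomputes the MAC and rejects anything older than a short
# TTL, so a captured (timestamp, token) pair cannot be replayed later.
import hashlib
import hmac
import time

SECRET = b"shared-secret"   # shared between client and server (illustrative)
MAX_AGE_SECONDS = 1         # exceedingly short TTL, as suggested above

def make_token(timestamp):
    return hmac.new(SECRET, str(timestamp).encode(), hashlib.sha256).hexdigest()

def verify(timestamp, token, now=None):
    now = time.time() if now is None else now
    if abs(now - timestamp) > MAX_AGE_SECONDS:
        return False                                 # stale: possible replay
    return hmac.compare_digest(make_token(timestamp), token)  # constant-time
```

Note the constant-time comparison; comparing MACs with `==` can leak timing information.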
If these are the only two options you're considering, though, I'd like to make sure you've looked at other approaches too, as there are many more than I'm going to list. These are some common auth approaches worth studying, just to see if they might fit your purpose better; if nothing else, understanding them may give you some ideas to help tighten up whichever approach you do go with.
Do note, I am not a security expert.
OAuth/Federated
In this approach you have a third-party guarantor: the consuming code requests the token/cert/what have you from that party and passes it to you. At that point, all you need to do is ask the third party whether the key you were given is legitimate.
Pro:
- Standards based
- Issues will be found by others on other people's systems, so you will find out about vulnerabilities as they surface
- Much less auth work will be needed by you
Con:
- You have to deal with a third-party service and its API, or create and host your own "third party" to segregate the auth out of your main service
- For many services overkill, but conceptually worth considering
Asynchronous Certificates
Here you would have your clients encrypt their communications with a public cert you shared with them when they created a user. On your side, you would decrypt using the private key associated with their user. Generally you would initiate the communication with a challenge-response to show they can encrypt/decrypt as you expect, identifying them as who they claim to be. "Synchronous" approaches that skip the challenge-response are possible, but they have slightly less security and some time-synchronization issues which can make them trickier.
from Novell (yeah I know, Novell? really?):

> Tokens use a variable as the basis to generate the one-time password.
> This variable is called the challenge. The two main methods for
> determining the variable used to generate the password are
> asynchronous or synchronous.
>
> With the asynchronous or challenge-response method, the server
> software sends the token an external challenge (a randomly generated
> variable) for the token device to encrypt. The token uses this
> challenge variable, the encryption algorithm, and the shared secret to
> generate the response: the correctly encrypted password.
>
> With the synchronous method, the challenge variable used to generate
> the password is determined internally by the token and the server. A
> time counter, event counter, or time and event counter combination
> within each device is used as the basis for the challenge variable.
> Because the token and the server each separately and internally
> determine the challenge variable from their own counters, it is very
> important for their time counters and the event counters to stay
> synchronized. Because it is so easy for the server and the token to
> get out of sync, most implementations allow for a certain amount of
> drift between the counters. Usually, a small range or window of these
> counter values is used to compute the password. However, if the token
> and server get out of sync beyond this window, a special procedure is
> necessary to synchronize them.
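The asynchronous (challenge-response) method the quote describes can be sketched in a few lines, using an HMAC over a shared secret to stand in for the token device's encryption algorithm (all names are illustrative):

```python
# Server issues a random challenge; the token "encrypts" it with the
# shared secret; the server recomputes and compares the response.
import hashlib
import hmac
import secrets

SHARED_SECRET = b"provisioned-into-the-token"  # illustrative

def server_issue_challenge():
    return secrets.token_bytes(16)   # the randomly generated variable

def token_respond(challenge):
    # the token uses the challenge, the algorithm, and the shared secret
    return hmac.new(SHARED_SECRET, challenge, hashlib.sha256).hexdigest()

def server_verify(challenge, response):
    expected = hmac.new(SHARED_SECRET, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)
```

Because the server picks a fresh random challenge each time, there is no counter to keep synchronized, which is exactly the advantage over the synchronous method.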
Pro:
- Certificates have CA roots which make them trustworthy and difficult to forge
- There are standard facilities in operating systems for managing and maintaining cert stores easily
- Well-studied approach, lots of information available on it
- Expiry, along with a variety of other things, is a built-in facility of standard certificates, and these facilities are generally robust
Con:
- Certificates can be tricky to work with programmatically
- Depending on whether you require an external CA, it may not be free
- May need to maintain cert stores manually to ensure expected root trusts are configured
NTLM
Don't laugh. If this is a smaller or internal-only service and you're in a Windows environment, there is nothing wrong with using standard NTLM authentication to guarantee access. Especially if you're working with IIS, this is hands down the simplest approach, and it's easy to maintain and configure in a web.config.
Pro:
- Extremely easy to configure, implement, and maintain
Con:
- Minimal interoperability
- Not sufficient for public facing authentication
Nonces
When working with nonces in your authentication approach, you supply a method on the service to get a nonce. This method returns a unique, arbitrary string or piece of data ("a nonce") on each request. Every request to other methods now requires that a nonce be retrieved and used in the crypto algorithm for the request. The value here is that the server keeps track of the nonces used and never allows a nonce to be reused; this prevents replay attacks, because once a request with one nonce is made, a request with that nonce can never be made again. As nonces are requested, they're added to a list of available nonces; as they're used, they're moved from the available list to the used list. When generating a nonce, you ensure it is not on the used list, and since the available list will never again contain an old one, no repeats can occur.
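The available/used bookkeeping above can be sketched like this (in-memory for illustration; a real service would persist both lists transactionally, as the cons below note):

```python
# Each nonce is valid exactly once: issued into `available`,
# then moved to `used` on first consumption.
import secrets

class NonceStore:
    def __init__(self):
        self.available = set()
        self.used = set()

    def issue(self):
        # never hand out a nonce that was ever used before
        while True:
            nonce = secrets.token_hex(16)
            if nonce not in self.used and nonce not in self.available:
                self.available.add(nonce)
                return nonce

    def consume(self, nonce):
        # valid only once: move it from available to used
        if nonce not in self.available:
            return False
        self.available.remove(nonce)
        self.used.add(nonce)
        return True
```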
Pro:
- Thwarts replay attacks quite well
- Not altogether difficult to implement or understand
Con:
- Requires clients to make two requests for each one request (though this may be lessened by requiring nonces for only certain requests)
- Requires management of nonces, which should be transactional
- Negatively affects performance by requiring the extra requests for nonces (transactionality further increases resource cost of working with nonces)
> Question 1: Any alternatives or remarks of the previously mentioned
> issues, which would change the conclusion?
Cookies are just headers; your native client can add and read them no problem. Not sure if the built-in solution allows revocation though.
JWT/OAuth: the reason revocation isn't supported is to enable the idea that resources can validate the request without contacting a central server, avoiding that bottleneck.
However, any solution with immediate revocation will require that you validate on a central server.
Implementing this extra check is simple, e.g. with the System.IdentityModel.Tokens.Jwt package:

    public class MyJwtSecurityTokenHandler : System.IdentityModel.Tokens.Jwt.JwtSecurityTokenHandler
    {
        public override ClaimsPrincipal ValidateToken(string token, TokenValidationParameters validationParameters, out SecurityToken validatedToken)
        {
            // Validate the JWT normally first (signature, lifetime, audience, ...)
            var principal = base.ValidateToken(token, validationParameters, out validatedToken);

            // Then check with the central auth service that the token has not
            // been revoked (myAuthService is your own service, sketched here).
            myAuthService.EnsureNotRevoked(token);

            return principal;
        }
    }
The auth server can keep track of issued tokens, expose a revoke endpoint and a LoggedInUsers endpoint. Just like the old MembershipProvider...
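That auth-server bookkeeping (track issued tokens, revoke, list logged-in users) can be sketched like this; it's an in-memory illustration with made-up names, not any particular library's API:

```python
# Central registry the revoke endpoint and LoggedInUsers endpoint
# would be built on top of.
class TokenRegistry:
    def __init__(self):
        self._tokens = {}            # token -> user

    def issue(self, token, user):
        self._tokens[token] = user

    def revoke(self, token):
        self._tokens.pop(token, None)

    def is_valid(self, token):
        return token in self._tokens

    def logged_in_users(self):
        return set(self._tokens.values())
```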
> Question 2: Anyone know an implementation, which satisfies this flow
> or can be configured to?
If you want an out-of-the-box solution, MembershipProvider and getting your native client to understand cookies is probably the closest you will get. You could go for the cookieless option, but having the token in the URL is considered less secure, and it's probably hard to handle on the client anyway.
But with .NET Core, I'm unsure that it fulfils all your goals, and it's definitely swimming against the tide of JWT tokens with claims.
I would go with OAuth and JWT and program the custom revocation extras myself, ensuring that I validate the JWT normally as well as checking for revocation.
I would expect the requirement for logged in users and revocation to eventually be dropped and this would allow me to simply remove it.
In fact, let's move the logged-in users right out to a reporting component rather than auth; we can get better stats that way anyway.
Best Answer
Yes. Headers, request params and request body are encrypted during the communication.
Once on the server-side, do not log the request body :-)
You cannot. Basically, once the API is on the WWW, it's automatically exposed to all sorts of malice. The best you can do is to be prepared and to be aware of the threats, at least those that concern you. Take a look here.
A possible approach to the problem could be implementing (or contracting) an API Manager.
On-premise API Managers can reduce the attack surface because all the endpoints behind the AM are not necessarily public.
You could achieve the same result with some products in the cloud, but they are absurdly expensive for the mainstream.
Anyways, the API Management endpoints will remain exposed to attacks.
If by programmatic logins you mean brute-force attacks, a threshold (max number of allowed requests per second) and a blacklist should be enough to deter a persistent attacker. For further information, take a look here.
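The threshold-plus-blacklist idea can be sketched as follows (in-memory, fixed one-second windows; the limit and names are illustrative assumptions):

```python
# Cap requests per second per client; a client that exceeds the
# threshold is blacklisted and refused from then on.
class RateLimiter:
    def __init__(self, max_per_second):
        self.max_per_second = max_per_second
        self.blacklist = set()
        self._window = {}            # client -> (second, count)

    def allow(self, client, now):
        if client in self.blacklist:
            return False
        second = int(now)
        last_second, count = self._window.get(client, (second, 0))
        if last_second != second:
            last_second, count = second, 0   # new one-second window
        count += 1
        self._window[client] = (last_second, count)
        if count > self.max_per_second:
            self.blacklist.add(client)       # deter brute-force attempts
            return False
        return True
```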
Many of the API Managers provide out of the box API Rate Limit configurations and Whitelists.
If you are familiar with the Google API Console, then you can guess what an API Manager can do.
Whether the refresh token is a plain UUID or anything else, I don't like to expose this sort of implementation detail, so I would suggest hashing it. To me, the more opaque the implementation details of the security layer are, the better.
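Hashing the stored refresh token might look like this (a sketch; function names are made up): the client holds the opaque value, the server keeps only the digest, so a leaked database does not expose usable refresh tokens.

```python
# Store a hash of the refresh token, never the token itself.
import hashlib
import hmac
import secrets

def new_refresh_token():
    token = secrets.token_urlsafe(32)                    # what the client receives
    stored = hashlib.sha256(token.encode()).hexdigest()  # what the server stores
    return token, stored

def check_refresh_token(presented, stored_hash):
    digest = hashlib.sha256(presented.encode()).hexdigest()
    return hmac.compare_digest(digest, stored_hash)
```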
Regarding the JWT security, take a look here.
You might be interested in JSON Web Token (JWT) - Storage on client side.