I hope this isn't too blunt, but the task you are undertaking here is extremely difficult, and the odds of getting it right are slim. Security flaws are caused by implementation mistakes far more often than by flaws in the underlying technologies. To make a system like the one you've described secure, you have to use the right tools and the right methodology, and account for every edge case, or the security of the entire system is compromised.
That's not a very helpful answer on its own, though, is it? When you are building a system like this, the question you should be asking isn't "How do I do this?" It's "How can I do this in a way that relies the least on me getting everything right?" The answer to that question is to use tried and tested systems wherever possible, and to roll your own solutions only as a last resort.
To answer your first point about encryption: it doesn't make sense to worry too much about securing a key in the server's memory. If an attacker has enough access to a machine to read your keys out of memory, you are completely hosed, and nothing you have coded up is going to help much anyway. In other words, focus on securing data at rest and data moving over the internet, since that is where most attacks will occur.
As far as storing the data goes, I don't see any reason why asymmetric crypto needs to be involved here. I would use something like PBKDF2 to derive a key directly from the user's password, then encrypt the data and store the encrypted blob in a database. I recommend a database over flat files because managing a folder full of flat files is tedious at the best of times. Databases may not show any solid benefits in speed or security over flat files, but they come with features such as pooled connections and make backing up data much easier. Use the simplest system you can to minimize your attack surface, and use thoroughly tested open source tools whenever possible. If you can find a way to use GPG for the encryption and key derivation parts, I would recommend it.
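As a rough sketch of the derive-a-key-from-the-password idea, using Python's standard-library `hashlib.pbkdf2_hmac` (the iteration count and salt handling here are illustrative assumptions, not a vetted configuration):

```python
import hashlib
import os

def derive_key(password: str, salt: bytes, iterations: int = 600_000) -> bytes:
    """Derive a 32-byte key from a password with PBKDF2-HMAC-SHA256."""
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations)

# The salt is not secret; store it alongside the encrypted blob so the
# same key can be re-derived at login. Never store the password itself.
salt = os.urandom(16)
key = derive_key("example-password", salt)
```

The encryption of the blob itself should then be done with a vetted cipher from a well-tested library (or GPG, as suggested above), never hand-rolled.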
As far as transfer goes, I believe you are thinking about this the wrong way: don't do any encryption client side. Browser JavaScript is not suitable for cryptography, as explained in this article. As long as you use TLS/SSL for all connections to your site, you shouldn't need to worry about transmitting data unencrypted. For an example of why client-side encryption is hard to get right, do some googling about the security of MegaUpload's successor, MEGA.
Finally, I wouldn't trust any one person you get an answer from on the internet, including myself. Do a lot of research on this sort of thing before committing to a solution. I'd also recommend asking this question over at the IT Security Stack Exchange.
-- EDIT --
Somehow, I totally missed the fact that there are three parts to your system: the client (browser), the server (database), and the connector that imports data from the Visual FoxPro database. This actually makes the whole system considerably more complex, because there are essentially three parties that need to share a secret instead of two. What I would recommend is not to encrypt the data based on the user's password, but instead based on a server password. I'm having a little trouble describing this process abstractly, so I'll give you an example workflow instead.
Server Side
- Admin starts server.
- During start up, server code asks for a password.
- Server uses PBKDF2 to derive a key which is stored only in memory.
- Server spawns a thread that will poll the Visual FoxPro server every X (days/hours/minutes) for updated data.
- Server enters loop awaiting requests from browser clients.
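The polling step in the startup sequence above might be sketched like this (Python threading; the interval and the `fetch_updates` callback are placeholders for whatever actually talks to the Visual FoxPro server):

```python
import threading

def start_polling(fetch_updates, interval_seconds: float) -> threading.Event:
    """Spawn a daemon thread that calls fetch_updates every interval_seconds.

    Returns an Event; set it to stop the polling loop.
    """
    stop = threading.Event()

    def loop() -> None:
        # Event.wait doubles as an interruptible sleep, so shutdown is prompt.
        while not stop.wait(interval_seconds):
            fetch_updates()

    threading.Thread(target=loop, daemon=True).start()
    return stop
```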
Updating database
- Main server's child thread requests an update of data from the Visual FoxPro server.
- Visual FoxPro server dumps a report containing data for clients with modified entries.
- Visual FoxPro server opens a secure connection to the main server (SSH, SFTP, etc.) and transmits the zipped data.
- One by one, the main server uses the PBKDF2-derived key held in memory to decrypt the blobs stored in the database, update them with the new data, re-encrypt them, and store them back. This process should happen entirely in memory.
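The decrypt-update-reencrypt cycle from the steps above might look roughly like this (Python with sqlite3; the `encrypt`/`decrypt` callables stand in for a real cipher keyed by the in-memory PBKDF2 key, and the table schema is made up for illustration):

```python
import json
import sqlite3
from typing import Callable

def apply_updates(
    conn: sqlite3.Connection,
    updates: dict,
    encrypt: Callable[[bytes], bytes],
    decrypt: Callable[[bytes], bytes],
) -> None:
    """Decrypt each affected blob in memory, merge new fields, re-encrypt, write back."""
    for user_id, new_fields in updates.items():
        row = conn.execute(
            "SELECT blob FROM records WHERE user_id = ?", (user_id,)
        ).fetchone()
        record = json.loads(decrypt(row[0]))
        record.update(new_fields)  # plaintext exists only inside this loop body
        conn.execute(
            "UPDATE records SET blob = ? WHERE user_id = ?",
            (encrypt(json.dumps(record).encode("utf-8")), user_id),
        )
    conn.commit()
```

The point of parameterizing `encrypt`/`decrypt` is that the plaintext record never touches disk; only the re-encrypted blob is written back to the database.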
Browser client connects
- Main server receives https request from client.
- Main server uses some third-party authentication framework to check the client's credentials. This framework should use bcrypt to hash passwords and store only the hashes on the file system.
- If the authentication framework positively identifies a user, the main server decrypts the user's blob using the PBKDF2-derived key in memory and sends the data to the user.
- When the user's authentication cookie expires, the main server stops decrypting data with the PBKDF2-derived key and instead prompts the user to re-authenticate.
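The credential check in that flow follows the standard hash-and-compare pattern. bcrypt itself needs a third-party package, so this sketch substitutes the standard library's `hashlib.scrypt` (a comparable memory-hard password hash) just to show the shape; the cost parameters are illustrative, not tuned recommendations:

```python
import hashlib
import hmac
import os

_SCRYPT_PARAMS = dict(n=2**14, r=8, p=1, maxmem=64 * 1024 * 1024, dklen=32)

def hash_password(password: str) -> tuple:
    """Return (salt, digest); store both, never the password itself."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode("utf-8"), salt=salt, **_SCRYPT_PARAMS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode("utf-8"), salt=salt, **_SCRYPT_PARAMS)
    # Constant-time comparison avoids leaking where the digests differ.
    return hmac.compare_digest(candidate, digest)
```

In practice you would let the authentication framework handle all of this; the sketch only shows why "store only the hashes" still lets the server check a password.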
This model is more in line with how traditional websites work (which means you can rely on third-party, well-tested frameworks), but data is encrypted/decrypted in memory before it ever touches the database. Ideally, you could use GPG or some other keystore for managing the encryption keys on the main server as well.
The method of storing the key pair depends very heavily on how badly you want to protect it from undesired use. Personally, I store my keys in a folder protected by the standard ACLs (so anybody who manages to get admin access to my machine can get to my key pairs). For me that's good enough, given that my key pairs aren't that valuable. Microsoft recommends using a key container.
If you want to avoid having to synchronize your key pairs between machines, it is probably best to build the release version of the libraries on one single machine (either your PC or the laptop) and make that machine the 'build machine'. In that case, depending on where you store your key pair (in a file protected by ACLs or in a key container), you can use one of these techniques:
If you keep the key pair in a file:
- Create public/private key pairs for both computers. You can copy the key pair, but that's not required; generating individual ones for each machine is equally valid.
- On each machine create an environment variable that points to the location of the key pair on that specific machine. Make sure the environment variables have the same name on both machines.
To allow you to share code between the machines and still use the machine-specific key files, add the following section to your C# project file:
<PropertyGroup>
  <SignAssembly>true</SignAssembly>
  <DelaySign>false</DelaySign>
  <AssemblyOriginatorKeyFile>$(YOUR_ENVIRONMENT_VARIABLE_HERE)</AssemblyOriginatorKeyFile>
</PropertyGroup>
If you also want the key to show up in the 'Properties' section of the project, add the following item (it belongs inside an ItemGroup):
<ItemGroup>
  <None Include="$(YOUR_ENVIRONMENT_VARIABLE_HERE)">
    <Link>Properties\App.snk</Link>
  </None>
</ItemGroup>
If you keep the key pair in a key container:
- Create public/private key pairs for both computers and store them in the key containers. Make sure you use the same name for the key container.
To allow you to share code between the machines and still use the machine-specific key containers, add the following section to your C# project file:
<PropertyGroup>
  <SignAssembly>true</SignAssembly>
  <DelaySign>false</DelaySign>
  <KeyContainerName>YOUR_CONTAINER_NAME_HERE</KeyContainerName>
</PropertyGroup>
In this case I'm not sure if you can link the key from the 'properties' section (I suspect not).
Either of these approaches allows MSBuild to find your key and use it during the build, while still giving you a (semi-)portable way of dealing with different keys on different machines. Just make sure you always build the release version of the libraries on the same machine (which has its one set of keys); otherwise you will produce releases signed with different keys.
As for key-pair hell, the only thing you can't do is move the bin folders from one machine to another and expect partial builds to work. If you rebuild the libraries then there shouldn't be any problems.
Best Answer
First of all, I would not call myself a security expert, but I have been in the position of having to answer this question. What I found out surprised me a bit: there is no such thing as a completely secure system. Well, I suppose a completely secure system would be one where the servers are all turned off :)
Someone working with me at the time described designing a secure system in terms of raising the bar for intruders: each layer of security reduces the opportunity for an attack.
For example, even if you could perfectly secure the private key, the system would not be completely secure. But correctly using the security algorithms and staying up to date with patches raises the bar. And yes, a sufficiently powerful computer, given enough time, can break encryption. I'm sure all of this is understood, so I'll get back to the question.
The question is clear so I'll first try to address each of your points:
Yes, if you use something like the Windows key store or a password-encrypted TLS private key, you are exposed to the users who have the password for (or access to) the private keys. But I think you will agree that it raises the bar. The file system ACLs (if implemented properly) provide a pretty good level of protection, and you are in a position to personally vet and know your super users.
Yes, I've seen hardcoded keys in binaries. Again, this does raise the bar a bit. Someone attacking such a system (if it is Java) has to understand that Java produces byte code, and must know how to decompile and read it. If you are using a language that compiles directly to machine code, you can see that this raises the bar a bit higher. It is not an ideal security solution, but it could provide some level of protection.
Yes, essentially the algorithm then becomes the secret used to create the private key, so it would in turn need to be protected.
So I think you have identified the core issue with any security policy: key management. Having a key management policy in place is central to providing a secure system, and it is a pretty broad topic.
So the question is: how secure does your system (and therefore the private key) need to be? How high does the bar need to be raised in your system?
Now, if you are willing to pay, there are people out there who produce solutions for this. We ended up using an HSM (Hardware Security Module). It is basically a tamper-proof server that holds a key in hardware. This key can then be used to create other keys used for encryption. The idea is that (if configured correctly) the key never leaves the HSM. HSMs cost a lot, but in some businesses (protecting credit card data, let's say), the cost of a breach is much higher, so there is a balance.
Many HSMs use key cards for maintenance and administration of their features. A quorum of key cards (5 of 9, let's say) has to be physically inserted into the server in order to change a key. This raises the bar pretty high, since a breach requires a quorum of super users to collude.
There may be software solutions out there that provide similar features to an HSM but I'm not aware of what they are.
I know this only goes some way to answering the question, but I hope this helps.