As a developer working almost exclusively with payments, I can perhaps give you some pointers about where the possible pitfalls are. I work for a site with high traffic, ~40 active payment service providers (PSPs), and tens of thousands of transactions per day. And trust me, the sh*t will always hit the fan, and all you can do is prepare as much as possible so you can deal with the situation from that point.
Log, log and log some more...
This is the most important part of your payment process: make sure you have a record of everything. Ask yourself the question "could this piece of information help me once in a million transactions?". If the answer is yes, log it!
Our setup is that the main point of logging is the database. Every initiated, failed, settled, or redirected transaction is stored per PSP in tables. If the PSP uses an API, all API calls are logged (both requests sent and responses received). All callbacks are logged in tables as well. And so forth.
When we encounter an unexpected event or exception, we log it to the PHP log file in a consistent format so it is searchable and easily found by transaction ID, user ID, etc.
You will be grateful to have all that data if, for example, you get sued.
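The per-PSP logging described above can be sketched roughly as follows. The table and column names (`psp_api_log`, etc.) are hypothetical, and the in-memory SQLite database stands in for whatever RDBMS you actually use:

```php
<?php
// Minimal sketch of logging every PSP API request/response to the database.
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec('CREATE TABLE psp_api_log (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    psp TEXT, transaction_id TEXT, direction TEXT, payload TEXT,
    created_at TEXT DEFAULT CURRENT_TIMESTAMP
)');

function log_api_call(PDO $pdo, $psp, $txnId, $direction, $payload) {
    $stmt = $pdo->prepare(
        'INSERT INTO psp_api_log (psp, transaction_id, direction, payload)
         VALUES (?, ?, ?, ?)'
    );
    // Store the raw payload; it is exactly this data you will need
    // "once in a million transactions".
    $stmt->execute([$psp, $txnId, $direction, $payload]);
}

log_api_call($pdo, 'acme-psp', 'txn-1001', 'request',  '{"amount":1999}');
log_api_call($pdo, 'acme-psp', 'txn-1001', 'response', '{"status":"ok"}');
```

Keeping the transaction ID on every row is what makes the log searchable later.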
Monitoring and alerts
Build in monitoring logic with some simple tests that will alert you by email when something goes wrong, for example when 3 callbacks in a row fail for a PSP. All of these small things that let you work on resolving issues quickly, instead of reacting to them after your customers have made you aware of them (which doesn't always happen), are very important.
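The "3 callbacks in a row" check can be as simple as the sketch below. The threshold and the shape of the status history are assumptions; wire the result up to `mail()` or whatever alerting system you use:

```php
<?php
// Returns true when the most recent callback statuses for a PSP show
// $threshold consecutive failures (newest first).
function should_alert(array $recentStatuses, $threshold = 3) {
    $consecutive = 0;
    foreach ($recentStatuses as $status) {
        if ($status !== 'failed') {
            break; // a success interrupts the failure streak
        }
        $consecutive++;
    }
    return $consecutive >= $threshold;
}
```

Run it from a cron job or after each callback is recorded; either way the point is that you find out before your customers do.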
Database transactions
If you have the luxury of setting up or changing your database structure, make sure it supports database transactions (for example, use InnoDB on MySQL). Take a look at this answer on how to handle it in the code. You should make sure that every step of a transaction is completed or none is; partially completed transactions are just a burden on you and your system. For example: 1) user completes a payment, 2) user gets rewarded somehow. If both steps 1 and 2 are not met, the transaction should not be marked as complete in step 1.
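The two-step example above can be wrapped in a single database transaction like this, so either both steps happen or neither does. Table names are hypothetical, and SQLite stands in for InnoDB/MySQL:

```php
<?php
// Sketch: "complete payment" + "reward user" as one atomic unit.
$pdo = new PDO('sqlite::memory:');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$pdo->exec('CREATE TABLE payments (id INTEGER PRIMARY KEY, status TEXT)');
$pdo->exec('CREATE TABLE rewards  (payment_id INTEGER, points INTEGER)');
$pdo->exec("INSERT INTO payments (id, status) VALUES (1, 'pending')");

function settle_and_reward(PDO $pdo, $paymentId, $points) {
    $pdo->beginTransaction();
    try {
        // Step 1: mark the payment complete.
        $pdo->prepare('UPDATE payments SET status = ? WHERE id = ?')
            ->execute(['complete', $paymentId]);
        // Step 2: reward the user.
        $pdo->prepare('INSERT INTO rewards (payment_id, points) VALUES (?, ?)')
            ->execute([$paymentId, $points]);
        $pdo->commit();
    } catch (Exception $e) {
        // Any failure undoes BOTH steps, so no half-finished state survives.
        $pdo->rollBack();
        throw $e;
    }
}

settle_and_reward($pdo, 1, 100);
```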
Technical people to contact
When you have a technical issue, make sure you have someone technical to contact immediately. Having only an account manager is pretty useless, especially if that person decides to mediate between you and their technicians.
Real-world example that happened to me: one PSP all of a sudden stops working; nothing in our code had changed, and they say they haven't changed anything either. After sending debug information back and forth with them (through their account manager), one of their technicians notices that the fetched WSDL schema seems off. Then they realize they had updated their WSDL schema, and after some debugging I realize that PHP had cached the old one. This could have been caught and resolved sooner if I had just had a technical person on Skype to contact directly. Or if they had just admitted to changing their WSDL schema, but it's often hard to get a confession out of the PSPs when something goes wrong.
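For reference, PHP caches fetched WSDL schemas on disk by default, which is exactly what bit us above. While debugging (or when a PSP is known to change schemas), you can disable the cache so the schema is re-fetched on every call:

```php
<?php
// Disable PHP's on-disk WSDL cache globally for this process.
ini_set('soap.wsdl_cache_enabled', '0');
ini_set('soap.wsdl_cache_ttl', '0');

// Per-client alternative (the endpoint URL here is a placeholder):
// $client = new SoapClient('https://psp.example.com/service?wsdl', [
//     'cache_wsdl' => WSDL_CACHE_NONE,
// ]);
```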
Final words...
Prepare all you can and assume that ANYTHING can happen :)
Not sure this was exactly the kind of answer you were after, but I'm trying to provide a real-world example of payments and how you can avoid the biggest traps and pitfalls. Let me know if there's something you would like me to elaborate on, or if you're interested in another topic.
I hope this isn't too blunt, but the task you are undertaking here is extremely difficult, and the odds of you getting it right are slim. Security flaws are most often caused by mistakes in implementation, not in the underlying technologies. To make a system like the one you've described secure, you have to use the correct tools and the correct methodology and account for all of the edge cases, or the security of the entire system will be compromised.
That's not really a helpful answer though, is it? When you are building a system like you are building the question you should be asking shouldn't be "How do I do this?" It should instead be "What is the way I can do this that relies the least on myself?" The answer to that question is to use tried and tested systems wherever possible, and to roll your own solutions only as a last resort.
To answer your first point about encryption, it doesn't make sense to worry too much about securing a key in memory of the server. If an attacker has enough access to a machine to read your keys out of memory, you are totally and completely hosed and any solutions that you have coded up aren't going to help much any way. In other words, favor securing data at rest and data that is moving over the internet, since that is where most attacks are going to occur.
As far as storing the data goes, I don't see any reason why asymmetric crypto needs to be involved here. I would use something like PBKDF2 to derive a key directly from the user's password, then encrypt the data and store the encrypted blob in a database. I would recommend a database over a flat file because managing a folder full of flat files is tedious at the best of times. Databases may not show any solid benefits in speed or security over flat files, but they come with many other features, such as pooled connections, and they also make backing up data much easier. Use the simplest system you can to minimize your attack surface, and use thoroughly tested open source tools whenever possible. If you can find a way to use GPG for the encryption and key derivation part of things, I would recommend it.
As far as transfer goes, I believe that you are thinking about things the wrong way. Don't do any encryption client side. Browser javascript is not suitable for cryptography, as explained in this article. So long as you make sure that you use TLS/SSL for all connections to your site, you shouldn't need to worry about transmitting data unencrypted. For an example of why it is hard to do client side encryption, do some googling about the security of MegaUpload's successor, MEGA.
Finally, I wouldn't trust any one dude you get an answer from on the internet, including myself. I would do a lot of research about this sort of thing before committing to a solution. Also, I might recommend asking this question over at the IT Security Stack Exchange.
-- EDIT --
Somehow, I totally missed the fact that there are three parts to your system: the client (browser), the server (database), and the connector that imports data from the Visual FoxPro database. This actually makes the whole system a lot more complex, because there are essentially three parties that need to share a secret instead of two. What I would recommend is not to encrypt the data based on the user's password, but to instead encrypt it based on some server password. I'm having a little bit of trouble thinking of a good way to describe this process, so I'll give you an example workflow instead.
Server Side
- Admin starts server.
- During start up, server code asks for a password.
- Server uses PBKDF2 to derive a key which is stored only in memory.
- Server spawns a thread that will poll the Visual FoxPro server every X (days/hours/minutes) for updated data.
- Server enters loop awaiting requests from browser clients.
Updating database
- Main server's child thread requests an update of data from the Visual FoxPro server.
- Visual FoxPro server dumps a report containing data for clients with modified entries.
- Visual FoxPro server opens a secure connection to the main server (ssh, sftp, etc.) and transmits the zipped data.
- One by one, the main server uses the PBKDF2-derived key that is stored in memory to decrypt the blobs stored in the database, update them with new data, re-encrypt them, and store them back into the database. This process should all happen in memory.
Browser client connects
- Main server receives https request from client.
- Main server uses some third-party authentication framework to check the client's credentials. This framework should use bcrypt to hash passwords and only store the hashes on the file system.
- If the authentication framework positively identifies a user, the main server will decrypt the user's blob using the PBKDF2 derived key in memory and send the data to the user.
- When the user's authentication cookie expires, the main server will stop using the PBKDF2 derived key to decrypt data, and will instead prompt the user to re-authenticate.
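Whatever authentication framework you pick for the credential-check step, internally it should be doing roughly this with bcrypt (PHP's built-in `password_hash`/`password_verify`). The example password is obviously a placeholder:

```php
<?php
// At registration: store ONLY the bcrypt hash, never the password itself.
$storedHash = password_hash('hunter2', PASSWORD_BCRYPT);

// At login: verify the submitted password against the stored hash.
function authenticate($password, $storedHash) {
    return password_verify($password, $storedHash);
}
```

`password_hash` generates and embeds a random salt automatically, which removes one more thing you could get wrong by hand.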
This model is more in line with how traditional websites work (which means that you can rely on third-party, well-tested frameworks), but data is encrypted/decrypted in memory before touching the database. Ideally, you could use GPG or some other keystore for managing the encryption keys on the main server as well.
Yes, all implementations I've seen for CSRF use sessions to store the token. This is so users can use the website in multiple tabs or windows without issuing multiple tokens (which would effectively overwrite the previously valid token, as only one token per user is tracked). I think you'll find the additional server requirements for using sessions are quite negligible.
POST requests will work. The Input::get() function actually gets variables from both $_GET and $_POST; see the documentation (http://laravel.com/docs/requests).
A possible solution would be to pass back a new token in one of the JSON responses every time it expires. If you use this method and have normal (non-AJAX) forms on the same page, you will have to use JavaScript to replace the token values in each of those forms.
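A framework-agnostic sketch of that refresh-on-expiry idea (the session array and field names here are illustrative, not Laravel's own): on expiry the server generates a new token, stores it in the session, and returns it in the JSON response so the client can swap it into any forms on the page.

```php
<?php
// Issue a fresh token and record it (with an expiry) in the session.
function refresh_csrf_token(array &$session) {
    $session['csrf_token']   = bin2hex(random_bytes(32));
    $session['csrf_expires'] = time() + 3600;
    return $session['csrf_token'];
}

// Validate a submitted token in constant time against the session copy.
function csrf_valid(array $session, $token) {
    return isset($session['csrf_token'], $session['csrf_expires'])
        && $session['csrf_expires'] > time()
        && hash_equals($session['csrf_token'], $token);
}

$session = [];
$token = refresh_csrf_token($session);
// A JSON response could then include: json_encode(['csrf_token' => $token])
```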
As the CSRF protection uses sessions to keep track of the token, and assuming you have solved problem 3, the second window should have no problem performing the operation, as the tokens would be automatically refreshed on expiry.