I don't have all the answers. Hopefully I can shed some light on it.
To simplify my previous statements about .NET's threading models: the Task Parallel Library uses Tasks, and the default TaskScheduler for Tasks uses the ThreadPool. The higher you go in the hierarchy (the ThreadPool is at the bottom), the more overhead there is when creating the items. That extra overhead certainly doesn't mean it's slower, but it's good to know that it's there. Ultimately, the performance of your algorithm in a multi-threaded environment comes down to its design. What performs well sequentially may not perform as well in parallel. There are too many factors involved to give you hard and fast rules; they change depending on what you're trying to do. Since you're dealing with network requests, I'll try to give a small example.
Let me state that I am no expert with sockets, and I know next to nothing about Zeroc-Ice. I do know a bit about asynchronous operations, though, and this is where it will really help you. If you send a synchronous request via a socket, then when you call Socket.Receive(), your thread will block until a response is received. This isn't good: your thread can't make any more requests while it's blocked. Using Socket.BeginReceive() and the other Begin/End methods, the I/O request is made and put in the IRP queue for the socket, and your thread keeps going. This means your thread could actually make thousands of requests in a loop without any blocking at all!
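To make the blocking-vs-asynchronous contrast concrete in a language-neutral way, here is a small Python simulation of the pattern (this is an illustration, not the .NET API: `fake_request` and its `time.sleep` stand in for a network round trip, and worker threads with callbacks stand in for the Begin/End socket methods):

```python
import concurrent.futures
import time

def fake_request(i):
    # Stand-in for a network round trip; real code would do socket I/O here.
    time.sleep(0.01)
    return i * 2

# Synchronous: each call blocks the caller until a reply arrives,
# so 20 requests cost roughly 20 round trips.
start = time.perf_counter()
sync_results = [fake_request(i) for i in range(20)]
sync_elapsed = time.perf_counter() - start

# Asynchronous: the loop issues all requests without waiting; replies are
# handled by callbacks as they complete, so total time is roughly one
# round trip rather than twenty.
async_results = []
start = time.perf_counter()
with concurrent.futures.ThreadPoolExecutor(max_workers=20) as pool:
    futures = [pool.submit(fake_request, i) for i in range(20)]
    for f in futures:
        f.add_done_callback(lambda fut: async_results.append(fut.result()))
async_elapsed = time.perf_counter() - start

print(sync_elapsed > async_elapsed)  # → True
```

The absolute numbers don't matter; the point is that the issuing loop never stalls waiting for a reply.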
If I'm understanding you correctly, you are making calls via Zeroc-Ice in your testing code, not actually trying to reach an HTTP endpoint. If that's the case, I'll admit that I don't know how Zeroc-Ice works internally. I would, however, suggest following the advice listed here, particularly the part "Consider Asynchronous Method Invocation (AMI)". The page says this:
By using AMI, the client regains the thread of control as soon as the invocation has been sent (or, if it cannot be sent immediately, has been queued), allowing the client to use that thread to perform other useful work in the mean time.
Which seems to be the equivalent of what I described above using .NET sockets. There may be other ways to improve performance when doing a lot of sends, but I would start there, or with any of the other suggestions listed on that page. You've been very vague about the design of your application, so I can't be more specific than I have been above. Just remember: do not use more threads than absolutely necessary to get done what you need, otherwise you'll likely find your application running far slower than you want.
Some examples in pseudocode (I tried to make it as close to Ice as possible without actually having to learn it):
var iterations = 100000;
for (int i = 0; i < iterations; i++)
{
    MyObjectPrx obj = iceComm.stringToProxy("whateverissupposedtogohere");

    // The thread blocks here waiting for the response. That slows down
    // your loop, and you're wasting CPU cycles that could instead be
    // sending/receiving more objects.
    obj.DoStuff();
}
A better way:
public interface MyObjectPrx : Ice.ObjectPrx
{
    Ice.AsyncResult GetObject(int obj, Ice.AsyncCallback cb, object cookie);
    // other functions
}

public static void Finished(Ice.AsyncResult result)
{
    MyObjectPrx obj = (MyObjectPrx)result.GetProxy();
    obj.DoStuff();
}
static void Main(string[] args)
{
    // threaded code...
    var iterations = 100000;
    for (int i = 0; i < iterations; i++)
    {
        int num = // whatever
        MyObjectPrx prx = // whatever
        Ice.AsyncCallback cb = new Ice.AsyncCallback(Finished);

        // This call returns immediately and the loop continues; it doesn't
        // wait for a response, it just keeps sending out socket requests as
        // fast as your CPU can handle them. The response from the server is
        // handled in the callback when the request completes. Hopefully you
        // can see how this is much faster when sending over sockets. If your
        // server does not use an async model like this, however, it's quite
        // possible that it won't be able to keep up with the requests.
        prx.GetObject(num, cb, null);
    }
}
Keep in mind that more threads != better performance when sending over sockets (or really when doing anything). Threads are not magic; they will not automatically solve whatever problem you're working on. Ideally you want one thread per core, unless a thread is spending much of its time waiting, in which case you can justify having more. Running each request in its own thread is a bad idea, since it causes context switches and wastes resources. (If you want to see everything I wrote about that, click edit and look at the past revisions of this post. I removed it since it only seemed to cloud the main issue at hand.)
You can definitely make these requests in threads if you want to make a large number of requests per second. However, don't go overboard with thread creation; find a balance and stick with it. You'll get better performance with an asynchronous model than with a synchronous one.
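One language-neutral way to express that balance is to cap the number of in-flight requests rather than spawn a thread per request. A hypothetical Python sketch using a semaphore (`asyncio.sleep(0)` stands in for the actual network I/O, and the cap of 100 is an arbitrary value you'd tune for your system):

```python
import asyncio

async def send_request(i, sem):
    # Cap in-flight requests instead of spawning a thread per request.
    async with sem:
        await asyncio.sleep(0)  # stand-in for the real network I/O
        return i * 2

async def main():
    # A hypothetical cap of 100 concurrent requests; tune for your workload.
    sem = asyncio.Semaphore(100)
    return await asyncio.gather(*(send_request(i, sem) for i in range(1000)))

results = asyncio.run(main())
print(len(results), results[3])  # → 1000 6
```

A thousand logical requests run on a single thread here; the semaphore is the "balance" knob.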
I hope that helps.
Best Answer
If you are making a standard web app, then TLS is your only "safe" method of protecting users from MitM attacks and traffic sniffing. Take a look at IPViking's map of real-time cyber attacks; a large number come from China, where encryption is largely outlawed and browser requests can get hijacked. (Most infosec guys I know agree that X.509 is broken, the IETF is compromised, TLS was likely weakened intentionally by the NSA (kinda like this), and the entire key spaces for RSA and elliptic curves are being precomputed by governments. Nevertheless, it is still better than unencrypted traffic.)
If you must use security questions, I suggest lowercasing everything, stripping out whitespace, and hashing with a random salt stored alongside it (it could be a single salt stored as part of the user record). I suggest at least a 256-bit hash and 100 rounds (fewer if the hash algorithm is already slow). Bcrypt hashes are also quite good because they involve encrypting a hash and storing the salt as part of the hash string.
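The normalize-then-hash step can be sketched as follows. This is a minimal illustration, not a complete scheme: PBKDF2-HMAC-SHA256 stands in for "256-bit hash, many rounds" (bcrypt via the `bcrypt` package would be an equally good choice), and the round count shown is an arbitrary example.

```python
import hashlib
import hmac
import os

def normalize(answer):
    # Lowercase and strip all whitespace so "  New York " matches "new york".
    return "".join(answer.lower().split())

def hash_answer(answer, salt=None, rounds=100_000):
    # Random per-record salt, stored alongside the 256-bit digest.
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac(
        "sha256", normalize(answer).encode(), salt, rounds)
    return salt, digest

salt, stored = hash_answer("  New York ")
_, attempt = hash_answer("new york", salt)
print(hmac.compare_digest(attempt, stored))  # → True
```

Note the use of a constant-time comparison (`hmac.compare_digest`) rather than `==` when checking the attempt.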
If not using bcrypt, be sure not to use a simple str1 == str2 comparison for checking the answer; use a constant-time comparison instead, which avoids the string-comparison timing side channel.

The considerations for binary apps are a little different, in that you can hard-code or bundle a public key (or several) and use NaCl to generate client private keys and handle handshakes and encryption between server and client. There are libraries for this in every compilable language, and writing a wrapper for a statically linked library is not too difficult if that is the route you choose. Rolling your own crypto is definitely not recommended if you are not very familiar with cryptography and infosec matters.
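As an illustration of the constant-time comparison, here is a Python sketch (the byte strings are placeholder values, not real hashes; most languages have a library equivalent, e.g. `hmac.compare_digest` in Python):

```python
import hmac

def naive_equal(a, b):
    # BAD: == can return as soon as a byte differs, so response timing
    # leaks how many leading bytes matched (the side channel above).
    return a == b

def constant_time_equal(a, b):
    # Good: compare_digest examines every byte regardless of mismatches,
    # so timing reveals nothing about where the strings differ.
    return hmac.compare_digest(a, b)

print(constant_time_equal(b"stored-hash", b"stored-hash"))  # → True
print(constant_time_equal(b"stored-hash", b"stored-hasX"))  # → False
```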