About offloading processing to the client side:
Can the client machines handle the business-logic processing smoothly? Do other applications (Outlook, VS, Eclipse, etc.) suffer because of the heavy Silverlight application?
If the client machines have trouble running the Silverlight application, then you need to move some processing to the server side, perhaps doing some conversions on the raw data and sending more streamlined data (e.g. JSON/XML) to the clients.
If the Silverlight application works fine without impeding your users' work, then it is perfectly OK to offload the business logic to the clients, since you are saving server-side resources.
To answer your questions:
Is it going to be faster?
It will be faster on the client side but slower on the server side. You might need to upgrade the VMs.
Do I need to scale immediately?
Your server will be processing more data than it used to, so you might have to scale if you see it getting slower.
Is fetching the data only once on the server, then caching it in something like Redis, and doing the computations according to user requests a better solution?
If different user requests fetch the exact same data into the server multiple times, and if fetching is a slow operation, then it is a very good idea to cache it. Depending on the amount of data, you might use a storage system like Redis.
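Whatever store you choose, the logic is the classic cache-aside pattern. Here is a minimal sketch, with a process-local map standing in for Redis and a hypothetical fetchFromSource as the slow operation:

```cpp
#include <string>
#include <unordered_map>

// Hypothetical slow operation: pulls raw data from the backing store.
std::string fetchFromSource(const std::string& key);

// Cache-aside: check the cache first; on a miss, fetch and remember.
// A process-local map stands in here for an external store like Redis.
class DataCache {
public:
    std::string get(const std::string& key) {
        auto it = cache_.find(key);
        if (it != cache_.end())
            return it->second;          // cache hit: skip the slow fetch
        std::string value = fetchFromSource(key);
        cache_.emplace(key, value);     // remember for later requests
        return value;
    }

private:
    std::unordered_map<std::string, std::string> cache_;
};
```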
If the client-side approach is good, do I need to switch to JavaScript and client-side JavaScript MVC frameworks like AngularJS?
If you have a standardized set of client machines, then Silverlight is fine. You might have to think about JavaScript-based clients if you have Linux, Mac, or Windows systems without Silverlight. Plus, Silverlight and other RIA frameworks are much faster than JavaScript when you are implementing heavy business logic on the client side.
The approach I would recommend is to focus on the interface of your key-value store, making it as clean and as non-restrictive as possible: it should allow maximum freedom to the callers, but also maximum freedom in choosing how to implement it.
Then, I would recommend that you provide an implementation that is as bare and as clean as possible, without any performance concerns whatsoever. To me it seems that `unordered_map` should be your first choice, or perhaps `map` if some kind of ordering of keys must be exposed by the interface.
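As a rough illustration of what "bare and minimal" could mean (the names here are illustrative, not prescriptive):

```cpp
#include <optional>
#include <string>
#include <unordered_map>

// A minimal key-value store: the interface is the point;
// the std::unordered_map behind it is just the simplest thing that works.
class KeyValueStore {
public:
    void put(const std::string& key, const std::string& value) {
        map_[key] = value;
    }

    std::optional<std::string> get(const std::string& key) const {
        auto it = map_.find(key);
        if (it == map_.end())
            return std::nullopt;        // absent keys are not an error
        return it->second;
    }

    bool remove(const std::string& key) {
        return map_.erase(key) != 0;
    }

private:
    std::unordered_map<std::string, std::string> map_;
};
```

Getting this far costs almost nothing, and everything beyond it can be driven by real usage.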
So, first get it to work cleanly and minimally; then put it to use in a real application. In doing so, you will find what issues you need to address in the interface; then go ahead and address them. Chances are that changing the interface will force you to rewrite big parts of the implementation, so any time invested in the first iteration beyond the bare minimum needed to get it just barely working is time wasted.
Then profile it, and see what needs to be improved in the implementation, without altering the interface. You may have your own ideas about how to improve the implementation before you even profile; that's fine, but it is still no reason to work on those ideas any earlier.
You say you hope to do better than `map`; there are two things that can be said about that:
a) you probably won't;
b) avoid premature optimization at all costs.
With respect to the implementation, your main issue appears to be memory allocation: you seem to be concerned with how to structure your design to work around problems you foresee with memory allocation. The best way to address memory-allocation concerns in C++ is to implement suitable memory-allocation management, not to twist and bend the design around them. You should consider yourself lucky that you are using C++, which allows you to do your own memory-allocation management, as opposed to languages like Java and C#, where you are pretty much stuck with what the language runtime has to offer.
There are various ways of going about memory management in C++, and the ability to overload the `new` operator may come in handy. A simplistic memory allocator for your project would preallocate a huge array of bytes (`byte* heap`) and use it as a heap. You would have a `firstFreeByte` index, initialized to zero, which indicates the first free byte in the heap. When a request for `N` bytes comes in, you return the address `heap + firstFreeByte` and add `N` to `firstFreeByte`. Memory allocation thus becomes so fast and efficient that it is virtually a non-issue.
Of course, preallocating all of your memory may not be a good idea, so you may have to break your heap into banks which are allocated on demand, and keep serving allocation requests from whichever bank is newest at any given moment.
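A sketch of such a banked bump allocator (alignment and requests larger than a bank are ignored for brevity; the names mirror the `firstFreeByte` scheme above):

```cpp
#include <cstddef>
#include <vector>

// Bump allocator with on-demand banks: each bank is a big byte array,
// and allocating is just advancing an index into the newest bank.
// Nothing is ever freed individually.
class BumpAllocator {
public:
    explicit BumpAllocator(std::size_t bankSize = 1 << 20)
        : bankSize_(bankSize) { addBank(); }

    BumpAllocator(const BumpAllocator&) = delete;
    BumpAllocator& operator=(const BumpAllocator&) = delete;

    ~BumpAllocator() {
        for (std::byte* bank : banks_)
            delete[] bank;
    }

    void* allocate(std::size_t n) {
        if (firstFreeByte_ + n > bankSize_)
            addBank();                  // current bank is full: open a new one
        void* p = banks_.back() + firstFreeByte_;
        firstFreeByte_ += n;            // bump past the bytes just handed out
        return p;
    }

private:
    void addBank() {
        banks_.push_back(new std::byte[bankSize_]);
        firstFreeByte_ = 0;
    }

    std::size_t bankSize_;
    std::size_t firstFreeByte_ = 0;     // first free byte in the newest bank
    std::vector<std::byte*> banks_;
};
```

An allocation is a bounds check plus an index bump, which is why it costs virtually nothing.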
Since your data are immutable, this is a good solution. It allows you to abandon the idea of variable-length objects and to have each `Pair` contain a pointer to its data, as it should, since the extra memory allocation for the data costs virtually nothing.
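To connect this with the overloaded `new` mentioned earlier, a hypothetical `Pair` could route its allocations to such an arena (a sketch; `g_arena` is an assumed global instance of the allocator above):

```cpp
#include <cstddef>

// Assumed: a single global arena built from the BumpAllocator sketch above.
extern BumpAllocator g_arena;

// Hypothetical immutable key/value pair; key and value point into
// arena-allocated storage.
struct Pair {
    const char* key;
    const char* value;

    // Route allocations of Pair objects to the arena.
    static void* operator new(std::size_t n) { return g_arena.allocate(n); }
    static void operator delete(void*) {}   // arena memory is never freed individually
};
```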
If you want to be able to discard objects from the heap, so as to reclaim their memory, then things become more complicated: you will need to use not plain pointers but pointers to pointers, so that you can move objects around in the heaps and reclaim the space of deleted objects. Everything becomes a bit slower due to the extra indirection, but it is still lightning fast compared to using the standard runtime library's memory-allocation routines.
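In code, the extra indirection can be as simple as a table of pointers that callers hold indices into, so a compaction pass is free to move objects and just update the table (a sketch; the compaction pass itself is omitted):

```cpp
#include <cstddef>
#include <vector>

// Callers hold a Handle (an index), never a raw pointer, so objects
// can be moved and the table entry updated without invalidating callers.
struct Handle { std::size_t slot; };

class HandleTable {
public:
    Handle registerObject(void* p) {
        table_.push_back(p);
        return Handle{table_.size() - 1};
    }

    void* resolve(Handle h) const { return table_[h.slot]; }      // the extra hop

    void relocate(Handle h, void* newAddress) { table_[h.slot] = newAddress; }

private:
    std::vector<void*> table_;
};
```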
But all of this is of course useless to be concerned with if you don't first build a straightforward, bare-minimal, working version of your database and put it to use in a real application.
Best Answer
In case anybody is interested, I ended up using gzip from zlib. I never figured out why LZ4 didn't work; as suggested in the comments, this could be an endianness problem or a 64/32-bit mismatch. However, I tested this on a single machine, compressing and decompressing a local file, and the same compilation settings worked for gzip.
C/C++ sample compressor code
C# sample decompressor code
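For illustration (not the linked samples), a minimal one-shot gzip compressor using zlib's deflate API might look like this sketch; `windowBits` of 15 + 16 requests the gzip wrapper, which C#'s `GZipStream` can decompress:

```cpp
#include <cstring>
#include <string>
#include <vector>
#include <zlib.h>

// Compress a buffer into gzip format in one deflate call.
std::vector<unsigned char> gzipCompress(const std::string& input) {
    z_stream strm;
    std::memset(&strm, 0, sizeof(strm));
    // 15 + 16 tells zlib to emit a gzip header/trailer instead of a raw
    // zlib stream, so other gzip readers (e.g. C#'s GZipStream) accept it.
    if (deflateInit2(&strm, Z_BEST_COMPRESSION, Z_DEFLATED,
                     15 + 16, 8, Z_DEFAULT_STRATEGY) != Z_OK)
        return {};

    std::vector<unsigned char> out(deflateBound(&strm, input.size()));
    strm.next_in   = reinterpret_cast<Bytef*>(const_cast<char*>(input.data()));
    strm.avail_in  = static_cast<uInt>(input.size());
    strm.next_out  = out.data();
    strm.avail_out = static_cast<uInt>(out.size());

    int ret = deflate(&strm, Z_FINISH);  // one-shot: all input fits in one call
    deflateEnd(&strm);
    if (ret != Z_STREAM_END)
        return {};                       // compression did not complete
    out.resize(strm.total_out);
    return out;
}
```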