Why not use an RTOS with a microkernel architecture for web servers? The scheduler is deterministic, so all requests would be handled quickly, leading to faster response times. It's easy to extend an OS based on a microkernel architecture, since everything works like client-server communication via message passing. Also, the server would be very lightweight and require fewer resources. Say you are developing the web service in C++; I am thinking about QNX for the OS. Is this a bad idea, or does it not matter?
Server Architecture – RTOS with Microkernel for Concurrent Web Servers
server
Related Solutions
Economics often governs the answer far more than the core theory behind the choices. The most important thing is that if you are looking at something 'vast', where your application needs a genuinely tough deployment, then the fewer wheels you invent yourself, the better. If it works, I won't care whether it is monolithic or micro; if it doesn't, I won't care either!
It may be true that very conventional web-page-based stacks are not for you, but take a slightly broader look and see whether you can use things that are ready to go, and later evolve your way out by replacing an element with something better. That way you will not only save time on the first version, you will also improve your understanding of what really matters in real life. Here are some pointers you might consider.
If you really need very high scalability, consider how your servers will divide the work rather than how fast each one can churn numbers. Eventually you will face a load that one server cannot handle, even if Intel makes the fastest processor on earth and the customer is ready to pay for it! So request routing and load balancing are more important than the efficiency of the protocol itself.
HTTP is still the best choice if you need to scale up (you can also buy off-the-shelf load balancers if you use it). A custom protocol requires custom arrangements.
HTTP doesn't mean you need to throw around HTML or JavaScript at all. It doesn't even mean you need a regular Apache server and a web browser. You can use HTTP between two custom programs, with elements like AOLserver or lighttpd that can be used as a library, provided the communication doesn't involve huge data transfers. You can also use SOAP on both sides with toolkits like gSOAP.
Even if HTTP definitely doesn't fit the bill, consider something like BEEP, which helps you make things more efficient. Alternatively, there are many proven RPC and RMI mechanisms that already exist.
Try to see whether you can do the heavy work as parallel processing in the back end, with the servers only queried once the work is done. If this fits, frameworks like MPI, or many other distributed computing toolkits, can help.
Finally, while I may not be in a position to know your exact needs, you will either need to distribute a lot of data or do a lot of computation; if you genuinely need both at once, there is likely still some core architectural inefficiency to fix first.
To me, creating a new protocol and then creating a new server framework is a double experiment that you shouldn't run together. If your stakes are that high, you should first experiment with existing tools to see the limits of what has been done so far; only then will you know the real challenges.
A lot more has been accomplished in distributed-systems research than just web apps, so you should research that too.
Dipan.
You're trying to make things too complicated. If you want to use xinetd, write your application so that each new instance talks over stdin/stdout. If you want to use sockets directly, have your service listen on a fixed port and handle the connections itself.
You don't have any compelling arguments for why you need to complicate things like this; you just seem to think it's the only solution. Either you're leaving out major details, you're confused about how networking works, or we're dealing with an XY problem.
You say you want to use a "bidirectional socket" - but stdin/stdout already form a bidirectional channel. Unless you plan on using complex ioctls, there's not really any difference; "passing a socket" isn't buying you anything.
You say that xinetd is "hardened" - if you're on a private, firewalled network, it's safe to write your own daemon that listens directly on a fixed port. And if you're concerned about security or DoS attacks, xinetd doesn't buy you much anyway.
What is wrong with either of the simple approaches?
Best Answer
Deterministic does not imply faster. An RTOS scheduler guarantees that every request will be served within some given time T, but a best-effort scheduler may provide better average performance, with the downside that some requests take much longer than T to serve.