You can certainly do that, but it may not save you as much complexity as you think.
One of the main benefits of virtual memory is that it keeps different processes from needing to know which parts of memory other processes are using.
On a system with virtual memory, one of the main jobs of the MMU is to give each process the illusion that the whole machine is theirs. As an example, if you have three programs you want to run on your OS, each can be compiled to start at memory address 0 -- and none of them has to be changed if you want to run all three at the same time. You are right, however, that this adds complexity to the kernel.
If you decide to save this complexity by making it the program's job to not use the same addresses used by other programs, there are two main ways you can go:
One option is to make each program use a different range of memory: if program1 only uses addresses 0 .. 8191, and program2 only uses addresses 8192 .. 16383, then (barring a bug) they won't interfere with each other. But -- and this is a big deal -- you need to plan in advance what range of memory each program will use, which means you need to know beforehand every program that will ever run on your OS. This may be workable for an embedded system (and may be the only choice there), but it is obviously a show-stopper for an OS you hope other people will build software for. Historically, a method like this was also used for some early shared-library implementations, and it proved very hard to get right.
Another option is to compile each program as what's called position-independent code: code with no absolute addresses compiled in, where every access to a variable or function within the program is made by first consulting a register (possibly the program counter) that holds the address where the program actually landed in memory. This allows multiple programs to coexist, but it requires the compiler and linker to do more work to produce such code, requires the operating system to do more work when loading each program, and costs performance every time the address of a variable or function has to be computed this way. Historically, some operating systems, including classic Mac OS before version 7.1, took this approach, and something like it is still used in modern shared-library implementations.
So, you can move the complexity from the kernel to each running program, or from the kernel to your record-keeping of which program gets loaded where -- but you can't get rid of it altogether.
They say
a bird in the hand is worth two in the bush.
Applied here, I take it to mean it's better to have something that works and is maintainable than to tear it all down chasing the vague promise that things will somehow be better if you adopt X, Y, or Z, when the real motivation is using the cool new technology for its own sake.
I mean, you could have rewritten it in a functional language a few years ago, or Ruby on Rails a couple of years ago, or Node.js a year ago, or whatever comes along next that gets attention in the technical blogosphere. None of these would give you a stable product that satisfies the "must use new stuff" people, because each will be considered an old technology before the rewrite is even finished, and then they'll want to rewrite again!
So I have to ask: "what's your problem?" You have good code that might be a bit mucky here and there, but in my extensive experience, when you start a big rewrite you also end up with code that's a bit mucky (often quickly implemented due to inexperience with the tooling, or written under time pressure, with a promise that you'll come back and tidy it up nicely later).
The time to adopt a framework is when you're starting a new project and want a load of code written for you: that's when it pays to reuse all the boring boilerplate you'd otherwise have to write yourself. You never adopt a framework just because it's there to be used, especially when you already have an existing one that you know (even if you don't call it a framework because it grew by itself, it's effectively still one).
One thing to note: I find that a lot of the people who insist on adopting some new technology do so for one of two reasons. Either they want to boost their CV, or they don't want to learn the existing technology you use (after all, learning tech X is way more fun than actually doing work on the existing product). I treat all such calls with suspicion, as in either case the intention is not in the best interest of the product or the business.
Heavy refactoring...that's a different story, and usually a good one.
JMX would be the answer (Jolokia is an HTTP bridge to JMX).
You might also want to look at https://stackoverflow.com/questions/242958/best-tools-to-monitor-tomcat