Using multiple connections per user could be problematic because with a lot of users you will hit the server's connection limit much sooner.
TCP, however, has the advantage that data is always received complete and in order. You can use that to implement your own application-level session handling system which works like preemptive ("time-sharing") multithreading.
When there are multiple file downloads taking place at the same time, you could split each file transfer into individual packets and prefix each packet with an identifier that says which file transfer it belongs to. Then you put the packets of each transfer into a queue and process these queues in round-robin order. That means when you have 3 file transfers A, B and C consisting of 3, 5 and 7 packets each, you would send these 15 packets in the order ABCABCABCBCBCCC. Make sure that you are able to add a new queue at any time between sends, so that a new transfer can start while other transfers are running.
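The round-robin scheduling described above can be sketched roughly like this. The class and method names (`RoundRobinMux`, `addTransfer`, `drain`) are my own invention for illustration, and real packets would be byte buffers rather than strings:

```java
import java.util.*;

public class RoundRobinMux {
    // One queue of packets per transfer, iterated in round-robin order.
    // LinkedHashMap keeps transfers in the order they were added.
    private final Map<String, Queue<String>> transfers = new LinkedHashMap<>();

    // A new transfer can be registered at any time between sends.
    public void addTransfer(String id, List<String> packets) {
        transfers.put(id, new ArrayDeque<>(packets));
    }

    // Emits all packets in round-robin order, each prefixed with its
    // transfer identifier so the receiver can demultiplex them.
    public List<String> drain() {
        List<String> sendOrder = new ArrayList<>();
        while (!transfers.isEmpty()) {
            Iterator<Map.Entry<String, Queue<String>>> it = transfers.entrySet().iterator();
            while (it.hasNext()) {
                Map.Entry<String, Queue<String>> e = it.next();
                sendOrder.add(e.getKey() + ":" + e.getValue().poll());
                if (e.getValue().isEmpty()) it.remove(); // transfer finished
            }
        }
        return sendOrder;
    }

    public static void main(String[] args) {
        RoundRobinMux mux = new RoundRobinMux();
        mux.addTransfer("A", List.of("a1", "a2", "a3"));
        mux.addTransfer("B", List.of("b1", "b2", "b3", "b4", "b5"));
        mux.addTransfer("C", List.of("c1", "c2", "c3", "c4", "c5", "c6", "c7"));
        StringBuilder order = new StringBuilder();
        for (String p : mux.drain()) order.append(p.charAt(0));
        System.out.println(order); // ABCABCABCBCBCCC
    }
}
```

Running this with the 3/5/7-packet transfers from the example reproduces exactly the ABCABCABCBCBCCC order.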
On the receiving side you read the prefix of each packet to find out which file transfer it belongs to, and append its payload to the corresponding input stream.
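The receiving side might look something like this. Again the names (`Demux`, `demux`) are hypothetical, and the colon-separated string prefix stands in for a real binary header:

```java
import java.util.*;

public class Demux {
    // Routes each prefixed packet to the stream of the transfer it belongs to.
    public static Map<String, StringBuilder> demux(List<String> packets) {
        Map<String, StringBuilder> streams = new HashMap<>();
        for (String packet : packets) {
            int sep = packet.indexOf(':');
            String id = packet.substring(0, sep);        // transfer identifier
            String payload = packet.substring(sep + 1);  // actual file data
            streams.computeIfAbsent(id, k -> new StringBuilder()).append(payload);
        }
        return streams;
    }

    public static void main(String[] args) {
        // Interleaved packets from two concurrent transfers, A and B.
        Map<String, StringBuilder> streams =
                demux(List.of("A:he", "B:wo", "A:llo", "B:rld"));
        System.out.println(streams.get("A")); // hello
        System.out.println(streams.get("B")); // world
    }
}
```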
A good API for managing all this in Java is non-blocking I/O (NIO). You should use it anyway on a server which handles a lot of users, because that way you avoid creating an individual thread for every user.
They say
a bird in the hand is better than 2 in the bush
Applied here, I take it to mean it's better to have something that works and is maintainable than to tear it all down for some fantasy daydream of "but it'll be vaguely better if we use cool new technology X, Y or Z for the sake of using the cool new technology".
I mean, you could have rewritten it in a functional language a few years ago, or Ruby on Rails a couple of years ago, or Node.js a year ago, or... whatever comes along next that gets attention in the technical blogosphere. None of these would give you a stable product that satisfies the 'must use new stuff' people, as those technologies will be considered old before the rewrite is even finished and they'll want to re-rewrite!
So I have to ask: what's your problem? You have good code that might be a bit mucky here and there, but in my extensive experience, when you start a big rewrite you also end up with code that's a bit mucky (often quickly implemented due to inexperience with the tooling, or done under time pressure, that you'll come back to and fix up nicely, promise).
The time to use a framework is when you're starting a new project and want a load of code written for you; that's when you go for a framework, to reuse all that boring boilerplate you'd otherwise have to write yourself. You never go for a framework just because it's there to be used, especially when you have an existing framework that you know (even if you don't call it a framework because it grew by itself, it's effectively still one).
One thing to note: I find that most people who insist on adopting some new technology want to do so for one of two reasons. They either want to boost their CV, or they don't want to learn the existing technology you use (after all, learning tech X is way more fun than actually doing work on the existing product). I treat all such calls with suspicion, as in either case their intention is not in the best interest of the product or the business.
Heavy refactoring... that's a different story, and usually a good one.
Why not just use JavaScript? (JSON is JavaScript Object Notation, after all.) Then you won't have to parse or manipulate the JSON at all.
EDIT: Have a look at http://json.org/java
It isn't. Deserializing an object is cheap (benchmark it yourself); talking to the external API will be an order of magnitude more expensive. You could manipulate the string directly, which might be slightly faster, but you would risk bugs and reduce extensibility and readability. A high cost.