---- Edited to provide example as the point isn't coming across ----
A process launches and asks for 1 GB of memory.
That process then starts eight threads (which all have access to the allocated 1 GB of memory).
Someone runs a tool to determine how much memory is being used. The tool works as follows:
- Find every schedulable item (each thread).
- See how much memory it can access.
- Add that memory together.
- Report the sum.
The tool will report that the process is using 9 GB of memory, when it is (hopefully) obvious that there are eight spawned threads (plus the thread for the original process) all sharing the same 1 GB of memory.
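To make the arithmetic concrete, here is a minimal Python sketch of that same flawed accounting, assuming a Linux procfs layout, where each thread shows up under /proc/<pid>/task/ and every thread's status file reports the whole process's VmRSS:

```python
import os

def naive_total_kb(pid: int) -> int:
    """Flawed accounting: sum resident memory over every schedulable item.

    Every entry under /proc/<pid>/task/ is a thread, and each thread's
    status file reports the *whole process's* VmRSS, so a 1 GB process
    with nine tasks comes out as roughly 9 GB.
    """
    total_kb = 0
    for tid in os.listdir(f"/proc/{pid}/task"):
        with open(f"/proc/{pid}/task/{tid}/status") as status:
            for line in status:
                if line.startswith("VmRSS:"):
                    total_kb += int(line.split()[1])  # value is in kB
                    break
    return total_kb

print(naive_total_kb(os.getpid()))
```

Run against the process described above (1 GB resident, nine tasks), this reports about 9 GB, which is exactly the over-count in question.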
It's a defect in how some tools report memory; however, it is not an easily fixable defect, since fixing it would require changing the output of some very old (but important) tools. I don't want to be the guy who rewrites top or ps; it would make the OS non-POSIX.
---- Original post follows ----
Some versions of memory reporting tools (like top) mistakenly conflate threads (which all have access to the same memory) with processes. As a result, a Tomcat instance that spawns five threads will have its memory consumption misreported roughly five-fold.
The only way to make sure is to list out the processes with their memory consumption individually, and then read the memory for just one of the threads (the ones being listed as if they were processes). That way you know the application's true memory consumption. If you rely on tools that do the addition for you, you will overestimate the memory actually used by a factor of the number of threads referencing the same shared memory.
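A minimal sketch of that by-hand approach, again assuming Linux procfs (VmRSS appears once per process in /proc/<pid>/status, so each PID is counted exactly once and threads are never added in a second time):

```python
import os

def rss_kb(pid: int) -> int:
    """Read the process-wide resident set size once, from /proc/<pid>/status."""
    with open(f"/proc/{pid}/status") as status:
        for line in status:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])  # kB
    return 0  # kernel threads have no VmRSS line

# Sum over real processes only; threads live under /proc/<pid>/task/
# and are deliberately not counted again here.
total = sum(rss_kb(int(d)) for d in os.listdir("/proc") if d.isdigit())
print(f"{total} kB resident across all processes")
```

Note that even this overstates the total somewhat, since pages shared between processes (shared libraries, for instance) are counted once for every process that maps them; the pmap suggestion further down addresses that.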
I've had boxes with 2 GB of memory (and 1 GB of swap) report that ~7 GB of memory was in use for a particularly thread-heavy application before.
For a better understanding of how memory is reported by many tools, look here. If the Python code is parsing the text output of one of these tools, or obtaining that data from the same system calls, then it is subject to the same over-reporting errors.
Okay - there are a few things to keep in mind. First, in modern Linux kernels, threads are simply processes that share a few resources and flags, most notably their address space. So yep, threads in the output of something like 'top' can make things pretty confusing. Example: our Oracle server has about 30 threads running that simply appear as 'oracle', and each of them shows up as consuming 38 GB of RAM. Pretty amusing at first glance.
Ok - the suggestion in the comment above is spot on the money. Use 'pmap' to see not just the total memory consumption of the process, but the breakdown of that memory usage. The shared library (.so) entries aren't much to worry about. Your real concern is the [ anon ] entries and the [ stack ] entry. See if you can get a total of those. That will give you the honest picture of how much non-shared memory the process is consuming.
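A rough sketch of that tally in Python, assuming the default procps pmap output where the size column carries a trailing 'K' and the mapping name is the last field (the column layout varies between pmap versions, so treat this as illustrative rather than definitive):

```python
import subprocess

def anon_and_stack_kb(pid: int) -> int:
    """Total the [ anon ] and [ stack ] mappings from pmap's output."""
    out = subprocess.run(["pmap", str(pid)], capture_output=True,
                         text=True, check=True).stdout
    total_kb = 0
    for line in out.splitlines():
        if line.rstrip().endswith(("[ anon ]", "[ stack ]")):
            # Fields: address, size (e.g. "1024K"), perms, mapping
            size = line.split()[1]
            total_kb += int(size.rstrip("K"))
    return total_kb

print(anon_and_stack_kb(1234))  # 1234 is a hypothetical PID
```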
Best Answer
It seems to just connect to port 80 for the usual external repository.
You could easily configure a repository that uses another port, and any such repository should show up in the list of repos in the Nexus web interface.
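If you want to sanity-check connectivity, a quick sketch (the hostname here is hypothetical; substitute the repository host shown in the Nexus repo list):

```python
import socket

# Hypothetical repository host; take the real one from the Nexus repo list.
host, port = "repo.example.com", 80

try:
    with socket.create_connection((host, port), timeout=5):
        print(f"{host}:{port} is reachable")
except OSError as exc:
    print(f"{host}:{port} is not reachable: {exc}")
```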