I'm pretty sure that the answer you got from the Elasticsearch community relates to the ZFS ARC (Adaptive Replacement Cache). This assumes, of course, that your file system is ZFS.
On ZFS the ARC will potentially take up all of the available RAM on the host, less 1 GB. So on a ZFS host, tools like top
will sometimes show that your physical RAM is close to the limit even when it isn't. This is by design. The ARC automatically releases memory to processes that need it. The memory the ARC uses counts as kernel memory, so you can't see it in any per-process output.
On most of the Solaris systems that I look at daily, physical RAM consumption is around 90%. This is not because they are heavily utilized; it is ZFS grabbing unused RAM for its own purposes. Don't be alarmed by this. As the ARC is part of the kernel, it can release memory to processes that need it almost instantly. Hence, although you can, I typically see no point in limiting the size of the ZFS ARC. Better to let ZFS do its job.
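That said, if you do decide to cap the ARC, on Solaris/illumos this is done with the zfs_arc_max tunable in /etc/system; the 8 GB value below is just an example, not a recommendation, and a reboot is required for it to take effect:

```
* Cap the ZFS ARC at 8 GB (example value; takes effect after reboot)
set zfs:zfs_arc_max = 8589934592
```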
So, if we're talking about ZFS, then yes: file system caching doesn't show up as memory consumption of any individual process. You'll need to execute something like
echo "::memstat" | mdb -k
to reveal how your memory is actually being used. The "Anon" line covers all the user-land processes that you see in e.g. prstat
output.
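If you just want the ARC's current size rather than the full memstat breakdown, kstat can report it directly on Solaris/illumos (arcstats is the standard kstat module name for this):

```
kstat -p zfs:0:arcstats:size
```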
The other thing you need to know is how the JVM allocates and releases memory. The JVM grabs memory from the OS as it needs it, restricted only by the JVM -Xmx
command line parameter. The open question is how (if ever) the JVM releases memory back to the OS once it no longer needs it. You'll find that it is very difficult to find information on this subject. It seems to depend on which garbage collector is used. As precise information is hard to come by (I don't really know why), your best option is to assume that the JVM is extremely reluctant to release memory back to the OS. In other words: if you allow a JVM process to grab, say, 50 GB of memory, then you had better be in a position where you can afford that permanently, rather than assuming it is just a burst.
So if you want to limit how much memory the Elasticsearch process can consume, you need to look into the JVM command line parameters, in particular the -Xmx
option.
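For Elasticsearch specifically, you usually don't pass -Xmx by hand; the 1.x/2.x startup scripts read the ES_HEAP_SIZE environment variable and use it for both -Xms and -Xmx (the 4g value here is just an example size):

```shell
# Give the Elasticsearch JVM a fixed 4 GB heap (example size).
# ES_HEAP_SIZE is read by the 1.x/2.x startup scripts and sets -Xms and -Xmx.
export ES_HEAP_SIZE=4g
```

Setting -Xms equal to -Xmx matches the advice above: reserve the memory up front rather than letting the heap grow in bursts you can't afford.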
Based on the output of the last command, your two indices have no replicas because you haven't configured any.
You'll need to update your index settings, changing the "number_of_replicas"
to whatever number of replicas you want.
The Update Indices Settings page of the ES docs has an example of exactly this.
This will change all indices to 1 replica:
curl -XPUT 'localhost:9200/_settings' -d '
{
"index" : {
"number_of_replicas" : 1
}
}'
This will change just the scores
index to 4 replicas:
curl -XPUT 'localhost:9200/scores/_settings' -d '
{
"index" : {
"number_of_replicas" : 4
}
}'
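You can verify that the setting took effect by reading it back (this assumes, like the examples above, a node listening on localhost:9200):

```
curl -XGET 'localhost:9200/scores/_settings?pretty'
```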
Responding to my own question: assuming everything is in order from the OS-limits point of view, it could also be that the partition containing "/tmp" is mounted without the "exec" option. You have several options to correct this and get the elasticsearch process started.
Then go ahead and restart elasticsearch via systemctl (you're not running it on CentOS < 7, are you? :-) More details here.
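The two usual fixes can be sketched as follows; remounting assumes /tmp is its own mount, the temp dir path is an arbitrary example you must create first, and ES_JAVA_OPTS is honored by Elasticsearch 5+ (older versions read JAVA_OPTS):

```shell
# Option 1: remount /tmp with the exec flag (run as root on the affected host):
#   mount -o remount,exec /tmp

# Option 2: leave /tmp noexec and point the JVM at a temp dir on an
# exec-capable filesystem instead (example path; create it first):
export ES_JAVA_OPTS="-Djava.io.tmpdir=/var/lib/elasticsearch/tmp"
```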