I am having trouble running Solr 9.2 on the above virtual machine (RHEL 9.1 on Azure, image from CIS).
The problem is low entropy as seen in the logs below:
Started Apache Solr 9.
Java 17 detected. Enabled workaround for SOLR-16463
Warning: Available entropy is low. As a result, use of the UUIDField, SSL, or any other features that require
RNG might not work properly. To check for the amount of available entropy, use 'cat /proc/sys/kernel/random/entropy_avail'.
I have tried installing rng-tools and haveged, but the problem persists.
The output of cat /proc/sys/kernel/random/entropy_avail
remains 256.
Below is the status of haveged service:
Started Entropy Daemon based on the HAVEGE algorithm.
haveged[14569]: haveged: command socket is listening at fd 3
haveged[14569]: haveged: fills: 1, generated: 512 K bytes, RNDADDENTROPY: 256
and the status of rngd service:
Started Hardware RNG Entropy Gatherer Daemon.
rngd[14506]: Disabling 7: PKCS11 Entropy generator (pkcs11)
rngd[14506]: Disabling 5: NIST Network Entropy Beacon (nist)
rngd[14506]: Disabling 9: Qrypt quantum entropy beacon (qrypt)
rngd[14506]: Initializing available sources
rngd[14506]: [hwrng ]: Initialization Failed
rngd[14506]: [rdrand]: Enabling RDSEED rng support
rngd[14506]: [rdrand]: Initialized
rngd[14506]: [jitter]: JITTER timeout set to 5 sec
rngd[14506]: [jitter]: Initializing AES buffer
What am I missing?
Best Answer
Solr has a check that warns when
/proc/sys/kernel/random/entropy_avail < 300
. Version control and the issues SOLR-10352 and SOLR-10338 suggest it was added circa Solr 7, around discussions of slow tests involving cryptography. This check serves no purpose on modern Linux, and I think the developers should remove it. I am not familiar enough with this application to know where it ultimately sources random bits: /dev/random, /dev/urandom, or which flags it passes to its getrandom system calls. Solr never needed a blocking source to get quality randomness, but it may have been blocking in the past, by accident or by design on the application's part.
man 7 random
will explain that the Linux random driver exists to initialize a cryptographically secure RNG (CRNG). Very recently the source code documentation got a long overdue update; the comment block in char/random.c as of kernel 6.3 summarizes the design. Note the phrase "expands it indefinitely". The algorithms here are sufficiently good that the blocking pool has been removed.
I tested this lack of blocking; here is CentOS Stream 9 reading 20 billion bits from /dev/random. That is an enormous volume, given that 256 bits is sufficient to seed any crypto key. Note this is just some idle test VM, and I shut down rngd. Despite those limitations, the output has a decent showing on a test suite designed by NIST and implemented by the maintainers of rng-tools.
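You can reproduce a smaller version of that test yourself. These commands are my own sketch, not the exact run from above:

```shell
# Read 1 MiB from /dev/random; on kernels 5.6 and later this returns
# immediately rather than blocking on the old entropy accounting.
dd if=/dev/random of=/dev/null bs=1M count=1

# With rng-tools installed, pipe a larger read through its FIPS 140-2
# test suite. A handful of block failures is expected by chance:
#   dd if=/dev/random bs=1M count=20 status=none | rngtest
```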
Regarding why the warning triggers now in the EL 9 era: it is an implementation detail that the pool is this small, and it does not affect the quality of the output. The kernel maintainers are not going to return to the old 4096-bit LFSR pool. There is also skepticism from the maintainers about putting much weight on the number at all: "The general concept of entropy estimation to begin with is probably impossible to do robustly." Despite this, the sysctl API remains, for transparency and to avoid breaking assumptions in user programs.
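You can see the small fixed pool directly. On kernels 5.6 and later, both of these sysctls report 256 once the CRNG is seeded:

```shell
cat /proc/sys/kernel/random/poolsize       # input pool size, in bits
cat /proc/sys/kernel/random/entropy_avail  # pinned at the pool size once seeded
```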
Feel free to continue using programs to feed it more random bits. They will get mixed in with the many other sources and seed further output of the CRNG.
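Note that simply writing to the device mixes bytes into the pool without crediting entropy; crediting is done via the RNDADDENTROPY ioctl, which is what rngd and haveged use (visible in the haveged log above). A minimal unprivileged example of the former:

```shell
# Mix extra bytes into the kernel's input pool. Any user may write to
# /dev/urandom (or /dev/random); the bytes are absorbed into the pool
# but are not counted toward entropy_avail.
echo "extra seed material from somewhere trustworthy" > /dev/urandom
```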
Further reading: