I think the critical piece you're missing is that the result of require("foo") is always the same object. Consider this REPL example:
> var myHttpModule = require("http")
{ ... }
> myHttpModule.someNewProperty = "someValue"
'someValue'
> require("http").someNewProperty
'someValue'
The second call to require("http") did not re-create the http module -- it summoned up the only http module that exists in Node's current execution environment, which had been altered on the previous line.
So, in your example, there's only one Alpha module. When Beta and Gamma call require("alpha"), they're each getting the same reference to the one-and-only Alpha module singleton object. (Or, to be perfectly precise, they're getting the same reference to the exports value of the Alpha module. require creates a Module object, but returns only the exports property of that module. If you don't fully understand the distinction right now, it's not a big deal.)
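If it helps to see that concretely, here is a minimal sketch of the situation; the file names and the shared property a are assumptions based on the Alpha/Beta/Gamma setup described in the question:

// alpha.js -- the one-and-only shared module
module.exports = { a: 0 };

// beta.js -- writes to the shared exports object
var alpha = require("./alpha");
alpha.a = 42;

// gamma.js -- sees the value beta.js wrote, because it receives the same object
var alpha = require("./alpha");
console.log(alpha.a);

// main.js -- load order matters: beta runs before gamma, so gamma prints 42
require("./beta");
require("./gamma");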
Behind the scenes: what's really going on with require caching?
The first time you use require to include a module, Node actually runs the code in that module. The resulting module is stored in require.cache. Subsequent attempts to load the module with require first check for an already-loaded module object in require.cache. (Note: built-in native Node modules like http are an exception in that they don't use require.cache -- you'll need to test out my code below with a custom module.)
require.cache is an object whose keys are module file paths and whose values are the corresponding module objects. For example, assume some module named "foo" in C:\node_modules\foo.js.
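(For reference, a foo.js that would produce the output below could contain nothing more than the following -- its exact contents are an assumption based on the bar: 'baz' value shown in the cache dump.)

// C:\node_modules\foo.js
module.exports = { bar: 'baz' };

With that file in place, a REPL session looks like this: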
> require.cache // empty cache
{ }
> require("foo") // require foo
{ bar: 'baz' }
> require.cache // cache now populated
{ 'C:\\node_modules\\foo.js':
{ id: 'C:\\node_modules\\foo.js',
exports: { bar: 'baz' },
parent: { ... },
filename: 'C:\\node_modules\\foo.js',
...
paths:
[ ... ] } }
The current value of the foo module is in require.cache["C:\\node_modules\\foo.js"].exports. We can use require.resolve to get the file path of the module from its name, so we can also express it as require.cache[require.resolve("foo")].exports.
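In the REPL session above, that resolution step would look roughly like this (the exact path is simply whatever your installation uses):

> require.resolve("foo")
'C:\\node_modules\\foo.js'
> require.cache[require.resolve("foo")].exports
{ bar: 'baz' }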
If we call require("foo") a second time, Node sees that require.cache[require.resolve("foo")] is defined, and so it returns the value of require.cache[require.resolve("foo")].exports instead of re-running the module creation code. This exports property of the foo module in require.cache is the single instance of the foo module's export value, returned with every subsequent call to require("foo").
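A quick way to convince yourself of this, continuing with the hypothetical foo module:

> require("foo") === require("foo")
true
> require("foo") === require.cache[require.resolve("foo")].exports
true

Every call hands back the exact same exports object, not a copy.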
One interesting implication here is that you can delete require.cache[require.resolve("foo")] to force a reload of the foo module with the next call to require("foo"), because the object describing the module is removed from the require.cache object.
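In the same hypothetical REPL session, forcing a reload looks like this:

> delete require.cache[require.resolve("foo")]
true
> require("foo") // foo.js is executed again, producing a fresh exports object
{ bar: 'baz' }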
So what does this mean for me?
Without require, you could still share values between modules using global variables. For example, your Alpha/Beta/Gamma case would work just as well with Beta and Gamma setting and reading the global value global.a. Instead, with Node's require system, you're actually setting and reading global.require.cache[require.resolve("alpha")].exports.a, which Node lets you read neatly as just require("alpha").a.
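That globals-only alternative would look roughly like this (assuming both files end up loaded into the same Node process, for example from a main script):

// beta.js -- writes a plain global, no require involved
global.a = 42;

// gamma.js -- reads it, provided beta.js has already run
console.log(global.a); // 42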
In fact, if you just want to share a value and don't need to import your shared data as a module, you could also use a namespacing object, i.e., have Beta and Gamma set and read properties of the object global.alpha. The major advantage to using require is that you don't need to set up require("alpha") external to the other scripts, whereas you would need to define global.alpha = {} before requiring Beta and Gamma. (Alternatively, you could conditionally define global.alpha in each module based on a typeof global.alpha == 'undefined' check instead, to see if it's the first module to use it.)
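A rough sketch of that namespacing-object alternative, with the typeof guard in each module (the property names here are made up):

// beta.js -- whichever module runs first creates the namespace
if (typeof global.alpha == 'undefined') global.alpha = {};
global.alpha.a = 42;

// gamma.js -- same guard, then read the shared value
if (typeof global.alpha == 'undefined') global.alpha = {};
console.log(global.alpha.a); // 42 if beta.js already ran, otherwise undefined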
Code that works for you and is easy to maintain is by definition "good". You should never change things just for the sake of obeying someone's idea of "good practice" if that person cannot point out what the problem with your code is.
In this case, the most obvious problem is that resources are hard-coded into your application - even if they're selected dynamically, they're still hard-coded. This means that you cannot change these resources without recompiling/redeploying your application. With an external configuration file, you'd only have to change that file and restart/reload your application.
Whether or not that is a problem depends on what you do with it. In a JavaScript framework that is automatically redistributed with every request anyway, it is no problem at all - the changed value will propagate to every user the next time they use the application. With an on-premises deployment in a compiled language in an inaccessible location it is a very big problem indeed. Reinstalling the application might take a long time, cost a lot of money or have to be done at night to preserve availability.
Whether or not hard-coded values are a problem depends on whether your situation is more like the first example or more like the second example.
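In the Node context of this question, the simplest form of such an external configuration file is a JSON file next to your code; the file name and keys below are made up for illustration:

// config.json
{ "resourceUrl": "https://example.com/data" }

// app.js
var config = require("./config.json"); // Node parses .json files automatically
console.log(config.resourceUrl);

(Note that require caches the JSON like any other module, so changing the file still means restarting or reloading the application, as mentioned above.)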
Best Answer
Don't worry about that.
A first require involves a bunch of input/output operations in order to find the matching file and read it into memory. Given that JavaScript modules rarely exceed several megabytes in size, the performance impact of the operation is close to zero: "Any performance impact here will be inconsequential relative to everything else the server is doing."

A second require of the very same module won't even involve that. Since the module is already in memory, there is no need to find it on disk or read the actual file, so the performance footprint is even smaller (much smaller!) than when the module was required for the first time.

If you want to see how it works, create a script which, in a loop, requires the same module many times. Vary the number of iterations and see how it impacts the time spent inside the loop.
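A rough sketch of such a script, assuming any small local module (here the hypothetical foo.js from the other answer):

// bench.js -- time repeated require() calls to the same module
var iterations = 100000; // vary this and compare the timings
console.time("require loop");
for (var i = 0; i < iterations; i++) {
  require("./foo"); // after the first iteration this is only a cache lookup
}
console.timeEnd("require loop");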