You probably meant to have Environment
as a Facade (facade pattern) rather than implementing every detail in it.
Break all the functionality out into other classes. Imho, each of the responsibilities below should be handled by a separate class that the Environment
class uses internally.
- Keep information about services in the environment, i.e. the environment definition (Service is another class)
- Start/stop services meeting some criteria
- Apply configuration changes to different services and keep a list of updated services
- Restart services with configuration change
- Revert configuration changes at the end of session
- Search/Give Service class based on criteria (name etc.)
Those should give you six new classes. It will probably lead to some other classes too, so that, for instance, configuration can be managed from different classes.
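A sketch of how that split might look (all class and method names here are hypothetical, not taken from any real code base):

```javascript
// Hypothetical sketch: Environment as a facade delegating to
// single-responsibility collaborators.
class ServiceRegistry {            // holds the environment definition
  constructor(services) { this.services = services; }
  find(criteria) {                 // search services by name etc.
    return this.services.filter(s => s.name === criteria.name);
  }
}

class ServiceController {          // start/stop services
  start(service) { service.running = true; }
  stop(service)  { service.running = false; }
}

class ConfigurationManager {       // apply/revert config, track updates
  constructor() { this.updated = []; }
  apply(service, config) {
    service.previousConfig = service.config;
    service.config = config;
    this.updated.push(service);
  }
  revertAll() {                    // revert changes at the end of a session
    this.updated.forEach(s => { s.config = s.previousConfig; });
    this.updated = [];
  }
}

class Environment {                // the facade: thin delegation only
  constructor(services) {
    this.registry = new ServiceRegistry(services);
    this.controller = new ServiceController();
    this.configs = new ConfigurationManager();
  }
  reconfigure(name, config) {      // restart services with config changes
    this.registry.find({ name }).forEach(s => {
      this.configs.apply(s, config);
      this.controller.stop(s);
      this.controller.start(s);
    });
  }
  endSession() { this.configs.revertAll(); }
}
```

The facade keeps its public surface small while each collaborator stays independently testable.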
There are really two things to understand:
- prototypal inheritance has nothing to do with performance at all. The performance issues come from runtime changes to the inheritance structure.
- prototypal inheritance (and flexible object structures) are not so much inherently slower, as they are harder to optimize.
To illustrate the first claim, Ruby would be a flagship example of class-based objects being abysmally slow. The problem generally persists throughout Smalltalk-like languages, e.g. Objective-C, that employ message passing for method calls. The Objective-C runtime uses some nifty method caching to tackle the issue, but it did take them a while to get that far.
What makes method calls cheap is that the compiler (or JIT or runtime even) has some definite knowledge about the structure of an object. Be that explicitly given in terms of language features, or implicitly inferred from static analysis, the knowledge exists and the compiler can use that to optimize.
Now, when the structure can change at runtime, things get a little tricky: you need good heuristics to detect which portions of the code are worth optimizing at all (you want a good ratio between how often the code runs, how often it changes, and the cost of optimizing it) to get the best overall runtime characteristics.
So what is the point of classes in dynamic languages? Well, there must be one, because there are numerous JavaScript class systems (like that of Ext). Sure, familiarity for programmers used to classes is one reason. But the real benefit comes from helping to ensure explicit definitions of object types. Such a class definition is one (albeit complex) statement. With vanilla JavaScript constructors, you have a whole bunch of statements that are grouped together, if you're lucky. The class structure is really just a side effect of imperative code, if you will. Class declarations are, unsurprisingly, meant to facilitate a more declarative style.
It's worth noting that Self, the language that really made prototypal inheritance explicit (although it's really straightforward to achieve in a language with message passing), was created for an environment where you programmed a fully interactive system while it was running. You could actually see an object on screen. This allowed for a clean declaration that was still modifiable at runtime, because the declaration and the result were so intimately coupled. Without such a coupling, fiddling with object structure at runtime can quickly become an unintelligible mess that is really hard to fit in your brain, let alone reason about.
You can pretty much get prototypal inheritance if you have good support for delegation. You just delegate all unimplemented calls to an object that you consider a prototype and you're done. It's more flexible. However it's equally harder to optimize.
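In JavaScript itself that delegation is built in; a minimal illustration:

```javascript
// Prototypal inheritance is delegation: lookups that fail on the
// object itself fall through to its prototype.
const animal = {
  describe() { return this.name + ' says ' + this.sound; }
};

// 'dog' has no describe() of its own; the call is delegated to 'animal'.
const dog = Object.create(animal);
dog.name = 'Rex';
dog.sound = 'woof';

console.log(dog.describe()); // → "Rex says woof"
```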
Best Answer
After many edits, this answer has become a monster in length. I apologize in advance.
First of all, `eval()` isn't always bad, and can bring performance benefits when used for lazy evaluation, for example. Lazy evaluation is similar to lazy loading: you essentially store your code within strings, and then use `eval` or `new Function` to evaluate the code. If you use some tricks, it becomes much more useful than evil, but if you don't, it can lead to bad things. You can look at my module system that uses this pattern: https://github.com/TheHydroImpulse/resolve.js. Resolve.js uses `eval` instead of `new Function` primarily to model the CommonJS `exports` and `module` variables available in each module; `new Function` wraps your code within an anonymous function, whereas I wrap each module in a function manually, in combination with `eval`. You can read more about it in the following two articles, the latter also referring to the first.
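A minimal sketch of the lazy-evaluation idea (the stored module source below is made up for illustration, not taken from resolve.js):

```javascript
// Store module source as a string; compile it with new Function only
// when the module is first required, then cache the result.
const sources = {
  math: 'exports.add = function (a, b) { return a + b; };'
};
const cache = {};

function lazyRequire(name) {
  if (cache[name]) return cache[name];
  const exports = {};
  // Compile the stored source on first use ("lazy" evaluation).
  new Function('exports', sources[name])(exports);
  return (cache[name] = exports);
}

console.log(lazyRequire('math').add(2, 3)); // → 5
```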
Harmony Generators
Generators have finally landed in V8, and thus in Node.js, under a flag (`--harmony` or `--harmony-generators`). They greatly reduce the amount of callback hell and make writing asynchronous code truly great. The best way to utilize generators is to employ some sort of control-flow library, which will enable the flow to keep going as you yield within generators.
Recap/Overview:
If you're unfamiliar with generators, they're special functions whose execution can be paused and resumed; pausing is called yielding, done with the `yield` keyword. Example:
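The example code is missing from the original; a minimal stand-in generator:

```javascript
// Calling a generator function does not run its body; it returns a
// generator instance that is advanced with next().
function* counter() {
  yield 1;
  yield 2;
  return 3;
}

const gen = counter();
console.log(gen.next()); // { value: 1, done: false }
console.log(gen.next()); // { value: 2, done: false }
console.log(gen.next()); // { value: 3, done: true }
```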
Thus, whenever you call this function, it'll return a new generator instance. This allows you to call `next()` on that object to start or resume the generator. You keep calling `next` until `done` returns `true`, which means the generator has completely finished its execution and there are no more `yield` statements.

Control-Flow:
As you can see, controlling generators is not automatic. You need to manually continue each one. That's why control-flow libraries like co are used.
Example:
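The co example is missing from the original; here is a tiny hand-rolled runner in the same spirit, driving a generator that yields node-style thunks (all names are illustrative, not co's actual API):

```javascript
// A miniature co-style runner: each yielded value is a thunk,
// i.e. a function taking a node-style callback (err, result).
function run(genFn, done) {
  const gen = genFn();
  function step(err, value) {
    if (err) return done(err);
    const next = gen.next(value);   // resume with the async result
    if (next.done) return done(null, next.value);
    next.value(step);               // yielded thunk: continue on callback
  }
  step(null);
}

// A "fake async" thunk for demonstration; a real one would wrap
// fs.readFile and friends.
function fetch(data) {
  return cb => cb(null, data);
}

run(function* () {
  const user = yield fetch('tom');  // reads like synchronous code
  const friends = yield fetch([user, 'jerry']);
  return friends.length;
}, (err, n) => console.log(n));     // → 2
```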
This makes it possible to write everything in Node (and in the browser with Facebook's Regenerator, which takes source code that utilizes harmony generators as input and spits out fully compatible ES5 code) in a synchronous style.
Generators are still pretty new, and thus require Node.js >= v0.11.2. As I'm writing this, v0.11.x is still unstable, and thus many native modules are broken and will be until v0.12, when the native API calms down.
To add to my original answer:
I've recently been preferring a more functional API in JavaScript. The convention does use OOP behind the scenes when needed but it simplifies everything.
Take, for example, a view system (client or server). A functional interface is much easier to read or follow than the prototype-based alternative.
The `view` function simply checks whether the same view already exists in a local map. If the view does not exist, it creates a new view and adds a new entry to the map. Extremely basic, right? I find it dramatically simplifies the public interface and makes it easier to use. I also employ chain-ability...
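The original code snippets are missing; a sketch of such a `view` helper (a hypothetical reconstruction of the API being described):

```javascript
// view(name) returns the existing view for `name` from a local map,
// or creates and registers a new one.
const views = {};

function view(name) {
  if (!views[name]) {
    views[name] = {
      name: name,
      childNames: [],
      child(childName) {          // chain-able: returns the view itself
        this.childNames.push(childName);
        return this;
      }
    };
  }
  return views[name];
}

// The same name always yields the same instance, and calls chain:
view('body').child('header').child('footer');
console.log(view('body').childNames); // → [ 'header', 'footer' ]
```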
Tower, a framework I'm developing (with someone else), will use this functional approach in most of its exposed interfaces in the next version (0.5.0).
Some people take advantage of fibers as a way to avoid "callback hell". It's quite a different approach to JavaScript, and I'm not a huge fan of it, but many frameworks/platforms use it, including Meteor, as they treat Node.js as a thread-per-connection platform.
I'd rather use an abstracted method to avoid callback hell. It may become cumbersome, but it greatly simplifies the actual application code. When I helped build the TowerJS framework, it solved a lot of our problems; you'll obviously still have some level of callbacks, but the nesting isn't deep.
An example is our routing system and "controllers", currently being developed, though fairly different from traditional "Rails-like" ones. The example is extremely powerful, minimizes the number of callbacks, and makes things fairly apparent.
The problem with this approach is that everything is abstracted. Nothing runs as-is, and requires a "framework" behind it. But if these kinds of features and coding styles are implemented within a framework, then it's a huge win.
For patterns in JavaScript, it honestly depends. Inheritance is only really useful when using CoffeeScript, Ember, or any "class" framework/infrastructure. When you're inside a "pure" JavaScript environment, using the traditional prototype interface works like a charm:
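The example is missing from the original; the traditional prototype interface being referred to looks like this:

```javascript
// The constructor + prototype pattern in "pure" JavaScript.
function User(name) {
  this.name = name;
}

// Methods live on the prototype and are shared by all instances.
User.prototype.greet = function () {
  return 'Hello, ' + this.name;
};

const tom = new User('Tom');
console.log(tom.greet()); // → "Hello, Tom"
```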
Ember.js started, for me at least, using a different approach to constructing objects. Instead of constructing each prototype method independently, you'd use a module-like interface.
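A self-contained sketch of that module-like interface, mimicking the shape of `Ember.Object.extend` (this is an illustration, not Ember's actual implementation):

```javascript
// A tiny extend() that copies one definition object onto a prototype,
// in the spirit of Ember.Object.extend({ ... }).
function extend(definition) {
  function Class(props) {
    Object.assign(this, props);
  }
  Object.assign(Class.prototype, definition);
  return Class;
}

// The whole "class" is declared in one statement:
const Person = extend({
  greeting: 'Hi',
  greet() { return this.greeting + ', ' + this.name; }
});

const p = new Person({ name: 'Ann' });
console.log(p.greet()); // → "Hi, Ann"
```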
All these are different "coding" styles, but do add to your code base.
Polymorphism
Polymorphism isn't widely used in pure JavaScript; working with inheritance and copying the "class"-like model requires a lot of boilerplate code.
Event/Component Based Design
Event-based and component-based models are the winners IMO, or the easiest to work with, especially when working with Node.js, which has a built-in EventEmitter component. Implementing such emitters yourself is trivial anyway; the built-in one is just a nice addition.
Just an example, but it's a nice model to work with. Especially in a game/component oriented project.
Component design is a separate concept by itself, but I think it works extremely well in combination with event systems. Games are traditionally known for component-based design, where object-oriented programming takes you only so far.
Component-based design has its uses. It depends on what type of system you're building. I'm sure it would work for web apps, but it works extremely well in a gaming environment because of the number of objects and separate systems, though other examples surely exist.
Pub/Sub Pattern
Event-binding and pub/sub are similar. The pub/sub pattern really shines in Node.js applications because of the unifying language, but it can work in any language. It works extremely well in real-time applications, games, etc.
Observer
This might be a subjective one, as some people choose to think of the Observer pattern as pub/sub, but they have their differences.
"The Observer is a design pattern where an an object (known as a subject) maintains a list of objects depending on it (observers), automatically notifying them of any changes to state." - The Observer Pattern
The observer pattern is a step beyond typical pub/sub systems: objects have strict relationships or communication methods with each other. An object, the "subject", keeps a list of dependents, the "observers", and keeps its observers up-to-date.
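A minimal subject/observer sketch of that relationship:

```javascript
// The subject keeps an explicit list of observers and pushes state
// changes to each of them.
class Subject {
  constructor() {
    this.observers = [];
    this.state = null;
  }
  subscribe(observer) { this.observers.push(observer); }
  setState(state) {
    this.state = state;
    this.observers.forEach(o => o.update(state));
  }
}

const subject = new Subject();
const seen = [];
subject.subscribe({ update: s => seen.push(s) });
subject.setState('ready');
console.log(seen); // → [ 'ready' ]
```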
Reactive Programming
Reactive programming is a smaller, lesser-known concept, especially in JavaScript. There is one framework/library (that I know of) that exposes an easy-to-work-with API for this "reactive programming".
Resources on reactive programming:
Basically, it's having a set of data (be it variables, functions, etc.) that is kept in sync.
I believe reactive programming is considerably hidden, especially in imperative languages. It's an amazingly powerful programming paradigm, especially in Node.js. Meteor has created its own reactive engine, on which the framework is basically based. How does Meteor's reactivity work behind the scenes? is a great overview of how it works internally.
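The Meteor snippet being described below is missing; here is a self-contained sketch that mimics the behavior of Meteor's `Session` and autorun (an illustrative re-implementation, not Meteor's actual internals):

```javascript
// Minimal reactive store: autorun() records which keys a computation
// reads, and set() re-runs the computations that depend on them.
const Session = {
  values: {},
  deps: {},
  current: null,
  get(key) {
    if (this.current) (this.deps[key] = this.deps[key] || []).push(this.current);
    return this.values[key];
  },
  set(key, value) {
    this.values[key] = value;
    (this.deps[key] || []).forEach(fn => fn());
  }
};

function autorun(fn) {
  Session.current = fn;
  fn();                        // first run registers the dependencies
  Session.current = null;
}

const output = [];
Session.set('name', 'Alice');
autorun(() => output.push('Hello ' + Session.get('name')));
Session.set('name', 'Bob');    // triggers the computation again
console.log(output);           // → [ 'Hello Alice', 'Hello Bob' ]
```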
This will execute normally, displaying the value of `name`, but if we change it with `Session.set('name', 'Bob');`, it will re-run the `console.log`, displaying `Hello Bob`. A basic example, but you can apply this technique to real-time data models and transactions. You can create extremely powerful systems behind this protocol.
The reactive pattern and the observer pattern are quite similar. The main difference is that the observer pattern commonly describes data-flow between whole objects/classes, whereas reactive programming describes data-flow between specific properties instead.
Meteor is a great example of reactive programming. Its runtime is a little complicated because of JavaScript's lack of native value-change events (Harmony proxies change that). Other client-side frameworks, Ember.js and AngularJS, also utilize reactive programming (to some extent).
The latter two frameworks use the reactive pattern most notably in their templates (auto-updating, that is). Angular.js uses a simple dirty-checking technique. I wouldn't call this exactly reactive programming, but it's close, as dirty checking isn't real-time. Ember.js uses a different approach: `set()` and `get()` methods, which allow it to immediately update depending values. With its runloop it's extremely efficient and allows for more depending values, where Angular has a theoretical limit.

Promises
Not a fix for callbacks, but promises take some indentation out and keep the nested functions to a minimum. They also add some nice syntax to the problem.
You could also spread the callback functions so that they aren't inline, but that's another design decision.
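A sketch of the flattened style, with the callbacks spread into named functions (the fetch functions below are made up for illustration):

```javascript
// Nested callbacks become a flat chain; each .then handler is small
// and can be a named function instead of an inline one.
function fetchUser(id) {
  return Promise.resolve({ id: id, name: 'Tom' });
}

function fetchPosts(user) {
  return Promise.resolve([user.name + "'s first post"]);
}

function render(posts) {
  console.log(posts[0]); // eventually logs "Tom's first post"
}

const pending = fetchUser(1)
  .then(fetchPosts)      // spread out, not inline
  .then(render)
  .catch(err => console.error(err));
```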
Another approach would be to combine events and promises: you'd have a function to dispatch events appropriately, and the real functional functions (the ones with the actual logic) would bind to particular events. You'd then pass the dispatcher method into each callback position, though you'd have to work out some kinks, such as parameters and knowing which function to dispatch to.
Single-Purpose Functions
Instead of having a huge mess of callback hell, keep each function to a single task, and do that task well. Sometimes you can get ahead of yourself and add more functionality to each function; instead, ask yourself: can this become an independent function? Name the function; this cleans up your indentation and, as a result, the callback-hell problem.
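A sketch of that refactor, with each step as a named single-purpose function (all names are illustrative):

```javascript
// Each task is a named function, so the "pyramid" of anonymous
// callbacks flattens into a readable list of steps.
function validate(order, next) {
  next(null, Object.assign({ valid: true }, order));
}

function price(order, next) {
  next(null, Object.assign({ total: order.items * 5 }, order));
}

function submit(order, next) {
  next(null, Object.assign({ submitted: true }, order));
}

function processOrder(order, done) {
  validate(order, (err, v) =>
    price(v, (err2, p) =>
      submit(p, done)));
}

processOrder({ items: 3 }, (err, result) =>
  console.log(result.total, result.submitted)); // → 15 true
```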
In the end, I'd suggest developing or using a small "framework", basically just a backbone for your application, and taking time to make abstractions: decide on an event-based system, or a system of many small independent modules. I've worked with several Node.js projects where the code was extremely messy, with callback hell in particular, but also a lack of thought before coding began. Take your time to think through the different possibilities in terms of API and syntax.
Ben Nadel has written some really good blog posts about JavaScript and some pretty strict and advanced patterns that may work in your situation. Some good posts I'll emphasize:
Inversion-of-Control
Though not exactly related to callback hell, it can help your overall architecture, especially in unit tests.
The two main sub-versions of inversion-of-control are Dependency Injection and Service Locator. I find Service Locator the easier of the two within JavaScript, as opposed to Dependency Injection. Why? Mainly because JavaScript is a dynamic language with no static typing. Java and C#, among others, are "known" for dependency injection because you're able to detect types, and they have built-in interfaces, classes, etc. This makes things fairly easy. You can re-create this functionality within JavaScript, though it won't be identical and will be a bit hacky, so I prefer using a service locator inside my systems.
Any kind of inversion-of-control will dramatically decouple your code into separate modules that can be mocked or faked at any time. Designed a second version of your rendering engine? Awesome, just substitute the old interface for the new one. Service locators are especially interesting with the new Harmony Proxies, though they're only effectively usable within Node.js; a proxy provides a nicer API, `Service.render` rather than `Service.get('render')`. I'm currently working on that kind of system: https://github.com/TheHydroImpulse/Ettore.

Though the lack of static typing (static typing being a possible reason for the effective use of dependency injection in Java and C#; PHP isn't statically typed, but it has type hints) might be looked at as a negative point, you can definitely turn it into a strong point. Because everything is dynamic, you can engineer a "fake" static system. In combination with a service locator, you could have each component/module/class/instance tied to a type.
This is a simplistic example. For real-world, effective usage, you'll need to take the concept further, but it could help decouple your system if you really want traditional dependency injection. You might need to fiddle with the concept a little; I haven't put much thought into the previous example.
Model-View-Controller
The most obvious pattern, and the most used on the web. A few years ago, jQuery was all the rage, and so jQuery plugins were born. You didn't need a full-on framework on the client side; just jQuery and a few plugins.
Now there's a huge client-side JavaScript framework war. Most of them use the MVC pattern, and they all use it differently; MVC isn't always implemented the same way.
If you're using the traditional prototypal interfaces, you might have a hard time getting syntactical sugar or a nice API when working with MVC, unless you want to do some manual work. Ember.js solves this by creating a "class"/object system; a controller, for instance, is declared through a single `extend()` call rather than assembled method-by-method on a prototype.
Most client-side libraries also extend the MVC pattern by introducing view-helpers (becoming views) and templates (becoming views).
New JavaScript Features:
This will only be effective if you're using Node.js, but nonetheless it's invaluable. This talk at NodeConf by Brendan Eich brings some cool new features: the proposed function syntax, and especially the task.js library.
This will probably fix most of the issues with function nesting and will bring slightly better performance because of the lack of function overhead.
I'm not too sure whether V8 supports this natively; last I checked you needed to enable some flags, but it works in a port of Node.js that uses SpiderMonkey.