Design Patterns – Are There Any OO-Principles Practically Applicable for JavaScript?

design-patterns, design-principles, javascript, object-oriented, patterns-and-practices

JavaScript is a prototype-based object-oriented language, but it can be made class-based in a variety of ways, either by:

  • Writing the functions to be used as classes yourself
  • Using a class system provided by a framework (such as MooTools' Class)
  • Generating it from CoffeeScript

In the beginning I tended to write class-based code in JavaScript and relied on it heavily. Recently, however, I've been using JavaScript frameworks, and Node.js, that move away from this notion of classes and rely more on the dynamic nature of the code, such as:

  • Async programming, using and writing code that relies on callbacks/events
  • Loading modules with RequireJS (so that they don't leak to the global namespace)
  • Functional programming concepts such as higher-order list operations (map, filter, etc.)
  • Among other things

What I've gathered so far is that most OO principles and patterns I've read about (such as SOLID and the GoF patterns) were written with class-based OO languages like Smalltalk and C++ in mind. Are any of them applicable to a prototype-based language such as JavaScript?

Are there any principles or patterns that are specific to JavaScript? Principles for avoiding callback hell, evil eval, or other anti-patterns, etc.?

Best Answer

After many edits, this answer has become a monster in length. I apologize in advance.

First of all, eval() isn't always bad, and can bring performance benefits when used for lazy evaluation, for example. Lazy evaluation is similar to lazy loading, but you essentially store your code within strings and then use eval or new Function to evaluate it. If you use some tricks, it becomes much more useful than evil, but if you don't, it can lead to bad things. You can look at my module system that uses this pattern: https://github.com/TheHydroImpulse/resolve.js. Resolve.js uses eval instead of new Function primarily to model the CommonJS exports and module variables available in each module; new Function wraps your code in an anonymous function, whereas I end up wrapping each module in a function manually, in combination with eval.

You can read more about it in the following two articles, the latter also referring to the first.
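
As a rough illustration of the lazy-evaluation idea (this is not how Resolve.js itself is implemented; the moduleSource string is purely hypothetical):

// Hypothetical module source kept as a string; nothing is compiled until needed.
var moduleSource = "exports.greet = function(name) { return 'Hello, ' + name; };";
var cached = null;

function lazyRequire() {
  if (!cached) {
    cached = {};
    // Compile the string only on first use. eval could be used instead of
    // new Function to give the code access to the surrounding scope.
    new Function("exports", moduleSource)(cached);
  }
  return cached;
}

console.log(lazyRequire().greet("world")); // "Hello, world"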

Harmony Generators

Generators have finally landed in V8 and thus in Node.js, behind a flag (--harmony or --harmony-generators). They greatly reduce the amount of callback hell you have and make writing asynchronous code truly great.

The best way to utilize generators is to employ some sort of control-flow library. It enables the flow to continue as you yield within generators.

Recap/Overview:

If you're unfamiliar with generators, they're special functions whose execution can be paused; pausing is done with the yield keyword and is called yielding.

Example:

function* someGenerator() {
  yield []; // Pause the function and pass an empty array.
}

When you call this function, it returns a new generator object. You can then call next() on that object to start or resume the generator.

var gen = someGenerator();
gen.next(); // { value: Array[0], done: false }

You keep calling next until done returns true, which means the generator has completely finished its execution and there are no more yield statements.
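
For example, you can drive a generator to completion by hand (multiStep here is just a made-up generator for illustration):

function* multiStep() {
  yield 1;
  yield 2;
  return 3;
}

var it = multiStep();
var step = it.next();

while (!step.done) {
  console.log(step.value); // 1, then 2
  step = it.next();
}

console.log(step.value); // 3 -- the return value, delivered with done: true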

Control-Flow:

As you can see, controlling generators is not automatic: you need to manually resume each one. That's why control-flow libraries like co are used.

Example:

var co = require('co');

co(function*() {
  yield query();
  yield query2();
  yield query3();
  render();
});

This makes it possible to write everything in Node (and in the browser with Facebook's Regenerator, which takes source code that uses harmony generators as input and spits out fully compatible ES5 code) in a synchronous style.
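
Under the hood, a runner like co just keeps calling next() for you. Here's a stripped-down sketch of the idea, assuming every yielded value is a promise and that a Promise implementation is available (real co also handles thunks, arrays, and more, plus error handling):

function run(generatorFunction) {
  var gen = generatorFunction();

  function step(value) {
    var result = gen.next(value);
    if (result.done) return Promise.resolve(result.value);
    // Wait for the yielded promise, then resume the generator with its result.
    return Promise.resolve(result.value).then(step);
  }

  return step();
}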

Generators are still pretty new and thus require Node.js >= v0.11.2. As I'm writing this, v0.11.x is still unstable, so many native modules are broken and will stay broken until v0.12, when the native API settles down.


To add to my original answer:

I've recently come to prefer a more functional API in JavaScript. The convention still uses OOP behind the scenes when needed, but it simplifies everything.

Take for example a view system (client or server).

view('home.welcome');

Is much easier to read or follow than:

var views = {};
views['home.welcome'] = new View('home.welcome');

The view function simply checks if the same view already exists in a local map. If the view does not exist, it'll create a new view and add a new entry to the map.

function view(name) {
  if (!name) throw new Error('A view name is required.');

  if (view.views[name]) return view.views[name];

  return view.views[name] = new View({
    name: name
  });
}

// Local Map
view.views = {};

Extremely basic, right? I find it dramatically simplifies the public interface and makes it easier to use. I also employ chainability...

view('home.welcome')
   .child('menus')
   .child('auth')
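
One hypothetical way to implement that kind of chaining (this is not Tower's actual API) is for child() to create or look up a nested view and return it, so calls can be strung together:

// A hypothetical sketch; it assumes the View constructor stores options.name
// on this.name.
View.prototype.child = function(name) {
  this.children = this.children || {};
  if (!this.children[name]) {
    this.children[name] = new View({ name: this.name + '.' + name });
  }
  return this.children[name]; // return the child so further calls nest
};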

Tower, a framework I'm developing with someone else, will use this functional approach in most of its exposed interfaces in the next version (0.5.0).

Some people take advantage of fibers as a way to avoid "callback hell". It's quite a different approach to JavaScript, and I'm not a huge fan of it, but many frameworks/platforms use it, including Meteor, as they treat Node.js as a thread-per-connection platform.

I'd rather use an abstracted method to avoid callback hell. It may become cumbersome, but it greatly simplifies the actual application code. When helping build the TowerJS framework, it solved a lot of our problems. You'll obviously still have some level of callbacks, but the nesting isn't deep.

// app/config/server/routes.js
App.Router = Tower.Router.extend({
  root: Tower.Route.extend({
    route: '/',
    enter: function(context, next) {
      context.postsController.page(1).all(function(error, posts) {
        context.bootstrapData = {posts: posts};
        next();
      });
    },
    action: function(context, next) {
      context.response.render('index', context);
      next();
    },
    postRoutes: App.PostRoutes
  })
});

This is an example of our routing system and "controllers" (currently being developed), though they are fairly different from traditional Rails-like controllers. The approach is extremely powerful: it minimizes the number of callbacks and makes things fairly apparent.

The problem with this approach is that everything is abstracted. Nothing runs as-is; it requires a "framework" behind it. But if these kinds of features and this coding style are implemented within a framework, then it's a huge win.

For patterns in JavaScript, it honestly depends. Inheritance is only really useful when you're using CoffeeScript, Ember, or some other "class" framework/infrastructure. When you're in a "pure" JavaScript environment, using the traditional prototype interface works like a charm:

function Controller() {
    // get() is assumed to be some lookup helper available in the surrounding app.
    this.resource = get('resource');
}

Controller.prototype.index = function(req, res, next) {
    next();
};

Ember.js, for me at least, introduced a different approach to constructing objects. Instead of defining each prototype method independently, you use a module-like interface.

Ember.Controller.extend({
   index: function() {
      this.hello = 123;
   },
   constructor: function() {
      console.log(123);
   }
});

These are all different coding styles, but they do add to your code base.

Polymorphism

Polymorphism isn't widely used in pure JavaScript, where working with inheritance and copying the "class"-like model requires a lot of boilerplate code.

Event/Component Based Design

Event-based and component-based models are the winners in my opinion, or at least the easiest to work with, especially when working with Node.js, which has a built-in EventEmitter. Implementing such emitters yourself is trivial anyway; having one built in is just a nice addition.

event.on("update", function(){
    this.component.ship.velocity = 0;
    event.emit("change.ship.velocity");
});

Just an example, but it's a nice model to work with. Especially in a game/component oriented project.
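
To make the built-in EventEmitter mentioned above concrete, here's a small sketch with a hypothetical ship object standing in for a real component:

var EventEmitter = require('events').EventEmitter;

var ship = { velocity: 10 };
var bus = new EventEmitter();

bus.on('update', function() {
  ship.velocity = 0;
  bus.emit('change.ship.velocity', ship);
});

bus.on('change.ship.velocity', function(changed) {
  console.log('ship velocity is now', changed.velocity);
});

bus.emit('update'); // logs "ship velocity is now 0"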

Component design is a separate concept by itself, but I think it works extremely well in combination to event systems. Games are traditionally known for component based design, where object oriented programming takes you only so far.

Component-based design has its uses. It depends on what type of system you're building. I'm sure it would work in web apps, but it works extremely well in a gaming environment because of the number of objects and separate systems involved; other examples surely exist.

Pub/Sub Pattern

Event binding and pub/sub are similar. The pub/sub pattern really shines in Node.js applications because of the unifying language, but it can work in any language. It works extremely well in real-time applications, games, etc.

model.subscribe("message", function(event){
    console.log(event.params.message);
});

model.publish("message", {message: "Hello, World"});
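
A subscribe/publish API like the one above could be backed by something as small as this sketch (a hypothetical implementation, not any particular library's):

function PubSub() {
  this.channels = {};
}

PubSub.prototype.subscribe = function(channel, handler) {
  (this.channels[channel] = this.channels[channel] || []).push(handler);
};

PubSub.prototype.publish = function(channel, params) {
  (this.channels[channel] || []).forEach(function(handler) {
    // Wrap the payload so handlers receive event.params, as in the example above.
    handler({ params: params });
  });
};

var model = new PubSub();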

Observer

This might be a subjective one, as some people think of the observer pattern as pub/sub, but they have their differences.

"The Observer is a design pattern where an an object (known as a subject) maintains a list of objects depending on it (observers), automatically notifying them of any changes to state." - The Observer Pattern

The observer pattern is a step beyond typical pub/sub systems: the objects have strict relationships and communication methods with each other. An object (the "subject") keeps a list of dependents (the "observers") and keeps those observers up to date.
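
A bare-bones sketch of that subject/observer relationship (the names here are illustrative, not from any particular library):

function Subject() {
  this.observers = [];
  this.state = null;
}

Subject.prototype.attach = function(observer) {
  this.observers.push(observer);
};

Subject.prototype.setState = function(state) {
  this.state = state;
  // The subject keeps its observers up to date.
  this.observers.forEach(function(observer) {
    observer.update(state);
  });
};

var subject = new Subject();
subject.attach({ update: function(state) { console.log('new state:', state); } });
subject.setState({ velocity: 0 }); // logs "new state: { velocity: 0 }"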

Reactive Programming

Reactive programming is a smaller, lesser-known concept, especially in JavaScript. There is one framework/library (that I know of) that exposes an easy-to-use API for this kind of "reactive programming".


Basically, it's about having a set of data that stays in sync (be it variables, functions, etc.):

 var a = 1;
 var b = 2;
 var c = a + b;

 a = 2;

 console.log(c); // Plain JavaScript prints 3; in a reactive system, c would update to 4.

I believe reactive programming is considerably underused, especially in imperative languages. It's an amazingly powerful programming paradigm, especially in Node.js. Meteor has created its own reactive engine, which the framework is basically built upon. How does Meteor's reactivity work behind the scenes? is a great overview of how it works internally.

Meteor.autosubscribe(function() {
   console.log("Hello " + Session.get("name"));
});

This will execute normally, displaying the value of name, but if we change it:

Session.set('name', 'Bob');

it will re-run the console.log and display "Hello Bob". A basic example, but you can apply this technique to real-time data models and transactions. You can build extremely powerful systems on top of this pattern.


The reactive pattern and the observer pattern are quite similar. The main difference is that the observer pattern commonly describes data flow between whole objects/classes, whereas reactive programming describes data flow between specific properties.

Meteor is a great example of reactive programming. Its runtime is a little complicated because of JavaScript's lack of native value-change events (Harmony proxies change that). Other client-side frameworks, Ember.js and AngularJS, also utilize reactive programming (to some extent).

The latter two frameworks use the reactive pattern most notably in their templates (auto-updating, that is). AngularJS uses a simple dirty-checking technique. I wouldn't call that exactly reactive programming, but it's close, as dirty checking isn't real-time. Ember.js uses a different approach: set() and get() methods, which allow it to update dependent values immediately. With its run loop it's extremely efficient and allows for more dependent values, whereas Angular has a theoretical limit.
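
As a toy illustration of why explicit set()/get() calls make immediate updates possible (this is only in the spirit of Ember's approach, not its implementation):

function Observable(value) {
  this.value = value;
  this.listeners = [];
}

Observable.prototype.get = function() {
  return this.value;
};

Observable.prototype.set = function(value) {
  this.value = value;
  // Notify dependents right away -- no dirty-checking pass needed.
  this.listeners.forEach(function(listener) { listener(value); });
};

Observable.prototype.onChange = function(listener) {
  this.listeners.push(listener);
};

var name = new Observable('World');
name.onChange(function(value) { console.log('Hello ' + value); });
name.set('Bob'); // logs "Hello Bob" immediately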

Promises

Promises aren't a complete fix for callbacks, but they take some indentation out and keep nested functions to a minimum. They also add some nice syntax to the problem.

fs.open("fs-promise.js", process.O_RDONLY).then(function(fd){
  return fs.read(fd, 4096);
}).then(function(args){
  util.puts(args[0]); // print the contents of the file
});

You could also break the callback functions out so that they aren't inline, but that's another design decision.

Another approach is to combine events and promises: have a function that dispatches events appropriately, and bind the real functions (the ones containing the actual logic) to particular events. You then pass the dispatcher method into each callback position, though you'd have to work out some kinks, such as parameters and knowing which function to dispatch to.

Single-Purpose Functions

Instead of having a huge mess of callback hell, keep each function to a single task, and do that task well. Sometimes you can get ahead of yourself and add more functionality to each function, but ask yourself: could this become an independent function? Name the function; this cleans up your indentation and, as a result, the callback hell problem.
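
For example, instead of nesting three anonymous callbacks, you can name each step (readConfig, connect, and start are hypothetical helpers here):

var fs = require('fs');

function readConfig(callback) {
  fs.readFile('config.json', 'utf8', function(err, data) {
    if (err) return callback(err);
    callback(null, JSON.parse(data));
  });
}

function connect(config, callback) {
  // ... open a connection using config, then hand it to the callback ...
  callback(null, { config: config });
}

function start(err, connection) {
  if (err) throw err;
  console.log('started with', connection.config);
}

readConfig(function(err, config) {
  if (err) return start(err);
  connect(config, start);
});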

In the end, I'd suggest developing or using a small "framework", basically just a backbone for your application, and taking the time to make abstractions: decide on an event-based system, or a "lots of small independent modules" system. I've worked on several Node.js projects where the code was extremely messy, with callback hell in particular, but also a lack of thought before coding began. Take your time to think through the different possibilities in terms of API and syntax.

Ben Nadel has written some really good blog posts about JavaScript and some pretty strict and advanced patterns that may work in your situation.

Inversion-of-Control

Though not exactly related to callback hell, it can help your overall architecture, especially with unit tests.

The two main variants of inversion of control are Dependency Injection and Service Locator. I find Service Locator the easier of the two within JavaScript, as opposed to Dependency Injection. Why? Mainly because JavaScript is a dynamic language and has no static typing. Java and C#, among others, are "known" for dependency injection because you're able to detect types, and they have built-in interfaces, classes, etc., which makes things fairly easy. You can re-create this functionality within JavaScript, but it won't be identical and will be a bit hacky, so I prefer using a service locator in my systems.

Any kind of inversion of control will dramatically decouple your code into separate modules that can be mocked or faked at any time. Designed a second version of your rendering engine? Awesome, just swap the old implementation out for the new one. Service locators become especially interesting with the new Harmony proxies, though those are only effectively usable within Node.js; they provide a nicer API: instead of Service.get('render'); you can write Service.render. I'm currently working on that kind of system: https://github.com/TheHydroImpulse/Ettore .
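
A rough sketch of that proxy idea (written with the standard Proxy API for clarity; the Harmony-era proxy API available behind Node flags at the time looked different):

var registry = {
  render: function() { return 'rendered!'; }
};

// Property access falls through to a lookup in the registry.
var Service = new Proxy({}, {
  get: function(target, name) {
    return registry[name];
  }
});

console.log(Service.render()); // instead of Service.get('render')()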

Though the lack of static typing (static typing arguably being the reason dependency injection works so well in Java and C#; PHP isn't statically typed, but it has type hints) might be seen as a negative, you can definitely turn it into a strong point. Because everything is dynamic, you can engineer a "fake" type system. In combination with a service locator, you can tie each component/module/class/instance to a type.

var Service, componentA;

// A stand-in component class for the example.
function Ship() {}

function Manager() {
  this.instances = {};
}

Manager.prototype.get = function(name) {
  return this.instances[name];
};

Manager.prototype.set = function(name, value) {
  this.instances[name] = value;
};

// Returns true if the registered component carries the expected type tag.
Manager.prototype.matchType = function(type, component) {
  return component && component.type === type;
};

Service = new Manager();
componentA = {
  type: "ship",
  value: new Ship()
};

Service.set('componentA', componentA);

// DI: the constructor only accepts components tagged as a "ship".
function World(ship) {
  if (Service.matchType('ship', ship))
    this.ship = ship.value;
  else
    throw new Error("Wrong type passed.");
}

// Use Case:
var worldInstance = new World(Service.get('componentA'));

A simplistic example. For real-world, effective usage you'll need to take this concept further, but it could help decouple your system if you really want traditional dependency injection. You might need to fiddle with the idea a little; I haven't put much thought into the example above.

Model-View-Controller

The most obvious pattern, and the most used on the web. A few years ago jQuery was all the rage, and so jQuery plugins were born. You didn't need a full-blown framework on the client side; you just used jQuery and a few plugins.

Now there's a huge client-side JavaScript framework war. Most of the contenders use the MVC pattern, and they all use it differently; MVC isn't always implemented the same way.

If you're using the traditional prototypal interfaces, you might have a hard time getting syntactic sugar or a nice API when working with MVC, unless you want to do some manual work. Ember.js solves this by creating a "class"/object system. A controller might look like:

 var Controller = Ember.Controller.extend({
      index: function() {
        // Do something....
      }
 });

Most client-side libraries also extend the MVC pattern by introducing view helpers and templates (both of which end up forming the view layer).


New JavaScript Features:

This will only be relevant if you're using Node.js, but it's invaluable nonetheless. Brendan Eich's talk at NodeConf covers some cool new features: the proposed shorter function syntax, and especially the Task.js library.

This will probably fix most of the issues with function nesting and bring slightly better performance because of the reduced function overhead.

I'm not too sure whether V8 supports this natively; last I checked you needed to enable some flags, but it works in a port of Node.js that uses SpiderMonkey.

