
9 Common node.js Design Patterns and How to Use Them Effectively

Stephen Belanger
Software Developer, AppNeta

Common Patterns in node.js


From event emitters and streams to constructors and promises, patterns are central to node.js
development. While traditional design patterns exist in the JavaScript world, many have been retrofitted
and updated to take advantage of the asynchronous nature of node.js. Many new node.js developers
find themselves using patterns like Callbacks, Streams and Singletons as if they were coding in Java or
other languages. This leads to misuse and messy code that doesn't utilize the power node.js has to offer.
There are many ways to use the most common design patterns, but outlined below are common uses
and common mistakes that even seasoned developers make when starting out in node.js. To illustrate
these patterns, examples have been included alongside the explanations, and links have been added to
relevant docs on the nodejs.org site.
The common design patterns covered include:
Callbacks
Event Emitters
Streams
Singletons
Constructors
Middleware/Pipelining
Promises
Dependency Injection
Factory Functions

About the Author


Stephen is a contributor to node.js and founder of the node.js tracing
working group, which aspires to improve the tracing situation for
everyone. Stephen has worked in node.js for about 6 years and also writes
the node.js instrumentation for TraceView at AppNeta. Reach out to him on
GitHub at https://github.com/Qard.


Callbacks
Callbacks are possibly the most central pattern of node.js development. Most APIs in node core are
based on callbacks, so it is important to understand how they work.
A callback is a function, often anonymous, given to another function with the intent of calling it later. It looks
something like this:

keyValueStore.get('my-data', function (err, data) {
})
This is a particular variety of callback that node.js users often refer to as an errback. An errback always
takes an error as its first parameter, and subsequent parameters are used to pass whatever data the
interface is expected to return.
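When writing your own errback-accepting function, the convention is to call the callback with either an
error or a result, never both. A minimal sketch of what that looks like in practice (the `getConfig` helper
and its file path are hypothetical):

var fs = require('fs')

function getConfig (name, callback) {
  fs.readFile('/etc/myapp/' + name + '.json', function (err, data) {
    // Forward I/O errors to the caller instead of throwing
    if (err) return callback(err)
    var parsed
    try {
      parsed = JSON.parse(data)
    } catch (e) {
      // Parse errors are reported through the same error slot
      return callback(e)
    }
    callback(null, parsed)
  })
}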
Callback-accepting functions in node.js almost always expect errbacks, a notable exception being
fs.exists, which looks like this:

fs.exists('/etc/passwd', function (exists) {
  console.log(exists ? 'it\'s there' : 'no passwd!')
})
An important thing to understand about callbacks is that in node.js they are usually, but not always,
asynchronous.
Calling `fs.readFile` is async:

fs.readFile('something', function (err, data) {
  console.log('this is reached second')
})
console.log('this is reached first')
But calling `list.forEach` is sync:

var list = [1]
list.forEach(function (v) {
  console.log('this is reached first')
})
console.log('this is reached second')
Confusing, right? This trips up most people new to node.js. The way to understand it is that any code
which interacts with data outside of process memory should be async, because disks and networks are
slow and we don't want to wait for them.
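A related gotcha is an API that is sometimes synchronous and sometimes asynchronous, which can
surprise callers. One common remedy, when a result is already available, is to defer the callback with
`process.nextTick` (or `setImmediate`) so it is always asynchronous. A sketch, using a hypothetical
in-memory cache and a hypothetical `db.findUser`:

var cache = {}

function getUser (id, callback) {
  if (cache[id]) {
    // Defer so the callback is async even on a cache hit
    return process.nextTick(function () {
      callback(null, cache[id])
    })
  }
  db.findUser(id, function (err, user) {
    if (err) return callback(err)
    cache[id] = user
    callback(null, user)
  })
}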


Event Emitters
An event emitter is a special construct designed to allow an interface to designate many callbacks for
many different behaviors that may occur once, many times or even never. In node.js, event emitters
work by exposing a common API on an object, providing several functions for registering and triggering
these callbacks. This interface is often attached via inheritance.
Consider this example:
var EventEmitter = require('events').EventEmitter

var emitter = new EventEmitter()
emitter.on('triggered', function () {
  console.log('The event was triggered')
})
emitter.emit('triggered')
Event emitters are themselves synchronous, but the things they can be attached to sometimes are
not, which is another source of confusion regarding asynchrony in node.js. The span from calling
`emitter.emit(...)` to the triggering of the callback given to `emitter.on(...)` is synchronous, but the
object can be passed around and the emit function called at some point in the future.
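As mentioned above, the emitter interface is often attached via inheritance. The classic way to do that
in node.js is `util.inherits`. A minimal sketch, with a hypothetical `Job` type:

var EventEmitter = require('events').EventEmitter
var util = require('util')

function Job () {
  // Initialize the emitter internals on this instance
  EventEmitter.call(this)
}
util.inherits(Job, EventEmitter)

var job = new Job()
job.on('done', function () {
  console.log('the job is done')
})
job.emit('done')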

Streams
A stream is a special variety of event emitter designed specifically for consuming a sequence of data
events without having to buffer the entire sequence in memory. This is particularly useful in cases
where a sequence is infinite.
A common use for streams is reading files. Loading a large file into memory all at once does not scale
well, so streams let you operate on chunks of the file instead. To read a file as a stream,
you'd do something like this:
var fs = require('fs')

var file = fs.createReadStream('something')
file.on('error', function (err) {
  console.error('an error occurred', err)
})
file.on('data', function (data) {
  console.log('I got some data: ' + data)
})
file.on('end', function () {
  console.log('no more data')
})
The `data` event is actually part of the flowing or push mode introduced in the streams 1 API.
It pushes data through the pipe as fast as it can. Often what one really needs, to scale well, is a pull
stream, which can be done using the `readable` event and the `read(...)` function.


file.on('readable', function () {
  var chunk = file.read()
  if (chunk) {
    console.log('I got some data: ' + chunk)
  }
})
Note that you need to check if the chunk is null, as streams are null terminated.
Streams include extra functions on them beyond what event emitters provide and there are a few
different varieties of streams. There are Readable, Writable, Duplex, and Transform streams to cover
various forms of data access.
If you want to read from a file stream and write the output to a write stream,
all that event machinery might be a bit excessive. There's a handy function on streams called `pipe` that
automatically propagates the appropriate events from the source stream to the target stream. It's also
chainable, which is great for using Transform streams to interpret data protocols.
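For example, copying a file becomes a one-liner (the file names here are just illustrative):

var fs = require('fs')

fs.createReadStream('input.txt').pipe(fs.createWriteStream('output.txt'))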
A theoretical use of streams to parse a JSON array coming over the network might look something
like this:
socket.pipe(jsonParseStream()).pipe(eachObject(function (item) {
  console.log('got an item', item)
}))
The `jsonParseStream` would emit each element of the parsed array, as it reaches it, as an object-mode
stream. The `eachObject` stream would then receive each of those object-mode
events and perform some operation on them, in this case logging them.
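Neither `jsonParseStream` nor `eachObject` exists in core; they are placeholders for the kind of stream
you would write yourself or pull from npm. A rough sketch of what `eachObject` might look like, built
on an object-mode Transform stream:

var Transform = require('stream').Transform

function eachObject (fn) {
  var stream = new Transform({ objectMode: true })
  stream._transform = function (item, encoding, callback) {
    // Invoke the handler for each object that flows through
    fn(item)
    // Pass the object along so the stream stays chainable
    this.push(item)
    callback()
  }
  return stream
}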
It's important to understand that, on piped streams, error events are *not* propagated. You would
therefore need to write the previous example more like this, for safety:
socket
  .on('error', function (err) {
    console.error('a socket error occurred', err)
  })
  .pipe(jsonParseStream())
  .on('error', function (err) {
    console.error('a json parsing error occurred', err)
  })
  .pipe(eachObject(function (item) {
    console.log('got an item', item)
  }))
  .on('error', function (err) {
    console.error('an iteration error occurred', err)
  })

Singletons
A singleton is a single instance of an object. A basic singleton in JavaScript might look something like
this:
var linesSingleton = {
  content: '',
  push: function (line) {
    this.content += line + '\n'
  },
  forEach: function (fn) {
    this.content.split('\n').forEach(fn)
  }
}
They are also sometimes constructed as closures, when one wants to prevent access to private data:
var linesSingleton = (function () {
  var content = ''
  return {
    push: function (line) {
      content += line + '\n'
    },
    forEach: function (fn) {
      content.split('\n').forEach(fn)
    }
  }
})()
In some cases a singleton's simplicity can clean up your code; however, it's not a very common
pattern in node.js. Because a node.js process is long-lived and will serve many requests throughout
its lifetime, unique objects are generally needed for each request to interact with.
The module system in node.js, called CommonJS, is actually singleton-based. A module's content is
controlled via the `module` and `exports` objects inside the JavaScript file. Putting the above code into
a module would look like this:
var content = ''

module.exports = {
  push: function (line) {
    content += line + '\n'
  },
  forEach: function (fn) {
    content.split('\n').forEach(fn)
  }
}
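Because node.js caches a module after the first `require`, every file in the process that requires it
shares the same instance. Assuming the code above is saved as lines.js:

var lines = require('./lines')
lines.push('hello')

// Elsewhere in the same process, the state is shared
var sameLines = require('./lines')
sameLines.forEach(function (line) {
  console.log(line) // prints 'hello', then the trailing empty string
})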


Constructors
Constructors are very useful for creating an object type that can be instantiated uniquely, many times.
This is very important when you need to use an object type many times across many separate requests.
A constructor is simply a function, typically with shared methods on its `prototype`, that is called with
the `new` keyword.
function Person (name) {
  this.name = name
}

Person.prototype.greet = function () {
  return 'Hello, I am ' + this.name + '.'
}
Alternatively, the constructor could have been written without a prototype, something like this:
function makePerson (name) {
  return {
    greet: function () {
      return 'Hello, I am ' + name + '.'
    }
  }
}
Note that not using constructors can carry serious performance penalties. JavaScript virtual
machines are optimized to identify and compile prototype functions when the definition runs
rather than when an individual call runs, so using prototype functions can substantially reduce load
on the virtual machine.

Middleware/Pipelining
The middleware approach, also sometimes called pipelining, is similar to what streams do when using
`pipe`, but is generally used to build a flow of transformations on something that is not an event
emitter.
The most notable example of middleware usage in the node.js ecosystem is the express framework,
or more accurately the connect framework it originated from. It implements a `use` function to
attach handlers which do transformations on the request and response objects of an http request.
The interface looks something like this:
var connect = require('connect')
var http = require('http')

var app = connect()

app.use(function (req, res, next) {
  req.name = 'World'
  next()
})
app.use(function (req, res) {
  res.end('Hello, ' + req.name)
})

http.createServer(app).listen(3000)
If the `next` function is not called, the flow stops at that handler.
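The machinery behind this pattern is quite small. A stripped-down sketch of the idea itself (not
connect's actual implementation):

function pipeline () {
  var handlers = []

  function app (req, res) {
    var i = 0
    function next (err) {
      var handler = handlers[i++]
      // Stop on error or when we run out of handlers;
      // a real framework would route errors somewhere useful
      if (err || !handler) return
      handler(req, res, next)
    }
    next()
  }

  app.use = function (handler) {
    handlers.push(handler)
    return app
  }

  return app
}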

Promises
With ES6 comes a new construct called a Promise. These are used to encapsulate a resolvable or
rejectable behavior that may be sync or async, but will not continue execution until a future iteration
of the event loop, for consistency. The interface includes a `then` function, which takes callbacks to
receive the eventual value or error. You can also use the `catch` function to assign only an error
handler.
The promise API is chainable, and each `then` or `catch` function produces a new promise
representing the result of the previous handler. The handlers can also return more promises that will
get resolved before the chain continues.
Here's an example of how all this works:

function delay (n) {
  return new Promise(function (resolve, reject) {
    setTimeout(function () {
      resolve(n)
    }, n)
  })
}

var promise = delay(100)
  .then(function (n) {
    return delay(n)
  })
  .then(function (n) {
    console.log('this was delayed for ' + (n * 2) + 'ms')
  })
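Rejections propagate down the chain until a handler is attached, so a single `catch` at the end can
pick up an error thrown at any step. Reusing the `delay` function above:

delay(100)
  .then(function (n) {
    throw new Error('something went wrong')
  })
  .then(function () {
    console.log('this is never reached')
  })
  .catch(function (err) {
    console.error('caught:', err.message)
  })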

Dependency Injection
New users coming from more structure-heavy languages or platforms are often familiar with the
concept of Dependency Injection, and tend to reach for it too often in node.js. While node.js *can*
do it, it's not generally needed, given the flexibility of JavaScript. Sometimes, however, it can become
a necessary tool for managing the complexity of a growing codebase. There are many different ways to
manage dependency injection in node.js. Most are overly verbose or not very performant, but it can
work well if you are careful about it.


The general idea of dependency injection is that segments of code can be modularized, but can
depend on a specific set of other modules that the environment should provide somehow. Does this
sound familiar? It should, because this is exactly what CommonJS does. Layering more dependency
management over top of node.js can feel kind of redundant.
Sometimes you need more than a simple `require` call can provide, though, and that is when
dependency injection starts to really make sense. Consider, for example, a complicated hierarchy of
connected code that needs to interact with each other. If you want to write tests for that code, you
may need to mock the objects, but managing the connections between them may be very complicated.
With dependency injection, it can be as easy as registering a mock object under the same name
the real one would have used, leaving the rest of the code blissfully ignorant that it is not interacting
with what it thinks it is.
The various implementations of DI in node.js vary wildly in how they work. Some parse a function as
a string and use the argument names to identify what to look for, others use an `$inject` property on
the function, and others yet use a sort of pattern matching on object keys for identity. All are valid
approaches, but each needs to be carefully considered. Parsing a function as a string is expensive, so
it shouldn't be done in a hot code path. The `$inject` style requires some extra manual effort, but works
well in most cases. Lastly, the pattern-matching approach, while very flexible, can also be prone to
edge cases.
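To make the `$inject` style concrete, here is a tiny registry sketch; it is not any particular library's
API, just an illustration of the idea:

var registry = {}

function register (name, value) {
  registry[name] = value
}

function inject (fn) {
  // Look up each declared dependency by name and apply them as arguments
  var deps = (fn.$inject || []).map(function (name) {
    return registry[name]
  })
  return fn.apply(null, deps)
}

// In tests, a mock can be registered under the same name the real module used
register('db', {
  findUser: function (id, callback) {
    callback(null, { id: id, name: 'mock user' })
  }
})

function handler (db) {
  db.findUser(1, function (err, user) {
    console.log('found', user.name)
  })
}
handler.$inject = ['db']

inject(handler)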

Factory Functions
Factory functions return instances of a constructor without needing the `new` keyword. Often this
is just a convenience, but it can be useful at times.
The most common use of factory functions in node.js core is `http.createServer`, which actually just
calls `new http.Server`:
var http = require('http')

function hello (req, res) {
  res.end('hello')
}

// This...
var server = http.createServer(hello)

// is the same as...
var server = new http.Server(hello)
There's not a huge benefit there, but it can have advantages in more complex hierarchies. Consider a
connection instance and an interface for interacting with a specific database:


function Database (connection, name) {
  // Here, the database interface can store the
  // connection instance and use it later
  this.connection = connection
  this.name = name
}

Connection.prototype.database = function (name) {
  return new Database(this, name)
}

// This...
var database = connection.database('test')

// is a little friendlier than this...
var database = new Database(connection, 'test')

Wrap Up
While not an exhaustive list, the patterns above serve as good examples of common practices
in node.js, with a few notes on where they can be misused and how to use them more effectively. For
more information on node.js, and extensive API documentation, check out https://nodejs.org/.

ABOUT APPNETA

AppNeta is the only application performance monitoring (APM) company to provide solutions for all applications: applications you develop internally, business-critical SaaS applications you use, and the networks that deliver them. AppNeta's SaaS-based solutions give Development, DevOps and IT Operations teams essential performance data to see across their web, mobile and cloud-delivered application environments, as well as pinpoint tough performance bottlenecks. With AppNeta, customers have all of the performance data they need to assure continual and exceptional delivery of business-critical applications and end-user experience. For more information, visit www.appneta.com.

