If you are installing Node.js on Linux to use it in production, there is a good chance that you are using the NodeSource Node.js Binary Distributions.
In this talk you can learn how the NodeSource Node.js Binary Distributions are updated, how new versions are supported, the human and infrastructure processes involved, and some limitations of maintaining the channel. Also, and most importantly, how the community can get involved with this project.
In this volume of Need to Node, you can find the latest news on Deno, a recording of our webinar ‘New and Exciting Features to Land in Node.js version 14’, and ‘JavaScript features to forget’.
Need to Node is a weekly bulletin designed to keep you up-to-date with the latest news on the Node.js project, events and articles. You are always welcome to collaborate and participate. Please let us know if we missed a piece of content you think should be included!
Deno 1.0 Released. Some of the most important features are covered in the announcement - find out more here!
‘JavaScript features to forget’: rest parameters (...args) in ES6 replaced the arguments object, we don’t use document.write() anymore or join() to concatenate strings, and template literals are much better.
‘New and Exciting Features to Land in Node.js version 14’ covers String.prototype.matchAll, dynamic import(), Promise.allSettled, and Optional Chaining, among others. Check it out!
If you find any Node.js or JavaScript related content over the next week (or beyond!), never hesitate to reach out to us on Twitter at @NodeSource to share and get it included in Need to Node - our DMs are open if you don’t want to share publicly!
You've written the next great application in Node, and you are ready to unleash it upon the world. That means you can no longer run it on your laptop; you're going to have to put it up on a server somewhere and connect it to the real Internet. Eek.
There are a lot of different ways to run an app in production. This post is going to cover the specific case of running something on a "standard" Linux server that uses systemd, which means that we are not going to be talking about using Docker, AWS Lambda, Heroku, or any other sort of managed environment. It's just going to be you, your code, and a terminal with an ssh session, my friend.
Before we get started, though, let's talk for just a brief minute about what systemd actually is and why you should care.
What Is systemd Anyway?
The full answer to this question is big, as in, "ginormous" sized big. So we're not going to try to answer it fully, since we want to get on to the part where we can launch our app. What you need to know is that systemd is a thing that runs on "new-ish" Linux servers and is responsible for starting / stopping / restarting programs for you. If you install mysql, for example, and whenever you reboot the server you find that mysql is already running, that happens because systemd knows to turn mysql on when the machine boots up.
This systemd machinery has replaced older systems such as init and upstart on "new-ish" Linux systems. There is a lot of arguably justified angst in the world about exactly how systemd works and how intrusive it is to your system. We're not here to discuss that, though. If your system is "new-ish", it's using systemd, and that's what we're all going to be working with for the foreseeable future.
What does "new-ish" mean specifically? If you are using any of the following, you are using systemd:
I'm going to assume you have a fresh installation of Ubuntu Xenial to work with, and that you have set up a default user named ubuntu that has sudo privileges. This is what the default will be if you spin up a Xenial instance in Amazon EC2. I'm using Xenial because it is currently the newest LTS (Long Term Support) version available from Canonical. Ubuntu Yakkety is available now, and is even newer, but Xenial is quite up-to-date at the time of this writing and will be getting security updates for many years to come because of its LTS status.
Use ssh with the ubuntu user to get into your server, and let's install Node.
$ sudo apt-get -y install curl
$ curl -sL https://deb.nodesource.com/setup_6.x | sudo bash -
$ sudo apt-get -y install nodejs
Next let's create an app and run it manually. Here's a trivial app I've written that simply echoes out the user's environment variables.
const http = require('http');
const hostname = '0.0.0.0';
const port = process.env.NODE_PORT || 3000;
const env = process.env;
const server = http.createServer((req, res) => {
res.statusCode = 200;
res.setHeader('Content-Type', 'text/plain');
for (var k in env) {
res.write(k + ": " + env[k] + "\n");
}
res.end();
});
server.listen(port, hostname, () => {
console.log("Server running at http://" + hostname + ":" + port + "/");
});
Using your text editor of choice (which should obviously be Emacs, but I suppose it's a free country if you want to use something inferior), create a file called hello_env.js in the user's home directory /home/ubuntu with the contents above. Next, run it with
$ /usr/bin/node /home/ubuntu/hello_env.js
You should be able to go to
http://11.22.33.44:3000
in a web browser now, substituting 11.22.33.44 with the actual IP address of your server, and see a printout of the environment variables for the ubuntu user. If that is in fact what you see, great! We know the app runs, and we know the command needed to start it up. Go ahead and press Ctrl-c to shut down the application. Now we'll move on to the systemd parts.
The systemd Service File
The "magic" that's needed to make systemd start working for us is a text file called a service file. I say "magic" because, for whatever reason, this seems to be the part that people get blocked on when they are going through this process. Fortunately, it's much less difficult and scary than you might think.
We will be creating a file in a "system area" where everything is owned by the root user, so we'll be executing a bunch of commands using sudo. Again, don't be nervous; it's really very straightforward.
The service files for the things that systemd controls all live under the directory path
/lib/systemd/system
so we'll create a new file there. If you're using Nano as your editor, open up a new file there with:
sudo nano /lib/systemd/system/hello_env.service
and put the following contents in it:
[Unit]
Description=hello_env.js - making your environment variables rad
Documentation=https://example.com
After=network.target
[Service]
Environment=NODE_PORT=3001
Type=simple
User=ubuntu
ExecStart=/usr/bin/node /home/ubuntu/hello_env.js
Restart=on-failure
[Install]
WantedBy=multi-user.target
Let's go ahead and talk about what's in that file. In the [Unit] section, the Description and Documentation variables are obvious. What's less obvious is the part that says
After=network.target
That tells systemd that if it's supposed to start our app when the machine boots up, it should wait until after the main networking functionality of the server is online to do so. This is what we want, since our app can't bind to NODE_PORT until the network is up and running.
Moving on to the [Service] section, we find the meat of today's project. We can specify environment variables here, so I've gone ahead and put in:
Environment=NODE_PORT=3001
so our app, when it starts, will be listening on port 3001. This is different from the default 3000 that we saw when we launched the app by hand. You can specify the Environment directive multiple times if you need multiple environment variables. Next is
Type=simple
which tells systemd how our app launches itself. Specifically, it lets systemd know that the app won't try to fork itself to drop user privileges or anything like that. It's just going to start up and run. After that we see
User=ubuntu
which tells systemd that our app should be run as the unprivileged ubuntu user. You definitely want to run your apps as unprivileged users so that attackers can't aim at something running as the root user.
The last two parts here are maybe the most interesting to us:
ExecStart=/usr/bin/node /home/ubuntu/hello_env.js
Restart=on-failure
First, ExecStart tells systemd what command it should run to launch our app. Then, Restart tells systemd under what conditions it should restart the app if it sees that it has died. The on-failure value is likely what you will want. Using this, the app will NOT restart if it goes away "cleanly". Going away "cleanly" means that it either exits by itself with an exit value of 0, or it gets killed with a "clean" signal, such as the default signal sent by the kill command. Basically, if our app goes away because we want it to, then systemd will leave it turned off. However, if it goes away for any other reason (an unhandled exception crashes the app, for example), then systemd will immediately restart it for us. If you want it to restart no matter what, change the value from on-failure to always.
Last is the [Install] stanza. We're going to gloss over this part, as it's not very interesting. It tells systemd how to handle things if we want to start our app on boot, and you will probably want to use the values shown for most things until you are a more advanced systemd user.
Using systemctl To Control Our App
The hard part is done! We will now learn how to use the system-provided tools to control our app. To begin with, enter the command
$ sudo systemctl daemon-reload
You have to do this whenever any of the service files change at all so that systemd picks up the new info.
Next, let's launch our app with
$ sudo systemctl start hello_env
After you do this, you should be able to go to
http://11.22.33.44:3001
in your web browser and see the output. If it's there, congratulations, you've launched your app using systemd! If the output looks very different from how it looked when you launched the app manually, don't worry; that's normal. When systemd kicks off an application, it does so from a much more minimal environment than the one you have when you ssh into a machine. In particular, the $HOME environment variable may not be set by default, so be sure to pay attention to this if your app makes use of any environment variables. You may need to set them yourself when using systemd.
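If your app does rely on variables like $HOME, one option is to set them explicitly in the service file alongside NODE_PORT. A sketch (the path assumes the ubuntu user from this walkthrough):

```
[Service]
Environment=NODE_PORT=3001
Environment=HOME=/home/ubuntu
```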
You may be interested in what state systemd thinks the app is in, and if so, you can find out with
$ sudo systemctl status hello_env
Now, if you want to stop your app, the command is simply
$ sudo systemctl stop hello_env
and unsurprisingly, the following will restart things for us
$ sudo systemctl restart hello_env
If you want to make the application start up when the machine boots, you accomplish that by enabling it
$ sudo systemctl enable hello_env
and finally, if you previously enabled the app, but you change your mind and want to stop it from coming up when the machine starts, you correspondingly disable it
$ sudo systemctl disable hello_env
That concludes today's exercise. There is much, much more to learn and know about systemd, but this should help get you started with some basics. In a follow-up blog post, we will learn how to launch multiple instances of our app and load balance them behind Nginx to illustrate a more production-ready example.
This article was first published on the NodeSource blog in November 2016.
This blog post was first published in March 2017. Find out more here.
For Node.js developers, npm is an everyday tool. It's literally something we interact with multiple times on a daily basis, and it's one of the pieces of the ecosystem that's led to the success of Node.js.
One of the most useful, important, and enabling aspects of the npm CLI is that it's highly configurable. It provides an enormous amount of configurability that enables everyone from huge enterprises to individual developers to use it effectively.
One part of this high configurability is the .npmrc file. For a long time I'd seen discussion about it - the most memorable being the time I thought you could change the name of the node_modules directory with it. For a long time, I didn't truly understand just how useful the .npmrc file could be, or how to even use it.
So, today I've collected a few of the optimizations that .npmrc allows that have been awesome for speeding up my personal workflow when scaffolding out Node.js modules and working on applications long-term.
Automating npm init Just a Bit More
When you're creating a new module from scratch, you'll typically start out with the npm init command. One thing that some developers don't know is that you can actually automate this process fairly heftily with a few choice npm config set ... commands that set default values for the npm init prompts.
You can easily set your name, email, URL, license, and initial module version with a few commands:
npm config set init.author.name "Hiro Protagonist"
npm config set init.author.email "hiro@snowcrash.io"
npm config set init.author.url "http://hiro.snowcrash.io"
npm config set init.license "MIT"
npm config set init.version "0.0.1"
In the above example, I've set up some defaults for Hiro. This personal information won't change too frequently, so setting up some defaults is helpful and allows you to skip over entering the same information manually every time.
Additionally, the above commands set up two defaults that are related to your module.
The first default is the initial license that will be automatically suggested by the npm init command. I personally like to default to MIT, and much of the rest of the Node.js ecosystem does the same. That said, you can set this to whatever you'd like - it's a nice optimization to be able to nearly automatically select your license of choice.
The second default is the initial version. This is actually one that made me happy, as whenever I tried building out a module I never wanted it to start out at version 1.0.0, which is what npm init defaults to. I personally set it to 0.0.1 and then increment the version as I go with the npm version [ major | minor | patch ] command.
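For reference, the three npm version levels bump the three semver fields like this; the sketch below mirrors that logic in plain JavaScript (a simplification of what npm actually does):

```javascript
// A rough sketch of how `npm version <level>` bumps a semver string.
function bump(version, level) {
  let [maj, min, pat] = version.split('.').map(Number);
  if (level === 'major') { maj += 1; min = 0; pat = 0; }
  else if (level === 'minor') { min += 1; pat = 0; }
  else { pat += 1; } // 'patch'
  return [maj, min, pat].join('.');
}

console.log(bump('0.0.1', 'patch')); // 0.0.2
console.log(bump('0.0.2', 'minor')); // 0.1.0
console.log(bump('0.1.0', 'major')); // 1.0.0
```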
As time moves forward, we're seeing more options for registries arise. For example, you may want to set your registry to a cache of the modules you know you need for your apps. Or, you may be using Certified Modules as a custom npm registry. There's even a separate registry for Yarn, a topic that is both awesome and totally out of scope for this post.
So, if you'd like to set a custom registry, you can run a pretty simple one-line command:
npm config set registry "https://my-custom-registry.registry.nodesource.io/"
In this example, I've set the registry URL to an example of a Certified Modules registry - that said, the exact URL in the command can be replaced with any registry that's compatible. To reset your registry back to the default npm registry, you can simply run the same command pointing to the standard registry:
npm config set registry "https://registry.npmjs.com/"
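Either way, npm records the setting in your ~/.npmrc file. A sketch of what that file might contain after running the config commands above (values are illustrative):

```
registry=https://registry.npmjs.com/
init.author.name=Hiro Protagonist
init.license=MIT
init.version=0.0.1
```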
npm install with loglevel
When you npm install, a bunch of information gets piped to you. By default, the npm command line tool limits how much of this information is actually output into the console when installing. There are varying degrees of output that you can assign at install, or by default, if you change it with npm config in your .npmrc file. The options, from least to most output, are: silent, error, warn, http, info, verbose, and silly.
Here's an example of the silent loglevel, and here's an example of the silly loglevel (both shown as screenshots in the original post).
If you'd like to get a bit more information (or a bit less, depending on your preferences) when you npm install, you can change the default loglevel.
npm config set loglevel="http"
If you tinker around with this config a bit and would like to reset to what the npm CLI currently defaults to, you can run the above command with warn as the loglevel:
npm config set loglevel="warn"
This is a really awesome change - it has a few steps, but is really worth it. With a few commands, you can change where the npm CLI installs global modules by default. Normally, it installs them to a privileged system folder - this requires administrative access, meaning that a global install requires sudo access on UNIX-based systems.
If you change the default global prefix for npm to an unprivileged directory, for example ~/.global-modules, you'll not need to authenticate when you install a global module. That's one benefit - another is that globally installed modules won't be in a system directory, reducing the likelihood of a malicious module (intentionally or not) doing something you didn't want it to on your system.
To get started, we're going to create a new folder called ~/.global-modules and set the npm prefix to it:
mkdir ~/.global-modules
npm config set prefix "~/.global-modules"
Next, if you don't already have a file called ~/.profile, create one in your home directory. Now, add the following line to the ~/.profile file:
export PATH=~/.global-modules/bin:$PATH
Adding that line to the ~/.profile file will add the global-modules directory to your PATH, and enable you to use it for npm global modules.
Now, flip back over to your terminal and run the following command to update the PATH with the newly updated file:
source ~/.profile
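To double-check the result, you can confirm that the new bin directory now sits at the front of your PATH (the directory name assumes the setup above):

```shell
# Simulate the PATH update from ~/.profile and print the first entry,
# which should be the new global bin directory.
export PATH="$HOME/.global-modules/bin:$PATH"
echo "$PATH" | cut -d: -f1
```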
If you'd like to keep reading about Node.js, npm, configuration options, and development with the Node.js stack, I've got some fantastic articles for you.
Our most recent guide is a deep-dive into the core concepts of the package.json file. You'll find a ton of info about package.json in there, including a ton of super helpful configuration information. We also published an absolute beginner's guide to npm that you may be interested in reading - even though it's a beginner's guide, I'd bet you'll find something useful in it.
With this article, the intent was to help you set up a great configuration for Node.js development. If you'd like to take the leap and ensure that you're always on a rock-solid platform when developing and deploying your Node.js apps, check out NodeSource Certified Modules - it's a new tool we launched last week that will help enable you to spend more time building apps and less time worrying about modules.
This blog post was first published in March 2017. Find out more here.
Using npm effectively can be difficult. There are a ton of features built-in, and it can be a daunting task to try to approach learning them.
Personally, even learning and using just one of these tricks (npm prune, which is #4) saved me from getting rid of unused modules manually by deleting node_modules and re-installing everything with npm install. As you can probably imagine, that was insanely stressful.
We've compiled this list of 11 simple-to-use npm tricks that will allow you to speed up development using npm, no matter what project you're working on.
Run: npm home $package
Running the home command will open the homepage of the package you're running it against. Running it against the lodash package will bring you to the Lodash website. This command can run without needing to have the package installed globally on your machine or within the current project.
Run: npm repo $package
Similar to home, the repo command will open the GitHub repository of the package you're running it against. Running it against the express package will bring you to the official Express repo. Also like home, you don't need to have the package installed.
Run: npm outdated
You can run the outdated command within a project, and it will check the npm registry to see if any of your packages are outdated. It will print out a list in your command line of the current version, the wanted version, and the latest version.
Run: npm prune
When you run prune, the npm CLI will run through your package.json and compare it to your project's /node_modules directory. It will print a list of modules that aren't in your package.json.
The npm prune command then strips out those packages, and removes any you haven't manually added to package.json or that were npm installed without using the --save flag.
Update: Thanks to @EvanHahn for noticing a personal config setting that made npm prune provide a slightly different result than the default npm would provide!
Run: npm shrinkwrap
Using shrinkwrap in your project generates an npm-shrinkwrap.json file. This allows you to pin the dependencies of your project to the specific versions you're currently using within your node_modules directory. When you run npm install and there is an npm-shrinkwrap.json present, it will override the listed dependencies and any semver ranges in package.json.
If you need verified consistency across package.json, npm-shrinkwrap.json, and node_modules for your project, you should consider using npm-shrinkwrap.
Run: npm install -g npm@3
Installing npm@3 globally with npm will update your npm v2 to npm v3, including on the Node.js v4 LTS release ("Argon"), which ships with the npm v2 LTS release. This will install the latest stable release of npm v3 within your v4 LTS runtime.
npm install -g without needing sudo
Run: npm config set prefix $dir
After running the command, where $dir is the directory you want npm to install your global modules to, you won't need to use sudo to install modules globally anymore. The directory you use in the command becomes your global bin directory.
The only caveat: you will need to make sure you adjust your user permissions for that directory with chown -R $USER $dir and add $dir/bin to your PATH.
Run: npm config set save-prefix="~"
The tilde (~) is more conservative than the caret (^), which npm defaults to when installing a new package with the --save or --save-dev flags. The tilde pins the dependency to the minor version, allowing patch releases to be installed with npm update. The caret pins the dependency to the major version, allowing minor releases to be installed with npm update.
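The difference is easiest to see in code. Below is a rough sketch of the tilde and caret rules in plain JavaScript - real range matching is done by npm's semver package, so treat this as an approximation of the common case:

```javascript
// Simplified semver range check: does `candidate` satisfy
// `prefix + base` (e.g. "~1.2.3" or "^1.2.3")? Ignores prereleases
// and the special-casing of 0.x versions that real semver applies.
function allowedBy(prefix, base, candidate) {
  const [bMaj, bMin, bPat] = base.split('.').map(Number);
  const [cMaj, cMin, cPat] = candidate.split('.').map(Number);
  if (cMaj !== bMaj) return false;
  if (prefix === '~') {
    // tilde: same major.minor, patch may move forward
    return cMin === bMin && cPat >= bPat;
  }
  // caret: same major, minor/patch may move forward
  return cMin > bMin || (cMin === bMin && cPat >= bPat);
}

console.log(allowedBy('~', '1.2.3', '1.2.9')); // true  (patch bump)
console.log(allowedBy('~', '1.2.3', '1.3.0')); // false (minor bump)
console.log(allowedBy('^', '1.2.3', '1.3.0')); // true  (minor bump)
console.log(allowedBy('^', '1.2.3', '2.0.0')); // false (major bump)
```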
Strip your project's devDependencies for a production environment
When your project is ready for production, make sure you install your packages with the added --production flag. The --production flag installs your dependencies, ignoring your devDependencies. This ensures that your development tooling and packages won't go into the production environment.
Additionally, you can set your NODE_ENV environment variable to production to ensure that your project's devDependencies are never installed.
Be careful when using .npmignore
If you haven't been using .npmignore, it defaults to .gitignore with a few additional sane defaults.
What many don't realize is that once you add a .npmignore file to your project, the .gitignore rules are (ironically) ignored. The result is that you will need to audit and keep the two ignore files in sync to prevent sensitive leaks when publishing.
npm init with defaults
When you run npm init in a new project, you're able to go through and set up your package.json's details. If you want to set defaults that npm init will always use, you can use the config set command with some extra arguments:
npm config set init.author.name $name
npm config set init.author.email $email
If, instead, you want to completely customize your init script, you can point to a self-made default init script by running:
npm config set init-module ~/.npm-init.js
Here’s a sample script that prompts for private settings and creates a GitHub repo if you want. Make sure you change the default GitHub username (YOUR_GITHUB_USERNAME) used as the fallback for the GITHUB_USERNAME environment variable.
var cp = require('child_process');
var priv;
var USER = process.env.GITHUB_USERNAME || 'YOUR_GITHUB_USERNAME';
module.exports = {
name: prompt('name', basename || package.name),
version: '0.0.1',
private: prompt('private', 'true', function(val){
return priv = (typeof val === 'boolean') ? val : !!val.match('true')
}),
create: prompt('create github repo', 'yes', function(val){
val = val.indexOf('y') !== -1 ? true : false;
if(val){
console.log('enter github password:');
cp.execSync("curl -u '"+USER+"' https://api.github.com/user/repos -d " +
"'{\"name\": \""+basename+"\", \"private\": "+ ((priv) ? 'true' : 'false') +"}' ");
cp.execSync('git remote add origin '+ 'https://github.com/'+USER+'/' + basename + '.git');
}
return undefined;
}),
main: prompt('entry point', 'index.js'),
repository: {
type: 'git',
url: 'git://github.com/'+USER+'/' + basename + '.git' },
bugs: { url: 'https://github.com/'+USER+'/' + basename + '/issues' },
homepage: "https://github.com/"+USER+"/" + basename,
keywords: prompt(function (s) { return s.split(/\s+/) }),
license: 'MIT',
cleanup: function(cb){
cb(null, undefined)
}
}
If you want to learn more about npm, Node.js, JavaScript, Docker, Kubernetes, Electron, and tons more, you should follow @NodeSource on Twitter. We're always around and would love to hear from you!
Application containers have emerged as a powerful tool in modern software development. Lighter and more resource efficient than traditional virtual machines, containers offer IT organizations new opportunities in version control, deployment, scaling, and security.
This post will address what exactly containers are, why they are proving to be so advantageous, how people are using them, and best practices for containerizing your Node.js applications with Docker.
Put simply, containers are running instances of container images. Images are layered alternatives to virtual machine disks that allow applications to be abstracted from the environment in which they are actually being run. Container images are executable, isolated software with access to the host's resources, network, and filesystem. These images are created with their own system tools, libraries, code, runtime, and associated dependencies hardcoded. This allows for containers to be spun up irrespective of the surrounding environment. This everything-it-needs approach helps silo application concerns, providing improved systems security and a tighter scope for debugging.
Unlike traditional virtual machines, container images give each of their instances shared access to the host operating system through a container runtime. This shared access to the host OS resources enables performance and resource efficiencies not found in other virtualization methods.
Imagine a container image that requires 500 mb. In a containerized environment, this 500 mb can be shared between hundreds of containers, assuming they are all running the same base image. VMs, on the other hand, would need that 500 mb per virtual machine. This makes containers much more suitable for horizontal scaling and resource-restricted environments.
The lightweight and reproducible nature of containers have made them an increasingly favored option for organizations looking to develop software applications that are scalable, highly available, and version controlled.
Containers offer several key advantages to developers:
Not all applications and organizations are going to have the same infrastructure requirements. The aforementioned benefits of containers make them particularly adept at addressing the following needs:
For teams working to practice ‘infrastructure as code’ and seeking to embrace the DevOps paradigm, containers offer unparalleled opportunities. Their portability, resistance to configuration drift, and quick boot time make containers an excellent tool for quickly and reproducibly testing different code environments, regardless of machine or location.
A common phrase in microservice development is “do one thing and do it well,” and this aligns tightly with application containers. Containers offer a great way to wrap microservices and isolate them from the wider application environment. This is very useful when wanting to update specific (micro-)services of an application suite without updating the whole application.
Containers make it easy to roll out multiple versions of the same application. When coupled with incremental rollouts, containers can keep your application in a dynamic, responsive state to testing. Want to test a new performance feature? Spin up a new container, add some updates, route 1% of traffic to it, and collect user and performance feedback. As the changes stabilize and your team decides to apply it to the application at large, containers can make this transition smooth and efficient.
Because of application containers' suitability for focused application environments, Node.js is arguably the best runtime for containerization.
Docker is a layered filesystem for shipping images, and allows organizations to abstract their applications away from their infrastructure.
With Docker, images are generated via a Dockerfile. This file provides configurations and commands for programmatically generating images.
Each Docker command in a Dockerfile adds a ‘layer’. The more layers, the larger the resulting container.
Here is a simple Dockerfile example:
1 FROM node:8
2
3 WORKDIR /home/nodejs/app
4
5 COPY . .
6 RUN npm install --production
7
8 CMD ["node", "index.js"]
The FROM
command designates the base image that will be used; in this case, it is the image for Node.js 8 LTS release line.
The RUN command takes bash commands as its arguments. On line 3, the WORKDIR command creates the directory for the Node.js application (if it doesn't already exist) and lets Docker know that the working directory for every command after line 3 is going to be the application directory.
Line 5 copies everything in the current directory into the current directory of the image, which is /home/nodejs/app, previously set by the WORKDIR command on line 3. On line 6, we are setting up the production install.
Finally, on line 8, we pass Docker a command and argument to run the Node.js app inside the container.
The above example provides a basic, but ultimately problematic, Dockerfile.
In the next section we will look at some Dockerfile best practices for running Node.js in production.
Don't run as root
Make sure the application running inside the Docker container is not being run as root.
1 FROM node:8
2
3 RUN groupadd -r nodejs && useradd -m -r -g nodejs -s /bin/bash nodejs
4
5 USER nodejs
6
7 ...
In the above example, a few lines of code have been added to the original Dockerfile example to pull down the image of the latest LTS version of Node.js, as well as add and set a new user, nodejs. This way, in the event that a vulnerability in the application is exploited and someone manages to get into the container at the system level, at best they are the nodejs user, which does not have root permissions and does not exist on the host.
Docker builds each line of a Dockerfile individually. This forms the 'layers' of the Docker image. As an image is built, Docker caches each layer.
7 ...
8 WORKDIR /home/nodejs/app
9
10 COPY package.json .
12 RUN npm install --production
13 COPY . .
14
15 CMD ["node", "index.js"]
16 ...
On line 10 of the above Dockerfile, the package.json file is copied to the working directory established on line 8. After the npm install on line 12, line 13 copies the entire current directory into the working directory (the image).
If no changes are made to your package.json, Docker won't rebuild the npm install image layer, which can dramatically improve build times.
It’s important to explicitly set any environment variables that your Node.js application will be expecting to remain constant throughout the container lifecycle.
12 ...
13 COPY . .
14
15 ENV NODE_ENV production
16
17 CMD ["node", "index.js"]
18
With aims of comprehensive image and container services, DockerHub “provides a centralized resource for container image discovery, distribution and change management, user and team collaboration, and workflow automation throughout the development pipeline.”
To link the Docker CLI to your DockerHub account, use docker login:
docker login [OPTIONS] [SERVER]
Docker runs its builds inside of a sandbox, and this sandbox environment doesn't have access to information like ssh keys or npm credentials. To bypass this constraint, there are a couple of recommended options available to developers:
Tags help differentiate between different versions of images. Tags can be used to identify builds, teams that are working on the image, and literally any other designation that is useful to an organization for managing development of and around images. If no tag is explicitly added, Docker will assign a default tag of latest after running docker build. As a tag, latest is okay in development, but can be very problematic in staging and production environments.
To avoid the problems around latest, be explicit with your build tags. Here is an example script assigning tags with environment variables for the build's git sha, branch name, and build number, all three of which can be very useful in versioning, debugging, and deployment management:
#!/bin/sh
docker tag helloworld:latest yourorg/helloworld:$SHA1
docker tag helloworld:latest yourorg/helloworld:$BRANCH_NAME
docker tag helloworld:latest yourorg/helloworld:build_$BUILD_NUM
Read more on tagging here.
Containers are designed to be lightweight and map well at the process level, which helps keep process management simple: if the process exits, the container exits. However, this 1:1 mapping is an idealization that is not always maintained in practice.
As Docker containers do not come with a process manager included, add a tool for simple process management.
dumb-init from Yelp is a simple, lightweight process supervisor and init system designed to run as PID 1
inside container environments. PID 1
is normally assigned to the init process of a running Linux system, and it comes with kernel-signaling idiosyncrasies that complicate process management. dumb-init provides a level of abstraction that allows it to act as a signal proxy, ensuring expected process behavior.
A principal advantage of containers is that they provide only what is needed. Keep this in mind when adding layers to your images.
Here is a checklist for what to include when building container images:
That’s it.
Containers are a modern virtualization solution best-suited for infrastructures that call for efficient resource sharing, fast startup times, and rapid scaling.
Application containers are being used by DevOps organizations working to implement “infrastructure as code,” teams developing microservices and relying on distributed architectures, and QA groups leveraging strategies like A/B testing and incremental rollouts in production.
Just as the recommended approach for single-threaded Node.js is 1 process: 1 application, best practice for application containers is 1 process: 1 container. This mirrored relationship arguably makes Node.js the most suitable runtime for container development.
Docker is an open platform for developing, shipping, and running containerized applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly. When using Docker with Node.js, keep in mind:
Don’t run the application as the root user inside the container
Copy package.json into the image before the rest of your source, so Docker can cache the node_modules install layer
If you’re interested in deploying Node.js applications within Docker containers, you may be interested in N|Solid. We work to make sure Docker is a first-class citizen for enterprise users of Node.js who need insight and assurance for their Node.js deployments.
Deploying N|Solid with Docker is as simple as changing your FROM
statement!
If you’d like to tune into the world of Node.js, Docker, Kubernetes, and large-scale Node.js deployments, be sure to follow us at @NodeSource on Twitter.
As part of the NodeSource Support team, I spend much of my time helping our customers analyze and resolve complex issues in Node.js. While factors like architecture and environment mean that some issues are quite unique, there are some familiar struggles that we’ve seen repeatedly from a wide variety of customers. I’ve listed a few of these common and relatively easy-to-avoid challenges below, along with our recommended strategy for avoiding (or resolving) these issues, as I think this information could help more teams working with Node.js avoid major headaches.
Issue
The Support team frequently sees questions about the most effective way to share components, models, and/or libraries between projects. In some cases, our customers are already using Flow and Stampit, which are useful tools for React components, and they’re looking for less complex tools built for Node.js codebases.
Answer
When this question comes up, we usually recommend turning each component (or model, or library) into a module and listing these in each project’s package.json
file. This allows teams to share code across unified codebases by re-using localized modules.
Importing these components to a project can be accomplished with a fairly simple addition to the project’s package.json
file:
"db-models": "file:../mainproject/models",
To use this approach, make sure you are using npm@5, or use the linklocal package for earlier npm versions.
Issue
Many teams have web scans to identify and analyze cookie violations in their Node.js environments when Express is also part of their tech stack. Some of the most common cookie violations found are:
httpOnly
flag
secure
flag: if set to true, “the browser will not send a cookie with the secure flag set over an unencrypted HTTP request”
We’re frequently asked how best to set the httpOnly
and secure
flags for cookies, and whether that can be done at the server level.
Answer
The default cookie settings in Express aren’t highly secure; however, these settings can be manually tightened to enhance security - for both an application and its users.
httpOnly
- should be set to true; this flags cookies as accessible only by the issuing web server, which helps prevent session hijacking.secure
- should be set to true, which requires TLS/SSL, so that the cookie is only used with HTTPS requests and never with insecure HTTP requests.
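As a rough sketch, the hardened settings above can be expressed as an options object passed to Express's res.cookie() (the sameSite and maxAge values here are illustrative additions, not recommendations from this post):

```javascript
// Hardened cookie options as they might be passed to res.cookie()
// or to express-session's cookie config (values are illustrative).
const secureCookieOptions = {
  httpOnly: true,          // accessible only by the issuing web server
  secure: true,            // only sent over HTTPS (requires TLS/SSL)
  sameSite: 'strict',      // illustrative extra hardening against CSRF
  maxAge: 60 * 60 * 1000,  // one hour, in milliseconds
};

// Usage inside an Express handler (sketch):
// res.cookie('session', sessionToken, secureCookieOptions);
console.log(secureCookieOptions.httpOnly, secureCookieOptions.secure); // true true
```

Centralizing these options in one object keeps every cookie the application issues consistent with the scan requirements.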
For more information about how to deal with this issue, I recommend checking out these two blog posts:
Issue
We often talk to teams who are working to migrate individual tasks or functionality from Java into a microservices-oriented Node.js application. The best practices approach is to replace a single, monolithic Java app with multiple Node.js apps, each of which is dedicated to a specific task or closely-related set of tasks. We’re frequently asked to recommend a library or pattern that will allow a Node.js app to read from an OracleDB and push to an MQ-enabled application.
Answer
To connect to an OracleDB, we recommend the node-oracledb package, which is developed and maintained by Oracle and includes detailed documentation and examples.
There are a number of ways to access MQ from Node.js, depending on your needs:
When migrating from a Java project or starting a new Node.js project we also recommend:
Don’t use console.log
or console.error
; instead utilize an abstraction library like Winston to control logging levels.
Add the ability to adjust the logging level using environment variables
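As a minimal, dependency-free sketch of that idea (with Winston you would instead pass level: process.env.LOG_LEVEL to winston.createLogger), an env-driven logging abstraction might look like this; all names here are illustrative:

```javascript
// Sketch: a logging abstraction whose verbosity is controlled by an
// environment variable, instead of scattering console.log/console.error calls.
const LEVELS = { error: 0, warn: 1, info: 2, debug: 3 };

function createLogger(level = process.env.LOG_LEVEL || 'info') {
  const threshold = LEVELS[level] ?? LEVELS.info;
  const log = (lvl, msg) => {
    // Only emit messages at or below the configured threshold.
    if (LEVELS[lvl] <= threshold) console.log(`[${lvl}] ${msg}`);
  };
  return {
    error: (msg) => log('error', msg),
    warn: (msg) => log('warn', msg),
    info: (msg) => log('info', msg),
    debug: (msg) => log('debug', msg),
  };
}

const logger = createLogger('warn');
logger.error('shown');  // at or below the threshold, printed
logger.debug('hidden'); // above the threshold, suppressed
```

In production you would run with, say, LOG_LEVEL=warn, and flip to LOG_LEVEL=debug while diagnosing an issue, without any code change.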
Issue
The npm registry contains more than 800,000 packages, so it’s not surprising that teams have a hard time deciding which package offers both the features and functionality as well as the level of security that is most suitable for their Node.js applications. Among the most common asks we see are recommendations related to creating pdfs, managing RxJS and Promises, and setting up proxy servers and http error handling. That said, needs vary wildly by project, so the advice below is intended to be generally applicable.
Answer
There are a few tools in the Node ecosystem that make it easy to check for vulnerabilities in Node.js application dependencies. These tools are highly valuable, as they can ensure that the packages installed in an application have no known vulnerabilities, and can prevent the installation of package updates when a vulnerability has been detected in a more recent package version.
Once basic security checks have been passed, we recommend looking for the following factors to help you decide which package is best:
Issue
For teams using Node.js and Express, we often hear that a POST request containing a large body of JSON is returning a 413: Payload Too Large
response. Most of the time, the engineers we talk to want to know how to safely increase the size limit of the request body.
Answer
There are multiple ways to safely increase the size limit of the request body.
For a quick fix, either of the following two options would work:
app.use(bodyParser.json({ limit: '50mb', type: 'application/json' }))
app.use(bodyParser.urlencoded({ limit: '50mb', extended: true, parameterLimit: 50000 }));
Both of the above examples raise the maximum size of the request body to 50mb; in the second example, the parameterLimit
value is also defined.
While a quick fix will work, implementing a caching solution with Redis is a good option too. The idea is to store the data in cache and then send a reference from the client to the data instead of sending a big payload.
Similarly, you will not need to receive back a massive amount of data in JSON format; instead, you send the reference from the client and retrieve the cached info at the backend. This allows comparatively lightweight requests and avoids a negative impact on the performance of the application.
Hopefully the suggestions above help your team resolve (or avoid entirely) some of the most common issues reported by our Node.js Support customers. If you have questions or are experiencing an issue not covered above, feel free to contact us on Twitter @NodeSource, or consider joining one of our upcoming Office Hours sessions, which are hosted by one of our senior Solutions Architects and open to anyone with Node.js-related questions.