A list of useful articles and blog posts on Node.js

Exporting a web page as PDF or PNG from Node.js

This blog post was first published on the thedreaming.org blog.


I’m working on a reporting engine right now, and one of the requirements I have is to be able to export reports as PDF files, and send PDF reports via email. We also have a management page where you can see all the dashboards, and we’d like to render a PNG thumbnail of each dashboard to show there. Our back-end is all Node.js based, and you’re probably aware that Node is not known for its graphics prowess. All of our reports are currently being generated by a React app, written with Gatsby.js, so what we’d like to do is load up a report in a headless browser of some sort, and then either “print” that to PDF or take a PNG image. We’re going to look at doing exactly this using Playwright and Browserless.

The basic idea here is we want an API endpoint we can call into as an authenticated user, and get it to render a PDF of a web page for us. To do this we’re going to use two tools. The first is playwright, which is a library from Microsoft for controlling a headless Chrome or Firefox browser. The second is browserless, a microservice which hosts one or more headless Chrome browsers for us.

If you have a very simple use case, you might actually be able to do this entirely with browserless, using its REST APIs. There’s a /pdf REST API which does pretty much what we want here. We can pass in cookies and custom headers to authenticate our user to the reporting backend, and we can set the width of the viewport to something reasonable (since of course our analytics page is responsive, and we want the desktop version in our PDF reports, not the mobile version).
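For example, a minimal call to the hosted /pdf endpoint could look something like this (a sketch; the URL and token are placeholders, and the full set of payload options is in the browserless docs):

$ curl -X POST 'https://chrome.browserless.io/pdf?token=YOUR-API-TOKEN' \
    -H 'Content-Type: application/json' \
    -d '{"url": "https://example.com/"}' \
    --output report.pdf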

But playwright offers us two nice features that make it desirable here. First, it has a much more expressive API. It’s easy, for example, to load a page, wait for a certain selector to be present (or not be present) to signal that the page is loaded, and then generate a PDF or screenshot. Second, when you npm install playwright, it will download a copy of Chrome and Firefox automatically, so when we’re running this locally in development mode we don’t even need the browserless container running.

Setting up and configuring playwright

In our backend analytics project, we’re going to start out by installing playwright:

$ npm install playwright

You’ll notice the first time you do this, it downloads a few hundred megs of browser binaries and stores them (on my Mac) in ~/Library/Caches/ms-playwright. Later on, when we deploy our reporting microservice in a docker container, this is something we do not want to happen, because we don’t need a few hundred megs of browser binaries - that’s what browserless is for. So, in our backend project’s Dockerfile, we’ll modify our npm ci command:

RUN PLAYWRIGHT_SKIP_BROWSER_DOWNLOAD=1 npm ci && \
    rm -rf ~/.npm ~/.cache

If you’re not using docker, then however you run npm install in your backend, you want to add this PLAYWRIGHT_SKIP_BROWSER_DOWNLOAD environment variable to prevent all the browser binaries from being installed.

We do want playwright to use these browser binaries when it runs locally, but in production we want to use browserless. To control this, we’re going to set up an environment variable that controls playwright’s configuration:

import * as pw from "playwright";

// Set BROWSERLESS_URL environment variable to connect to browserless.
// e.g. BROWSERLESS_URL=wss://chrome.browserless.io?token=YOUR-API-TOKEN
const BROWSERLESS_URL = process.env.BROWSERLESS_URL;

let browser: pw.Browser | undefined;

export async function getBrowser() {
  if (!browser) {
    if (BROWSERLESS_URL) {
      // Connect to the browserless server.
      browser = await pw.chromium.connect({
        wsEndpoint: BROWSERLESS_URL,
      });
    } else {
      // In dev mode, just run a local copy of Chrome.
      // (headless: false opens a visible window, which is handy for debugging.)
      browser = await pw.chromium.launch({
        headless: false,
      });
    }

    // If the browser disconnects, create a new browser next time.
    browser.on("disconnected", () => {
      browser = undefined;
    });
  }

  return browser;
}

This creates a shared “browser” instance which we’ll use across multiple clients. Each client will create a “context”, which is sort of like an incognito window for that client. Contexts are cheap and lots can exist in parallel, whereas browser instances are expensive, so this lets us use a single instance of Chrome to serve many users.

We’re also going to write a little helper function here to make it easier to make sure we always release a browser context when we’re done with it:

async function withBrowserContext<T = void>(
  fn: (browserContext: pw.BrowserContext) => Promise<T>
) {
  const browser = await getBrowser();
  const browserContext = await browser.newContext();

  try {
    return await fn(browserContext);
  } finally {
    // Close the browser context when we're done with
    // it. Otherwise browserless will keep it open forever.
    await browserContext.close();
  }
}

The idea here is you call this function, and pass in a function fn that accepts a browser context as a parameter. This lets us write code like:

await withBrowserContext((context) => {
  // Do stuff here!
});

And then we don’t have to worry about cleaning up the context if we throw an exception or otherwise exit our function in an “unexpected” way.

Generating the PDF or PNG

Now we get to the meaty part. We want an express endpoint which actually generates the PDF or the PNG. Here’s the code, with lots of comments. Obviously this is written with my “reporting” example in mind, so you’ll need to make a few small changes if you want to give this a try; notably you’ll probably want to change the express endpoint, and change the dashboardUrl to point to whatever page you want to make a PDF or PNG of.

// We set up an environment variable called REPORTING_BASE_URL
// which will point to our front end on localhost when running
// this locally, or to the real frontend in production.
const REPORTING_BASE_URL =
  process.env.REPORTING_BASE_URL || "http://localhost:8000";

// Helper function for getting numbers out of query parameters.
function toNumber(val: string | undefined, def: number) {
  const result = val ? parseInt(val, 10) : def;
  return isNaN(result) ? def : result;
}

// Our express endpoint
app.post("/api/reporting/export/:dashboardId", (req, res, next) => {
  Promise.resolve()
    .then(async () => {
      const type = req.query.type || "pdf";
      const width = toNumber(req.query.width, 1200);
      const height = toNumber(req.query.height, 800);

      // Figure out what account and dashboard we want to show in the PDF.
      // Probably want to do some validation on these in a real system, although
      // since we're passing them as query parameters to another URL, there's
      // not much chance of an injection attack here...
      const qs = new URLSearchParams({ dashboardId: req.params.dashboardId });

      // If you're not building a reporting engine, change this
      // URL to whatever you want to make into a PDF/PNG. :)
      const dashboardUrl = `${REPORTING_BASE_URL}/dashboards/dashboard?${qs.toString()}`;

      await withBrowserContext(async (browserContext) => {
        // Set up authentication in the new browser context.
        await setupAuth(req, browserContext);

        // Create a new tab in Chrome.
        const page = await browserContext.newPage();
        await page.setViewportSize({ width, height });

        // Load our dashboard page.
        await page.goto(dashboardUrl);

        // Since this is a client-side app, we need to
        // wait for our dashboard to actually be visible.
        await page.waitForSelector("css=.dashboard-page");

        // And then wait until all the loading spinners are done spinning.
        await page.waitForFunction(
          '!document.querySelector(".loading-spinner")'
        );

        if (type === "pdf") {
          // Generate the PDF
          const pdfBuffer = await page.pdf();

          // Now that we have the PDF, we could email it to a user, or do whatever
          // we want with it. Here we'll just return it back to the client,
          // so the client can save it.
          res.setHeader(
            "content-disposition",
            'attachment; filename="report.pdf"'
          );
          res.setHeader("content-type", "application/pdf");
          res.send(pdfBuffer);
        } else {
          // Generate a PNG
          const image = await page.screenshot();
          res.setHeader("content-type", "image/png");
          res.send(image);
        }
      });
    })
    .catch((err) => next(err));
});

As the comments say, this creates a new “browser context” and uses the context to visit our page and take a snapshot.

Authentication

There’s one bit here I kind of glossed over:

// Set up authentication in the new browser context.
await setupAuth(req, browserContext);

Remember that browserless is really just a headless Chrome browser. The problem here is we need to authenticate our user twice; the user needs to authenticate to this backend express endpoint, but then this is going to launch a Chrome browser and navigate to a webpage, so we need to authenticate the user in that new browser session, too (since obviously not just any user off the Internet is allowed to look at our super secret reporting dashboards). We could visit a login page and log the user in, but it’s a lot faster to just set the headers and/or cookies we need to access the page.

What exactly this setupAuth() function does is going to depend on your use case. Maybe you’ll call browserContext.setExtraHTTPHeaders() to set a JWT token. In my case, my client is authenticating with a session cookie. Unfortunately we can’t use setExtraHTTPHeaders() to just set the session cookie, because Playwright generates its own cookie header and we can’t override it. So instead, I’ll parse the cookie that was sent to us and then forward it on to the browser context via browserContext.addCookies().
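If you’re going the JWT route instead, that variant is essentially a one-liner; a sketch, where token is a placeholder for whatever credential your client sent:

await browserContext.setExtraHTTPHeaders({
  authorization: `Bearer ${token}`,
});

My cookie-forwarding version looks like this: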

import * as cookieParse from "cookie-parse";
import { URL } from "url";

// Playwright doesn't let us just set the "cookie"
// header directly. We need to actually create a
// cookie in the browser. In order to make sure
// the browser sends this cookie, we need to set
// the domain of the cookie to be the same as the
// domain of the page we are loading.
const REPORTING_DOMAIN = new URL(REPORTING_BASE_URL).hostname;

async function setupAuth(
  req: HttpIncomingMessage,
  browserContext: pw.BrowserContext
) {
  // Pass through the session cookie from the client.
  const cookie = req.headers.cookie;
  const cookies = cookie ? cookieParse.parse(cookie) : undefined;

  if (cookies && cookies["session"]) {
    console.log(`Adding session cookie: ${cookies["session"]}`);
    await browserContext.addCookies([
      {
        name: "session",
        value: cookies["session"],
        httpOnly: true,
        domain: REPORTING_DOMAIN,
        path: "/",
      },
    ]);
  }
}

We actually don’t have to worry about the case where the session cookie isn’t set - the client will just get a screenshot of whatever they would have gotten if they weren’t logged in!

If we log in to our UI, and then visit http://localhost:8000/api/reporting/export/1234, we should now see a screenshot of our beautiful reporting dashboard.

Cleaning up site headers

If you actually give this a try, you might notice that your PDF has all your menus and navigation in it. Fortunately, this is easy to fix with a little CSS. Playwright/browserless will render the PDF with the CSS media set to “print”, exactly as if you were trying to print this page from a browser. So all we need to do is set up some CSS to hide our site’s navigation for that media type:

@media print {
  .menu,
  .sidebar {
    display: none;
  }
}

An alternative here might be to pass in an extra query parameter like bare=true, although the CSS version is nice because then if someone tries to actually print our page from their browser, it’ll work.

Setting up Browserless

So far we’ve been letting Playwright launch a Chrome instance for us. This works fine when we’re debugging locally, but in production, we’re going to want to run that Chrome instance as a microservice in another container, and for that we’re going to use browserless. How you deploy browserless in your production system is outside the scope of this article, just because there are so many different ways you could do it, but let’s use docker-compose to run a local copy of browserless, and get our example above to connect to it.

First, our docker-compose.yml file:

version: "3.6"
services:
  browserless:
    image: browserless/chrome:latest
    ports:
      - "3100:3000"
    environment:
      - MAX_CONCURRENT_SESSIONS=10
      - TOKEN=2cbc5771-38f2-4dcf-8774-50ad51a971b8

Now we can run docker-compose up -d to run a copy of browserless, and it will be available on port 3100, and set up a token so not just anyone can access it.

Now here I’m going to run into a little problem. The UI I want to take PDFs and screenshots from is a Gatsby app running on localhost:8000 on my Mac. The problem is, this port is not going to be visible from inside my docker VM. There’s a variety of ways around this problem, but here we’ll use ngrok to set up a publicly accessible tunnel to localhost, which will be visible inside the docker container:

$ ngrok http 8000
Forwarding                    http://c3b3d3d435f1.ngrok.io -> localhost:8000
Forwarding                    https://c3b3d3d435f1.ngrok.io -> localhost:8000

Now we can run our reporting engine with:

$ BROWSERLESS_URL=ws://localhost:3100?token=2cbc5771-38f2-4dcf-8774-50ad51a971b8 \
    REPORTING_BASE_URL=https://c3b3d3d435f1.ngrok.io \
    node ./dist/bin/server.js

And now if we try to hit up our endpoint, we should once again see a screenshot of our app!

Hopefully this was helpful! If you try this out, or have a similar (or better!) solution, please let us know!

Read More
Node Is Simple — Part 5

tl;dr: This is the fifth article of the Node is Simple article series. In this article series, I will be discussing how to create a simple and secure NodeJS, Express, MongoDB web application.

To follow the past tutorials:

Hello, hello, my dear fellow developers, glad you could make it to my article where I discuss developing web applications with NodeJS. As I promised in the previous article, in this article I will tell you how to use custom middleware with Express.

So what is this Middleware?

That’s the question, isn’t it? As I told you many times, Express is great with middleware. As discussed in the Express documentation,

Middleware functions are functions that have access to the request object (req), the response object (res), and the next middleware function in the application’s request-response cycle. The next middleware function is commonly denoted by a variable named next.

Express is nothing without middleware. So I am going to implement a simple error handling middleware. As the Express documentation describes,

app.use(function (err, req, res, next) {  
  console.error(err.stack)  
  res.status(500).send('Something broke!')  
})

this is how we create a simple middleware to log errors. But in this tutorial, I will extend this middleware to return specific error messages and HTTP error codes in the response and log the error messages in the console.

So let’s get to it, shall we?

First, let’s create some error classes to represent errors that we are going to get from the web application.

Let’s create /errors/index.js file.

module.exports = {
  AccessDeniedError: class AccessDeniedError {
    constructor(message) {
      this.message = message;
    }
  },
  AuthenticationError: class AuthenticationError {
    constructor(message) {
      this.message = message;
    }
  },
  ValidationError: class ValidationError {
    constructor(message) {
      this.message = message;
    }
  },
  NotFoundError: class NotFoundError {
    constructor(message) {
      this.message = message;
    }
  }
};
/errors/index.js file (Contains error classes)

Let’s see what’s happening here. Here, we are declaring four classes to describe errors we encounter in web applications.

  1. Access Denied Error (HTTP 403)
  2. Authentication Error (HTTP 401)
  3. Not Found Error (HTTP 404)
  4. Validation Error (HTTP 400)

Of course, there is another famous HTTP error which is called Internal Server Error (HTTP 500) but since it is a very generic error, we are not declaring a specific class to describe that and it will be the default error.
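To see how these classes get used, a service function might throw one of them like this (just a sketch; createStudent is a hypothetical example, and we’ll add real validation to our services in a later part):

const { ValidationError } = require("../errors");

const createStudent = (student) => {
  if (!student || !student.name) {
    // This will bubble up to our error handling middleware as an HTTP 400.
    throw new ValidationError("Student name is required");
  }
  // ...persist the student to MongoDB...
};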

Then let’s create the error handling middleware. Let’s create /middleware/error-handling.js file.

const {
  ValidationError,
  AuthenticationError,
  AccessDeniedError,
  NotFoundError
} = require("../errors");
const chalk = require("chalk");

const errorLogger = (err, req, res, next) => {
  if (err.message) {
    console.log(chalk.red(err.message));
  }
  if (err.stack) {
    console.log(chalk.red(err.stack));
  }
  next(err);
};

const authenticationErrorHandler = (err, req, res, next) => {
  if (err instanceof AuthenticationError)
    return res.status(401).send({ message: err.message });
  next(err);
};

const validationErrorHandler = (err, req, res, next) => {
  if (err instanceof ValidationError)
    return res.status(400).send({ message: err.message });
  next(err);
};

const accessDeniedErrorHandler = (err, req, res, next) => {
  if (err instanceof AccessDeniedError)
    return res.status(403).send({ message: err.message });
  next(err);
};

const notFoundErrorHandler = (err, req, res, next) => {
  if (err instanceof NotFoundError)
    return res.status(404).send({ message: err.message });
  next(err);
};

const genericErrorHandler = (err, req, res, next) => {
  res.status(500).send({ message: err.message });
  next();
};

const ErrorHandlingMiddleware = app => {
  app.use([
    errorLogger,
    authenticationErrorHandler,
    validationErrorHandler,
    accessDeniedErrorHandler,
    notFoundErrorHandler,
    genericErrorHandler
  ]);
};

module.exports = ErrorHandlingMiddleware;
/middleware/error-handling.js file (Contains the error handling middleware configurations)


Well, the code is somewhat complicated, isn’t it? Fear not, I will discuss what is happening here. First, we need to know how Express middleware works. In Express, if you provide an error-first callback (one that takes four arguments), its signature marks it as error-handling middleware.

Error-handling middleware always takes four arguments. You must provide four arguments to identify it as an error-handling middleware function. Even if you don’t need to use the next object, you must specify it to maintain the signature. Otherwise, the next object will be interpreted as regular middleware and will fail to handle errors.

(from https://expressjs.com/en/guide/using-middleware.html)

And another thing to mention is that middleware can be defined as an array. The array order will be preserved and the middleware will be applied in that order. So according to our code,

1. errorLogger //Logging errors  
2. authenticationErrorHandler //return authentication error  
3. validationErrorHandler //return validation error  
4. accessDeniedErrorHandler //return access denied error  
5. notFoundErrorHandler //return not found error  
6. genericErrorHandler //return internal server error

these middleware will be called in that particular order. In our web application, if an error is thrown, first the error will be logged to the console, and then it will be passed to the authenticationErrorHandler via the next(err) call. If the error is an instance of the AuthenticationError class, it will end the request-response cycle by returning the relevant error. If not, it will be passed on to the next error handling middleware. So as you can imagine, it will be passed down the middleware stack until finally it reaches the genericErrorHandler middleware and the request-response cycle is ended. And it’s as simple as that.

Now let’s bind this middleware as an application-level middleware. Let’s update /index.js file.

const express = require("express");
const chalk = require("chalk");
const http = require("http");
const https = require("https");
const config = require("./config");

const HTTP_PORT = config.HTTP_PORT;
const HTTPS_PORT = config.HTTPS_PORT;
const SERVER_CERT = config.SERVER_CERT;
const SERVER_KEY = config.SERVER_KEY;

const app = express();
const Middleware = require("./middleware");
const ErrorHandlingMiddleware = require("./middleware/error-handling");
const MainController = require("./controllers");

Middleware(app);
app.use("", MainController);
ErrorHandlingMiddleware(app);
app.set("port", HTTPS_PORT);

/**
 * Create HTTPS Server
 */

const server = https.createServer(
  {
    key: SERVER_KEY,
    cert: SERVER_CERT
  },
  app
);

const onError = error => {
  if (error.syscall !== "listen") {
    throw error;
  }

  const bind =
    typeof HTTPS_PORT === "string"
      ? "Pipe " + HTTPS_PORT
      : "Port " + HTTPS_PORT;

  switch (error.code) {
    case "EACCES":
      console.error(chalk.red(`[-] ${bind} requires elevated privileges`));
      process.exit(1);
      break;
    case "EADDRINUSE":
      console.error(chalk.red(`[-] ${bind} is already in use`));
      process.exit(1);
      break;
    default:
      throw error;
  }
};

const onListening = () => {
  const addr = server.address();
  const bind = typeof addr === "string" ? `pipe ${addr}` : `port ${addr.port}`;
  console.log(chalk.yellow(`[!] Listening on HTTPS ${bind}`));
};

server.listen(HTTPS_PORT);
server.on("error", onError);
server.on("listening", onListening);

/**
 * Create HTTP Server (HTTP requests will be 301 redirected to HTTPS)
 */
http
  .createServer((req, res) => {
    res.writeHead(301, {
      Location:
        "https://" +
        req.headers["host"].replace(
          HTTP_PORT.toString(),
          HTTPS_PORT.toString()
        ) +
        req.url
    });
    res.end();
  })
  .listen(HTTP_PORT)
  .on("error", onError)
  .on("listening", () =>
    console.log(chalk.yellow(`[!] Listening on HTTP port ${HTTP_PORT}`))
  );

module.exports = app;
/index.js file (Updated to reflect the binding of error handling middleware)

Now I have something to clarify. As you can see, I have added the ErrorHandlingMiddleware(app) call after the router binding. You might wonder whether there is a specific reason behind that. Actually, there is.

Middleware(app);  
app.use("", MainController);  
ErrorHandlingMiddleware(app);  
app.set("port", HTTPS\_PORT);

As you may remember, Express applies middleware in the order we bind it to the app. If we bound the error handling middleware before binding the Express router, the Express app would not know about the errors being thrown from the routes. So we first bind the Express router, and then we bind the error handling middleware.

Now if you start the web application as usual,

$ pm2 start ecosystem.config.js

and do some CRUD operations, you won’t see any errors returned, since we haven’t added any error handling in our /services/index.js file. So I will show you an example of an error by explicitly throwing one.

Let’s add this error endpoint. Every time you hit it, it will throw you a not found error. Let’s update /controllers/index.js file.

GET /error

const { NotFoundError } = require("../errors");

/** @route  GET /error
 *  @desc   Return an example error
 *  @access Public
 */
router.get(
  "/error",
  asyncWrapper(async () => {
    throw new NotFoundError("Sorry content not found");
  })
);

Figure 1: Request and response of custom error endpoint

As you can see in figure 1, the response status is 404 and the message says “Sorry content not found”. So it works as it should.

Now let’s see how our console looks. Now type,

$ pm2 logs

in your terminal and see the console log.

Figure 2: console log

Bonus super-secret round

So I’m going to tell you a super-secret about how to view colored logs in PM2. You just have to add a single line to the /ecosystem.config.js file.

const fs = require("fs");

const SERVER_CERT = fs.readFileSync(__dirname + "/config/server.cert", "utf8");
const SERVER_KEY = fs.readFileSync(__dirname + "/config/server.key", "utf8");

module.exports = {
  apps: [
    {
      name: "node-is-simple",
      script: "./index.js",
      watch: true,
      args: ["--color"],
      env: {
        NODE_ENV: "development",
        SERVER_CERT,
        SERVER_KEY,
        HTTP_PORT: 8080,
        HTTPS_PORT: 8081,
        MONGO_URI: "mongodb://localhost/students"
      }
    }
  ]
};
ecosystem.config.js file (Reflects the update to view colored console output)

All you have to do is add the args: ["--color"] line to the app configuration.

Bonus Bonus round

Now I’m going to tell you about two additional Express middleware libraries which are going to save your life 🤣.

  1. Morgan (HTTP request logger middleware for node.js)
  2. CORS (CORS is a node.js package for providing a Connect/Express middleware that can be used to enable CORS with various options.)

I guarantee that these two libraries will save you from certain pitfalls and you’ll thank them pretty much all the time.

This is a good article on Morgan.

Getting Started With morgan: the Node.js Logger Middleware

This is a good article on CORS.

Using CORS in Express

So let’s update our /middleware/common.js file.

const bodyParser = require("body-parser");
const helmet = require("helmet");
const morgan = require("morgan");
const cors = require("cors");

module.exports = app => {
  app.use(bodyParser.json());
  app.use(morgan("dev"));
  app.use(cors());
  app.use(helmet());
};
/middleware/common.js file (Updated to reflect the new middleware CORS and Morgan)

Then let’s install these two packages.

$ npm install cors morgan

Let’s see how Morgan shows the HTTP requests in our console. We have specified “dev” as the log format for Morgan. You can find more details on that in the documentation.

Figure 3: Morgan request logging in the console
GET / 200 3.710 ms - 39

Let’s understand what this means.

:method :url :status :response-time ms - :res[content-length]

This is what the documentation says. Let’s break it down.

  1. :method = GET (The HTTP method of the request.)
  2. :url = / (The URL of the request. This will use req.originalUrl if it exists, otherwise req.url.)
  3. :status = 200 (The status code of the response.)
  4. :response-time = 3.710 (The time between the request coming into morgan and the response headers being written, in milliseconds.)
  5. :res[content-length] = 39 (The content-length of the response. If the content-length is not present, the value will be displayed as "-" in the log.)
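By the way, if none of the predefined formats fit your needs, morgan also accepts a custom format string assembled from these same tokens; for example, this sketch reproduces the line above:

app.use(morgan(":method :url :status :response-time ms - :res[content-length]"));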

This is it. Now if we move on to what the CORS module does, it ain’t much but it’s honest work 😁. It will save you from Cross-Origin Resource Sharing issues. If you have worked with APIs and front-ends, the CORS error is something you’ve seen at least once in your lifetime.
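By default, app.use(cors()) allows requests from any origin. If you want to be stricter, the module accepts an options object; here is a sketch with a hypothetical frontend origin:

const cors = require("cors");

app.use(
  cors({
    origin: "https://my-frontend.example.com", // hypothetical frontend origin
    methods: ["GET", "POST", "PUT", "DELETE"],
  })
);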


So the CORS module will allow you to add CORS headers with ease. I know this article was a little bit longer because I had to share some super-secrets with you. So this is it for this article, and I will tell you how to secure our endpoints with JWT authentication in the next one. As usual, you can find the updated code in the GitHub repository. (Look for the commit message “Tutorial 5 checkpoint”.)

Niweera/node-is-simple

So until we meet again, happy coding.

Read More
New Node.js 14 features will see it disrupt AI, IoT and more, Node latest version report

New or latest Node.js features aren’t the usual selling point of this platform. Node.js is primarily well-known for its speed and simplicity. This is why so many companies are willing to give it a shot. However, with the release of the new LTS (long-term support) Node.js 14 version, Node.js will gain a lot of new features every Node.js developer can be excited about. Why? That’s because the new Node.js features added in versions 12 through 14, and the possibilities they create, are simply that amazing! Find out more in this Node latest version report.

This article has been updated to reflect the latest changes added in Node.js 14. What you can find here is a thorough overview of the latest Node.js features added in versions 12 through 14.

Node + Web Assembly = <3

Web Assembly is slowly gaining in popularity. More and more Node modules are experimenting with this language and so is Node itself! With the upcoming Node 14 release, we will gain access to the experimental Web Assembly System Interface – WASI.

The main idea behind this is to provide us with an easier way to access the underlying operating system. I’m really looking forward to seeing more Web Assembly in Node.
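To give you a taste, here is roughly what running a WASM binary through WASI looks like (a sketch based on the experimental API; demo.wasm is a placeholder binary, and Node must be started with the --experimental-wasi-unstable-preview1 flag):

const fs = require("fs");
const { WASI } = require("wasi");

const wasi = new WASI({
  args: process.argv,
  env: process.env,
});

(async () => {
  // Compile and instantiate the WASM binary with WASI's imports.
  const wasm = await WebAssembly.compile(fs.readFileSync("./demo.wasm"));
  const instance = await WebAssembly.instantiate(wasm, {
    wasi_snapshot_preview1: wasi.wasiImport,
  });

  // Hand control over to the binary's _start entry point.
  wasi.start(instance);
})();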

V8 8.1 is here!

I did mention that the new Node comes with a new V8. This gives us not only access to private class fields, but also some performance optimizations.

Awaits should work much faster, as should JS parsing.

Our apps should load quicker and asyncs should be much easier to debug, because we’re finally getting stack traces for them.

What’s more, the heap size is getting changed. Until now, it was either 700MB (for 32bit systems) or 1400MB (for 64bit). With new changes, it’s based on the available memory!

With the latest Node version 14, we’re getting access to the newest V8. This means that we’re getting some popular features of the JavaScript engine.

If you’re using TypeScript, you’ve probably tried nullish coalescing and optional chaining. Both of those are natively supported by Node 14.
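A quick refresher on both; note how ?? only falls back on null or undefined, unlike ||:

const config = { retries: 0 };

const retries = config.retries ?? 5;    // 0 ("config.retries || 5" would wrongly give 5)
const timeout = config.timeout ?? 3000; // 3000, since timeout is undefined
const region = config.server?.region;   // undefined instead of a TypeError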

We’re also getting a few updates to Intl. The first one is support for Intl.DisplayNames and the second one is support for calendar and numberingSystem for Intl.DateTimeFormat.
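For example (the exact output can vary a little between ICU versions):

const regionNames = new Intl.DisplayNames(["en"], { type: "region" });
console.log(regionNames.of("US")); // "United States"

const formatter = new Intl.DateTimeFormat("en", {
  calendar: "japanese",
  numberingSystem: "latn",
});
console.log(formatter.format(new Date()));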

Stable Diagnostic Reports

In Node 12, we’ve got a new experimental feature called Diagnostic Reports. By using it, we could easily get a report that contains information about the current system. What’s more, we can generate it not only on demand but also after a certain event.

If you have any production running a Node app, then this is something you should be checking out.
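Generating a report on demand is a one-liner (on Node 12 you may also need the --experimental-report flag; the feature stabilized in Node 14):

// Writes a JSON report (platform info, heap stats, stack, open handles...)
// to the current directory and returns the generated filename.
const filename = process.report.writeReport();
console.log(`Diagnostic report written to ${filename}`);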

Threads are almost stable!

With the last LTS we’ve got access to threads. Of course, it was an experimental feature and required a special flag called --experimental-worker for it to work.

With the upcoming LTS (Node 12) it’s still experimental, but won’t require a flag anymore. We are getting closer to a stable version!

ES modules support

Let’s face it, ES modules are currently the way to go in JavaScript development. We are using them in our frontend apps. We are using them in our desktop or even mobile apps. And yet, in the case of Node we were stuck with CommonJS modules.

Of course, we could use Babel or Typescript, but since Node.js is a backend technology, the only thing we should care about is a Node version installed on the server. We don’t need to care about multiple different browsers and support for them, so what’s the point of installing a tool that was made precisely with that in mind (Babel/Webpack etc.)?

With Node 10, we could finally play a little with ES modules (current LTS has experimental implementation for modules), but it required us to use special file extension – .mjs (module javascript).

With Node 12, it’s getting a little bit easier to work with. Much like it is with web apps, we get a special package.json property called type that defines whether code should be treated as CommonJS or as an ES module.

The only thing you need to do to treat all your files as a module is to add the property type with the value module to your package.json.

{
  "type": "module"
}

From now on, if this package.json is the closest to our .js file, it will be treated like a module. No more mjs (we can still use it if we want to)!

So, what if we wanted to use some common.js code?

As long as the closest package.json does not contain the module type property, it will be treated like CommonJS code.

What’s more, we are getting a new extension called .cjs – a CommonJS file.

Every .mjs file is treated as a module and every .cjs file as a CommonJS file.
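Putting it together, with "type": "module" in package.json (or .mjs extensions), plain import/export just works; a small sketch:

// math.js — treated as an ES module thanks to "type": "module"
export const add = (a, b) => a + b;

// index.js
import { add } from "./math.js";
console.log(add(1, 2)); // 3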

If you didn’t have a chance to try it out, now is the time!


JS and private variables

When it comes to JavaScript, we have always struggled to protect some data in our classes/functions from the outside.

JS is famous for its monkey patching, meaning we could always somehow access almost everything.

We tried with closures, symbols and more to simulate private-like variables. Node 12 ships with the new V8 and so we’ve got access to one cool feature – private properties in classes.

I’m sure you all remember the old approach to privates in Node:

class MyClass {
  constructor() {
    this._x = 10
  }
  
  get x() {
    return this._x
  }
}

We all know it’s not really private – we are still able to access it anyway, but most IDEs treated it like a private field and most Node devs knew about this convention. Finally, we can all forget about it.

class MyClass {
  #x = 10
  
  get x() {
    return this.#x
  }
}

Can you see the difference? Yes, we use the # character to tell Node that this variable is private and we want it to be accessible only from the inside of this class.

Try to access it directly, and you’ll get an error that this variable does not exist.

Sadly, some IDEs do not recognize them as proper variables yet.

Flat and flatMap

With Node 12, we’re getting access to new JavaScript features.

First of all, we’re getting access to new array methods – flat and flatMap. The first one is similar to Lodash’s flattenDepth method.

If we pass a nested array to it, we will get a flattened array as a result.

[10, [20, 30], [40, 50, [60, 70]]].flat() // => [10, 20, 30, 40, 50, [60, 70]]
[10, [20, 30], [40, 50, [60, 70]]].flat(2) // => [10, 20, 30, 40, 50, 60, 70]

As you can see, it also has a special parameter – depth. By using it, you can decide how many levels down you want to flatten.

The second one – flatMap – works just like map, followed by flat 🙂
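For example:

[1, 2, 3].flatMap((x) => [x, x * 2]); // => [1, 2, 2, 4, 3, 6]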

Optional catch binding

Another new feature is optional catch binding. Until now we always had to define an error variable for try – catch.

try {
  someMethod()
} catch(err) {
  // err is required
}

With Node 12 we can’t skip the entire catch clause, but we can skip the variable at least.

try {
  someMethod()
} catch {
  // err is optional
}

Object.fromEntries

Another new JavaScript feature is the Object.fromEntries method. Its main usage is to create an object either from a Map or from a key/value array.

Object.fromEntries(new Map([['key', 'value'], ['otherKey', 'otherValue']]));
// { key: 'value', otherKey: 'otherValue' }


Object.fromEntries([['key', 'value'], ['otherKey', 'otherValue']]);
// { key: 'value', otherKey: 'otherValue' }

The new Node.js is all about threads!

If there is one thing we can all agree on, it’s that every programming language has its pros and cons. Most popular technologies have found their own niche in the world of technology. Node.js is no exception.

We’ve been told for years that Node.js is good for API gateways and real-time dashboards (e.g. with websockets). As a matter of fact, its design itself forced us to depend on the microservice architecture to overcome some of its common obstacles.

At the end of the day, we knew that Node.js was simply not meant for time-consuming, CPU-heavy computation or blocking operations due to its single-threaded design. This is the nature of the event loop itself.

If we block the loop with a complex synchronous operation, it won’t be able to do anything until it’s done. That’s the very reason we use async so heavily or move time-consuming logic to a separate microservice.

This workaround may no longer be necessary thanks to new Node.js features that debuted in its 10 version. The tool that will make the difference is worker threads. Finally, Node.js will be able to excel in fields where normally we would use a different language.

A good example could be AI, machine learning or big data processing. Previously, all of those required CPU-heavy computation, which left us no choice but to build another service or pick a better-suited language. No more.

Threads!? But how?

This new Node.js feature is still experimental – it’s not meant to be used in a production environment just yet. Still, we are free to play with it. So where do we start?

Starting from Node 12+ we no longer need to use the special feature flag --experimental-worker. Workers are on by default!

node index.js

Now we can take full advantage of the worker_threads module. Let’s start with a simple HTTP server with two methods:

→ GET /hello (returning JSON object with “Hello World” message),

→ GET /compute (loading a big JSON file multiple times using a synchronous method).

const express = require('express');
const fs = require('fs');

const app = express();

app.get('/hello', (req, res) => {
  res.json({
    message: 'Hello world!'
  })
});

app.get('/compute', (req, res) => {
  let json = {};
  for (let i=0;i<100;i++) {
    json = JSON.parse(fs.readFileSync('./big-file.json', 'utf8'));
  }

  json.data.sort((a, b) => a.index - b.index);

  res.json({
    message: 'done'
  })
});

app.listen(3000);

The results are easy to predict. When GET /compute and GET /hello are called simultaneously, we have to wait for the compute path to finish before we can get a response from our hello path. The event loop is blocked until file loading is done.

Let’s fix it with threads!

const express = require('express');
const fs = require('fs');
const { Worker, isMainThread, parentPort, workerData } = require('worker_threads');

if (isMainThread) {
  console.log("Spawn http server");

  const app = express();

  app.get('/hello', (req, res) => {
    res.json({
      message: 'Hello world!'
    })
  });

  app.get('/compute', (req, res) => {

    const worker = new Worker(__filename, {workerData: null});
    worker.on('message', (msg) => {
      res.json({
        message: 'done'
      });
    })
    worker.on('error', console.error);
    worker.on('exit', (code) => {
      if (code != 0)
        console.error(new Error(`Worker stopped with exit code ${code}`));
    });
  });

  app.listen(3000);
} else {
  let json = {};
  for (let i=0;i<100;i++) {
    json = JSON.parse(fs.readFileSync('./big-file.json', 'utf8'));
  }

  json.data.sort((a, b) => a.index - b.index);

  parentPort.postMessage({});
}

As you can see, the syntax is very similar to what we know from Node.js scaling with Cluster. But the interesting part begins here.

Try to call both paths at the same time. Noticed something? Indeed, the event loop is no longer blocked so we can call /hello during file loading.

Now, this is something we have all been waiting for! All that’s left is to wait for a stable API.

Want even more new Node.js features? Here is N-API for building C/C++ modules!

The raw speed of Node.js is one of the reasons we choose this technology. Worker threads are the next step to improve it. But is it really enough?

Node.js is a C-based technology. Naturally, we use JavaScript as a main programming language. But what if we could use C for more complex computation?

Node.js 10 gives us a stable N-API. It’s a standardized API for native modules, making it possible to build modules in C/C++ or even Rust. Sounds cool, doesn’t it?

Building native Node.js modules in C/C++ has just got way easier

A very simple native module can look like this:

#include <napi.h>
#include <math.h>

namespace helloworld {
    Napi::Value Method(const Napi::CallbackInfo& info) {
        Napi::Env env = info.Env();
        return Napi::String::New(env, "hello world");
    }

    Napi::Object Init(Napi::Env env, Napi::Object exports) {
        exports.Set(Napi::String::New(env, "hello"),
                    Napi::Function::New(env, Method));
        return exports;
    }

    NODE_API_MODULE(NODE_GYP_MODULE_NAME, Init)
}

If you have a basic knowledge of C++, it’s not too hard to write a custom module. The only thing you need to remember is to convert C++ types to Node.js types at the end of your module.

Next thing we need is binding:

{
    "targets": [
        {
            "target_name": "helloworld",
            "sources": [ "hello-world.cpp"],
            "include_dirs": ["<!@(node -p \"require('node-addon-api').include\")"],
            "dependencies": ["<!(node -p \"require('node-addon-api').gyp\")"],
            "defines": [ 'NAPI_DISABLE_CPP_EXCEPTIONS' ]
        }
    ]
}

This simple configuration allows us to build *.cpp files, so we can later use them in Node.js apps.

Before we can make use of it in our JavaScript code, we have to build it and configure our package.json to look for gypfile (binding file).

{
  "name": "n-api-example",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "install": "node-gyp rebuild"
  },
  "gypfile": true,
  "keywords": [],
  "author": "",
  "license": "ISC",
  "dependencies": {
    "node-addon-api": "^1.5.0",
    "node-gyp": "^3.8.0"
  }
}

Once the module is good to go, we can use the node-gyp rebuild command to build and then require it in our code. Just like any popular module we use!

const addon = require('./build/Release/helloworld.node');

console.log(addon.hello());

Together with worker threads, N-API gives us a pretty good set of tools to build high-performance apps. Forget APIs or dashboards – even complex data processing or machine learning systems are far from impossible. Awesome!

Full support for HTTP/2 in Node.js? Sure, why not!

We’re able to compute faster. We’re able to compute in parallel. So how about assets and pages serving?

For years, we were stuck with the good old http module and HTTP/1.1. As more and more assets are being served by our servers, we increasingly struggle with loading times. Every browser has a maximum number of simultaneous persistent connections per server/proxy, especially for HTTP/1.1. With HTTP/2 support, we can finally kiss this problem goodbye.

So where do we start? Do you remember this basic Node.js server example from every web tutorial ever? Yep, this one:

const http = require('http');

http.createServer(function (req, res) {
  res.write('Hello World!');
  res.end();
}).listen(3000);

With Node.js 10, we get a new http2 module allowing us to use HTTP/2.0! Finally!

const http = require('http2');
const fs = require('fs');

const options = {
  key: fs.readFileSync('example.key'),
  cert: fs.readFileSync('example.crt')
 };

http.createSecureServer(options, function (req, res) {
  res.write('Hello World!');
  res.end();
}).listen(3000);

Full HTTP/2 support in Node.js 10 is what we have all been waiting for

With these new features, the future of Node.js is bright

The new Node.js features bring fresh air to our tech ecosystem. They open up completely new possibilities for Node.js. Have you ever imagined that this technology could one day be used for image processing or data science? Neither have I.

The latest Node version gives us even more long-awaited features, such as support for ES modules (still experimental, though) or changes to fs methods, which finally use promises rather than callbacks.

Want even more new Node.js features? Watch this short video.

As you can see from the chart below, the popularity of Node.js seems to have peaked in early 2017, after years and years of growth. It’s not really a sign of slowdown, but rather of maturation of this technology.

Popularity of Node.js over time: it peaked in early 2017

However, I can definitely see how all of these new improvements, as well as the growing popularity of Node.js blockchain apps (based on the truffle.js framework), may give Node.js a further boost so that it can blossom again – in new types of projects, roles and circumstances.

With the release of Node.js 14, the future is looking brighter and brighter for Node.js development!

Read More
Node is Simple — Part 1

tl;dr: This is the first article of the Node is Simple article series. In this article series, I will be discussing how to create a simple and secure NodeJS, Express, MongoDB web application.

First of all let me give a big shout out to JavaScript, which is going to turn 25 years old this year. Without JavaScript, the world would be a much darker place indeed. 😁 In this article series what I am going to do is create an API with NodeJS, ExpressJS, and MongoDB. I know there is a vast ocean of tutorials out there describing how to build an API with these technologies, but the thing is that I have never found a comprehensive all-in-one tutorial where you get the knowledge of the following things.

  1. Setting up a basic NodeJS, Express web app with SSL/TLS.
  2. Setting up ESLint in your favorite editor or IDE.
  3. Adding MongoDB as the database.
  4. Creating basic CRUD endpoints and testing with Postman.
  5. Uploading files and view them using MongoDB GridFS.
  6. Creating custom middleware with Express.
  7. Add logging for the web application.
  8. Securing endpoints with JWT authentication.
  9. Validating the input using @Hapi/Joi.
  10. Adding an OpenAPI Swagger Documentation.
  11. Caching the responses with Redis.
  12. Load balancing with PM2.
  13. Testing the API using Chai, Mocha.
  14. Create a CI/CD pipeline.
  15. Deploying to your favorite platform.

Well if you have done all of these things with your web application, it would be awesome. Since I learned all of these the hard way, I want to share them with all of you. Because sharing is caring 😇.

Since this tutorial is a series, I won’t make this a long, boring read. In this first article, I will describe how to set up a simple NodeJS and Express app with SSL/TLS. Just that, nothing else.


On your feet soldier, we are starting.

First of all, let’s create the folder structure of our web application.


Figure 1: node-is-simple folder structure

As shown in figure 1, you need to create the folders and the index.js file. If you use Linux or Git Bash on Windows, let’s make a simple script to do this, so you won’t have to do it again when you need to create another application. (We are so lazy, aren’t we? 😂)

#!/usr/bin/env bash

############################################################
# Remember to create a folder for your project first       #
# And run `npm init` to initialize a node project          #
# Inside that project folder run this bootstrap.sh script  #
############################################################

# Create the folders
mkdir config controllers errors middleware models services swagger utilities database

# Create the index.js file
touch index.js

############################################################
# Remember to check if you have node and npm installed     #
# I assume you have installed node and npm                 #
############################################################

# Install required packages
npm install express chalk --save

#That's it folks!

Let’s go through it again.

First, you need to create a folder for your project and run npm init to initialize your application. With this, you can specify a great name for your project, your license of preference, and much more. Ah, I just forgot. You need to check if you have a current version of Node and NPM installed on your machine. I hope you know how to install NodeJS on your computer. If not, please refer to the following links.

How to Download & Install Node.js - NPM on Windows

How To Install Node.js on Ubuntu 18.04

After doing that just run this bootstrap.sh bash script inside your project folder and it will create a simple Node, Express application. So simple right?

After creating the necessary folder structure let’s set up the Express application. Let’s go to the index.js file on the root folder and add these lines.

const express = require("express");
const chalk = require("chalk");
const http = require("http");
const https = require("https");
const config = require("./config");

const HTTP_PORT = config.HTTP_PORT;
const HTTPS_PORT = config.HTTPS_PORT;
const SERVER_CERT = config.SERVER_CERT;
const SERVER_KEY = config.SERVER_KEY;

const app = express();
const MainController = require("./controllers");

app.use("", MainController);
app.set("port", HTTPS_PORT);

/**
 * Create HTTPS Server
 */

const server = https.createServer(
  {
    key: SERVER_KEY,
    cert: SERVER_CERT
  },
  app
);

const onError = error => {
  if (error.syscall !== "listen") {
    throw error;
  }

  const bind =
    typeof HTTPS_PORT === "string"
      ? "Pipe " + HTTPS_PORT
      : "Port " + HTTPS_PORT;

  switch (error.code) {
    case "EACCES":
      console.error(chalk.red(`[-] ${bind} requires elevated privileges`));
      process.exit(1);
      break;
    case "EADDRINUSE":
      console.error(chalk.red(`[-] ${bind} is already in use`));
      process.exit(1);
      break;
    default:
      throw error;
  }
};

const onListening = () => {
  const addr = server.address();
  const bind = typeof addr === "string" ? `pipe ${addr}` : `port ${addr.port}`;
  console.log(chalk.yellow(`[!] Listening on HTTPS ${bind}`));
};

server.listen(HTTPS_PORT);
server.on("error", onError);
server.on("listening", onListening);

/**
 * Create HTTP Server (HTTP requests will be 301 redirected to HTTPS)
 */
http
  .createServer((req, res) => {
    res.writeHead(301, {
      Location:
        "https://" +
        req.headers["host"].replace(
          HTTP_PORT.toString(),
          HTTPS_PORT.toString()
        ) +
        req.url
    });
    res.end();
  })
  .listen(HTTP_PORT)
  .on("error", onError)
  .on("listening", () =>
    console.log(chalk.yellow(`[!] Listening on HTTP port ${HTTP_PORT}`))
  );

module.exports = app;

Don’t run this yet, since we haven’t done anything yet. I know this is a bit too much; just bear with me, I’ll explain everything. What we are doing here is, first we create an HTTP server and then we create an HTTPS server. Then we redirect all the HTTP traffic to the HTTPS server. First of all, we need to create certificates to use inside the HTTPS server. It is a little bit of a pain, but it’ll be worth it. This is a good resource about creating SSL certificates for your Node application.

How to Use SSL/TLS with Node.js

I am a little bit of a lazy person so I’ll just generate server.cert and server.key files using this link.

Self-Signed Certificate Generator

Inside the server name input, you just have to provide your domain name. For this purpose, I’ll use localhost as the domain name.

After generating the *.cert and *.key files copy them to the /config folder. And rename them to server.cert and server.key.
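If you prefer the command line, a self-signed pair can also be generated directly with openssl; roughly like this, with localhost as the common name:

$ openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout config/server.key -out config/server.cert \
    -subj "/CN=localhost"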

Now let’s import the certificates and make them useful. Inside the /config folder create the file index.js and add these lines.

const fs = require("fs");

const SERVER_CERT = fs.readFileSync(__dirname + "/server.cert", "utf8");
const SERVER_KEY = fs.readFileSync(__dirname + "/server.key", "utf8");

module.exports = {
  SERVER_CERT,
  SERVER_KEY,
  HTTP_PORT: 8080,
  HTTPS_PORT: 8081
};

Not so simple after all right? Well, let’s see.

Now that we have set up the certificates, let’s create a simple controller (endpoint) for our application and check it out. First, let’s go to the /controllers folder and create an index.js file. Inside that, add the following lines.

const router = require("express").Router();
const asyncWrapper = require("../utilities/async-wrapper");

/** @route  GET /
 *  @desc   Root endpoint
 *  @access Public
 */
router.get(
  "/",
  asyncWrapper(async (req, res) => {
    res.send({
      message: "Hello World!",
      status: 200
    });
  })
);

module.exports = router;

asyncWrapper what is that???

Let me tell you about that. Async Wrapper is a wrapper function that catches all the errors that happen inside your code and passes them to the error handling middleware. I’ll explain this a little more when I am discussing Express middleware. Now let’s create this infamous asyncWrapper. Go to the /utilities folder and create the two following files.

**async-wrapper.js**

module.exports = requestHandler => (req, res, next) =>
  requestHandler(req, res).catch(next);

**async.wrapper.d.ts** (Async wrapper type definition file)

We have done it, folks, we have done it. Now let’s check this beautiful Express app at work. Let’s run the web application first. Let’s go to the project folder and in the terminal (or Git Bash on Windows) run the following line.

node index.js

After that let’s fire up your favorite browser (mine is Chrome 😁) and hit the endpoint at,

https://localhost:8081/

Don’t worry if your browser says it is not secure to go to this URL. You trust yourself, don’t you? So let’s accept the risk and go ahead.


This is the warning I told you about. (FYI, this screenshot is from Firefox)

Now if you can see something like this,


Response from [https://localhost:8081](https://localhost:8081)

You have successfully created a NodeJS, Express application. Also since we have set up an HTTP server, you can try this URL too. http://localhost:8080

If you go to this URL, you’ll see that you’ll be redirected to https://localhost:8081

Now that’s the magic with our Express application. Awesome right? So this is it for this tutorial. In the next tutorial, I’ll tell you all about adding MongoDB as the database.

https://github.com/Niweera/node-is-simple

This is the GitHub repo for this tutorial and I’ll update this repo as the tutorial grows. Check for the commit messages for the snapshot of the repo for a particular tutorial. If you have any queries, don’t forget to hit me up on any social media as shown on my website.

https://niweera.gq/

So until we meet again with part two of this tutorial series, happy coding…

Read More
5-min.jpg
Running Your Node.js App With Systemd - Part 1

You've written the next great application, in Node, and you are ready to unleash it upon the world. Which means you can no longer run it on your laptop, you're going to actually have to put it up on some server somewhere and connect it to the real Internet. Eek.

There are a lot of different ways to run an app in production. This post is going to cover the specific case of running something on a "standard" Linux server that uses systemd, which means that we are not going to be talking about using Docker, AWS Lambda, Heroku, or any other sort of managed environment. It's just going to be you, your code, and a terminal with an ssh session, my friend.

Before we get started though, let's talk for just a brief minute about what systemd actually is and why you should care.

What is systemd Anyway?

The full answer to this question is big, as in, "ginormous" sized big. So we're not going to try and answer it fully, since we want to get on to the part where we can launch our app. What you need to know is that systemd is a thing that runs on "new-ish" Linux servers that is responsible for starting / stopping / restarting programs for you. If you install mysql, for example, and whenever you reboot the server you find that mysql is already running for you, that happens because systemd knows to turn mysql on when the machine boots up.

This systemd machinery has replaced older systems such as init and upstart on "new-ish" Linux systems. There is a lot of arguably justified angst in the world about exactly how systemd works and how intrusive it is to your system. We're not here to discuss that though. If your system is "new-ish", it's using systemd, and that's what we're all going to be working with for the foreseeable future.

What does "new-ish" mean specifically? If you are using any of the following, you are using systemd:

  • CentOS 7 / RHEL 7
  • Fedora 15 or newer
  • Debian Jessie or newer
  • Ubuntu Xenial or newer

Running our App Manually

I'm going to assume you have a fresh installation of Ubuntu Xenial to work with, and that you have set up a default user named ubuntu that has sudo privileges. This is what the default will be if you spin up a Xenial instance in Amazon EC2. I'm using Xenial because it is currently the newest LTS (Long Term Support) version available from Canonical. Ubuntu Yakkety is available now, and is even newer, but Xenial is quite up-to-date at the time of this writing and will be getting security updates for many years to come because of its LTS status.

Use ssh with the ubuntu user to get into your server, and let's install Node.

$ sudo apt-get -y install curl
$ curl -sL https://deb.nodesource.com/setup_6.x | sudo bash -
$ sudo apt-get -y install nodejs

Next let's create an app and run it manually. Here's a trivial app I've written that simply echoes out the user's environment variables.

const http = require('http');

const hostname = '0.0.0.0';
const port = process.env.NODE_PORT || 3000;
const env = process.env;

const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  for (var k in env) {
    res.write(k + ": " + env[k] + "\n");
  }
  res.end();
});

server.listen(port, hostname, () => {
  console.log("Server running at http://" + hostname + ":" + port + "/");
});

Using your text editor of choice (which should obviously be Emacs but I suppose it's a free country if you want to use something inferior), create a file called hello_env.js in the user's home directory /home/ubuntu with the contents above. Next run it with

$ /usr/bin/node /home/ubuntu/hello_env.js

You should be able to go to

http://11.22.33.44:3000

in a web browser now, substituting 11.22.33.44 with whatever the actual IP address of your server is, and see a printout of the environment variables for the ubuntu user. If that is in fact what you see, great! We know the app runs, and we know the command needed to start it up. Go ahead and press Ctrl-c to close down the application. Now we'll move on to the systemd parts.

Creating a systemd Service File

The "magic" that's needed to make systemd start working for us is a text file called a service file. I say "magic" because for whatever reason, this seems to be the part that people block on when they are going through this process. Fortunately, it's much less difficult and scary than you might think.

We will be creating a file in a "system area" where everything is owned by the root user, so we'll be executing a bunch of commands using sudo. Again, don't be nervous, it's really very straightforward.

The service files for the things that systemd controls all live under the directory path

/lib/systemd/system

so we'll create a new file there. If you're using Nano as your editor, open up a new file there with:

sudo nano /lib/systemd/system/hello_env.service

and put the following contents in it:

[Unit]
Description=hello_env.js - making your environment variables rad
Documentation=https://example.com
After=network.target

[Service]
Environment=NODE_PORT=3001
Type=simple
User=ubuntu
ExecStart=/usr/bin/node /home/ubuntu/hello_env.js
Restart=on-failure

[Install]
WantedBy=multi-user.target

Let's go ahead and talk about what's in that file. In the [Unit] section, the Description and Documentation variables are obvious. What's less obvious is the part that says

After=network.target

That tells systemd that if it's supposed to start our app when the machine boots up, it should wait until after the main networking functionality of the server is online to do so. This is what we want, since our app can't bind to NODE_PORT until the network is up and running.

Moving on to the [Service] section we find the meat of today's project. We can specify environment variables here, so I've gone ahead and put in:

Environment=NODE_PORT=3001

so our app, when it starts, will be listening on port 3001. This is different than the default 3000 that we saw when we launched the app by hand. You can specify the Environment directive multiple times if you need multiple environment variables. Next is

Type=simple

which tells systemd how our app launches itself. Specifically, it lets systemd know that the app won't try and fork itself to drop user privileges or anything like that. It's just going to start up and run. After that we see

User=ubuntu

which tells systemd that our app should be run as the unprivileged ubuntu user. You definitely want to run your apps as unprivileged users so that attackers can't aim at something running as the root user.

The last two parts here are maybe the most interesting to us

ExecStart=/usr/bin/node /home/ubuntu/hello_env.js
Restart=on-failure

First, ExecStart tells systemd what command it should run to launch our app. Then, Restart tells systemd under what conditions it should restart the app if it sees that it has died. The on-failure value is likely what you will want. Using this, the app will NOT restart if it goes away "cleanly". Going away "cleanly" means that it either exits by itself with an exit value of 0, or it gets killed with a "clean" signal, such as the default signal sent by the kill command. Basically, if our app goes away because we want it to, then systemd will leave it turned off. However, if it goes away for any other reason (an unhandled exception crashes the app, for example), then systemd will immediately restart it for us. If you want it to restart no matter what, change the value from on-failure to always.

Last is the [Install] stanza. We're going to gloss over this part as it's not very interesting. It tells systemd how to handle things if we want to start our app on boot, and you will probably want to use the values shown for most things until you are a more advanced systemd user.

Using systemctl To Control Our App

The hard part is done! We will now learn how to use the system provided tools to control our app. To begin with, enter the command

$ sudo systemctl daemon-reload

You have to do this whenever any of the service files change at all so that systemd picks up the new info.

Next, let's launch our app with

$ sudo systemctl start hello_env

After you do this, you should be able to go to

http://11.22.33.44:3001

in your web browser and see the output. If it's there, congratulations, you've launched your app using systemd! If the output looks very different than it did when you launched the app manually don't worry, that's normal. When systemd kicks off an application, it does so from a much more minimal environment than the one you have when you ssh into a machine. In particular, the $HOME environment variable may not be set by default, so be sure to pay attention to this if your app makes use of any environment variables. You may need to set them yourself when using systemd.
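For example, here is a hedged sketch of what the [Service] section might look like with a few extra variables set explicitly (the values beyond NODE_PORT are purely illustrative):

[Service]
Environment=NODE_PORT=3001
Environment=HOME=/home/ubuntu
Environment=NODE_ENV=production
Type=simple
User=ubuntu
ExecStart=/usr/bin/node /home/ubuntu/hello_env.js
Restart=on-failure

Each Environment directive adds variables to the environment systemd hands to your app.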

You may be interested in what state systemd thinks the app is in, and if so, you can find out with

$ sudo systemctl status hello_env
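As an aside, this is also a handy place to watch the Restart=on-failure directive do its job. Killing the app uncleanly should count as a failure (a plain kill sends SIGTERM, which counts as "clean", so use SIGKILL for this experiment — a quick sketch, not from the original post):

$ sudo pkill -9 -f hello_env.js
$ sudo systemctl status hello_env

The status output should show that systemd restarted the app for us.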

Now, if you want to stop your app, the command is simply

$ sudo systemctl stop hello_env

and unsurprisingly, the following will restart things for us

$ sudo systemctl restart hello_env

If you want to make the application start up when the machine boots, you accomplish that by enabling it

$ sudo systemctl enable hello_env

and finally, if you previously enabled the app, but you change your mind and want to stop it from coming up when the machine starts, you correspondingly disable it

$ sudo systemctl disable hello_env
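If you ever forget which state a service is in, systemctl can report it directly (a small extra check, not strictly needed):

$ systemctl is-enabled hello_env

This prints enabled or disabled accordingly.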

Wrapping Up

That concludes today's exercise. There is much, much more to learn and know about systemd, but this should help get you started with some basics. In a follow-up blog post, we will learn how to launch multiple instances of our app and load balance them behind Nginx, to illustrate a more production-ready example.


This article was first published on the NodeSource blog in November 2016.

Read More
6-min.jpg
Configuring Your .npmrc for an Optimal Node.js Environment

This blog post was first published in March 2017.


For Node.js developers, npm is an everyday tool. It's literally something we interact with multiple times on a daily basis, and it's one of the pieces of the ecosystem that's led to the success of Node.js.

One of the most useful, important, and enabling aspects of the npm CLI is that it's highly configurable. It provides an enormous amount of configurability that enables everyone from huge enterprises to individual developers to use it effectively.

One part of this high configurability is the .npmrc file. For a long time I'd seen discussion about it - the most memorable being the time I thought you could change the name of the node_modules directory with it - but I didn't truly understand just how useful the .npmrc file could be, or how to even use it.

So, today I've collected a few of the optimizations that .npmrc allows that have been awesome for speeding up my personal workflow when scaffolding out Node.js modules and working on applications long-term.

Automating npm init Just a Bit More

When you're creating a new module from scratch, you'll typically start out with the npm init command. One thing that some developers don't know is that you can actually automate this process fairly heftily with a few choice npm config set ... commands that set default values for the npm init prompts.

You can easily set your name, email, URL, license, and initial module version with a few commands:

npm config set init.author.name "Hiro Protagonist"
npm config set init.author.email "hiro@snowcrash.io"
npm config set init.author.url "http://hiro.snowcrash.io"
npm config set init.license "MIT"
npm config set init.version "0.0.1"

In the above example, I've set up some defaults for Hiro. This personal information won't change too frequently, so setting up some defaults is helpful and allows you to skip entering the same information manually every time.

Additionally, the above commands set up two defaults that are related to your module.

The first default is the initial license that will be automatically suggested by the npm init command. I personally like to default to MIT, and much of the rest of the Node.js ecosystem does the same. That said, you can set this to whatever you'd like - it's a nice optimization to just be able to nearly automatically select your license of choice.

The second default is the initial version. This is actually one that made me happy, as whenever I tried building out a module I never wanted it to start out at version 1.0.0, which is what npm init defaults to. I personally set it to 0.0.1 and then increment the version as I go with the npm version [ major | minor | patch ] command.
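Once these defaults are in place, you can take the automation one step further: npm init accepts a --yes (or -y) flag that skips the prompts entirely and generates a package.json straight from your configured defaults.

npm init -y

The resulting file will use the init.* values we set above for the author, license, and version fields.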

Change Your npm Registry

As time moves forward, we're seeing more options for registries arise. For example, you may want to set your registry to a cache of the modules you know you need for your apps. Or, you may be using Certified Modules as a custom npm registry. There's even a separate registry for Yarn, a topic that is both awesome and totally out of scope for this post.

So, if you'd like to set a custom registry, you can run a pretty simple one-line command:

npm config set registry "https://my-custom-registry.registry.nodesource.io/"

In this example, I've set the registry URL to an example of a Certified Modules registry - that said, the exact URL in the command can be replaced with any registry that's compatible. To reset your registry back to the default npm registry, you can simply run the same command pointing to the standard registry:

npm config set registry "https://registry.npmjs.com/"
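If you only need the custom registry for packages under a particular scope, npm also supports per-scope registry configuration (the scope name below is a placeholder):

npm config set @my-org:registry "https://my-custom-registry.registry.nodesource.io/"

With that set, npm install @my-org/some-package resolves against the custom registry, while everything else continues to use the default.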

Changing the console output of npm install with loglevel

When you run npm install, a bunch of information gets piped to you. By default, the npm command line tool limits how much of this information is actually output into the console when installing. There are varying degrees of output that you can assign at install time, or set as the default in your .npmrc file with npm config. The options, from least to most output, are: silent, error, warn, http, info, verbose, and silly.

Here's an example of the silent loglevel: npm install express --loglevel silent

And here's an example of the silly loglevel: npm install express --loglevel silly

If you'd like to get a bit more information (or a bit less, depending on your preferences) when you npm install, you can change the default loglevel.

npm config set loglevel="http"

If you tinker around with this config a bit and would like to reset to what the npm CLI currently defaults to, you can run the above command with warn as the loglevel:

npm config set loglevel="warn"
Looking for more info on npm? Check out our complete guide: Read now: The Ultimate Guide to npm

Change Where npm Installs Global Modules

This is a really awesome change - it has a few steps, but is really worth it. With a few commands, you can change where the npm CLI installs global modules by default. Normally, it installs them to a privileged system folder - this requires administrative access, meaning that a global install requires sudo access on UNIX-based systems.

If you change the default global prefix for npm to an unprivileged directory, for example, ~/.global-modules, you'll not need to authenticate when you install a global module. That's one benefit - another is that globally installed modules won't be in a system directory, reducing the likelihood of a malicious module (intentionally or not) doing something you didn't want it to on your system.

To get started, we're going to create a new folder called ~/.global-modules and set the npm prefix to it:

mkdir ~/.global-modules
npm config set prefix "~/.global-modules"

Next, if you don't already have a file called ~/.profile, create one in your home directory. Then add the following line to the ~/.profile file:

export PATH=~/.global-modules/bin:$PATH

Adding that line to the ~/.profile file puts the ~/.global-modules/bin directory on your PATH, so the binaries of globally installed modules can be found.

Now, flip back over to your terminal and run the following command to update the PATH with the newly updated file:

source ~/.profile
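To verify that the new prefix is actually being used, try installing any small global package (http-server is just an example here) and check where its binary landed:

npm install -g http-server
which http-server

The which command should print a path inside ~/.global-modules/bin if everything is wired up correctly.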

Just one more thing...

If you'd like to keep reading about Node.js, npm, configuration options, and development with the Node.js stack, I've got some fantastic articles for you.

Our most recent guide is a deep-dive into the core concepts of the package.json file. You'll find a ton of info about package.json in there, including lots of super helpful configuration detail. We also published an absolute beginner's guide to npm that you may be interested in reading - even though it's a beginner's guide, I'd bet you'll find something useful in it.

With this article, the intent was to help you set up a great configuration for Node.js development. If you'd like to take the leap and ensure that you're always on a rock-solid platform when developing and deploying your Node.js apps, check out NodeSource Certified Modules - it's a new tool we launched last week that will help enable you to spend more time building apps and less time worrying about modules.


Read More
8-min.jpg
11 Simple npm Tricks That Will Knock Your Wombat Socks Off

This blog post was first published in March 2017.

Using npm effectively can be difficult. There are a ton of features built-in, and it can be a daunting task to try to approach learning them.

Personally, even learning and using just one of these tricks (npm prune, which is #4) saved me from getting rid of unused modules manually by deleting node_modules and re-installing everything with npm install. As you can probably imagine, that was insanely stressful.

We've compiled this list of 11 simple-to-use npm tricks that will allow you to speed up development using npm, no matter what project you're working on.

1. Open a package’s homepage

Run: npm home $package

Running the home command will open the homepage of the package you're running it against. Running against the lodash package will bring you to the Lodash website. This command can run without needing to have the package installed globally on your machine or within the current project.

2. Open package’s GitHub repo

Run: npm repo $package

Similar to home, the repo command will open the GitHub repository of the package you're running it against. Running against the express package will bring you to the official Express repo. Also like home, you don’t need to have the package installed.

3. Check a package for outdated dependencies

Run: npm outdated

You can run the outdated command within a project, and it will check the npm registry to see if any of your packages are outdated. It will print out a list in your command line of the current version, the wanted version, and the latest version.

Running npm outdated on a Node project
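In case the screenshot doesn't come through, the report is a small table along these lines (the package and versions are made up for illustration):

Package  Current  Wanted  Latest  Location
express  4.13.4   4.14.1  4.14.1  my-app

Wanted is the highest version that still satisfies the semver range in your package.json; Latest is the newest version published to the registry.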

4. Check for packages not declared in package.json

Run: npm prune

When you run prune, the npm CLI will run through your package.json and compare it to your project’s /node_modules directory. It will print a list of modules that aren’t in your package.json.

The npm prune command then strips out those packages, and removes any you haven't manually added to package.json or that were npm installed without using the --save flag.

Running npm prune on a Node project
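As a quick illustration of the behavior (lodash is just an example package):

npm install lodash
npm prune

The first command installs lodash without --save, so (with the npm versions current when this was written) it never lands in package.json; the second then removes it from node_modules as an extraneous package.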

Update: Thanks to @EvanHahn for noticing a personal config setting that made npm prune provide a slightly different result than the default npm would provide!

5. Lock down your dependencies versions

Run: npm shrinkwrap

Using shrinkwrap in your project generates an npm-shrinkwrap.json file. This allows you to pin the dependencies of your project to the specific version you're currently using within your node_modules directory. When you run npm install and there is an npm-shrinkwrap.json present, it will override the listed dependencies and any semver ranges in package.json.

If you need verified consistency across package.json, npm-shrinkwrap.json and node_modules for your project, you should consider using npm-shrinkwrap.

Running npm shrinkwrap on a Node project
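The workflow is short; the generated file should be committed alongside package.json so every checkout reproduces the same tree:

npm shrinkwrap
git add npm-shrinkwrap.json
git commit -m "Pin dependency tree"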

6. Use npm v3 with Node.js v4 LTS

Run: npm install -g npm@3

Installing npm@3 globally with npm will update your npm v2 to npm v3, including on the Node.js v4 LTS release ("Argon"), which ships with the npm v2 LTS release. This will install the latest stable release of npm v3 within your v4 LTS runtime.

7. Allow npm install -g without needing sudo

Run: npm config set prefix $dir

After running the command, where $dir is the directory you want npm to install your global modules to, you won’t need to use sudo to install modules globally anymore. The directory you use in the command becomes your global bin directory.

The only caveat: you will need to make sure you adjust your user permissions for that directory with chown -R $USER $dir, and add $dir/bin to your PATH.

8. Change the default save prefix for all your projects

Run: npm config set save-prefix="~"

The tilde (~) is more conservative than what npm defaults to, the caret (^), when installing a new package with the --save or --save-dev flags. The tilde pins the dependency to the minor version, allowing patch releases to be installed with npm update. The caret pins the dependency to the major version, allowing minor releases to be installed with npm update.
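As a concrete before-and-after, installing the same package writes different ranges to package.json (the version number here is illustrative):

"lodash": "^4.17.4"   <- with the default save-prefix (^)
"lodash": "~4.17.4"   <- after npm config set save-prefix="~"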

9. Strip your project's devDependencies for a production environment

When your project is ready for production, make sure you install your packages with the added --production flag. The --production flag installs your dependencies, ignoring your devDependencies. This ensures that your development tooling and packages won’t go into the production environment.

Additionally, you can set your NODE_ENV environment variable to production to ensure that your project’s devDependencies are never installed.
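Concretely, either of these installs will skip devDependencies:

npm install --production
NODE_ENV=production npm install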

10. Be careful when using .npmignore

If you don't have a .npmignore file in your project, npm falls back to your .gitignore, plus a few additional sane defaults.

What many don't realize is that once you add a .npmignore file to your project, the .gitignore rules are (ironically) ignored. As a result, you will need to keep the two ignore files in sync to prevent sensitive files from leaking when publishing.
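One way to double-check exactly what would be published (a standard npm feature, though not mentioned in the original tip): npm pack builds the same tarball that npm publish would upload, so you can inspect it locally first.

npm pack
tar -tf <name>-<version>.tgz

The tarball is named after the name and version fields in your package.json.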

11. Automate npm init with defaults

When you run npm init in a new project, you’re able to go through and set up your package.json’s details. If you want to set defaults that npm init will always use, you can use the config set command, with some extra arguments:

npm config set init.author.name $name
npm config set init.author.email $email

If, instead, you want to completely customize your init script, you can point to a self-made default init script by running

npm config set init-module ~/.npm-init.js

Here’s a sample script that prompts for private settings and creates a GitHub repo if you want. Make sure you change the default GitHub username (YOUR_GITHUB_USERNAME) as the fallback for the GitHub username environment variable.

// Note: prompt, basename, and package are injected into this file's scope
// by npm when it runs your custom init-module.
var cp = require('child_process');
var priv;

var USER = process.env.GITHUB_USERNAME || 'YOUR_GITHUB_USERNAME';

module.exports = {

  name: prompt('name', basename || package.name),

  version: '0.0.1',

  private: prompt('private', 'true', function(val){
    return priv = (typeof val === 'boolean') ? val : !!val.match('true')
  }),

  create: prompt('create github repo', 'yes', function(val){
    val = val.indexOf('y') !== -1 ? true : false;

    if(val){
      console.log('enter github password:');
      cp.execSync("curl -u '"+USER+"' https://api.github.com/user/repos -d " +
        "'{\"name\": \""+basename+"\", \"private\": "+ ((priv) ? 'true' : 'false')  +"}' ");
      cp.execSync('git remote add origin '+ 'https://github.com/'+USER+'/' + basename + '.git');
    }

    return undefined;
  }),

  main: prompt('entry point', 'index.js'),

  repository: {
    type: 'git',
    url: 'git://github.com/'+USER+'/' + basename + '.git' },

  bugs: { url: 'https://github.com/'+USER+'/' + basename + '/issues' },

  homepage: "https://github.com/"+USER+"/" + basename,

  keywords: prompt(function (s) { return s.split(/\s+/) }),

  license: 'MIT',

  cleanup: function(cb){

    cb(null, undefined)
  }

}

One last thing...

If you want to learn more about npm, Node.js, JavaScript, Docker, Kubernetes, Electron, and tons more, you should follow @NodeSource on Twitter. We're always around and would love to hear from you!
