List of useful articles and blog posts on Node.js

Node Is Simple — Part 3

tl;dr: This is the third article of the Node is Simple article series. In this article series, I will be discussing how to create a simple and secure NodeJS, Express, MongoDB web application.

To catch up on the past tutorials, see Node is Simple Part 1 and Node is Simple Part 2.

Hello, fellow developers, I am once again asking you to learn some NodeJS with me. In this tutorial, I will be discussing creating basic CRUD operations in NodeJS. So without further ado, let’s start, shall we?

CREATE endpoint

If you remember, in my previous article I created a simple endpoint to POST the name and city of a student and create a record (document) for that student. What I am going to do is enhance what I’ve already done. Let’s update the model and the service to reflect our needs.

We are going to add some new fields to the student document.

const mongoose = require("../database");
const Schema = mongoose.Schema;

const studentSchema = new Schema(
  {
    _id: {
      type: mongoose.SchemaTypes.String,
      unique: true,
      required: true,
      index: true
    },
    name: { type: mongoose.SchemaTypes.String, required: true },
    city: { type: mongoose.SchemaTypes.String, required: true },
    telephone: { type: mongoose.SchemaTypes.Number, required: true },
    birthday: { type: mongoose.SchemaTypes.Date, required: true }
  },
  { strict: true, timestamps: true, _id: false }
);

const collectionName = "student";

const Student = mongoose.model(collectionName, studentSchema, collectionName);

module.exports = {
  Student
};
Updated /models/index.js file

What I’ve done here is add _id, telephone, and birthday as new fields. I have also disabled the Mongoose default _id and specified my own.

Now let’s update the service file.

const { Student } = require("../models");

module.exports = class StudentService {
  async registerStudent(data) {
    const { _id, name, city, telephone, birthday } = data;

    const new_student = new Student({
      _id,
      name,
      city,
      telephone,
      birthday
    });

    const response = await new_student.save();
    const res = response.toJSON();
    delete res.__v;
    return res;
  }
};
Updated /services/index.js file

Before we test what we have done, I am going to let you in on a super-secret. If you have experience in developing NodeJS applications, I hope you have come across the Nodemon tool. It restarts the server whenever you change the files. But today I am going to tell you about an amazing tool called PM2.

What is PM2?

PM2 is a production-grade process management tool. It has various capabilities such as load balancing (which I will discuss in a later tutorial), enhanced logging features, adding environment variables, and many more. Their documentation is super nifty and worth checking out.

So let’s install PM2 and start using it in our web app.

$ npm install pm2@latest -g

Let’s create an ecosystem.config.js file in the project root, which will contain all of our environment variables, keys, secrets, and PM2 configurations.

const fs = require("fs");

const SERVER_CERT = fs.readFileSync(__dirname + "/config/server.cert", "utf8");
const SERVER_KEY = fs.readFileSync(__dirname + "/config/server.key", "utf8");

module.exports = {
  apps: [
    {
      name: "node-is-simple",
      script: "./index.js",
      watch: true,
      env: {
        NODE_ENV: "development",
        SERVER_CERT,
        SERVER_KEY,
        HTTP_PORT: 8080,
        HTTPS_PORT: 8081,
        MONGO_URI: "mongodb://localhost/students"
      }
    }
  ]
};
ecosystem.config.js file (The configuration file for PM2)

Since we have moved all of our keys into environment variables, we now have to change the /config/index.js file to reflect these changes.

module.exports = {
  SERVER_CERT: process.env.SERVER_CERT,
  SERVER_KEY: process.env.SERVER_KEY,
  HTTP_PORT: process.env.HTTP_PORT,
  HTTPS_PORT: process.env.HTTPS_PORT,
  MONGO_URI: process.env.MONGO_URI
};
Updated /config/index.js file (here we have moved all the secrets to the ecosystem.config.js file as environment variables)

Now, remember, /config/index.js is safe to commit to a public repository, but the ecosystem.config.js file is not. Also, never commit your private keys to a public repository (I’ve done it here for demonstration purposes only). Protect your secrets like your cash 😂.
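One simple safeguard (a sketch, assuming the file layout used in this series) is to add the secret-bearing files to .gitignore so they never get committed:

```shell
# Keep PM2 secrets and key material out of version control
# (paths assume the layout used in this series)
echo "ecosystem.config.js" >> .gitignore
echo "config/server.key" >> .gitignore
echo "config/server.cert" >> .gitignore
```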

Since we have done our initial setup, let’s run our application. One thing to keep in mind: if you start the application as usual,

$ node index.js

it won’t work. We now have to use PM2 to start our application, because PM2 provides all the environment variables the Node web application needs. Go to the project root folder and run the following command.

$ pm2 start


Figure 1: PM2 startup

If you see this (figure 1) in your console it is working as it should. Now to see the logs run the following command.

$ pm2 logs


Figure 2: PM2 logs

If you see this (figure 2) in your console then the node app is working as it should.

Enough with the PM2

Yes, let’s move to test our enhanced CREATE endpoint.


Figure 3: POST request in Postman

Now create a request like this (figure 3) and hit Send.


Figure 4: POST response in Postman

If you see this response (figure 4), it is safe to say everything works as it should. Yay!

Now since we have a CREATE endpoint, let’s add a READ endpoint.

READ endpoint

Now let’s read what’s inside our student collection. We can view all the students who are registered, or we can see details from only one student.

GET /students

Now let’s get all the students’ details. We only get the _id, name, and city of each student here. Add these lines to the /controllers/index.js file.

/** @route  GET /students
 *  @desc   Get all students
 *  @access Public
 */
router.get(
  "/students",
  asyncWrapper(async (req, res) => {
    const response = await studentService.getAllStudents();
    res.send(response);
  })
);

And add these lines (inside the StudentService class) to the /services/index.js file.

async getAllStudents() {
  return Student.find({}, "_id name city");
}

Let me give a summary of the above lines. Student.find() is the method for running queries against MongoDB. Since we need all the students, we pass an empty object as the first argument. As the second argument (called a projection) we specify which fields to return. Here I want _id, name, and city. We can use "-field_name" to exclude a field instead.

GET /students/:id

Now let’s get all the details of a single student.

Now add these lines to the /controllers/index.js file.

/** @route  GET /students/:id
 *  @desc   Get a single student
 *  @access Public
 */
router.get(
  "/students/:id",
  asyncWrapper(async (req, res) => {
    const response = await studentService.getStudent(req.params.id);
    res.send(response);
  })
);

In the Express router, we can specify a path parameter via the /:param syntax and access it via req.params.param. This is basic Express; to learn more, please refer to the documentation, which is a great source of knowledge.

Now add these lines to the /services/index.js file.

async getStudent(_id) {
  return Student.findById(_id, "-__v -createdAt -updatedAt");
}

Here we provide the _id of the student to get the details. As the projection, we exclude the __v, createdAt, and updatedAt fields.

Now let’s check these endpoints.


Figure 5: Request /students


Figure 6: Response /students

I have added two more students’ details, so I got three records.

Now let’s check the single student endpoint.


Figure 7: Request /students/stud_1


Figure 8: Response /students/stud_1

If you get results similar to figures 6 and 8, let’s call it a success.

UPDATE endpoint

We have created student records and viewed them; now it is time to update them.

To PUT, or to PATCH?

So this is the big question: to update a resource, should we use PUT or PATCH? The answer is somewhat simple. If you want to replace the whole resource every time, use PUT. If you want to update the resource partially, use PATCH. It is that simple. This article clarified the dilemma for me.

Differences between PUT and PATCH

Since I am going to partially update the student record, I will be using PATCH.
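The distinction can be sketched in plain JavaScript (a toy illustration, not part of the app; the student object here is hypothetical):

```javascript
// Hypothetical existing resource
const student = { _id: "stud_1", name: "June", city: "LA", telephone: 123 };

// PATCH: the client sends only the fields to change, and the server merges them
const patchBody = { name: "Jane" };
const patched = { ...student, ...patchBody }; // city and telephone survive

// PUT: the client sends the complete resource, which replaces the old one
const putBody = { _id: "stud_1", name: "Jane", city: "LA", telephone: 123 };
const replaced = { ...putBody }; // any field the client omits is gone
```

With PATCH, forgetting a field leaves it untouched; with PUT, forgetting a field deletes it, which is exactly why PATCH suits our partial updates.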

PATCH /students/:id

Now let’s update the /controllers/index.js file.

/** @route  PATCH /students/:id
 *  @desc   Update a single student
 *  @access Public
 */
router.patch(
  "/students/:id",
  asyncWrapper(async (req, res) => {
    const response = await studentService.updateStudent(
      req.params.id,
      req.body
    );
    res.send(response);
  })
);

After adding this let’s update the /services/index.js file.

async updateStudent(_id, { name, city, telephone, birthday }) {
  return Student.findOneAndUpdate(
    { _id },
    {
      name,
      city,
      telephone,
      birthday
    },
    {
      new: true,
      omitUndefined: true,
      fields: "-__v -createdAt -updatedAt"
    }
  );
}

Let me give a brief description of what’s going on here. We find the student by the given _id and provide the name, city, telephone, and birthday data to be updated. As the third argument, we pass new: true to return the updated document, omitUndefined: true to update the resource partially, and fields: "-__v -createdAt -updatedAt" to remove those fields from the returned document.

Now let’s check this out.


Figure 9: Updating the name of the student (stud_1)


Figure 10: The student’s name has changed from June to Jane

So if you get similar results as figure 10 then let’s say yay! Now let’s move on to the final part of this tutorial, which is DELETE.

DELETE endpoint

Since we can create, read, and update, it is now time to delete some students 😁.

DELETE /students/:id

Since we should not delete all the students at once (it is a best practice, IMO), let’s delete a student by the provided _id.

Now let’s update the /controllers/index.js file.

/** @route  DELETE /students/:id
 *  @desc   Delete a single student
 *  @access Public
 */
router.delete(
  "/students/:id",
  asyncWrapper(async (req, res) => {
    const response = await studentService.deleteStudent(req.params.id);
    res.send(response);
  })
);

Now let’s update the /services/index.js file.

async deleteStudent(_id) {
  await Student.deleteOne({ _id });
  return { message: `Student [${_id}] deleted successfully` };
}

Now it’s time to see this in action.


Figure 11: Request to delete the student with _id [stud_2]


Figure 12: Response after deleting the student with _id [stud_2]

If the results are similar to figure 12, then it is safe to assume, the application works as it should.

So here are the current /controllers/index.js file and /services/index.js file for your reference.

const router = require("express").Router();
const asyncWrapper = require("../utilities/async-wrapper");
const StudentService = require("../services");
const studentService = new StudentService();

/** @route  GET /
 *  @desc   Root endpoint
 *  @access Public
 */
router.get(
  "/",
  asyncWrapper(async (req, res) => {
    res.send({
      message: "Hello World!",
      status: 200
    });
  })
);

/** @route  POST /register
 *  @desc   Register a student
 *  @access Public
 */
router.post(
  "/register",
  asyncWrapper(async (req, res) => {
    const response = await studentService.registerStudent(req.body);
    res.send(response);
  })
);

/** @route  GET /students
 *  @desc   Get all students
 *  @access Public
 */
router.get(
  "/students",
  asyncWrapper(async (req, res) => {
    const response = await studentService.getAllStudents();
    res.send(response);
  })
);

/** @route  GET /students/:id
 *  @desc   Get a single student
 *  @access Public
 */
router.get(
  "/students/:id",
  asyncWrapper(async (req, res) => {
    const response = await studentService.getStudent(req.params.id);
    res.send(response);
  })
);

/** @route  PATCH /students/:id
 *  @desc   Update a single student
 *  @access Public
 */
router.patch(
  "/students/:id",
  asyncWrapper(async (req, res) => {
    const response = await studentService.updateStudent(
      req.params.id,
      req.body
    );
    res.send(response);
  })
);

/** @route  DELETE /students/:id
 *  @desc   Delete a single student
 *  @access Public
 */
router.delete(
  "/students/:id",
  asyncWrapper(async (req, res) => {
    const response = await studentService.deleteStudent(req.params.id);
    res.send(response);
  })
);

module.exports = router;
Updated /controllers/index.js file (contains CRUD endpoints)
const { Student } = require("../models");

module.exports = class StudentService {
  async registerStudent(data) {
    const { _id, name, city, telephone, birthday } = data;

    const new_student = new Student({
      _id,
      name,
      city,
      telephone,
      birthday
    });

    const response = await new_student.save();
    const res = response.toJSON();
    delete res.__v;
    return res;
  }

  async getAllStudents() {
    return Student.find({}, "_id name city");
  }

  async getStudent(_id) {
    return Student.findById(_id, "-__v -createdAt -updatedAt");
  }

  async updateStudent(_id, { name, city, telephone, birthday }) {
    return Student.findOneAndUpdate(
      { _id },
      {
        name,
        city,
        telephone,
        birthday
      },
      {
        new: true,
        omitUndefined: true,
        fields: "-__v -createdAt -updatedAt"
      }
    );
  }

  async deleteStudent(_id) {
    await Student.deleteOne({ _id });
    return { message: `Student [${_id}] deleted successfully` };
  }
};
Updated /services/index.js file (Contains all CRUD endpoints)

So this is it for this tutorial, and we will meet again in a future tutorial about uploading files to MongoDB using GridFS. As usual, you can find the code here (Check for the commit message “Tutorial 3 checkpoint”).

node-is-simple

So until we meet again, happy coding…

Read More
Node Is Simple — Part 2

tl;dr: This is the second article of the Node is Simple article series. In this article series, I will be discussing how to create a simple and secure NodeJS, Express, MongoDB web application.

Hello, my friends, this is part two of the Node is Simple article series, where I will be discussing how to add MongoDB to your web application. If you haven’t read my first article, you can read it here: Node is Simple - Part 1


So what is MongoDB?

If you haven’t heard of MongoDB, that is news to me. It is the most popular database for modern web applications. (Yeah, yeah, Firebase and Firestore are there too.) MongoDB is a NoSQL database in which a document is the atomic data structure, and a group of documents forms a Collection. With these documents and collections, we can store data however we want.

{
  "_id": "5cf0029caff5056591b0ce7d",
  "firstname": "Jane",
  "lastname": "Wu",
  "address": {
    "street": "1 Circle Rd",
    "city": "Los Angeles",
    "state": "CA",
    "zip": "90404"
  },
  "hobbies": ["surfing", "coding"]
}

This is how a simple document is constructed in MongoDB. For this tutorial, we will use MongoDB Cloud Atlas, which is a MongoDB database-as-a-service platform. I am not going to explain how to create an Atlas account and set up a database, since it is really easy and the following is a really good reference.

Getting Started with MongoDB Atlas: Overview and Tutorial

After creating the MongoDB account and setting up the database, obtain the MongoDB URI, which looks like this.

mongodb+srv://your_user_name:your_password@cluster0-v6q0g.mongodb.net/database_name

You can specify the following with the URI.

your_user_name: The username of the MongoDB database

your_password: The password for the MongoDB database

database_name: The MongoDB database name

Now that you have the MongoDB URI, you can use MongoDB Compass to view the database and create new collections.

Now let’s move on to the coding, shall we?

First of all, let me tell you something awesome about linting. To catch errors in your code before running any tests, it is vital to lint. In linting, tools like ESLint look at your code and display warnings and errors about it. So let’s set up ESLint in your development environment.

  1. Setting up ESLint in VSCode

This is a good reference to setting up ESLint in VSCode: Linting and Formatting with ESLint in VS Code

  2. Setting up ESLint in WebStorm

First, install ESLint in your project path.

$ npm install eslint --save-dev

There are several ESLint configurations to use. Let’s create .eslintrc.json file in the project root folder and specify the configurations.

{
  "env": {
    "browser": true,
    "commonjs": true,
    "es6": true,
    "node": true
  },
  "extends": ["eslint:recommended"],
  "globals": {
    "Atomics": "readonly",
    "SharedArrayBuffer": "readonly"
  },
  "parserOptions": {
    "ecmaVersion": 2018
  },
  "rules": {}
}
.eslintrc.json file (The configuration file for ESLint)

After that go to settings in WebStorm and then select Manual ESLint configuration.


ESLint configuration settings on WebStorm

Then click OK, and it’s done and dusted.

Show me the Code!!!

Alright, alright, let’s move on to the real deal. Now we are going to create a Mongoose reference to the MongoDB database and create some models. Mongoose is the Object Document Mapper (ODM) for MongoDB in the NodeJS environment. It is really easy to use, and the documentation is really good. Let me tell you a super-secret: read the documentation, always read the documentation. Documentation is your best friend. 🤣

So first let's install the mongoose package inside the project folder.

$ npm install mongoose --save

Now let’s create the database connection file index.js in /database folder.

const mongoose = require("mongoose");
const config = require("../config");
const dbPath = config.MONGO_URI;
const chalk = require("chalk");

mongoose
  .connect(dbPath, {
    useNewUrlParser: true,
    useUnifiedTopology: true,
    useCreateIndex: true,
    useFindAndModify: false
  })
  .then(() => {
    console.log(chalk.yellow("[!] Successfully connected to the database"));
  })
  .catch(err => {
    console.log(chalk.red(err.message));
  });

const db = mongoose.connection;

db.on("error", () => {
  console.log(chalk.red("[-] Error occurred from the database"));
});

db.once("open", () => {
  console.log(
    chalk.yellow("[!] Successfully opened connection to the database")
  );
});

module.exports = mongoose;
/database/index.js file (Contains MongoDB configurations)

Now let’s update the /config/index.js file.

const fs = require("fs");

const SERVER_CERT = fs.readFileSync(__dirname + "/server.cert", "utf8");
const SERVER_KEY = fs.readFileSync(__dirname + "/server.key", "utf8");

module.exports = {
  SERVER_CERT,
  SERVER_KEY,
  HTTP_PORT: 8080,
  HTTPS_PORT: 8081,
  MONGO_URI:
    "mongodb+srv://your_user_name:your_password@cluster0-v6q0g.mongodb.net/students"
};
/config/index.js file (Contains configurations of the project)

Remember to change the MONGO_URI according to the one you obtained from MongoDB Cloud Atlas instance.

Now let’s create a simple mongoose model. Create index.js inside /models folder.

const mongoose = require("../database");
const Schema = mongoose.Schema;

const studentSchema = new Schema(
  {
    name: { type: mongoose.SchemaTypes.String },
    city: { type: mongoose.SchemaTypes.String }
  },
  { strict: true, timestamps: true }
);

const collectionName = "student";

const Student = mongoose.model(collectionName, studentSchema, collectionName);

module.exports = {
  Student
};
/models/index.js file (Contains Mongoose model schemas)

As simple as that. Now let’s create a simple service to create a student using the endpoint. Create index.js file inside /services folder.

const { Student } = require("../models");

module.exports = class StudentService {
  async registerStudent(data) {
    const { name, city } = data;

    const new_student = new Student({
      name,
      city
    });

    const response = await new_student.save();
    const res = response.toJSON();
    delete res.__v;
    return res;
  }
};
/services/index.js file

Simple, right? Now let’s use this service inside a controller. Remember the controller we created in the first article? Let’s update it.

const router = require("express").Router();
const asyncWrapper = require("../utilities/async-wrapper");
const StudentService = require("../services");
const studentService = new StudentService();

/** @route  GET /
 *  @desc   Root endpoint
 *  @access Public
 */
router.get(
  "/",
  asyncWrapper(async (req, res) => {
    res.send({
      message: "Hello World!",
      status: 200
    });
  })
);

/** @route  POST /register
 *  @desc   Register a student
 *  @access Public
 */
router.post(
  "/register",
  asyncWrapper(async (req, res) => {
    const response = await studentService.registerStudent(req.body);
    res.send(response);
  })
);

module.exports = router;
/controllers/index.js file (Contains Express Router to handle requests)

It’s not over yet. If you run the application as it is now and send a POST request to the /register endpoint, it will just return a big error message, because our application still doesn’t know how to parse a JSON payload. It will just complain that req.body is undefined. So let’s teach our web application how to parse a JSON payload. It is not much, but it’s honest work. We will use a simple Express middleware called body-parser for this. Now let’s set it up.

First, install the following packages inside the project folder.

$ npm install body-parser helmet --save

Create the file common.js inside /middleware folder.

const bodyParser = require("body-parser");
const helmet = require("helmet");

module.exports = app => {
  app.use(bodyParser.json());
  app.use(helmet());
};
/middleware/common.js file (Contains all the common middleware for the Express app)

body-parser is the middleware that parses the body payload, and Helmet secures your web application by setting various security HTTP headers.
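A side note (an alternative sketch, not what this series uses): since Express 4.16, JSON body parsing is built into Express itself, so the same middleware file could skip body-parser entirely.

```javascript
// /middleware/common.js, sketched with Express's built-in parser instead of body-parser
const helmet = require("helmet");

module.exports = app => {
  app.use(require("express").json()); // equivalent to bodyParser.json()
  app.use(helmet());
};
```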

After that let’s export our middleware as a combined middleware. Now if we want to add new middleware all we have to do is update the common.js file. Create index.js file inside /middleware folder.

const CommonMiddleware = require("./common");

const Middleware = app => {
  CommonMiddleware(app);
};

module.exports = Middleware;
/middleware/index.js file (Exports all the common middleware)

We are not done yet. Now we have to include this main middleware file inside the index.js root file. Remember we created the index.js file inside the project root folder. Let’s update it.

const express = require("express");
const chalk = require("chalk");
const http = require("http");
const https = require("https");
const config = require("./config");

const HTTP_PORT = config.HTTP_PORT;
const HTTPS_PORT = config.HTTPS_PORT;
const SERVER_CERT = config.SERVER_CERT;
const SERVER_KEY = config.SERVER_KEY;

const app = express();
const Middleware = require("./middleware");
const MainController = require("./controllers");

Middleware(app);
app.use("", MainController);
app.set("port", HTTPS_PORT);

/**
 * Create HTTPS Server
 */

const server = https.createServer(
  {
    key: SERVER_KEY,
    cert: SERVER_CERT
  },
  app
);

const onError = error => {
  if (error.syscall !== "listen") {
    throw error;
  }

  const bind =
    typeof HTTPS_PORT === "string"
      ? "Pipe " + HTTPS_PORT
      : "Port " + HTTPS_PORT;

  switch (error.code) {
    case "EACCES":
      console.error(chalk.red(`[-] ${bind} requires elevated privileges`));
      process.exit(1);
      break;
    case "EADDRINUSE":
      console.error(chalk.red(`[-] ${bind} is already in use`));
      process.exit(1);
      break;
    default:
      throw error;
  }
};

const onListening = () => {
  const addr = server.address();
  const bind = typeof addr === "string" ? `pipe ${addr}` : `port ${addr.port}`;
  console.log(chalk.yellow(`[!] Listening on HTTPS ${bind}`));
};

server.listen(HTTPS_PORT);
server.on("error", onError);
server.on("listening", onListening);

/**
 * Create HTTP Server (HTTP requests will be 301 redirected to HTTPS)
 */
http
  .createServer((req, res) => {
    res.writeHead(301, {
      Location:
        "https://" +
        req.headers["host"].replace(
          HTTP_PORT.toString(),
          HTTPS_PORT.toString()
        ) +
        req.url
    });
    res.end();
  })
  .listen(HTTP_PORT)
  .on("error", onError)
  .on("listening", () =>
    console.log(chalk.yellow(`[!] Listening on HTTP port ${HTTP_PORT}`))
  );

module.exports = app;
/index.js file (Contains the Express app configuration)

Well done, people, we have now finished setting up the MongoDB connection and the model. So how do we test this? It is so simple; we just have to use a super easy tool called Postman.

Testing API endpoints with Postman

I hope you all know what Postman does. (Not the one that delivers letters, ofc.) Install Postman and fire it up.


Figure 1: Postman Settings

As in figure 1, turn off SSL certificate validation in the settings, since we only have a self-signed SSL certificate. (Remember tutorial one.) After that, we are ready to go.


Figure 2: Request creation in Postman

As in figure 2, create your JSON body request. Add your name and city. Then click Send.


Figure 3: Response to the request

If everything goes as it should, you’ll see this response. If you do, voila, everything works correctly. Now let’s take a look at MongoDB Compass.


Figure 4: MongoDB Compass view

As in figure 4, you’ll see how it appears in the database. Here, the MongoDB database is “students”, the collection is “student”, and the document is the one shown in the view.

Now that was easy right? So that’s it for this tutorial. In the coming tutorial, I’ll add some more CRUD operations to give you more details on working with MongoDB. All the code is saved in the GitHub repo and matched with the commit message. Look for the commit message “Tutorial 2 checkpoint”.

Niweera/node-is-simple

Until we meet again, happy coding…

Read More
New Node.js 14 features will see it disrupt AI, IoT and more, Node latest version report

New or latest Node.js features aren’t the usual selling point of this platform. Node.js is primarily well-known for its speed and simplicity, which is why so many companies are willing to give it a shot. However, with the release of the new LTS (long-term support) Node.js 14 version, Node.js gains a lot of new features every Node.js developer can be excited about. Why? Because the new Node.js features added in versions 12 through 14, and the possibilities they create, are simply that amazing! Find out more in this Node latest version report.

This article has been updated to reflect the latest changes added in Node.js 14. What you can find here is a thorough overview of the latest Node.js features added in versions 12 through 14.

Node + Web Assembly = <3

Web Assembly is slowly gaining in popularity. More and more Node modules are experimenting with this language, and so is Node itself! With the Node 14 release, we gain access to the experimental Web Assembly System Interface (WASI).

The main idea behind this is to provide us with an easier way to access the underlying operating system. I’m really looking forward to seeing more Web Assembly in Node.

V8 8.1 is here!

I did mention that the new Node comes with V8 8.1. This gives us not only access to private fields but also some performance optimizations.

Awaits should work much faster, as should JS parsing.

Our apps should load quicker and asyncs should be much easier to debug, because we’re finally getting stack traces for them.

What’s more, the heap size is changing. Until now, it was either 700 MB (for 32-bit systems) or 1400 MB (for 64-bit). With the new changes, it’s based on the available memory!

With the latest Node version 14, we’re getting access to the newest V8. This means that we’re getting some popular features of the JavaScript engine.

If you’re using TypeScript, you have probably tried nullish coalescing and optional chaining. Both of those are natively supported by Node 14.
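Both operators are easy to demo; the results below follow directly from their semantics:

```javascript
const config = { timeout: 0, retries: undefined };

// ?? falls back only on null/undefined, unlike || which also rejects 0 and ""
const timeout = config.timeout ?? 500; // 0, not 500
const retries = config.retries ?? 3;   // 3

// ?. short-circuits to undefined instead of throwing on a missing path
const region = config.server?.region;  // undefined, no TypeError
```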

We’re also getting a few updates to Intl. The first one is support for Intl.DisplayNames and the second one is support for calendar and numberingSystem for Intl.DateTimeFormat.
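A short sketch of both Intl additions (assuming your Node build ships with full ICU, the default since Node 13; the outputs come from locale data, so exact values aren't shown):

```javascript
// Intl.DisplayNames: translate region/language/script codes into readable names
const regionNames = new Intl.DisplayNames(["en"], { type: "region" });
const usName = regionNames.of("US"); // a string such as "United States"

// Intl.DateTimeFormat: calendar and numberingSystem are now valid options
const fmt = new Intl.DateTimeFormat("en", {
  calendar: "buddhist",
  numberingSystem: "latn"
});
const formatted = fmt.format(new Date(2020, 0, 1));
console.log(usName, formatted);
```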

Stable Diagnostic Reports

In Node 12, we got a new experimental feature called Diagnostic Reports. By using it, we can easily get a report that contains information about the current system. What’s more, we can generate it not only on demand but also after certain events.

If you have a Node app running in production, then this is something you should check out.

Threads are almost stable!

With the last LTS, we got access to threads. Of course, it was an experimental feature and required a special flag called --experimental-worker for it to work.

With the upcoming LTS (Node 12) it’s still experimental, but won’t require a flag anymore. We are getting closer to a stable version!

ES modules support

Let’s face it, ES modules are currently the way to go in JavaScript development. We use them in our frontend apps. We use them in our desktop and even mobile apps. And yet, in the case of Node, we were stuck with CommonJS modules.

Of course, we could use Babel or TypeScript, but since Node.js is a backend technology, the only thing we should care about is the Node version installed on the server. We don’t need to care about multiple different browsers and support for them, so what’s the point of installing a tool that was made precisely with that in mind (Babel/Webpack, etc.)?

With Node 10, we could finally play a little with ES modules (the current LTS has an experimental implementation of modules), but it required us to use a special file extension: .mjs (module JavaScript).

With Node 12, it’s getting a little easier to work with. Much like with web apps, we get a special property called type that defines whether code should be treated as CommonJS or as an ES module.

The only thing you need to do to treat all your files as modules is to add the property type with the value module to your package.json.

{
  "type": "module"
}

From now on, if this package.json is the closest one to our .js file, that file will be treated as a module. No more .mjs (though we can still use it if we want to)!

So, what if we wanted to use some CommonJS code?

As long as the closest package.json does not contain "type": "module", it will be treated as CommonJS code.

What’s more, we are getting a new extension: .cjs – a CommonJS file.

Every .mjs file is treated as a module and every .cjs file as a CommonJS file.

If you didn’t have a chance to try it out, now is the time!


JS and private variables

When it comes to JavaScript, we have always struggled to protect some data in our classes/functions from the outside.

JS is famous for its monkey patching, meaning we could always somehow access almost everything.

We tried closures, symbols, and more to simulate private-like variables. Node 12 ships with the new V8, and so we’ve got access to one cool feature – private properties in classes.

I’m sure you all remember the old approach to privates in Node:

class MyClass {
  constructor() {
    this._x = 10
  }
  
  get x() {
    return this._x
  }
}

We all know it’s not really private – we are still able to access it anyway, but most IDEs treated it like a private field and most Node devs knew about this convention. Finally, we can all forget about it.

class MyClass {
  #x = 10
  
  get x() {
    return this.#x
  }
}

Can you see the difference? Yes, we use the # character to tell Node that this variable is private and we want it to be accessible only from the inside of the class.

Try to access it directly, and you’ll get an error saying the variable does not exist.

Sadly, some IDEs do not recognize private fields as proper variables yet.
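Here is a small self-contained sketch of private fields in action (the Counter class and its names are made up purely for illustration):

```javascript
class Counter {
  // #count is truly private: invisible and inaccessible from outside the class
  #count = 0;

  increment() {
    this.#count += 1;
    return this.#count;
  }

  get value() {
    return this.#count;
  }
}

const counter = new Counter();
console.log(counter.increment()); // 1
// counter.#count outside the class body is a SyntaxError, not merely undefined
console.log(Object.keys(counter)); // [] — nothing leaks as an own property
```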

Flat and flatMap

With Node 12, we’re getting access to new JavaScript features.

First of all, we’re getting access to new array methods – flat and flatMap. The first one is similar to Lodash’s flattenDepth method.

If we pass a nested array to it, we get a flattened array as a result.

[10, [20, 30], [40, 50, [60, 70]]].flat() // => [10, 20, 30, 40, 50, [60, 70]]
[10, [20, 30], [40, 50, [60, 70]]].flat(2) // => [10, 20, 30, 40, 50, 60, 70]

As you can see, it also has a special parameter – depth. By using it, you can decide how many levels down you want to flatten.

The second one – flatMap – works just like map, followed by flat 🙂
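A quick sketch of the difference (the sample data is made up):

```javascript
const sentences = ['hello world', 'so long'];

// map alone leaves us with an array of arrays…
const nested = sentences.map(s => s.split(' '));
console.log(nested); // [ [ 'hello', 'world' ], [ 'so', 'long' ] ]

// …while flatMap maps and then flattens one level in a single pass
const words = sentences.flatMap(s => s.split(' '));
console.log(words); // [ 'hello', 'world', 'so', 'long' ]
```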

Optional catch binding

Another new feature is optional catch binding. Until now we always had to define an error variable for try – catch.

try {
  someMethod()
} catch(err) {
  // err is required
}

With Node 12 we still can’t skip the entire catch clause, but we can at least skip the variable.

try {
  someMethod()
} catch {
  // err is optional
}
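This shines in cases where we only care whether an operation succeeded, not why it failed — for example, a hypothetical JSON validator:

```javascript
function isValidJson(text) {
  try {
    JSON.parse(text);
    return true;
  } catch { // no unused (err) binding required anymore
    return false;
  }
}

console.log(isValidJson('{"ok": true}')); // true
console.log(isValidJson('not json'));     // false
```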

Object.fromEntries

Another new JavaScript feature is the Object.fromEntries method. Its main use is to create an object either from a Map or from a key/value array.

Object.fromEntries(new Map([['key', 'value'], ['otherKey', 'otherValue']]));
// { key: 'value', otherKey: 'otherValue' }


Object.fromEntries([['key', 'value'], ['otherKey', 'otherValue']]);
// { key: 'value', otherKey: 'otherValue' }
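Paired with Object.entries, it makes object transformations pleasant — a small sketch with made-up data:

```javascript
const prices = { apple: 100, banana: 50 };

// entries -> transform each [key, value] pair -> back into an object
const halfPrice = Object.fromEntries(
  Object.entries(prices).map(([name, price]) => [name, price / 2])
);

console.log(halfPrice); // { apple: 50, banana: 25 }
```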

The new Node.js is all about threads!

If there is one thing we can all agree on, it’s that every programming language has its pros and cons. Most popular technologies have found their own niche in the world of technology. Node.js is no exception.

We’ve been told for years that Node.js is good for API gateways and real-time dashboards (e.g. with websockets). As a matter of fact, its design itself forced us to depend on the microservice architecture to overcome some of its common obstacles.

At the end of the day, we knew that Node.js was simply not meant for time-consuming, CPU-heavy computation or blocking operations due to its single-threaded design. This is the nature of the event loop itself.

If we block the loop with a complex synchronous operation, it won’t be able to do anything until it’s done. That’s the very reason we use async so heavily or move time-consuming logic to a separate microservice.
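The blocking behaviour is easy to demonstrate in a few lines (the 200 ms duration is arbitrary):

```javascript
const start = Date.now();

// A 0 ms timer should fire "immediately"…
setTimeout(() => {
  console.log(`timer fired after ~${Date.now() - start} ms`);
}, 0);

// …but a synchronous busy-loop starves the event loop until it finishes,
// so the callback above cannot run for at least 200 ms
while (Date.now() - start < 200) { /* burn CPU */ }
```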

This workaround may no longer be necessary thanks to new Node.js features that debuted in version 10. The tool that will make the difference is worker threads. Finally, Node.js will be able to excel in fields where we would normally use a different language.

A good example could be AI, machine learning or big data processing. All of those require CPU-heavy computation, which previously left us no choice but to build another service or pick a better-suited language. No more.

Threads!? But how?

This new Node.js feature is still experimental – it’s not meant to be used in a production environment just yet. Still, we are free to play with it. So where do we start?

Starting from Node 12, we no longer need to use the special feature flag --experimental-worker. Workers are on by default!

node index.js

Now we can take full advantage of the worker_threads module. Let’s start with a simple HTTP server with two methods:

→ GET /hello (returning JSON object with “Hello World” message),

→ GET /compute (loading a big JSON file multiple times using a synchronous method).

const express = require('express');
const fs = require('fs');

const app = express();

app.get('/hello', (req, res) => {
  res.json({
    message: 'Hello world!'
  })
});

app.get('/compute', (req, res) => {
  let json = {};
  for (let i=0;i<100;i++) {
    json = JSON.parse(fs.readFileSync('./big-file.json', 'utf8'));
  }

  json.data.sort((a, b) => a.index - b.index);

  res.json({
    message: 'done'
  })
});

app.listen(3000);

The results are easy to predict. When GET /compute and GET /hello are called simultaneously, we have to wait for the compute path to finish before we can get a response from the hello path. The event loop is blocked until the file loading is done.

Let’s fix it with threads!

const express = require('express');
const fs = require('fs');
const { Worker, isMainThread, parentPort, workerData } = require('worker_threads');

if (isMainThread) {
  console.log("Spawn http server");

  const app = express();

  app.get('/hello', (req, res) => {
    res.json({
      message: 'Hello world!'
    })
  });

  app.get('/compute', (req, res) => {

    const worker = new Worker(__filename, {workerData: null});
    worker.on('message', (msg) => {
      res.json({
        message: 'done'
      });
    })
    worker.on('error', console.error);
    worker.on('exit', (code) => {
      if (code !== 0)
        console.error(new Error(`Worker stopped with exit code ${code}`));
    });
  });

  app.listen(3000);
} else {
  let json = {};
  for (let i=0;i<100;i++) {
    json = JSON.parse(fs.readFileSync('./big-file.json', 'utf8'));
  }

  json.data.sort((a, b) => a.index - b.index);

  parentPort.postMessage({});
}

As you can see, the syntax is very similar to what we know from Node.js scaling with Cluster. But the interesting part begins here.

Try to call both paths at the same time. Noticed something? Indeed, the event loop is no longer blocked so we can call /hello during file loading.

Now, this is something we have all been waiting for! All that’s left is to wait for a stable API.

Want even more new Node.js features? Here is an N-API for building C/C++ modules!

The raw speed of Node.js is one of the reasons we choose this technology. Worker threads are the next step to improve it. But is it really enough?

Node.js is built on C and C++ (libuv and V8). Naturally, we use JavaScript as the main programming language. But what if we could use C or C++ for more complex computation?

Node.js 10 gives us a stable N-API. It’s a standardized API for native modules, making it possible to build modules in C/C++ or even Rust. Sounds cool, doesn’t it?

Building native Node.js modules in C/C++ has just got way easier

A very simple native module can look like this:

#include <napi.h>
#include <math.h>

namespace helloworld {
    Napi::Value Method(const Napi::CallbackInfo& info) {
        Napi::Env env = info.Env();
        return Napi::String::New(env, "hello world");
    }

    Napi::Object Init(Napi::Env env, Napi::Object exports) {
        exports.Set(Napi::String::New(env, "hello"),
                    Napi::Function::New(env, Method));
        return exports;
    }

    NODE_API_MODULE(NODE_GYP_MODULE_NAME, Init)
}

If you have a basic knowledge of C++, it’s not too hard to write a custom module. The only thing you need to remember is to convert C++ types to Node.js types at the end of your module.

The next thing we need is a binding.gyp file:

{
    "targets": [
        {
            "target_name": "helloworld",
            "sources": [ "hello-world.cpp"],
            "include_dirs": ["<!@(node -p \"require('node-addon-api').include\")"],
            "dependencies": ["<!(node -p \"require('node-addon-api').gyp\")"],
            "defines": [ 'NAPI_DISABLE_CPP_EXCEPTIONS' ]
        }
    ]
}

This simple configuration allows us to build *.cpp files, so we can later use them in Node.js apps.

Before we can make use of it in our JavaScript code, we have to build it and configure our package.json to look for gypfile (binding file).

{
  "name": "n-api-example",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "install": "node-gyp rebuild"
  },
  "gypfile": true,
  "keywords": [],
  "author": "",
  "license": "ISC",
  "dependencies": {
    "node-addon-api": "^1.5.0",
    "node-gyp": "^3.8.0"
  }
}

Once the module is good to go, we can use the node-gyp rebuild command to build and then require it in our code. Just like any popular module we use!

const addon = require('./build/Release/helloworld.node');

console.log(addon.hello());

Together with worker threads, N-API gives us a pretty good set of tools to build high-performance apps. Forget APIs or dashboards – even complex data processing or machine learning systems are far from impossible. Awesome!

Full support for HTTP/2 in Node.js? Sure, why not!

We’re able to compute faster. We’re able to compute in parallel. So how about assets and pages serving?

For years, we were stuck with the good old http module and HTTP/1.1. As more and more assets are being served by our servers, we increasingly struggle with loading times. Every browser has a maximum number of simultaneous persistent connections per server/proxy, especially for HTTP/1.1. With HTTP/2 support, we can finally kiss this problem goodbye.

So where do we start? Do you remember this basic Node.js server example from every tutorial on the web ever? Yep, this one:

const http = require('http');

http.createServer(function (req, res) {
  res.write('Hello World!');
  res.end();
}).listen(3000);

With Node.js 10, we get a new http2 module allowing us to use HTTP/2.0! Finally!

const http2 = require('http2');
const fs = require('fs');

const options = {
  key: fs.readFileSync('example.key'),
  cert: fs.readFileSync('example.crt')
};

http2.createSecureServer(options, function (req, res) {
  res.write('Hello World!');
  res.end();
}).listen(3000);

Full HTTP/2 support in Node.js 10 is what we have all been waiting for

With these new features, the future of Node.js is bright

The new Node.js features bring fresh air to our tech ecosystem. They open up completely new possibilities for Node.js. Have you ever imagined that this technology could one day be used for image processing or data science? Neither have I.

The latest Node version gives us even more long-awaited features, such as support for ES modules (still experimental, though) or fs methods that finally use promises rather than callbacks.


As you can see from the chart below, the popularity of Node.js seems to have peaked in early 2017, after years and years of growth. It’s not really a sign of slowdown, but rather of maturation of this technology.

Figure: Popularity of Node.js over time (peaked in early 2017)

However, I can definitely see how all of these new improvements, as well as the growing popularity of Node.js blockchain apps (based on the truffle.js framework), may give Node.js a further boost so that it can blossom again – in new types of projects, roles and circumstances.

With the release of Node.js 14, the future is looking brighter and brighter for Node.js development!

Read More
Node is Simple — Part 1

tl;dr: This is the first article of the Node is Simple article series. In this article series, I will be discussing how to create a simple and secure NodeJS, Express, MongoDB web application.

First of all, let me give a big shout out to JavaScript, which turns 25 years old this year. Without JavaScript, the world would be a much darker place indeed. 😁 In this article series, what I am going to do is create an API with NodeJS, ExpressJS, and MongoDB. I know there is a vast ocean of tutorials out there describing how to build an API with these technologies, but the thing is that I have never found one comprehensive, all-in-one tutorial that covers all of the following things.

  1. Setting up a basic NodeJS, Express web app with SSL/TLS.
  2. Setting up ESLint in your favorite editor or IDE.
  3. Adding MongoDB as the database.
  4. Creating basic CRUD endpoints and testing with Postman.
  5. Uploading files and viewing them using MongoDB GridFS.
  6. Creating custom middleware with Express.
  7. Adding logging to the web application.
  8. Securing endpoints with JWT authentication.
  9. Validating the input using @Hapi/Joi.
  10. Adding an OpenAPI Swagger Documentation.
  11. Caching the responses with Redis.
  12. Load balancing with PM2.
  13. Testing the API using Chai and Mocha.
  14. Creating a CI/CD pipeline.
  15. Deploying to your favorite platform.

Well if you have done all of these things with your web application, it would be awesome. Since I learned all of these the hard way, I want to share them with all of you. Because sharing is caring 😇.

Since this tutorial is a series, I won’t make each part a long, boring read. In this first article, I will describe how to set up a simple NodeJS and Express app with SSL/TLS. Just that, nothing else.


On your feet soldier, we are starting.

First of all, let’s create the folder structure of our web application.


Figure 1: node-is-simple folder structure

As shown in figure 1, you need to create the folders and the index.js file. If you use Linux or Git Bash on Windows, let’s make a simple script to do this, so you won’t have to do it again when you need to create another application. (We are so lazy, aren’t we? 😂)

#!/usr/bin/env bash

############################################################
# Remember to create a folder for your project first       #
# And run `npm init` to initialize a node project          #
# Inside that project folder run this bootstrap.sh script  #
############################################################

# Create the folders
mkdir config controllers errors middleware models services swagger utilities database

# Create the index.js file
touch index.js

############################################################
# Remember to check if you have node and npm installed     #
# I assume you have installed node and npm                 #
############################################################

# Install required packages
npm install express chalk --save

#That's it folks!

Let’s go through it again.

First, you need to create a folder for your project and run npm init to initialize your application. With this, you can give your project a great name, pick your license of preference, and much more. Ah, I almost forgot: you need to check that you have a current version of Node and NPM installed on your machine. I hope you know how to install NodeJS on your computer. If not, please refer to the following links.

How to Download & Install Node.js - NPM on Windows

How To Install Node.js on Ubuntu 18.04

After doing that just run this bootstrap.sh bash script inside your project folder and it will create a simple Node, Express application. So simple right?

After creating the necessary folder structure let’s set up the Express application. Let’s go to the index.js file on the root folder and add these lines.

const express = require("express");
const chalk = require("chalk");
const http = require("http");
const https = require("https");
const config = require("./config");

const HTTP_PORT = config.HTTP_PORT;
const HTTPS_PORT = config.HTTPS_PORT;
const SERVER_CERT = config.SERVER_CERT;
const SERVER_KEY = config.SERVER_KEY;

const app = express();
const MainController = require("./controllers");

app.use("/", MainController);
app.set("port", HTTPS_PORT);

/**
 * Create HTTPS Server
 */

const server = https.createServer(
  {
    key: SERVER_KEY,
    cert: SERVER_CERT
  },
  app
);

const onError = error => {
  if (error.syscall !== "listen") {
    throw error;
  }

  const bind =
    typeof HTTPS_PORT === "string"
      ? "Pipe " + HTTPS_PORT
      : "Port " + HTTPS_PORT;

  switch (error.code) {
    case "EACCES":
      console.error(chalk.red(`[-] ${bind} requires elevated privileges`));
      process.exit(1);
      break;
    case "EADDRINUSE":
      console.error(chalk.red(`[-] ${bind} is already in use`));
      process.exit(1);
      break;
    default:
      throw error;
  }
};

const onListening = () => {
  const addr = server.address();
  const bind = typeof addr === "string" ? `pipe ${addr}` : `port ${addr.port}`;
  console.log(chalk.yellow(`[!] Listening on HTTPS ${bind}`));
};

server.listen(HTTPS_PORT);
server.on("error", onError);
server.on("listening", onListening);

/**
 * Create HTTP Server (HTTP requests will be 301 redirected to HTTPS)
 */
http
  .createServer((req, res) => {
    res.writeHead(301, {
      Location:
        "https://" +
        req.headers["host"].replace(
          HTTP_PORT.toString(),
          HTTPS_PORT.toString()
        ) +
        req.url
    });
    res.end();
  })
  .listen(HTTP_PORT)
  .on("error", onError)
  .on("listening", () =>
    console.log(chalk.yellow(`[!] Listening on HTTP port ${HTTP_PORT}`))
  );

module.exports = app;

Don’t run this yet, since we haven’t set everything up. I know this is a bit much, but bear with me, I’ll explain everything. What we are doing here is: first we create an HTTPS server, then we create an HTTP server and redirect all the HTTP traffic to the HTTPS server. Before anything else, we need to create certificates to use inside the HTTPS server. It is a little bit of a pain, but it’ll be worth it. This is a good resource about creating SSL certificates for your Node application.

How to Use SSL/TLS with Node.js

I am a little bit of a lazy person so I’ll just generate server.cert and server.key files using this link.

Self-Signed Certificate Generator

Inside the server name input, you just have to provide your domain name. For this purpose, I’ll use localhost as the domain name.

After generating the *.cert and *.key files copy them to the /config folder. And rename them to server.cert and server.key.

Now let’s import the certificates and make them useful. Inside the /config folder create the file index.js and add these lines.

const fs = require("fs");

const SERVER_CERT = fs.readFileSync(__dirname + "/server.cert", "utf8");
const SERVER_KEY = fs.readFileSync(__dirname + "/server.key", "utf8");

module.exports = {
  SERVER_CERT,
  SERVER_KEY,
  HTTP_PORT: 8080,
  HTTPS_PORT: 8081
};

Not so simple after all right? Well, let’s see.

Now that we have set up the certificates, let’s create a simple controller (endpoint) for our application and check it out. First, let’s go to the /controllers folder and create an index.js file. Inside that, add the following lines.

const router = require("express").Router();
const asyncWrapper = require("../utilities/async-wrapper");

/** @route  GET /
 *  @desc   Root endpoint
 *  @access Public
 */
router.get(
  "/",
  asyncWrapper(async (req, res) => {
    res.send({
      message: "Hello World!",
      status: 200
    });
  })
);

module.exports = router;

asyncWrapper what is that???

Let me tell you about that. Async Wrapper is a wrapper function that catches all the errors that happen inside your handlers and passes them to the error-handling middleware. I’ll explain this a little more when I am discussing Express middleware. Now let’s create this infamous asyncWrapper. Go to the /utilities folder and create the two following files.

**async-wrapper.js**

module.exports = requestHandler => (req, res, next) =>
  requestHandler(req, res).catch(next);

**async-wrapper.d.ts** (Async wrapper type definition file)
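The contents of the type definition file didn’t make it into this post; a plausible sketch (the exact signature here is an assumption, written against the async-wrapper.js above using the @types/express definitions) would be:

```typescript
import { Request, Response, NextFunction, RequestHandler } from "express";

// Wraps an async request handler so rejected promises are forwarded to next()
declare function asyncWrapper(
  requestHandler: (req: Request, res: Response) => Promise<unknown>
): RequestHandler;

export = asyncWrapper;
```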

We have done it, folks, we have done it. Now let’s check this beautiful Express app at work. Go to the project folder and, in the terminal (or Git Bash on Windows), run the following line.

node index.js

After that let’s fire up your favorite browser (mine is Chrome 😁) and hit the endpoint at,

https://localhost:8081/

Don’t worry if your browser says it is not secure to go to this URL. You trust yourself, don’t you? So let’s accept the risk and go ahead.

1 -j7gNIpPkuqIQj OQf9Y0A

This is the warning I told you about. (FYI, this screenshot is from Firefox)

Now if you can see something like this,

1 UNQTECXa-hXe2YM4ANG0DQ

Response from [https://localhost:8081](https://localhost:8081)

You have successfully created a NodeJS, Express application. Also since we have set up an HTTP server, you can try this URL too. http://localhost:8080

If you go to this URL, you’ll see that you’ll be redirected to https://localhost:8081

Now that’s the magic with our Express application. Awesome right? So this is it for this tutorial. In the next tutorial, I’ll tell you all about adding MongoDB as the database.

https://github.com/Niweera/node-is-simple

This is the GitHub repo for this tutorial and I’ll update this repo as the tutorial grows. Check for the commit messages for the snapshot of the repo for a particular tutorial. If you have any queries, don’t forget to hit me up on any social media as shown on my website.

https://niweera.gq/

So until we meet again with part two of this tutorial series, happy coding…

Read More
Running Your Node.js App With Systemd - Part 1

You've written the next great application, in Node, and you are ready to unleash it upon the world. Which means you can no longer run it on your laptop, you're going to actually have to put it up on some server somewhere and connect it to the real Internet. Eek.

There are a lot of different ways to run an app in production. This post is going to cover the specific case of running something on a "standard" Linux server that uses systemd, which means that we are not going to be talking about using Docker, AWS Lambda, Heroku, or any other sort of managed environment. It's just going to be you, your code, and a terminal with an ssh session, my friend.

Before we get started though, let's talk for just a brief minute about what systemd actually is and why you should care.

What is systemd Anyway?

The full answer to this question is big, as in, "ginormous" sized big. So we're not going to try to answer it fully, since we want to get on to the part where we can launch our app. What you need to know is that systemd is a thing that runs on "new-ish" Linux servers and is responsible for starting / stopping / restarting programs for you. If you install mysql, for example, and whenever you reboot the server you find that mysql is already running, that happens because systemd knows to turn mysql on when the machine boots up.

This systemd machinery has replaced older systems such as init and upstart on "new-ish" Linux systems. There is a lot of arguably justified angst in the world about exactly how systemd works and how intrusive it is to your system. We're not here to discuss that, though. If your system is "new-ish", it's using systemd, and that's what we're all going to be working with for the foreseeable future.

What does "new-ish" mean specifically? If you are using any of the following, you are using systemd:

  • CentOS 7 / RHEL 7
  • Fedora 15 or newer
  • Debian Jessie or newer
  • Ubuntu Xenial or newer

Running our App Manually

I'm going to assume you have a fresh installation of Ubuntu Xenial to work with, and that you have set up a default user named ubuntu that has sudo privileges. This is what the default will be if you spin up a Xenial instance in Amazon EC2. I'm using Xenial because it is currently the newest LTS (Long Term Support) version available from Canonical. Ubuntu Yakkety is available now, and is even newer, but Xenial is quite up-to-date at the time of this writing and will be getting security updates for many years to come because of its LTS status.

Use ssh with the ubuntu user to get into your server, and let's install Node.

$ sudo apt-get -y install curl
$ curl -sL https://deb.nodesource.com/setup_6.x | sudo bash -
$ sudo apt-get -y install nodejs

Next let's create an app and run it manually. Here's a trivial app I've written that simply echoes out the user's environment variables.

const http = require('http');

const hostname = '0.0.0.0';
const port = process.env.NODE_PORT || 3000;
const env = process.env;

const server = http.createServer((req, res) => {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  for (var k in env) {
    res.write(k + ": " + env[k] + "\n");
  }
  res.end();
});

server.listen(port, hostname, () => {
  console.log("Server running at http://" + hostname + ":" + port + "/");
});

Using your text editor of choice (which should obviously be Emacs but I suppose it's a free country if you want to use something inferior), create a file called hello_env.js in the user's home directory /home/ubuntu with the contents above. Next run it with

$ /usr/bin/node /home/ubuntu/hello_env.js

You should be able to go to

http://11.22.33.44:3000

in a web browser now, substituting 11.22.33.44 with whatever the actual IP address of your server is, and see a printout of the environment variables for the ubuntu user. If that is in fact what you see, great! We know the app runs, and we know the command needed to start it up. Go ahead and press Ctrl-c to close down the application. Now we'll move on to the systemd parts.

Creating a systemd Service File

The "magic" that's needed to make systemd start working for us is a text file called a service file. I say "magic" because for whatever reason, this seems to be the part that people block on when they are going through this process. Fortunately, it's much less difficult and scary than you might think.

We will be creating a file in a "system area" where everything is owned by the root user, so we'll be executing a bunch of commands using sudo. Again, don't be nervous, it's really very straightforward.

The service files for the things that systemd controls all live under the directory path

/lib/systemd/system

so we'll create a new file there. If you're using Nano as your editor, open up a new file there with:

sudo nano /lib/systemd/system/hello_env.service

and put the following contents in it:

[Unit]
Description=hello_env.js - making your environment variables rad
Documentation=https://example.com
After=network.target

[Service]
Environment=NODE_PORT=3001
Type=simple
User=ubuntu
ExecStart=/usr/bin/node /home/ubuntu/hello_env.js
Restart=on-failure

[Install]
WantedBy=multi-user.target

Let's go ahead and talk about what's in that file. In the [Unit] section, the Description and Documentation variables are obvious. What's less obvious is the part that says

After=network.target

That tells systemd that if it's supposed to start our app when the machine boots up, it should wait until after the main networking functionality of the server is online to do so. This is what we want, since our app can't bind to NODE_PORT until the network is up and running.

Moving on to the [Service] section we find the meat of today's project. We can specify environment variables here, so I've gone ahead and put in:

Environment=NODE_PORT=3001

so our app, when it starts, will be listening on port 3001. This is different from the default 3000 that we saw when we launched the app by hand. You can specify the Environment directive multiple times (one Environment= line per variable) if you need multiple environment variables. Next is

Type=simple

which tells systemd how our app launches itself. Specifically, it lets systemd know that the app won't try and fork itself to drop user privileges or anything like that. It's just going to start up and run. After that we see

User=ubuntu

which tells systemd that our app should be run as the unprivileged ubuntu user. You definitely want to run your apps as unprivileged users so that attackers can't aim at something running as the root user.

The last two parts here are maybe the most interesting to us

ExecStart=/usr/bin/node /home/ubuntu/hello_env.js
Restart=on-failure

First, ExecStart tells systemd what command it should run to launch our app. Then, Restart tells systemd under what conditions it should restart the app if it sees that it has died. The on-failure value is likely what you will want. Using this, the app will NOT restart if it goes away "cleanly". Going away "cleanly" means that it either exits by itself with an exit value of 0, or it gets killed with a "clean" signal, such as the default signal sent by the kill command. Basically, if our app goes away because we want it to, then systemd will leave it turned off. However, if it goes away for any other reason (an unhandled exception crashes the app, for example), then systemd will immediately restart it for us. If you want it to restart no matter what, change the value from on-failure to always.

Last is the [Install] stanza. We're going to gloss over this part as it's not very interesting. It tells systemd how to handle things if we want to start our app on boot, and you will probably want to use the values shown for most things until you are a more advanced systemd user.

Using systemctl To Control Our App

The hard part is done! We will now learn how to use the system-provided tools to control our app. To begin with, enter the command

$ sudo systemctl daemon-reload

You have to do this whenever any of the service files change at all so that systemd picks up the new info.

Next, let's launch our app with

$ sudo systemctl start hello_env

After you do this, you should be able to go to

http://11.22.33.44:3001

in your web browser and see the output. If it's there, congratulations, you've launched your app using systemd! If the output looks very different than it did when you launched the app manually don't worry, that's normal. When systemd kicks off an application, it does so from a much more minimal environment than the one you have when you ssh into a machine. In particular, the $HOME environment variable may not be set by default, so be sure to pay attention to this if your app makes use of any environment variables. You may need to set them yourself when using systemd.

You may be interested in what state systemd thinks the app is in, and if so, you can find out with

$ sudo systemctl status hello_env

Now, if you want to stop your app, the command is simply

$ sudo systemctl stop hello_env

and unsurprisingly, the following will restart things for us

$ sudo systemctl restart hello_env

If you want to make the application start up when the machine boots, you accomplish that by enabling it

$ sudo systemctl enable hello_env

and finally, if you previously enabled the app, but you change your mind and want to stop it from coming up when the machine starts, you correspondingly disable it

$ sudo systemctl disable hello_env

Wrapping Up

That concludes today's exercise. There is much, much more to learn and know about systemd, but this should help get you started with some basics. In a follow up blog post, we will learn how to launch multiple instances of our app, and load balance those behind Nginx to illustrate a more production ready example.


This article was first published as a NodeSource blog post in November 2016

Read More
Configuring Your .npmrc for an Optimal Node.js Environment

This blog post was first published in March 2017.


For Node.js developers, npm is an everyday tool. It's literally something we interact with multiple times on a daily basis, and it's one of the pieces of the ecosystem that's led to the success of Node.js.

One of the most useful, important, and enabling aspects of the npm CLI is that it's highly configurable. It provides an enormous amount of configurability that enables everyone from huge enterprises to individual developers to use it effectively.

One part of this high-configurability is the .npmrc file. For a long time I'd seen discussion about it - the most memorable being the time I thought you could change the name of the node_modules directory with it. For a long time, I didn't truly understand just how useful the .npmrc file could be, or how to even use it.

So, today I've collected a few of the optimizations that .npmrc allows that have been awesome for speeding up my personal workflow when scaffolding out Node.js modules and working on applications long-term.

Automating npm init Just a Bit More

When you're creating a new module from scratch, you'll typically start out with the npm init command. One thing that some developers don't know is that you can actually automate this process fairly heftily with a few choice npm config set ... commands that set default values for the npm init prompts.

You can easily set your name, email, URL, license, and initial module version with a few commands:

npm config set init.author.name "Hiro Protagonist"
npm config set init.author.email "hiro@snowcrash.io"
npm config set init.author.url "http://hiro.snowcrash.io"
npm config set init.license "MIT"
npm config set init.version "0.0.1"

In the above example, I've set up some defaults for Hiro. This personal information won't change too frequently, so setting up some defaults is helpful and allows you to skip over entering the same information in manually every time.
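With those defaults in place, npm init pre-fills its prompts, and accepting them produces a package.json along these lines (the module name is illustrative):

```json
{
  "name": "my-module",
  "version": "0.0.1",
  "author": "Hiro Protagonist <hiro@snowcrash.io> (http://hiro.snowcrash.io)",
  "license": "MIT"
}
```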

Additionally, the above commands set up two defaults that are related to your module.

The first default is the initial license that will be automatically suggested by the npm init command. I personally like to default to MIT, and much of the rest of the Node.js ecosystem does the same. That said, you can set this to whatever you'd like - it's a nice optimization to just be able to nearly automatically select your license of choice.

The second default is the initial version. This is actually one that made me happy, as whenever I tried building out a module I never wanted it to start out at version 1.0.0, which is what npm init defaults to. I personally set it to 0.0.1 and then increment the version as I go with the npm version [ major | minor | patch ] command.

Change Your npm Registry

As time moves forward, we're seeing more options for registries arise. For example, you may want to set your registry to a cache of the modules you know you need for your apps. Or, you may be using Certified Modules as a custom npm registry. There's even a separate registry for Yarn, a topic that is both awesome and totally out of scope for this post.

So, if you'd like to set a custom registry, you can run a pretty simple one-line command:

npm config set registry "https://my-custom-registry.registry.nodesource.io/"

In this example, I've set the registry URL to an example of a Certified Modules registry - that said, the exact URL in the command can be replaced with any registry that's compatible. To reset your registry back to the default npm registry, you can simply run the same command pointing to the standard registry:

npm config set registry "https://registry.npmjs.com/"

Changing the console output of npm install with loglevel

When you run npm install, a bunch of information gets piped to you. By default, the npm command line tool limits how much of this information is actually output into the console when installing. There are varying degrees of output that you can assign at install, or by default, if you change it with npm config in your .npmrc file. The options, from least to most output, are: silent, error, warn, http, info, verbose, and silly.

Here's an example of the silent loglevel: npm install express --loglevel silent

And here's an example of the silly loglevel: npm install express --loglevel silly

If you'd like to get a bit more information (or a bit less, depending on your preferences) when you npm install, you can change the default loglevel.

npm config set loglevel="http"

If you tinker around with this config a bit and would like to reset to what the npm CLI currently defaults to, you can run the above command with warn as the loglevel:

npm config set loglevel="warn"

Change Where npm Installs Global Modules

This is a really awesome change - it has a few steps, but is really worth it. With a few commands, you can change where the npm CLI installs global modules by default. Normally, it installs them to a privileged system folder - this requires administrative access, meaning that a global install requires sudo access on UNIX-based systems.

If you change the default global prefix for npm to an unprivileged directory, for example, ~/.global-modules, you'll not need to authenticate when you install a global module. That's one benefit - another is that globally installed modules won't be in a system directory, reducing the likelihood of a malicious module (intentionally or not) doing something you didn't want it to on your system.

To get started, we're going to create a new folder called global-modules and set the npm prefix to it:

mkdir ~/.global-modules
npm config set prefix "~/.global-modules"

Next, if you don't already have a file called ~/.profile, create one in your home directory. Now, add the following line to the ~/.profile file:

export PATH=~/.global-modules/bin:$PATH

Adding that line to the ~/.profile file will add the global-modules directory to your PATH, and enable you to use it for npm global modules.

Now, flip back over to your terminal and run the following command to update the PATH with the newly updated file:

source ~/.profile
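To sanity-check that the new prefix is wired up, a quick sketch like the following can help (it assumes the steps above have been run, and the npm line is guarded so it is skipped when npm isn't available):

```shell
# Recreate the expected layout and PATH entry from the steps above.
mkdir -p "$HOME/.global-modules/bin"
export PATH="$HOME/.global-modules/bin:$PATH"

# The shell should now resolve global npm binaries from the new directory first:
echo "$PATH" | grep -q "global-modules/bin" && echo "prefix is on PATH"

# If npm is installed, confirm the prefix npm will use:
command -v npm >/dev/null && npm config get prefix || true
```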

Just one more thing...

If you'd like to keep reading about Node.js, npm, configuration options, and development with the Node.js stack, I've got some fantastic articles for you.

Our most recent guide is a deep-dive into the core concepts of the package.json file. You'll find a ton of info about package.json in there, including a ton of super helpful configuration information. We also published an absolute beginner's guide to npm that you may be interested in reading - even though it's a beginner's guide, I'd bet you'll find something useful in it.

With this article, the intent was to help you set up a great configuration for Node.js development. If you'd like to take the leap and ensure that you're always on a rock-solid platform when developing and deploying your Node.js apps, check out NodeSource Certified Modules - it's a new tool we launched last week that will help enable you to spend more time building apps and less time worrying about modules.


Read More
11 Simple npm Tricks That Will Knock Your Wombat Socks Off

This blog post was first published in March 2017.

Using npm effectively can be difficult. There are a ton of features built-in, and it can be a daunting task to try to approach learning them.

Personally, even learning and using just one of these tricks (npm prune, which is #4) saved me from getting rid of unused modules manually by deleting node_modules and re-installing everything with npm install. As you can probably imagine, that was insanely stressful.

We've compiled this list of 11 simple-to-use npm tricks that will allow you to speed up development using npm, no matter what project you're working on.

1. Open a package’s homepage

Run: npm home $package

Running the home command will open the homepage of the package you're running it against. Running against the lodash package will bring you to the Lodash website. This command can run without needing to have the package installed globally on your machine or within the current project.

2. Open package’s GitHub repo

Run: npm repo $package

Similar to home, the repo command will open the GitHub repository of the package you're running it against. Running against the express package will bring you to the official Express repo. Also like home, you don’t need to have the package installed.

3. Check a package for outdated dependencies

Run: npm outdated

You can run the outdated command within a project, and it will check the npm registry to see if any of your packages are outdated. It will print out a list in your command line of the current version, the wanted version, and the latest version.

Running npm outdated on a Node project

4. Check for packages not declared in package.json

Run: npm prune

When you run prune, the npm CLI will run through your package.json and compare it to your project’s /node_modules directory. It will print a list of modules that aren’t in your package.json.

The npm prune command then strips out those packages, and removes any you haven't manually added to package.json or that were npm installed without using the --save flag.

Running npm prune on a Node project

Update: Thanks to @EvanHahn for noticing a personal config setting that made npm prune provide a slightly different result than the default npm would provide!

5. Lock down your dependencies versions

Run: npm shrinkwrap

Using shrinkwrap in your project generates an npm-shrinkwrap.json file. This allows you to pin the dependencies of your project to the specific version you’re currently using within your node_modules directory. When you run npm install and there is a npm-shrinkwrap.json present, it will override the listed dependencies and any semver ranges in package.json.

If you need verified consistency across package.json, npm-shrinkwrap.json and node_modules for your project, you should consider using npm-shrinkwrap.

Running npm shrinkwrap on a Node project

6. Use npm v3 with Node.js v4 LTS

Run: npm install -g npm@3

Installing npm@3 globally with npm will update your npm v2 to npm v3, including on the Node.js v4 LTS release ("Argon"), which ships with the npm v2 LTS release. This will install the latest stable release of npm v3 within your v4 LTS runtime.

7. Allow npm install -g without needing sudo

Run: npm config set prefix $dir

After running the command, where $dir is the directory you want npm to install your global modules to, you won’t need to use sudo to install modules globally anymore. The directory you use in the command becomes your global bin directory.

The only caveat: you will need to make sure you adjust your user permissions for that directory with chown -R $USER $dir and you add $dir/bin to your PATH.
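Put together, the whole change is only a few lines; a sketch (the directory path is illustrative, and the npm line is guarded so it's a no-op if npm isn't on the PATH):

```shell
# Choose an unprivileged prefix, fix its ownership, and add its bin dir to PATH.
dir="$HOME/.npm-global"
mkdir -p "$dir/bin"
chown -R "$(id -un)" "$dir"
export PATH="$dir/bin:$PATH"

# Point npm's global prefix at the new directory:
command -v npm >/dev/null && npm config set prefix "$dir" || true
```

Remember to add the export PATH line to your shell profile so it survives new sessions.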

8. Change the default save prefix for all your projects

Run: npm config set save-prefix="~"

The tilde (~) is more conservative than what npm defaults to, the caret (^), when installing a new package with the --save or --save-dev flags. The tilde pins the dependency to the minor version, allowing patch releases to be installed with npm update. The caret pins the dependency to the major version, allowing minor releases to be installed with npm update.
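As an illustration of the difference, here is a tiny, hypothetical satisfies() helper (not npm's real resolver; real ranges are handled by the semver package) showing what each prefix accepts:

```javascript
// Simplified range check: "~1.2.3" allows patch bumps only,
// "^1.2.3" allows minor and patch bumps within the same major version.
function satisfies(range, version) {
  const op = range[0];
  const [bMaj, bMin, bPat] = range.slice(1).split('.').map(Number);
  const [maj, min, pat] = version.split('.').map(Number);
  if (maj !== bMaj) return false;
  if (op === '~') return min === bMin && pat >= bPat;
  if (op === '^') return min > bMin || (min === bMin && pat >= bPat);
  return false;
}

console.log(satisfies('~1.2.3', '1.2.9')); // true  (patch bump allowed)
console.log(satisfies('~1.2.3', '1.3.0')); // false (minor bump rejected)
console.log(satisfies('^1.2.3', '1.3.0')); // true  (minor bump allowed)
```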

9. Strip your project's devDependencies for a production environment

When your project is ready for production, make sure you install your packages with the added --production flag. The --production flag installs your dependencies, ignoring your devDependencies. This ensures that your development tooling and packages won’t go into the production environment.

Additionally, you can set your NODE_ENV environment variable to production to ensure that your project’s devDependencies are never installed.

10. Be careful when using .npmignore

If you haven't been using .npmignore, it defaults to .gitignore with a few additional sane defaults.

What many don't realize is that once you add a .npmignore file to your project, the .gitignore rules are (ironically) ignored. The result is that you will need to keep the two ignore files in sync to prevent sensitive leaks when publishing.
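For instance, if your .gitignore excludes a credentials file but a .npmignore exists without that rule, the credentials will be published to npm. The sensitive entries must appear in both (file names here are illustrative):

```text
# .gitignore
secrets.env
node_modules/

# .npmignore - must repeat the sensitive entries, or they ship with the package
secrets.env
test/
```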

11. Automate npm init with defaults

When you run npm init in a new project, you’re able to go through and set up your package.json’s details. If you want to set defaults that npm init will always use, you can use the config set command, with some extra arguments:

npm config set init.author.name $name
npm config set init.author.email $email

If, instead, you want to completely customize your init script, you can point to a self-made default init script by running

npm config set init-module ~/.npm-init.js

Here’s a sample script that prompts for private settings and creates a GitHub repo if you want. Make sure you change the default GitHub username (YOUR_GITHUB_USERNAME) as the fallback for the GitHub username environment variable.

// Note: `prompt`, `basename`, and `package` are provided in scope by npm's
// init-module machinery (init-package-json) when this file is evaluated.
var cp = require('child_process');
var priv;

var USER = process.env.GITHUB_USERNAME || 'YOUR_GITHUB_USERNAME';

module.exports = {

  name: prompt('name', basename || package.name),

  version: '0.0.1',

  private: prompt('private', 'true', function(val){
    return priv = (typeof val === 'boolean') ? val : !!val.match('true')
  }),

  create: prompt('create github repo', 'yes', function(val){
    val = val.indexOf('y') !== -1 ? true : false;

    if(val){
      console.log('enter github password:');
      cp.execSync("curl -u '"+USER+"' https://api.github.com/user/repos -d " +
        "'{\"name\": \""+basename+"\", \"private\": "+ ((priv) ? 'true' : 'false')  +"}' ");
      cp.execSync('git remote add origin '+ 'https://github.com/'+USER+'/' + basename + '.git');
    }

    return undefined;
  }),

  main: prompt('entry point', 'index.js'),

  repository: {
    type: 'git',
    url: 'git://github.com/'+USER+'/' + basename + '.git' },

  bugs: { url: 'https://github.com/'+USER+'/' + basename + '/issues' },

  homepage: "https://github.com/"+USER+"/" + basename,

  keywords: prompt(function (s) { return s.split(/\s+/) }),

  license: 'MIT',

  cleanup: function(cb){

    cb(null, undefined)
  }

}

One last thing...

If you want to learn more about npm, Node.js, JavaScript, Docker, Kubernetes, Electron, and tons more, you should follow @NodeSource on Twitter. We're always around and would love to hear from you!

Read More