List of useful articles and blog posts on Node.js

Exporting a web page as PDF or PNG from Node.js

This blog post was first published on the thedreaming.org blog.


I’m working on a reporting engine right now, and one of my requirements is to export reports as PDF files and send PDF reports via email. We also have a management page where you can see all the dashboards, and we’d like to render a PNG thumbnail of each dashboard to show there. Our back-end is all Node.js based, and you’re probably aware that Node is not known for its graphics prowess. All of our reports are currently generated by a React app, written with Gatsby.js, so what we’d like to do is load a report in a headless browser of some sort, and then either “print” it to PDF or capture a PNG image. We’re going to look at doing exactly this using Playwright and Browserless.

The basic idea here is we want an API endpoint we can call into as an authenticated user, and get it to render a PDF of a web page for us. To do this we’re going to use two tools. The first is playwright, which is a library from Microsoft for controlling a headless Chrome or Firefox browser. The second is browserless, a microservice which hosts one or more headless Chrome browsers for us.

If you have a very simple use case, you might actually be able to do this entirely with browserless, using its REST APIs. There’s a /pdf REST API which does pretty much what we want here: we can pass in cookies and custom headers to authenticate our user to the reporting backend, and we can set the width of the viewport to something reasonable (since of course our analytics page is responsive, and we want the desktop version in our PDF reports, not the mobile version).
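For example, you could call the /pdf API directly with curl. This is just a sketch: the host and token are placeholders, and the `options` object passes through the same options as Puppeteer’s `page.pdf()`.

```shell
curl -X POST "https://chrome.browserless.io/pdf?token=YOUR-API-TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com/", "options": {"printBackground": true}}' \
  -o page.pdf
```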

But, playwright offers us two nice features that make it desirable here. First it has a much more expressive API. It’s easy, for example, to load a page, wait for a certain selector to be present (or not be present) to signal that the page is loaded, and then generate a PDF or screenshot. Second, when you npm install playwright, it will download a copy of Chrome and Firefox automatically, so when we’re running this locally in development mode we don’t even need the browserless container running.

Setting up and configuring playwright

In our backend analytics project, we’re going to start out by installing playwright:

$ npm install playwright

You’ll notice the first time you do this, it downloads a few hundred megs of browser binaries and stores them (on my Mac) in ~/Library/Caches/ms-playwright. Later on, when we deploy our reporting microservice in a docker container, this is something we do not want to happen, because we don’t need a few hundred megs of browser binaries - that’s what browserless is for. So, in our backend project’s Dockerfile, we’ll modify our npm ci command:

RUN PLAYWRIGHT_SKIP_BROWSER_DOWNLOAD=1 npm ci && \
    rm -rf ~/.npm ~/.cache

If you’re not using docker, then however you run npm install in your backend, you want to add this PLAYWRIGHT_SKIP_BROWSER_DOWNLOAD environment variable to prevent all the browser binaries from being installed.

We do want playwright to use these browser binaries when it runs locally, but in production we want to use browserless. To control this, we’re going to set up an environment variable that controls playwright’s configuration:

// Set BROWSERLESS_URL environment variable to connect to browserless.
// e.g. BROWSERLESS_URL=wss://chrome.browserless.io?token=YOUR-API-TOKEN
const BROWSERLESS_URL = process.env.BROWSERLESS_URL;

let browser: pw.Browser | undefined;

export async function getBrowser() {
  if (!browser) {
    if (BROWSERLESS_URL) {
      // Connect to the browserless server.
      browser = await pw.chromium.connect({
        wsEndpoint: BROWSERLESS_URL,
      });
    } else {
      // In dev mode, just run a headless copy of Chrome locally.
      browser = await pw.chromium.launch({
        headless: true,
      });
    }

    // If the browser disconnects, create a new browser next time.
    browser.on("disconnected", () => {
      browser = undefined;
    });
  }

  return browser;
}

This creates a shared “browser” instance which we’ll use across multiple clients. Each client will create a “context”, which is sort of like an incognito window for that client. Contexts are cheap and lots can exist in parallel, whereas browser instances are expensive, so this lets us use a single instance of Chrome to serve many users.

We’re also going to write a little helper function here to make it easier to make sure we always release a browser context when we’re done with it:

async function withBrowserContext<T = void>(
  fn: (browserContext: pw.BrowserContext) => Promise<T>
) {
  const browser = await getBrowser();
  const browserContext = await browser.newContext();

  try {
    return await fn(browserContext);
  } finally {
    // Close the browser context when we're done with
    // it. Otherwise browserless will keep it open forever.
    await browserContext.close();
  }
}

The idea here is you call this function, and pass in a function fn that accepts a browser context as a parameter. This lets us write code like:

await withBrowserContext((context) => {
  // Do stuff here!
});

And then we don’t have to worry about cleaning up the context if we throw an exception or otherwise exit our function in an “unexpected” way.
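The cleanup-in-finally idea is the heart of this helper, and it works for any resource, not just browser contexts. Stripped of Playwright, the pattern looks like this (a synchronous sketch with hypothetical acquire/release functions, just to show the shape):

```javascript
// Generic "scoped resource" helper: acquire the resource, hand it to fn,
// and always release it in finally - even if fn throws.
function withResource(acquire, release, fn) {
  const resource = acquire();
  try {
    return fn(resource);
  } finally {
    release(resource);
  }
}

// Usage: the release step runs whether or not fn throws.
let released = false;
const result = withResource(
  () => 7,                      // acquire
  () => { released = true; },   // release
  (r) => r * 2                  // do work with the resource
);
```

withBrowserContext is just this pattern with getBrowser()/newContext() as the acquire step and browserContext.close() as the release step.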

Generating the PDF or PNG

Now we get to the meaty part. We want an express endpoint which actually generates the PDF or the PNG. Here’s the code, with lots of comments. Obviously this is written with my “reporting” example in mind, so you’ll need to make a few small changes if you want to give this a try; notably you’ll probably want to change the express endpoint, and change the dashboardUrl to point to whatever page you want to make a PDF or PNG of.

// We set up an environment variable called REPORTING_BASE_URL
// which will point to our front end on localhost when running
// this locally, or to the real frontend in production.
const REPORTING_BASE_URL =
  process.env.REPORTING_BASE_URL || "http://localhost:8000";

// Helper function for getting numbers out of query parameters.
function toNumber(val: string | undefined, def: number) {
  const result = val ? parseInt(val, 10) : def;
  return isNaN(result) ? def : result;
}

// Our express endpoint
app.post("/api/reporting/export/:dashboardId", (req, res, next) => {
  Promise.resolve()
    .then(async () => {
      const type = req.query.type || "pdf";
      const width = toNumber(req.query.width, 1200);
      const height = toNumber(req.query.height, 800);

      // Figure out what account and dashboard we want to show in the PDF.
      // Probably want to do some validation on these in a real system, although
      // since we're passing them as query parameters to another URL, there's
      // not much chance of an injection attack here...
      const qs = new URLSearchParams({ dashboardId: req.params.dashboardId });

      // If you're not building a reporting engine, change this
      // URL to whatever you want to make into a PDF/PNG. :)
      const dashboardUrl = `${REPORTING_BASE_URL}/dashboards/dashboard?${qs.toString()}`;

      await withBrowserContext(async (browserContext) => {
        // Set up authentication in the new browser context.
        await setupAuth(req, browserContext);

        // Create a new tab in Chrome.
        const page = await browserContext.newPage();
        await page.setViewportSize({ width, height });

        // Load our dashboard page.
        await page.goto(dashboardUrl);

        // Since this is a client-side app, we need to
        // wait for our dashboard to actually be visible.
        await page.waitForSelector("css=.dashboard-page");

        // And then wait until all the loading spinners are done spinning.
        await page.waitForFunction(
          '!document.querySelector(".loading-spinner")'
        );

        if (type === "pdf") {
          // Generate the PDF
          const pdfBuffer = await page.pdf();

          // Now that we have the PDF, we could email it to a user, or do whatever
          // we want with it. Here we'll just return it back to the client,
          // so the client can save it.
          res.setHeader(
            "content-disposition",
            'attachment; filename="report.pdf"'
          );
          res.setHeader("content-type", "application/pdf");
          res.send(pdfBuffer);
        } else {
          // Generate a PNG
          const image = await page.screenshot();
          res.setHeader("content-type", "image/png");
          res.send(image);
        }
      });
    })
    .catch((err) => next(err));
});

As the comments say, this creates a new “browser context” and uses the context to visit our page and take a snapshot.

Authentication

There’s one bit here I kind of glossed over:

// Set up authentication in the new browser context.
await setupAuth(req, browserContext);

Remember that browserless is really just a headless Chrome browser. The problem here is we need to authenticate our user twice; the user needs to authenticate to this backend express endpoint, but then this is going to launch a Chrome browser and navigate to a webpage, so we need to authenticate the user in that new browser session, too (since obviously not just any user off the Internet is allowed to look at our super secret reporting dashboards). We could visit a login page and log the user in, but it’s a lot faster to just set the headers and/or cookies we need to access the page.

What exactly this setupAuth() function does will depend on your use case. Maybe you’ll call browserContext.setExtraHTTPHeaders() to set a JWT token. In my case, my client is authenticating with a session cookie. Unfortunately we can’t use setExtraHTTPHeaders() to just set the session cookie, because Playwright generates its own cookie header and we can’t override it. So instead, I’ll parse the cookie that was sent to us and then forward it on to the browser context via browserContext.addCookies().

import * as cookieParse from "cookie-parse";
import { URL } from "url";

// Playwright doesn't let us just set the "cookie"
// header directly. We need to actually create a
// cookie in the browser. In order to make sure
// the browser sends this cookie, we need to set
// the domain of the cookie to be the same as the
// domain of the page we are loading.
const REPORTING_DOMAIN = new URL(REPORTING_BASE_URL).hostname;

async function setupAuth(
  req: HttpIncomingMessage,
  browserContext: pw.BrowserContext
) {
  // Pass through the session cookie from the client.
  const cookie = req.headers.cookie;
  const cookies = cookie ? cookieParse.parse(cookie) : undefined;

  if (cookies && cookies["session"]) {
    console.log(`Adding session cookie: ${cookies["session"]}`);
    await browserContext.addCookies([
      {
        name: "session",
        value: cookies["session"],
        httpOnly: true,
        domain: REPORTING_DOMAIN,
        path: "/",
      },
    ]);
  }
}

We actually don’t have to worry about the case where the session cookie isn’t set - the client will just get a screenshot of whatever they would have gotten if they weren’t logged in!

If we log in to our UI, and then send a POST to http://localhost:8000/api/reporting/export/1234 (remember the endpoint is a POST, so use fetch or curl rather than the browser’s address bar), we should now get back a screenshot of our beautiful reporting dashboard.
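For example, from the command line (the session cookie value is a placeholder for whatever your login flow actually sets):

```shell
curl -X POST "http://localhost:8000/api/reporting/export/1234?type=png&width=1200&height=800" \
  -H "cookie: session=YOUR-SESSION-COOKIE" \
  -o dashboard.png
```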

Cleaning up site headers

If you actually give this a try, you might notice that your PDF has all your menus and navigation in it. Fortunately, this is easy to fix with a little CSS. Playwright/browserless will render the PDF with the CSS media set to “print”, exactly as if you were trying to print this page from a browser. So all we need to do is set up some CSS to hide our site’s navigation for that media type:

@media print {
  .menu,
  .sidebar {
    display: none;
  }
}

An alternative here might be to pass in an extra query parameter like bare=true, although the CSS version is nice because then if someone tries to actually print our page from their browser, it’ll work.

Setting up Browserless

So far we’ve been letting Playwright launch a Chrome instance for us. This works fine when we’re debugging locally, but in production, we’re going to want to run that Chrome instance as a microservice in another container, and for that we’re going to use browserless. How you deploy browserless in your production system is outside the scope of this article, just because there are so many different ways you could do it, but let’s use docker-compose to run a local copy of browserless, and get our example above to connect to it.

First, our docker-compose.yml file:

version: "3.6"
services:
  browserless:
    image: browserless/chrome:latest
    ports:
      - "3100:3000"
    environment:
      - MAX_CONCURRENT_SESSIONS=10
      - TOKEN=2cbc5771-38f2-4dcf-8774-50ad51a971b8

Now we can run docker-compose up -d to run a copy of browserless, and it will be available on port 3100, and set up a token so not just anyone can access it.

Now here I’m going to run into a little problem. The UI I want to take PDFs and screenshots from is a Gatsby app running on localhost:8000 on my Mac. The problem is, this port is not going to be visible from inside my docker VM. There’s a variety of ways around this problem, but here we’ll use ngrok to set up a publicly accessible tunnel to localhost, which will be visible inside the docker container:

$ ngrok http 8000
Forwarding                    http://c3b3d3d435f1.ngrok.io -> localhost:8000
Forwarding                    https://c3b3d3d435f1.ngrok.io -> localhost:8000

Now we can run our reporting engine with:

$ BROWSERLESS_URL=ws://localhost:3100?token=2cbc5771-38f2-4dcf-8774-50ad51a971b8 \
    REPORTING_BASE_URL=https://c3b3d3d435f1.ngrok.io \
    node ./dist/bin/server.js

And now if we try to hit up our endpoint, we should once again see a screenshot of our app!

Hopefully this was helpful! If you try this out, or have a similar (or better!) solution, please let us know!

Read More
10 Best Cross-Platform App Frameworks to Consider for Your App Development

Mobile app development is time-intensive and complex. For developers, it’s a challenge to build an app that’s efficient and could also run on several platforms. Cross-platform app development helps organizations have a better reach to the target audience.

Cross-platform is an approach through which a mobile application runs on various platforms. Developers write the code just once and can run it anywhere, on any platform of choice. Cross-platform application development has its own merits, which play a critical role in its popularity today.

As its reach grew, several development tools and frameworks started coming out in the market. Gradually, and then suddenly, mobile app developers, including Node.js development companies, began trying their hands at this one-of-a-kind technology.

The Pros Of Cross-platform Apps

1. Reusable code. Rather than writing fresh code for each platform, developers can reuse the same code across platforms, which cuts down on repetitive tasks. This is not a new concept, however; it has been used in software development for many years, and the benefits of code reuse are well established.

2. Faster development time. Developing cross-platform apps is quicker when a single codebase is deployed, and increased development speed means the product reaches the marketplace much sooner. The time saved can be allocated to working on a brand new app. The situation is a win-win for everyone concerned: developers, marketers, and users.

3. Seamless implementation. Today, many technologies offer cross-platform solutions, making it easy for developers to make changes. With a tool like Appcelerator, for instance, code can be written easily in HTML5 and converted for various platforms. This makes app development faster, and also makes it easier to sync updates across different mobile devices.

4. Cost control. Thanks to cross-platform mobile application development, and the services of developers such as NodeJS development services, companies now only need a one-time investment to have their app developed, in contrast to earlier times when they had to spend heavily on various technologies and tools. They need not pay for separately developing apps for every platform, and the same team can be made to work on different platforms.

Top Ten Cross-Platform App Frameworks 2020

As technology continues to evolve, the IT field and business organizations need to keep up with the changes to remain relevant. When it comes to developing cross-platform apps, there are numerous frameworks to choose from. Here, I have gathered an in-depth view of the top cross-platform app frameworks that are highly used by service providers.

1. React Native


When it comes to cross-platform app development, it’s hard not to include React Native. Built on JavaScript, it’s great for writing real code and providing a native feel to mobile apps.

Features

  • Has huge community support, which boosts it by introducing and improving features and fixing bugs.
  • One-time coding reduces development time instantly, along with keeping the cost at its lowest.
  • Developers don’t need to separately code the same application twice for various platforms.
  • To a big extent, focuses on the UI, rendering an interface that’s very responsive.

2. Ionic


One of the most popular and astonishing cross-platform app frameworks, which is AngularJS-based. Developers could combine several languages. Moreover, developers could build a flawlessly creative UI, along with adding features that are user-friendly.

Features

  • Based on a Sass UI framework designed especially for mobile operating systems, it provides several UI components for developing robust apps.
  • A front-end, open-source framework, which means it allows code structure alterations to suit every developer and saves time.
  • Being AngularJS-based, it offers easy HTML syntax extensions.
  • Provides a native-like feel to the app, making it a favorite among developers.
  • Utilizes Cordova plugins, allowing access to built-in device features such as the camera, GPS, and audio recorder.

3. Xamarin


Xamarin is considerably different from the other frameworks discussed so far. It’s a simplified framework for developing Windows, iOS, and Android apps with the help of .NET and C#, rather than HTML and JS libraries. Developers are able to reuse up to 90 percent of their code when building an app for various platforms.

Features

  • Apps are built using C#, a modern language with advantages over Java and Objective-C.
  • Reduces the cost and time of development, since it supports WORA (write once, run anywhere) and has a humongous collection of class libraries.
  • Has astounding native UI and controls, enabling developers to design a native-like application.

4. NodeJS


For developing cross-platform apps, this is a wonderful framework. Essentially, it’s a JavaScript runtime built on Chrome’s V8 JavaScript engine. An open-source environment, it supports the development of scalable, server-side networking apps. A NodeJS development company can develop highly effective solutions using its numerous rich libraries of JavaScript modules, which simplify web app development.

Features

  • Amazingly fast at executing code, because it’s built on Chrome’s V8 engine.
  • Apps don’t buffer data; instead, they output data in chunks.
  • Delivers smooth, well-functioning apps by using a single-threaded model with event-loop functionality.

5. NativeScript


A free, terrific cross-platform framework based on JavaScript. It’s fair to say that it’s the preferred option among developers looking for WORA functionality. Moreover, it offers all native APIs, and developers can reuse existing plugins straight from npm in their projects.

Features

  • Renders an accessible, beautiful, platform-native user interface. Developers only need to define it once, and it runs everywhere.
  • Gives developers a complete web resource packed with plugins for all sorts of solutions, which largely eliminates the need for third-party solutions.
  • Provides the freedom to access native APIs seamlessly, meaning developers need no further knowledge of native development languages.
  • Uses TypeScript and Angular for the purposes of programming.

6. Electron


Electron is a wonderful framework for building desktop apps with today’s emerging web technologies, including HTML, CSS, and JavaScript. The framework handles the difficult aspects so you can concentrate on the core of the app and make a design that’s revolutionary. An open-source framework, Electron combines the best web technologies available today and is also cross-platform, compatible with Windows, Mac, and Linux.

It has automatic updates, notifications, native menus, and debugging, crash reporting, and profiling as well.

Features

  • Helps in developing feature-loaded, high-quality, cross-platform desktop apps.
  • Support for the Chromium engine.
  • Clear separation of app structure and business logic.
  • Secure access to storage.
  • High-quality, hardware-level APIs.
  • Seamless management system.
  • Support for different UI/UX tools.
  • Compatible with different libraries and frameworks.

7. Flutter


Flutter, an impressive cross-platform app framework, was introduced by Google in 2017. It’s a software development kit designed to help with iOS and Android app development.

Features

  • No need to manually update the UI, since Flutter has a reactive framework: developers only update variables, and the UI changes become visible automatically.
  • The perfect choice for developing an MVP (Minimum Viable Product), since it enables a fast and cost-efficient development process.
  • Uses a portable, GPU-powered rendering UI, allowing it to work on the latest interfaces.

8. Corona Software Development Kit


The framework lets programmers develop 2D mobile apps for all major platforms. Furthermore, it provides ten times faster mobile and game application development, delivering remarkable results. The focus of the framework is on the key elements of development, namely, portability, extensibility, speed, scalability, and usage ease.

Features

  • Responds to changes made in code almost instantaneously, providing a real-time preview of app performance, just as it would look on a real device.
  • Provides developers the ability to work with sprites, music, audio, animations, and more through over 100 APIs.
  • Supports nearly 200 plugins, including analytics, in-app advertising, hardware features, and media.

9. PhoneGap Framework


One of the cross-platform frameworks that’s flawless for mobile app development using HTML5, JavaScript, and CSS. Furthermore, it offers a cloud solution that lets developers share apps during development to get feedback from other developers.

Features

  • Considered a great cross-platform framework, since it enables creating cross-platform apps with existing web technologies.
  • Follows a pluggable architecture, which means access to native device APIs is possible and can be extended modularly.
  • Supports using a single code base to build apps for various platforms.

10. Sencha Touch Framework


The framework is great for the development of cross-platform, web-based applications, which are typically used to create efficient apps with hardware acceleration methods. With Sencha Touch, developers could build securely integrated and well-tested UI libraries and components.

Features

  • Known for providing built-in, native-looking themes for all the major platforms.
  • Cordova support is one of its most celebrated features, for native API access along with packaging.
  • Built with an effective, backend-agnostic data package.
  • Packed with over 50 built-in, customizable widgets.

Conclusion

The top ten frameworks mentioned here are just some of the many you can choose from to develop cross-platform apps. With the continuous evolution of technology and constant updates, new frameworks will soon emerge. In the meantime, for all your cross-platform app development requirements, the ten mentioned here are among the best you can find available nowadays.

Read More
part5.png
Node Is Simple — Part 5

tl;dr: This is the fifth article of the Node is Simple article series. In this article series, I will be discussing how to create a simple and secure NodeJS, Express, MongoDB web application.

To follow the past tutorials:

Hello, hello, my dear fellow developers, glad you could make it to my article where I discuss developing web applications with NodeJS. As I promised in the previous article, in this article I will tell you how to use custom middleware with Express.

So what is this Middleware?

That’s the question, isn’t it? As I told you many times, Express is great with middleware. As discussed in the Express documentation,

Middleware functions are functions that have access to the request object (req), the response object (res), and the next middleware function in the application’s request-response cycle. The next middleware function is commonly denoted by a variable named next.

Express is nothing without middleware. So I am going to implement a simple error handling middleware. As the Express documentation describes,

app.use(function (err, req, res, next) {  
  console.error(err.stack)  
  res.status(500).send('Something broke!')  
})

this is how we create a simple middleware to log errors. But in this tutorial, I will extend this middleware to return specific error messages and HTTP error codes in the response and log the error messages in the console.

So let’s get to it, shall we?

First, let’s create some error classes to represent errors that we are going to get from the web application.

Let’s create /errors/index.js file.

module.exports = {
  // Each class extends the built-in Error so it carries a proper
  // stack trace, which the error logger middleware makes use of.
  AccessDeniedError: class AccessDeniedError extends Error {
    constructor(message) {
      super(message);
      this.message = message;
    }
  },
  AuthenticationError: class AuthenticationError extends Error {
    constructor(message) {
      super(message);
      this.message = message;
    }
  },
  ValidationError: class ValidationError extends Error {
    constructor(message) {
      super(message);
      this.message = message;
    }
  },
  NotFoundError: class NotFoundError extends Error {
    constructor(message) {
      super(message);
      this.message = message;
    }
  }
};
/errors/index.js file (Contains error classes)

Let’s see what’s happening here. Here, we are declaring four classes to describe errors we encounter in web applications.

  1. Access Denied Error (HTTP 403)
  2. Authentication Error (HTTP 401)
  3. Not Found Error (HTTP 404)
  4. Validation Error (HTTP 400)

Of course, there is another famous HTTP error which is called Internal Server Error (HTTP 500) but since it is a very generic error, we are not declaring a specific class to describe that and it will be the default error.
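Before writing the middleware, it can help to see the whole mapping in one place. Here it is as a plain lookup table (just a sketch; the middleware we build below uses instanceof checks instead):

```javascript
// The HTTP status each error class maps to, keyed by class name.
const STATUS_BY_ERROR = {
  AccessDeniedError: 403,
  AuthenticationError: 401,
  NotFoundError: 404,
  ValidationError: 400,
};

// Anything not in the table falls back to 500 (Internal Server Error).
function statusFor(err) {
  return STATUS_BY_ERROR[err.constructor.name] || 500;
}
```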

Then let’s create the error handling middleware. Let’s create /middleware/error-handling.js file.

const {
  ValidationError,
  AuthenticationError,
  AccessDeniedError,
  NotFoundError
} = require("../errors");
const chalk = require("chalk");

const errorLogger = (err, req, res, next) => {
  if (err.message) {
    console.log(chalk.red(err.message));
  }
  if (err.stack) {
    console.log(chalk.red(err.stack));
  }
  next(err);
};

const authenticationErrorHandler = (err, req, res, next) => {
  if (err instanceof AuthenticationError)
    return res.status(401).send({ message: err.message });
  next(err);
};

const validationErrorHandler = (err, req, res, next) => {
  if (err instanceof ValidationError)
    return res.status(400).send({ message: err.message });
  next(err);
};

const accessDeniedErrorHandler = (err, req, res, next) => {
  if (err instanceof AccessDeniedError)
    return res.status(403).send({ message: err.message });
  next(err);
};

const notFoundErrorHandler = (err, req, res, next) => {
  if (err instanceof NotFoundError)
    return res.status(404).send({ message: err.message });
  next(err);
};

const genericErrorHandler = (err, req, res, next) => {
  res.status(500).send({ message: err.message });
  next();
};

const ErrorHandlingMiddleware = app => {
  app.use([
    errorLogger,
    authenticationErrorHandler,
    validationErrorHandler,
    accessDeniedErrorHandler,
    notFoundErrorHandler,
    genericErrorHandler
  ]);
};

module.exports = ErrorHandlingMiddleware;
/middleware/error-handling.js file (Contains the error handling middleware configurations)


Well, the code is somewhat complicated, isn’t it? Fear not, I will discuss what is happening here. First, we need to know how Express middleware works. In Express, if you write a callback that takes the error as its first argument, that signature marks it as error-handling middleware.

Error-handling middleware always takes four arguments. You must provide four arguments to identify it as an error-handling middleware function. Even if you don’t need to use the next object, you must specify it to maintain the signature. Otherwise, the next object will be interpreted as regular middleware and will fail to handle errors.

(from https://expressjs.com/en/guide/using-middleware.html)
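In other words, Express distinguishes the two kinds of middleware purely by the number of declared parameters. A tiny sketch makes this concrete:

```javascript
// Regular middleware takes three parameters...
const regularMiddleware = (req, res, next) => next();

// ...while error-handling middleware takes four, with the error first.
const errorMiddleware = (err, req, res, next) => next(err);

// Express checks the function's declared arity (fn.length) to tell them apart.
console.log(regularMiddleware.length); // 3
console.log(errorMiddleware.length);   // 4
```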

And another thing to mention is that the middleware can be defined as arrays. The array order will be preserved and the middleware will be applied in that order. So according to our code,

1. errorLogger //Logging errors  
2. authenticationErrorHandler //return authentication error  
3. validationErrorHandler //return validation error  
4. accessDeniedErrorHandler //return access denied error  
5. notFoundErrorHandler //return not found error  
6. genericErrorHandler //return internal server error

this middleware will be called in that particular order. In our web application, if an error is thrown, first it will be logged to the console, and then it will be passed to the authenticationErrorHandler via the next(err) call. If the error is an instance of the AuthenticationError class, the handler will end the request-response cycle by returning the relevant error response. If not, the error is passed on to the next error-handling middleware. As you can imagine, it gets passed down the middleware stack until it finally reaches the genericErrorHandler middleware, which ends the request-response cycle. And it is as simple as that.
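To see this chain in action without spinning up a server, here is a small simulation (the fake res object and the handler bodies are simplified stand-ins for the real middleware above):

```javascript
class ValidationError extends Error {}

// Simplified stand-ins for the middleware array above.
const handlers = [
  (err, req, res, next) => { console.log(`logged: ${err.message}`); next(err); },
  (err, req, res, next) => {
    if (err instanceof ValidationError)
      return res.status(400).send({ message: err.message });
    next(err);
  },
  (err, req, res, next) => res.status(500).send({ message: err.message }),
];

// Walk the array the way Express does: each handler either ends the
// cycle, or calls next(err) to pass the error along.
function run(err) {
  let result;
  const res = {
    status(code) { result = { code }; return this; },
    send(body) { result.body = body; return this; },
  };
  let i = 0;
  const next = (e) => { const h = handlers[i++]; if (h) h(e, {}, res, next); };
  next(err);
  return result;
}

run(new ValidationError("bad input")); // → { code: 400, ... }
run(new Error("oops"));                // → { code: 500, ... }
```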

Now let’s bind this middleware as an application-level middleware. Let’s update /index.js file.

const express = require("express");
const chalk = require("chalk");
const http = require("http");
const https = require("https");
const config = require("./config");

const HTTP_PORT = config.HTTP_PORT;
const HTTPS_PORT = config.HTTPS_PORT;
const SERVER_CERT = config.SERVER_CERT;
const SERVER_KEY = config.SERVER_KEY;

const app = express();
const Middleware = require("./middleware");
const ErrorHandlingMiddleware = require("./middleware/error-handling");
const MainController = require("./controllers");

Middleware(app);
app.use("", MainController);
ErrorHandlingMiddleware(app);
app.set("port", HTTPS_PORT);

/**
 * Create HTTPS Server
 */

const server = https.createServer(
  {
    key: SERVER_KEY,
    cert: SERVER_CERT
  },
  app
);

const onError = error => {
  if (error.syscall !== "listen") {
    throw error;
  }

  const bind =
    typeof HTTPS_PORT === "string"
      ? "Pipe " + HTTPS_PORT
      : "Port " + HTTPS_PORT;

  switch (error.code) {
    case "EACCES":
      console.error(chalk.red(`[-] ${bind} requires elevated privileges`));
      process.exit(1);
      break;
    case "EADDRINUSE":
      console.error(chalk.red(`[-] ${bind} is already in use`));
      process.exit(1);
      break;
    default:
      throw error;
  }
};

const onListening = () => {
  const addr = server.address();
  const bind = typeof addr === "string" ? `pipe ${addr}` : `port ${addr.port}`;
  console.log(chalk.yellow(`[!] Listening on HTTPS ${bind}`));
};

server.listen(HTTPS_PORT);
server.on("error", onError);
server.on("listening", onListening);

/**
 * Create HTTP Server (HTTP requests will be 301 redirected to HTTPS)
 */
http
  .createServer((req, res) => {
    res.writeHead(301, {
      Location:
        "https://" +
        req.headers["host"].replace(
          HTTP_PORT.toString(),
          HTTPS_PORT.toString()
        ) +
        req.url
    });
    res.end();
  })
  .listen(HTTP_PORT)
  .on("error", onError)
  .on("listening", () =>
    console.log(chalk.yellow(`[!] Listening on HTTP port ${HTTP_PORT}`))
  );

module.exports = app;
/index.js file (Updated to reflect the binding of error handling middleware)

Now I have something to clarify. As you can see, I have added ErrorHandlingMiddleware(app) only after binding the main controller. You might wonder whether there is a specific reason behind that. Actually, there is.

Middleware(app);
app.use("", MainController);
ErrorHandlingMiddleware(app);
app.set("port", HTTPS_PORT);

As you may remember, Express applies middleware in the order in which it is bound to the app. So if we bound the error-handling middleware before the Express router, errors thrown inside the route handlers would never reach it. That is why we first bind the Express router and only then bind the error-handling middleware.

Now if you start the web application as usual,

$ pm2 start 

and do some CRUD operations, you won't see any errors returned, since we haven't added any error handling in our /services/index.js file. So I will show you an example by explicitly throwing an error.

Let’s add this error endpoint. Every time you hit it, it will throw you a not found error. Let’s update /controllers/index.js file.

GET /error

const { NotFoundError } = require("../errors");

/** @route  GET /error
 *  @desc   Return an example error
 *  @access Public
 */
router.get(
  "/error",
  asyncWrapper(async () => {
    throw new NotFoundError("Sorry content not found");
  })
);


Figure 1: Request and response of custom error endpoint

As you can see in figure 1, the response status is 404 and the message says “Sorry content not found”. So it works as it should.

Now let’s see how our console looks. Now type,

$ pm2 logs

in your terminal and see the console log.


Figure 2: console log

Bonus super-secret round

So I’m going to tell you a super-secret about how to view colored logs in PM2. You just have to add a single line to the /ecosystem.config.js file.

const fs = require("fs");

const SERVER_CERT = fs.readFileSync(__dirname + "/config/server.cert", "utf8");
const SERVER_KEY = fs.readFileSync(__dirname + "/config/server.key", "utf8");

module.exports = {
  apps: [
    {
      name: "node-is-simple",
      script: "./index.js",
      watch: true,
      args: ["--color"],
      env: {
        NODE_ENV: "development",
        SERVER_CERT,
        SERVER_KEY,
        HTTP_PORT: 8080,
        HTTPS_PORT: 8081,
        MONGO_URI: "mongodb://localhost/students"
      }
    }
  ]
};
ecosystem.config.js file (Reflects the update to view colored console output)


All you have to do is add the args: ["--color"] line.

Bonus Bonus round

Now I’m going to tell you about two additional Express middleware libraries which are going to save your life 🤣.

  1. Morgan (HTTP request logger middleware for node.js)
  2. CORS (CORS is a node.js package for providing a Connect/Express middleware that can be used to enable CORS with various options.)

I guarantee that these two libraries will save you from certain pitfalls and you’ll thank them pretty much all the time.

This is a good article on Morgan.

Getting Started With morgan: the Node.js Logger Middleware

This is a good article on CORS.

Using CORS in Express

So let’s update our /middleware/common.js file.

const bodyParser = require("body-parser");
const helmet = require("helmet");
const morgan = require("morgan");
const cors = require("cors");

module.exports = app => {
  app.use(bodyParser.json());
  app.use(morgan("dev"));
  app.use(cors());
  app.use(helmet());
};
/middleware/common.js file (Updated to reflect the new middleware CORS and Morgan)

Then let’s install these two packages.

$ npm install cors morgan

Let's see how Morgan shows the HTTP requests in our console. We have specified "dev" as the predefined log format for Morgan. You can find more details on that in the documentation.


Figure 3: Morgan request logging in the console
GET / 200 3.710 ms - 39

Let’s understand what this means.

:method :url :status :response-time ms - :res[content-length]

This is what the documentation says. Let’s break it down.

  • :method = GET (The HTTP method of the request.)
  • :url = / (The URL of the request. This uses req.originalUrl if it exists, otherwise req.url.)
  • :status = 200 (The status code of the response.)
  • :response-time = 3.710 (The time between the request coming into Morgan and the response headers being written, in milliseconds.)
  • :res[content-length] = 39 (The content-length of the response. If the content-length is not present, the value is displayed as "-" in the log.)
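As an illustration only (this is not morgan's internal code, and formatLogLine is a hypothetical helper), assembling such a line from those tokens could look like this:

```javascript
// Hypothetical helper mimicking morgan's
// ":method :url :status :response-time ms - :res[content-length]" line.
function formatLogLine({ method, url, status, responseTimeMs, contentLength }) {
  const len = contentLength == null ? "-" : contentLength; // "-" when absent
  return `${method} ${url} ${status} ${responseTimeMs.toFixed(3)} ms - ${len}`;
}

console.log(formatLogLine({
  method: "GET",
  url: "/",
  status: 200,
  responseTimeMs: 3.71,
  contentLength: 39
}));
// GET / 200 3.710 ms - 39
```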

This is it. Now if we move on to what the CORS module does, it ain’t much but it’s honest work 😁. It will save you from Cross-Origin Resource Sharing issues. If you have worked with APIs and front-ends, the CORS error is something you’ve seen at least once in your lifetime.


From https://fdezromero.com/cors-errors-in-ionic-apps/
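In essence (a simplified sketch, not the cors package's actual implementation, and the allowed-origin list is hypothetical), what the middleware does for simple requests is echo an allowed Origin back in the Access-Control-Allow-Origin response header:

```javascript
// Simplified sketch of the CORS decision (illustrative only).
const allowedOrigins = ["https://myapp.example.com", "http://localhost:3000"];

function corsHeadersFor(requestOrigin) {
  return allowedOrigins.includes(requestOrigin)
    ? { "Access-Control-Allow-Origin": requestOrigin }
    : {}; // no header → the browser blocks the cross-origin response
}

console.log(corsHeadersFor("http://localhost:3000"));
console.log(corsHeadersFor("https://evil.example.com")); // {}
```

Calling cors() with no options, as we do in common.js, simply allows every origin.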

So the CORS module will allow you to add CORS headers with ease. I know this article was a little bit long because I had to share some super-secrets with you. So this is it for this article, and I will tell you how to secure our endpoints with JWT authentication in the next article. As usual, you can find the updated code in the GitHub repository. (Look for the commit message “Tutorial 5 checkpoint”.)

Niweera/node-is-simple

So until we meet again, happy coding.

Read More
Untitled-1.png
An Open Source Maintainer's Guide to Publishing npm Packages

This article was first published in DEV.TO


The JavaScript community is built on Open Source and being able to give back to the community feels most gratifying. However, publishing a package for the first time might feel rather daunting, and publishing a release candidate may be a bit scary even for experienced authors. I hope to give some insight and helpful tips, especially for new authors.

I have owned two open source libraries: a tiny utility library for DraftJS called draftjs-md-converter and a React Native security library called react-native-app-auth. I am certainly not as heavily involved in Open Source as some, but I have had the sole responsibility of publishing new versions of these libraries for several years so I think I have enough experience to justify this blog post! I remember how scary it was to publish a library for the first time, and I still remember how apprehensive I felt publishing a release candidate. The purpose of this writeup is to produce a guide on how to publish packages: both a comprehensive guide for new authors, and a sanity check for future me.

I will cover the following:

  • Setting up your repo for publishing
  • Publishing a package (initial release)
  • Versioning
  • Publishing a package (subsequent releases)
  • Publishing alpha / beta / release candidate versions

You can use npm or yarn, it's completely up to you. The publish commands are identical. I will be using npm throughout.

Setting up your repo for publishing

Before you're ready to run the publish command, you should check that your package is in a good state to be published. Here are a few things you might want to consider:

Check the package name

The name field in your package.json will be the name of your published package. So for instance if you name your package package-name, users will import it with

import ... from "package-name";

The name needs to be unique, so make sure you check the name is available on https://www.npmjs.com/ or your publish command will fail.

Set the initial version

The version attribute in your package.json will determine the version of the package when published. For your initial release you might use:

{
  "version": "0.0.1"
}

or

{
  "version": "1.0.0"
}

NPM packages use semver for versioning, which means the version consists of 3 numbers: the major, minor and patch version. We will talk more about versioning in the "Versioning" section.

Make sure your repository is not private

In the package.json you may have an attribute "private": true. It is a built-in protection so you won't accidentally publish something that's meant to be private. It is generally good practice to use this if you're building something that is not meant to be open source, like a personal or a client project. However, if you're about to publish the repo as a package, this line should be removed.

Add a license

Ensure you have set the license in your package.json. This is to let people know how you are permitting them to use your package. The most common licenses are "MIT" and "Apache-2.0". They are both permissive, allowing users to distribute, modify, or otherwise use the software for any purpose. You can read more about the differences here. I have always used the "MIT" license.

Check the entry point

The main field in your package.json defines the main entry file for the package. This is the file users will access when importing your package. It'll usually be something like ./index.js or ./src/index.js depending on the location of your entry file.

Restrict the files you want to publish

By default, publishing a package will publish everything in the directory. Sometimes you might not want to do that, e.g. if your repository includes an example folder or if you have a build step and only want to publish the built bundle. For that purpose, there is a files field in the package.json. If omitted, the field defaults to ["*"], but when set, it includes only the files and directories listed in the array. Note that certain files are always included, even if not listed: package.json, README, CHANGES / CHANGELOG / HISTORY, LICENSE / LICENCE, NOTICE, and the file in the "main" field.
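For instance (an illustrative fragment with hypothetical paths), a package that ships only its built bundle might use:

```json
{
  "name": "package-name",
  "version": "1.0.0",
  "main": "dist/index.js",
  "files": ["dist"]
}
```

With this, npm publishes the dist directory plus the always-included files, and nothing else.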

Add a build step

Sometimes, you may need a build step. For example if you've written your package using Flow, TypeScript or cutting edge JavaScript features that aren't implemented in all browsers, and want to compile/transpile your package to vanilla JavaScript that anyone could use. For this you can use a prepublish script like so:

{
  "scripts": {
    "prepublish": "npm run your-build-script"
  }
}

This will be run automatically when you run the publish command. For example in this package I use the prepublish script to rebuild the app in the dist directory. Notice also that the main field in this package.json is set to "main": "dist/index.js" since I want the users to access the built file.

There are more built-in scripts for various occasions, but this is the only one I've had to use when publishing packages. You can read more about other available scripts here.

Publishing a package (initial release)

Clone your repo, and make sure you're on the latest master branch (or whatever your main branch is) with no uncommitted changes.

Create an account on https://www.npmjs.com/ if you don't have one already, and use these credentials to log in on your terminal:

npm login

Finally, in your project repo, run:

npm publish

If you've set up two factor authentication, you'll get a prompt for it in your terminal before the publish is completed. Once the command has finished successfully, you should be able to instantly see your package at https://www.npmjs.com/package/package-name where package-name is the name of your package set in the package.json, and people will be able to install your package with:

npm install package-name

Versioning

Publishing subsequent versions of the package involves more thought, because now we need to consider versioning. As mentioned above, npm packages are versioned using semver. This means that there are 3 types of releases: major, minor and patch, and you should use these to communicate the types of changes in your library.

  • Major changes include anything that is incompatible with the previous versions
  • Minor changes are usually new features and bigger bug fixes
  • Patch changes are tiny bug fixes or additive changes

One thing to note is that the semver naming is a bit misleading, specifically in "major" - a major release doesn't necessarily mean that a lot has changed. It means that when going from the previous to current version, there is a breaking change, and that the users of your library might need to change something in their code in order to accommodate the latest version. For instance, changing an exported function name or parameter order would be considered major changes. A lot of maintainers tend to collect breaking changes and release them all together to minimise how often the major version is incremented.

The reason you should only do breaking changes in major releases is because the users of your library may opt in to all future patch and minor versions silently, so the next time they run npm install you might end up breaking their build.

In the package.json, this is communicated with ~ and ^ where ~ opts in for all future patch releases and ^ opts in for all future patch and minor releases:

~1.2.3 will match all 1.2.x versions but will miss 1.3.0

^1.2.3 will match any 1.x.x release including 1.3.0, but will hold off on 2.0.0.

Side note: when the major version is 0, the package is considered unstable and so the ^ acts the same as ~ and for all releases before major 1 you may publish breaking changes in minor releases.

There is no option to opt in for all future major releases, because they are expected to be breaking and thus should be manual upgrades. Source.
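To illustrate those rules (a simplified sketch with hypothetical helper names; real tooling should use the semver npm package, and pre-release tags are ignored here), the ~ and ^ matching boils down to:

```javascript
// Illustrative only: simplified ~ and ^ matchers (no pre-release handling).
const parse = v => v.split(".").map(Number);

// ~1.2.3 := >=1.2.3 <1.3.0 (same major and minor, patch may move)
function satisfiesTilde(version, base) {
  const [M, m, p] = parse(base);
  const [vM, vm, vp] = parse(version);
  return vM === M && vm === m && vp >= p;
}

// ^1.2.3 := >=1.2.3 <2.0.0 (same major; when major is 0, ^ acts like ~)
function satisfiesCaret(version, base) {
  const [M, m, p] = parse(base);
  if (M === 0) return satisfiesTilde(version, base);
  const [vM, vm, vp] = parse(version);
  return vM === M && (vm > m || (vm === m && vp >= p));
}

console.log(satisfiesTilde("1.2.9", "1.2.3")); // true  (patch bump ok)
console.log(satisfiesTilde("1.3.0", "1.2.3")); // false (minor bump excluded)
console.log(satisfiesCaret("1.3.0", "1.2.3")); // true  (minor bump ok)
console.log(satisfiesCaret("2.0.0", "1.2.3")); // false (major bump excluded)
```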

Publishing a package (subsequent releases)

This is how you do releases after the initial one. As before, you should ensure you're on the master branch with no uncommitted changes. Consider what type of release this is: major, minor or patch. Now run the command to increment the version in your working directory, using major, minor or patch depending on the type of the change:

npm version minor

This is a convenience method that does a few things:

  • Increments the version in your package.json based on the type of the change
  • Commits this version bump
  • Creates a tag for the current release in your .git repo

Now all that's left to do is to run the publish command as before:

npm publish

It is important to make sure you do the version bump before the publish. If you try to publish the same version of the library twice, the command will fail.

Finally, you'll want to push the version bump commit and the tag:

git push
git push --tags

Notice that you'll have to push the tags separately using the --tags flag.

If you're using GitHub to host your repository, the tags will show up under the Releases tab on your repository, e.g. https://github.com/kadikraman/draftjs-md-converter/releases

It is good practice to write some release notes against the tag. I usually also use this space to thank any contributors!

Publishing an alpha / beta / release candidate

An alpha / beta / release candidate is usually an early access version of a major release. When you're about to do a major change, you might want to give your users a chance to try out the new release first on an opt-in basis but with the expectation that it may be unstable.

Basically, what we want to achieve is to publish a new version "silently", one that only interested users can opt in to.

Let's look at how you might create a release candidate. Suppose you are currently on v3.4.5 and you want to do a release candidate for v4.

First, we increment the version. The npm version command allows you to provide your own version number. In this case we're going to say it's the first release candidate for v4:

npm version 4.0.0-rc1

As before, this does the version bump, commits the change, and creates the tag. Now to publish the package. This is the most important step, since you want to ensure none of your users get this release candidate version by default.

The publish command has a --tag argument, which defaults to "latest". This means that whenever someone npm installs your library, they get the last publish that has the "latest" tag. So all we need to do is provide a different tag. This can be anything really, I have usually used "next".

npm publish --tag next

This will publish your package under the next tag, so the users who do npm install package-name will still get v3.4.5, but users who install npm install package-name@next or npm install package-name@4.0.0-rc1 will get your release candidate. Note that if you publish another release candidate with the "next" tag, anyone installing npm install package-name@next will get the latest version.

When you're ready to do the next major release, you can run npm version major (this will bump the version to 4.0.0) and npm publish as normal.

Conclusion

That's pretty much it! Publishing packages can be a bit scary the first few times, but like anything it gets easier the more you do it.

Thank you for contributing to Open Source!

Read More
slack-api.png
Working with the Slack API in Node.js

This blog post was first published in The Code Barbarian Blog.


Integrating with the Slack API is becoming an increasingly common task. Slack is the de facto communication platform for many companies, so there's a lot of demand for getting data from Node.js apps to Slack in realtime. In this article, I'll explain the basics of how to send a Slack message from Node.js.

Getting Started

There are two npm modules that I recommend for working with the Slack API: the official @slack/web-api module and the slack module written primarily by Brian LeRoux of WTFJS fame. I've also used node-slack in the past, but that module is fairly out of date. However, both @slack/web-api and slack are fairly thin wrappers around the Slack API, so, for the purposes of this article, we'll just use axios.

In order to make an API request to Slack, you need a Slack token. There are several types of Slack token, each with its own permissions. For example, an OAuth token lets you post messages on behalf of a user. But, for the purposes of this article, I'll be using bot user tokens. Bot user tokens let you post messages as a bot user, with a custom name and avatar, as opposed to as an existing user.

In order to get a bot user token, you first need to create a new Slack app. Make sure you set "Development Slack Workspace" to the Slack Workspace you want to use.

Next, go through the steps to create a bot user. You'll have to add scopes (permissions) to your bot user, and then install your app. First, make sure you add the following scopes to your bot user:

  • chat:write
  • chat:write.customize
  • chat:write.public

Once you've added the above scopes, click "Install App" to install this new Slack app to your workspace. Once you've installed the app, Slack will give you a bot user token.


Making Your First Request

To send a message to a Slack channel, you need to make an HTTP POST request to the https://slack.com/api/chat.postMessage endpoint with your Slack token in the authorization header. The POST request body must contain the channel you want to post to, and the text of your message.

Below is how you can send a 'Hello, World' message with Axios. Note that you must have a #test channel in your Slack workspace for the below request to work.

const axios = require('axios');

const slackToken = 'xoxb-YOUR-TOKEN_HERE';

run().catch(err => console.log(err));

async function run() {
  const url = 'https://slack.com/api/chat.postMessage';
  const res = await axios.post(url, {
    channel: '#test',
    text: 'Hello, World!'
  }, { headers: { authorization: `Bearer ${slackToken}` } });

  console.log('Done', res.data);
}

If the request is successful, the response will look something like what you see below:

{
  ok: true,
  channel: 'C0179PL5K8E',
  ts: '1595354927.001300',
  message: {
    bot_id: 'B017GED1UEN',
    type: 'message',
    text: 'Hello, World!',
    user: 'U0171MZ51E3',
    ts: '1595354927.001300',
    team: 'T2CA1AURM',
    bot_profile: {
      id: 'B017GED1UEN',
      deleted: false,
      name: 'My Test App',
      updated: 1595353545,
      app_id: 'A017NKGAKHA',
      icons: [Object],
      team_id: 'T2CA1AURM'
    }
  }
}

And you should see the below message show up in your #test channel.

Because of the chat:write.customize scope, this bot can customize the name and avatar associated with this message. For example, you can set the username and icon_emoji properties to customize the user your messages come from, as shown below.

const res = await axios.post(url, {
  channel: '#test2',
  text: 'Hello, World!',
  username: 'Test App',
  icon_emoji: ':+1:'
}, { headers: { authorization: `Bearer ${slackToken}` } });

Below is how the above message looks in Slack.


More Sophisticated Features

You can send plain text messages, but Slack allows you to add formatting to messages using a custom formatting language called mrkdwn. Slack's mrkdwn language is conceptually similar to markdown. Most notably, mrkdwn supports code samples in much the same way markdown supports code snippets:

  • `foo` with backticks makes "foo" an inline code block.
  • Multi-line code blocks are fenced with three backticks (```).

But there are also several important differences:

  • *foo* makes foo bold in mrkdwn, as opposed to italic in markdown. To make text italic in mrkdwn, you should do _foo_.
  • To make the substring "Google" link to https://www.google.com/ in mrkdwn, you need to write <https://www.google.com/|Google>.
  • Mrkdwn does not support custom HTML: arbitrary HTML is valid Markdown, but not mrkdwn.
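As a toy illustration of those differences (a deliberately naive converter I'm sketching here; it is not part of any Slack SDK and covers only bold, italic, and links):

```javascript
// Illustrative only: naive markdown → mrkdwn conversion for the
// three differences listed above.
function mdToMrkdwn(md) {
  return md
    .replace(/(?<!\*)\*([^*\n]+)\*(?!\*)/g, "_$1_")  // *italic*  → _italic_
    .replace(/\*\*([^*\n]+)\*\*/g, "*$1*")           // **bold**  → *bold*
    .replace(/\[([^\]]+)\]\(([^)]+)\)/g, "<$2|$1>"); // [txt](url) → <url|txt>
}

console.log(mdToMrkdwn("**bold** and *italic* and [Google](https://www.google.com/)"));
// *bold* and _italic_ and <https://www.google.com/|Google>
```

A real converter would need to handle nesting and escaping, but this captures the gist.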

For example, here's how you can send a message with some basic formatting, including a code block.

const url = 'https://slack.com/api/chat.postMessage';

const text = `
Check out this *cool* function!

\`\`\`
${hello.toString()}
\`\`\`
`;

const res = await axios.post(url, {
  channel: '#test',
  text,
  username: 'Test App',
  icon_emoji: ':+1:'
}, { headers: { authorization: `Bearer ${slackToken}` } });

Here's what the above message looks like:


Slack also supports more sophisticated layouts via blocks. Instead of specifying a message's text, you can specify a blocks array. There are several different types of blocks, including a generic section block and an image block for pictures.

For example, below is a simplified example of sending a notification that a new order came in to an eCommerce platform, including a qr code image.

const res = await axios.post(url, {
  channel: '#test',
  blocks: [
    {
      type: 'section',
      text: { type: 'mrkdwn', text: 'New order!' },
      fields: [
        { type: 'mrkdwn', text: '*Name*\nJohn Smith' },
        { type: 'mrkdwn', text: '*Amount*\n$8.50' },
      ]
    },
    {
      type: 'image',
      image_url: 'https://upload.wikimedia.org/wikipedia/commons/thumb/d/d0/QR_code_for_mobile_English_Wikipedia.svg/1200px-QR_code_for_mobile_English_Wikipedia.svg.png',
      alt_text: 'qrcode'
    }
  ],
  username: 'Test App',
  icon_emoji: ':+1:'
}, { headers: { authorization: `Bearer ${slackToken}` } });

Below is what the above message looks like in Slack.


Moving On

Once you get a bot token, working with the Slack API from Node.js is fairly straightforward. You can send neatly formatted messages to Slack to notify people of events they might be interested in, whether they're coworkers at your company or clients using one of your products.

Read More
4.png
Node Is Simple - Part 4

tl;dr This is the fourth article of the Node is Simple article series. In this article series, I will be discussing how to create a simple and secure NodeJS, Express, MongoDB web application.

To follow the past tutorials, Node is Simple Part 1, Node is Simple Part 2 and Node is Simple Part 3.

Hello fellow developers, now you’ve come across this article series, let’s get to it, shall we. In the past articles, I have discussed how to create simple CRUD endpoints, with MongoDB as the database and how to use Postman to test the endpoints. So in this tutorial, I will discuss how to upload files to MongoDB using MongoDB GridFS and view (or stream) them.

So what is this GridFS?

We can upload files to a folder easily with Multer-Middleware. This is a good reference for that.

Uploading Files and Serve Directory Listing Using NodeJS

But I am going to discuss how to use MongoDB as the file storage. However, there is a little hiccup: a single document in MongoDB's BSON format can be at most 16 MB. To overcome this limit, there is a feature called GridFS.

Instead of storing a file in a single document, GridFS divides the file into parts, or chunks [1], and stores each chunk as a separate document.
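For instance, since GridFS's default chunk size is 255 kB, the number of chunk documents stored for a file is easy to predict (a back-of-the-envelope sketch; chunkCount is just an illustrative helper):

```javascript
// GridFS default chunkSize is 255 kB; a file is split into
// ceil(fileSize / chunkSize) chunk documents.
const CHUNK_SIZE = 255 * 1024; // 255 kB in bytes

function chunkCount(fileSizeBytes) {
  return Math.ceil(fileSizeBytes / CHUNK_SIZE);
}

console.log(chunkCount(100 * 1024));  // 1 (a 100 kB image fits in one chunk)
console.log(chunkCount(1024 * 1024)); // 5 (a 1 MB file spans 5 chunks)
```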

Enough with the theory

Yes, let’s move on to implementation. First, we have to create a GridFS driver. Let’s create gridfs-service.js file inside /database directory.

const mongoose = require("mongoose");
const config = require("../config");
const dbPath = config.MONGO_URI;
const chalk = require("chalk");

const GridFsStorage = require("multer-gridfs-storage");
const Grid = require("gridfs-stream");
Grid.mongo = mongoose.mongo;

const conn = mongoose.createConnection(dbPath, {
  useNewUrlParser: true,
  useUnifiedTopology: true,
  useCreateIndex: true,
  useFindAndModify: false
});

conn.on("error", () => {
  console.log(chalk.red("[-] Error occurred from the database"));
});

let gfs, gridFSBucket;

conn.once("open", () => {
  gridFSBucket = new mongoose.mongo.GridFSBucket(conn.db, {
    bucketName: "file_uploads"
  });
  // Init stream
  gfs = Grid(conn.db);
  gfs.collection("file_uploads");
  console.log(
    chalk.yellow(
      "[!] The database connection opened successfully in GridFS service"
    )
  );
});

const getGridFSFiles = id => {
  return new Promise((resolve, reject) => {
    gfs.files.findOne({ _id: mongoose.Types.ObjectId(id) }, (err, files) => {
      if (err) reject(err);
      // Check if files
      if (!files || files.length === 0) {
        resolve(null);
      } else {
        resolve(files);
      }
    });
  });
};

const createGridFSReadStream = id => {
  return gridFSBucket.openDownloadStream(mongoose.Types.ObjectId(id));
};

const storage = new GridFsStorage({
  url: dbPath,
  cache: true,
  options: { useUnifiedTopology: true },
  file: (req, file) => {
    return new Promise(resolve => {
      const fileInfo = {
        filename: file.originalname,
        bucketName: "file_uploads"
      };
      resolve(fileInfo);
    });
  }
});

storage.on("connection", () => {
  console.log(chalk.yellow("[!] Successfully accessed the GridFS database"));
});

storage.on("connectionFailed", err => {
  console.log(chalk.red(err.message));
});

module.exports = mongoose;
module.exports.storage = storage;
module.exports.getGridFSFiles = getGridFSFiles;
module.exports.createGridFSReadStream = createGridFSReadStream;
/database/gridfs-service.js file (Contains the GridFS driver configurations)

As you can see, the “bucketName” in the GridFsStorage configuration is set to “file_uploads”. This means the collection we are using for storing the files is named file_uploads. After running the application, if you go to MongoDB Compass, you can see that two new collections have been created.

file_uploads.chunks //contains the file chunks (each file is divided into chunks of 255 kB)

file_uploads.files //contains the metadata of the file (such as length, chunkSize, uploadDate, filename, md5 hash, and the contentType)

Now we need to install several packages for this. Let’s install them.

$ npm install multer multer-gridfs-storage gridfs-stream

Multer is the middleware that handles multipart/form-data. This is a good reference if you are not familiar with them.

Understanding HTML Form Encoding: URL Encoded and Multipart Forms

The other two packages are for the file upload handling with MongoDB GridFS.

After creating this file, let’s create the GridFS middleware. It is really easy to integrate with Express since Express is handy with middleware. Let’s create /middleware/gridfs-middleware.js file.

const multer = require("multer");
const { storage } = require("../database/gridfs-service");

const upload = multer({
  storage
});

module.exports = function GridFSMiddleware() {
  return upload.single("image");
};
/middleware/gridfs-middleware.js file (Contains GridFS middleware configuration)

There is something to mention here. As you can see, I set “image” as the form field name in upload.single(). You can set it to anything you like, but you have to remember it for later use. FYI, there are several methods for uploading files.

upload.single("field_name"); //for uploading a single file

upload.array("field_name"); //for uploading an array of files

upload.fields([{name: 'avatar'}, {name: 'gallery'}]); //for uploading an array of files with multiple field names

upload.none(); //not uploading any files but contains text fields as multipart form data

Here I used upload.single(“field_name”) because I want to upload only one image (probably I’ll change this to uploading the student’s profile picture). However, the things I’ve described in the above code block are not something I brewed up myself 🤣. They are all described beautifully in the multer documentation.

Now we can set up the controller to upload files to MongoDB GridFS. Let’s update the /controllers/index.js file.

const GridFSMiddleware = require("../middleware/gridfs-middleware");
/** @route  POST /image
 *  @desc   Upload profile image
 *  @access Public
 */
router.post(
  "/image",
  [GridFSMiddleware()],
  asyncWrapper(async (req, res) => {
    const { originalname, mimetype, id, size } = req.file;
    res.send({ originalname, mimetype, id, size });
  })
);

Now let’s try to upload an image, shall we? Let’s start the web application as usual.

$ pm2 start


Figure 1: POST request containing the image file

As in figure 1, you have to select the request body as form-data and then you have to set the key as image. Now select the key type as “File”. (As seen in Figure 2)


Figure 2: Selecting the key type of the form data


Figure 3: Response from uploading the image to the DB

Now if you see the response as figure 3, it means everything worked out. Now let’s see how the MongoDB Compass shows the uploaded file.


Figure 4: file_uploads.files collection (Contains the metadata of the uploaded file)


Figure 5: file_uploads.chunks collection (Contains the file chunks of the uploaded file)

As you can see in figure 4, the file metadata is shown. In figure 5, you can see only one chunk. That is because the image we uploaded is less than 255kB in size.

Now that we have uploaded the file, how do we view it? We cannot see the image from the DB, now can we? So let’s implement a way to view the uploaded image.

Now let’s update the /controllers/index.js file.

const { getGridFSFiles } = require("../database/gridfs-service");
const { createGridFSReadStream } = require("../database/gridfs-service");
/** @route   GET /image/:id
 *  @desc    View profile picture
 *  @access  Public
 */
router.get(
  "/image/:id",
  asyncWrapper(async (req, res) => {
    const image = await getGridFSFiles(req.params.id);
    if (!image) {
      // return here so we do not try to stream a missing image
      return res.status(404).send({ message: "Image not found" });
    }
    res.setHeader("content-type", image.contentType);
    const readStream = createGridFSReadStream(req.params.id);
    readStream.pipe(res);
  })
);

What we are doing here is simple: we get the image id from the request and look the image up in the DB. If the image exists, we set the content type and stream it to the client; if not, we return a 404 with a not-found message. It is as simple as that.

Now let’s try that out.

Figure 6: GET request to view the image

Remember the image id is what gets returned as the id in figure 3.

Figure 7: Image is streamed (image from https://nodejs.org/en/about/resources/)

If we send the GET request from a web browser, we’ll see the image in the browser, just as in figure 7. (If you get a “Your connection is not private” warning, it comes from our development certificate; you can ignore it.)

So that was a lot of work, wasn’t it? But totally worth it. Now you can save your precious images inside a MongoDB database and view them from a browser. Go show off that talent to your friends and impress them 😁.

This is the full /controllers/index.js file for your reference.

/controllers/index.js file (Updated to reflect the view and upload image endpoints)

So, this is the end of this tutorial, and in the next one, I will show you how to create and use custom middleware with Express (remember, I told you Express is great with middleware). As usual, you can see the whole code behind this tutorial in my GitHub repository. (Check for the commit message “Tutorial 4 checkpoint”.)

Node is Simple

So until we meet again, happy coding…

Read More
Node Is Simple — Part 3

tl;dr: This is the third article of the Node is Simple article series. In this article series, I will be discussing how to create a simple and secure NodeJS, Express, MongoDB web application.

To follow the past tutorials, Node is Simple Part 1, and Node is Simple Part 2.

Hello, fellow developers, I am once again asking you to learn some NodeJS with me. In this tutorial, I will be discussing creating basic CRUD operations in NodeJS. So without further ado, let’s start, shall we?

CREATE endpoint

If you can remember, in my previous article I created a simple endpoint to POST the name and city of a student and create a record (document) for that student. What I am going to do now is enhance what I’ve already done. Let’s update the model and the service to reflect our needs.

We are going to add some new fields to the student document.

const mongoose = require("../database");
const Schema = mongoose.Schema;

const studentSchema = new Schema(
  {
    _id: {
      type: mongoose.SchemaTypes.String,
      unique: true,
      required: true,
      index: true
    },
    name: { type: mongoose.SchemaTypes.String, required: true },
    city: { type: mongoose.SchemaTypes.String, required: true },
    telephone: { type: mongoose.SchemaTypes.Number, required: true },
    birthday: { type: mongoose.SchemaTypes.Date, required: true }
  },
  { strict: true, timestamps: true, _id: false }
);

const collectionName = "student";

const Student = mongoose.model(collectionName, studentSchema, collectionName);

module.exports = {
  Student
};
Updated /models/index.js file

What I’ve done here is add _id, telephone, and birthday as the new fields. And I have disabled the Mongoose default _id and specified my own.

Now let’s update the service file.

const { Student } = require("../models");

module.exports = class StudentService {
  async registerStudent(data) {
    const { _id, name, city, telephone, birthday } = data;

    const new_student = new Student({
      _id,
      name,
      city,
      telephone,
      birthday
    });

    const response = await new_student.save();
    const res = response.toJSON();
    delete res.__v;
    return res;
  }
};
Updated /services/index.js file

Before we test what we have done, I am going to let you in on a super-secret. If you have experience developing NodeJS applications, you have probably come across the Nodemon tool, which restarts the server whenever you change a file. But today I am going to tell you about an amazing tool called PM2.

What is PM2?

PM2 is a production-grade process management tool. It has various capabilities such as load balancing (which I will discuss in a later tutorial), enhanced logging features, adding environment variables, and many more. Their documentation is super nifty and worth checking out.

So let’s install PM2 and start using it in our web app.

$ npm install pm2@latest -g

Let’s create an ecosystem.config.js file in the project root, which will contain all of our environment variables, keys, secrets, and PM2 configurations.

const fs = require("fs");

const SERVER_CERT = fs.readFileSync(__dirname + "/config/server.cert", "utf8");
const SERVER_KEY = fs.readFileSync(__dirname + "/config/server.key", "utf8");

module.exports = {
  apps: [
    {
      name: "node-is-simple",
      script: "./index.js",
      watch: true,
      env: {
        NODE_ENV: "development",
        SERVER_CERT,
        SERVER_KEY,
        HTTP_PORT: 8080,
        HTTPS_PORT: 8081,
        MONGO_URI: "mongodb://localhost/students"
      }
    }
  ]
};
ecosystem.config.js file (The configuration file for PM2)

Since we have moved all of our keys into environment variables, we now have to change the /config/index.js file to reflect these changes.

module.exports = {
  SERVER_CERT: process.env.SERVER_CERT,
  SERVER_KEY: process.env.SERVER_KEY,
  HTTP_PORT: process.env.HTTP_PORT,
  HTTPS_PORT: process.env.HTTPS_PORT,
  MONGO_URI: process.env.MONGO_URI
};
Updated /config/index.js file (here we have moved all the secrets to the ecosystem.config.js file as environment variables)

Now, remember: /config/index.js is safe to commit to a public repository, but ecosystem.config.js is not. Also, never commit your private keys to a public repository (I have done it here for demonstration purposes only). Protect your secrets like your cash 😂.

Since we have finished our initial setup, let’s run our application. One thing to keep in mind: if you start the application as usual with

$ node index.js

it won’t work. Now we have to use PM2 to start our application, because PM2 supplies all the environment variables the Node web application needs. Go to the project root folder and run the following command.

$ pm2 start

Figure 1: PM2 startup

If you see this (figure 1) in your console it is working as it should. Now to see the logs run the following command.

$ pm2 logs

Figure 2: PM2 logs

If you see this (figure 2) in your console then the node app is working as it should.

Enough with the PM2

Yes, let’s move to test our enhanced CREATE endpoint.

Figure 3: POST request in Postman

Now create a request like this (figure 3) and hit Send.

Figure 4: POST response in Postman

If you see this response (figure 4), it is safe to say everything works as it should. Yay!

Now since we have a CREATE endpoint, let’s add a READ endpoint.

READ endpoint

Now let’s read what’s inside our student collection. We can view all the students who are registered, or we can see details from only one student.

GET /students

Now let’s get all the students’ details. We only fetch the _id, name, and city of each student here. Add these lines to the /controllers/index.js file.

/** @route  GET /students
 *  @desc   Get all students
 *  @access Public
 */
router.get(
  "/students",
  asyncWrapper(async (req, res) => {
    const response = await studentService.getAllStudents();
    res.send(response);
  })
);

And add these lines (inside the StudentService class) to the /services/index.js file.

async getAllStudents() {
  return Student.find({}, "_id name city");
}

Let me give a summary of the above lines. Student.find() is the method for running queries against MongoDB. Since we need all the students, we pass an empty object as the first argument. The second argument (called a projection) specifies which fields to return and which to leave out. Here I want _id, name, and city. Prefixing a field with "-" (for example "-fieldName") excludes it instead.

GET /students/:id

Now let’s get all the details of a single student.

Now add these lines to the /controllers/index.js file.

/** @route  GET /students/:id
 *  @desc   Get a single student
 *  @access Public
 */
router.get(
  "/students/:id",
  asyncWrapper(async (req, res) => {
    const response = await studentService.getStudent(req.params.id);
    res.send(response);
  })
);

In the Express router, we can specify a path parameter with the /:param syntax and access it via req.params.param. This is basic Express; to learn more, please refer to the documentation, which is a great source of knowledge.

Now add these lines to the /services/index.js file.

async getStudent(_id) {
  return Student.findById(_id, "-__v -createdAt -updatedAt");
}

Here we provide the _id of the student to fetch the details. In the projection, we exclude the __v, createdAt, and updatedAt fields.

Now let’s check these endpoints.

Figure 5: Request /students

Figure 6: Response /students

I have added two more students’ details, so I got three records.

Now let’s check the single student endpoint.

Figure 7: Request /students/stud_1

Figure 8: Response /students/stud_1

If you get results similar to figure 6 and figure 8, let’s call it a success.

UPDATE endpoint

We have created student records and viewed them; now it is time to update them.

To PUT, or to PATCH?

So this is the big question: to update a resource, should we use PUT or PATCH? The answer is somewhat simple. If you replace the whole resource every time, use PUT. If you update the resource partially, use PATCH. It is that simple. This article cleared up the dilemma for me.

Differences between PUT and PATCH

Since I am going to partially update the student record, I will be using PATCH.

PATCH /students/:id

Now let’s update the /controllers/index.js file.

/** @route  PATCH /students/:id
 *  @desc   Update a single student
 *  @access Public
 */
router.patch(
  "/students/:id",
  asyncWrapper(async (req, res) => {
    const response = await studentService.updateStudent(
      req.params.id,
      req.body
    );
    res.send(response);
  })
);

After adding this let’s update the /services/index.js file.

async updateStudent(_id, { name, city, telephone, birthday }) {
  return Student.findOneAndUpdate(
    { _id },
    {
      name,
      city,
      telephone,
      birthday
    },
    {
      new: true,
      omitUndefined: true,
      fields: "-__v -createdAt -updatedAt"
    }
  );
}

Let me give a brief description of what’s going on here. We find the student by the given id and provide the name, city, telephone, and birthday data to be updated. As the third argument, we pass the options: new: true to return the updated document, omitUndefined: true to update the resource partially, and fields: "-__v -createdAt -updatedAt" to remove those fields from the returned document.

Now let’s check this out.

Figure 9: Updating the name of the student (stud_1)

Figure 10: The student’s name has changed from June to Jane

So if you get results similar to figure 10, then let’s say yay! Now let’s move on to the final part of this tutorial, which is DELETE.

DELETE endpoint

Since we can create, read, and update, now it is time to delete some students 😁.

DELETE /students/:id

Since we should not delete all the students at once (it is a best practice IMO), let’s delete a student by the provided student_id.

Now let’s update the /controllers/index.js file.

/** @route  DELETE /students/:id
 *  @desc   Delete a single student
 *  @access Public
 */
router.delete(
  "/students/:id",
  asyncWrapper(async (req, res) => {
    const response = await studentService.deleteStudent(req.params.id);
    res.send(response);
  })
);

Now let’s update the /services/index.js file.

async deleteStudent(_id) {
  await Student.deleteOne({ _id });
  return { message: `Student [${_id}] deleted successfully` };
}

Now it is time to see this in action.

Figure 11: Request to delete the student with student_id [stud_2]

Figure 12: Response of deleting the student with student_id [stud_2]

If the results are similar to figure 12, then it is safe to assume, the application works as it should.

So here are the current /controllers/index.js file and /services/index.js file for your reference.

const router = require("express").Router();
const asyncWrapper = require("../utilities/async-wrapper");
const StudentService = require("../services");
const studentService = new StudentService();

/** @route  GET /
 *  @desc   Root endpoint
 *  @access Public
 */
router.get(
  "/",
  asyncWrapper(async (req, res) => {
    res.send({
      message: "Hello World!",
      status: 200
    });
  })
);

/** @route  POST /register
 *  @desc   Register a student
 *  @access Public
 */
router.post(
  "/register",
  asyncWrapper(async (req, res) => {
    const response = await studentService.registerStudent(req.body);
    res.send(response);
  })
);

/** @route  GET /students
 *  @desc   Get all students
 *  @access Public
 */
router.get(
  "/students",
  asyncWrapper(async (req, res) => {
    const response = await studentService.getAllStudents();
    res.send(response);
  })
);

/** @route  GET /students/:id
 *  @desc   Get a single student
 *  @access Public
 */
router.get(
  "/students/:id",
  asyncWrapper(async (req, res) => {
    const response = await studentService.getStudent(req.params.id);
    res.send(response);
  })
);

/** @route  PATCH /students/:id
 *  @desc   Update a single student
 *  @access Public
 */
router.patch(
  "/students/:id",
  asyncWrapper(async (req, res) => {
    const response = await studentService.updateStudent(
      req.params.id,
      req.body
    );
    res.send(response);
  })
);

/** @route  DELETE /students/:id
 *  @desc   Delete a single student
 *  @access Public
 */
router.delete(
  "/students/:id",
  asyncWrapper(async (req, res) => {
    const response = await studentService.deleteStudent(req.params.id);
    res.send(response);
  })
);

module.exports = router;
Updated /controllers/index.js file (contains CRUD endpoints)
const { Student } = require("../models");

module.exports = class StudentService {
  async registerStudent(data) {
    const { _id, name, city, telephone, birthday } = data;

    const new_student = new Student({
      _id,
      name,
      city,
      telephone,
      birthday
    });

    const response = await new_student.save();
    const res = response.toJSON();
    delete res.__v;
    return res;
  }

  async getAllStudents() {
    return Student.find({}, "_id name city");
  }

  async getStudent(_id) {
    return Student.findById(_id, "-__v -createdAt -updatedAt");
  }

  async updateStudent(_id, { name, city, telephone, birthday }) {
    return Student.findOneAndUpdate(
      { _id },
      {
        name,
        city,
        telephone,
        birthday
      },
      {
        new: true,
        omitUndefined: true,
        fields: "-__v -createdAt -updatedAt"
      }
    );
  }

  async deleteStudent(_id) {
    await Student.deleteOne({ _id });
    return { message: `Student [${_id}] deleted successfully` };
  }
};
Updated /services/index.js file (contains all CRUD service methods)

So this is it for this tutorial, and we will meet again in a future tutorial about uploading files to MongoDB using GridFS. As usual, you can find the code here (Check for the commit message “Tutorial 3 checkpoint”).

node-is-simple

So until we meet again, happy coding…
