List of useful articles and blog posts on Node.js

Breaking down Node.js and WebSockets to understand the use of those in real-time applications

In December 2019, I sadly dislocated my shoulder.

During the recovery process, as part of the therapy, my doctor recommended some exercises, but those were extremely boring. Subsequently, I decided to make the recovery fun and interactive.

Based on that, I took those exercises and mixed them with hardware and software to create a rehabilitation game. In order to make therapy fun, the application needed to transfer data in real-time to give the user a comfortable interaction with the game. A lot of references recommend Node.js and WebSocket for that, and many questions arose.

Why are these tools useful? What characteristics do they have? In which scenarios are they recommended? How do they help us build real-time applications?

The following article takes each tool, identifies its main characteristics, and explains how each concept works. The conclusion then shows how these pieces work together and highlights important considerations for their use.

Node.js

Node.js is a JavaScript runtime environment designed to build scalable network applications. We often hear that Node is non-blocking, single-threaded, asynchronous, concurrent... but what do those terms really mean in this context? Let's find out.

What is single-thread?

Node.js is single-threaded. One thread means one call stack, and one call stack means one thing at a time.

Hmm, that's suspicious. If we want things in real-time, doing only one thing at a time does not sound like a good approach, right? Let's find out!

What is the call stack?

The call stack is a data structure that follows the LIFO ("Last In, First Out") approach: it pushes and pops the instructions read from the code. It is a very important piece of Node.js because it stores the execution order of the program, and it deserves special attention because there is only one of it: if the call stack is busy, our application is busy.
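As a quick illustration (a hypothetical snippet, not from the game project), consider how nested calls pile up on the stack and unwind in reverse order:

function multiply(a, b) {
  return a * b; // pushed last, popped first
}

function square(n) {
  return multiply(n, n); // pushed second
}

function printSquare(n) {
  console.log(square(n)); // pushed first
}

printSquare(4); // the stack grows printSquare -> square -> multiply, then unwinds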

What is non-blocking?

Non-blocking is one of those concepts that is easier to understand once we understand its opposite. Blocking refers to instructions that block the execution of other instructions until they finish. Non-blocking instructions, by contrast, can run without holding up any other instruction.

Considering that our goal is to build applications that transfer data in real-time, the fact that JavaScript is a blocking language may come as a surprise. Fortunately, we can "make it" non-blocking by introducing the concepts of asynchrony and the event loop.
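A minimal sketch of the difference, using Node's built-in fs module (the file path is a placeholder):

const fs = require("fs");

// Blocking: nothing else runs until the file is fully read.
const data = fs.readFileSync("./report.txt", "utf8");
console.log(data);

// Non-blocking: the read is delegated, the callback runs later,
// and the next line executes immediately.
fs.readFile("./report.txt", "utf8", (err, contents) => {
  if (err) throw err;
  console.log(contents);
});
console.log("This prints before the file contents.");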

What is asynchronous?

In JavaScript, asynchrony is the relation between now and later. But later does not simply mean "right after now": it means at some point in the future, a moment you cannot necessarily predict.

Given that definition, and remembering what single-threaded means, if we cannot be certain when an asynchronous task will finish, keeping it on the call stack would slow down the performance of the application. For that reason, such tasks are executed in the form of "promises" or "callbacks", off the call stack, by default.

When the asynchronous instructions are resolved, they need to return to the call stack, and to do that they follow a route through two other components first: the callback queue and then the event loop.

What is the callback queue?

The callback queue is a "first in, first out" (FIFO) data structure that temporarily receives and stores resolved asynchronous code, which makes it a crucial support for the event loop.

What is the event loop?

The event loop is a very simple but important part of how Node.js works. It is a continuously looping process whose main goal is to pass instructions from the callback queue to the call stack, but only when the call stack is empty.
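A classic way to see the callback queue and the event loop in action (the expected output order is noted in the comments):

console.log("first"); // runs immediately on the call stack

setTimeout(() => {
  console.log("third"); // waits in the callback queue until
}, 0);                  // the event loop finds the stack empty

console.log("second"); // the stack must empty before "third" runs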

What is concurrency?

Concurrency is when two or more processes make progress together but not at the same instant: execution "jumps" between them. Even though Node.js is single-threaded, it supports thousands of concurrent connections on a single server.

This happens because Node.js offloads I/O operations to the system kernel whenever possible, and most modern operating system kernels are multi-threaded. This way, Node.js handles one JavaScript operation at a time but many I/O operations simultaneously.
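A small sketch of that idea (the URLs are arbitrary examples): the three requests below are in flight at the same time even though JavaScript runs on a single thread, because the kernel handles the sockets while the event loop stays free.

const https = require("https");

["https://nodejs.org", "https://www.npmjs.com", "https://github.com"].forEach((url) => {
  https.get(url, (res) => {
    console.log(`${url} -> ${res.statusCode}`);
  });
});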

Putting pieces together

Our applications can contain tons of instructions. Some are "fast" to execute, such as assigning a value to a variable, while others can be "slow", e.g. network requests.

Given that Node.js is single-threaded and JavaScript is a blocking language, we could assume that if our application has to run many "slow" instructions, the call stack will stay busy and the performance of the app will degrade. This is exactly where the relevance of Node.js lies: it brings a different approach to that scenario, based on its other features.

Thanks to the non-blocking and asynchronous concepts, the "slow" instructions can be executed as promises or callbacks; this takes them off the call stack and delegates their execution to "someone" with the capacity to deal with them.

Once a "slow" instruction resolves, it returns to the call stack via the callback queue and the event loop. Thus, "slow" tasks complete without saturating the call stack and without affecting the performance of the app.

Having described the principal parts of Node.js and how those work together, let’s do the same now with the WebSocket protocol.

WebSocket

WebSockets came to facilitate the process of building real-time applications: it is a protocol designed to establish persistent bidirectional communication between a client and a server over a TCP connection.

It allows both parties to hold a persistent "conversation": first a connection is established with an initial "handshake", and then packets are exchanged bidirectionally.

Let’s detail other pieces related to this protocol such as its API, lifecycle, scalability, and more.

What is a protocol?

A protocol is a set of syntax rules and semantics that allow two or more entities or machines to transmit information. Communication worldwide would not be possible without such standards. Some of them are TCP, MQTT, HTTPS, and of course WebSocket.

What is real-time data transfer?

Real-time data transfer basically consists of delivering data immediately after it is collected, without transmission delays. It is widely used nowadays in applications such as chat, navigation, gaming, tracking, and so on. WebSocket's persistent, bidirectional connection makes it possible to exchange packets in very few steps, which facilitates real-time data transfer between applications.

WebSockets’ connection lifecycle

The lifecycle of a WebSocket connection can be divided into three phases. The first phase starts with requesting the establishment of a connection. Then, once the connection is established, the protocol can transfer packets bidirectionally between the two sides. Finally, when no more data needs to be exchanged, the connection is closed.

Establish the connection

Before data can be exchanged, the connection must be established; this is known as the "initial handshake". It consists of the client sending a regular HTTP request with an "Upgrade" header to the server, indicating the request to change the protocol in use. The server receives it and, if it supports the WebSocket protocol, agrees to the upgrade and communicates this in its response headers.

After that, the handshake is done and the initial HTTP connection is replaced by a WebSocket connection that uses the same initial TCP/IP connection.
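For illustration, the exchange looks roughly like this (the key and accept values are taken from the protocol specification's example; real values vary per connection):

GET /chat HTTP/1.1
Host: example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=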

Use of the connection

At this point both sides can send and receive data over the WebSocket protocol. Packets can be exchanged bidirectionally at any time, and it is in this phase that most of the events and methods the WebSocket API provides come into practical use.

Close connection

To make smart use of resources, it is important to close the connection once it is no longer used. Both sides have equal rights to close it; all either has to do is call the "close" method, which takes two optional parameters indicating, respectively, the closing code and the reason for closing.

WebSocket API

This protocol offers a simple but useful interface to interact with it. There are a total of two methods and four events.

  • Send (method): send data
  • Close (method): close the connection
  • Open (event): connection established. Available via the onopen property.
  • Message (event): data received. Available via the onmessage property.
  • Error (event): WebSocket error. Available via the onerror property.
  • Close (event): connection closed. Available via the onclose property.

The API also offers other attributes, such as binaryType, readyState, or bufferedAmount, that allow us to implement custom logic, for example a rate limit to protect the server from DoS attacks. There are also many WebSocket libraries that provide this and other high-level mechanisms.
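As an illustration of how those methods and events fit together, here is a browser-side sketch (the URL and message shape are placeholders):

const socket = new WebSocket("wss://example.com/game");

socket.onopen = () => {
  // Send method: deliver data to the server.
  socket.send(JSON.stringify({ type: "move", angle: 42 }));
};

socket.onmessage = (event) => {
  console.log("Received:", event.data); // Message event
};

socket.onerror = (event) => {
  console.error("WebSocket error:", event); // Error event
};

socket.onclose = (event) => {
  console.log(`Closed: ${event.code} ${event.reason}`); // Close event
};

// Later, when no more data needs to be exchanged:
// socket.close(1000, "Session ended");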

Encrypted WebSocket

Using the ws URL scheme, it is possible to establish a WebSocket connection.

let socket = new WebSocket("ws://nodesource.com");

But it is also possible to use the ‘wss’ URL scheme.

let socket = new WebSocket("wss://nodesource.com");

At first glance, the only difference between them is an extra ‘s’ in the second URL scheme, but that change has more implications than the addition of a letter. That "s" stands for "secure", which means the connection is encrypted using WebSocket over SSL/TLS.

As an analogy, WSS is to WS what HTTPS is to HTTP: HTTPS is the encrypted version of HTTP. For security reasons, it is highly recommended to use the encrypted variant of both protocols.

Scalability

Scalability is a crucial consideration in the design process of an application; otherwise, we can face unfavorable scenarios when the moment of growth comes.

To increase the capacity and functionality of an app when required, there are two approaches: vertical scalability and horizontal scalability.

Vertical scalability consists of adding more resources to the server, for example more RAM or a better CPU. It is a fast way to grow, but it has limits: we can't do it infinitely.

Horizontal scalability is about adding more instances of the server. It requires more configuration, but it is a solution that can be repeated as many times as needed. However, WebSocket's main characteristic, the persistent bidirectional connection, creates an important complication when we try to scale horizontally.

This happens because when the socket connection is established, it is bound to specific instances of a client and a server; if we increase the number of instances in the backend, there is a chance that the client's request reaches a server instance that knows nothing about the established connection.

There are, however, alternatives that avoid this situation, for example a load balancer configured with sticky sessions. That way, the client's requests are always routed to the correct server instance.

Conclusions

Node.js

Node.js is a relevant tool for developing real-time applications because, among other things, it executes each instruction according to its characteristics and needs: it can execute tasks really fast on the call stack using synchronous code, or delegate them to "someone else" through asynchronous code when they require more processing. This combination makes smart use of resources and keeps the application's performance in good shape.

Some characteristics can be confusing at first because they don't seem to belong to a proficient tool for developing real-time applications, such as the fact that JavaScript is a blocking language or that Node.js is single-threaded. But when we see the whole picture, we find other characteristics (for example, asynchronous code and the event loop) that let us understand how Node.js works and how it takes advantage of its strengths and works around its weaknesses.

It's also important to understand that, because of all the characteristics mentioned above, Node.js is an excellent option not only for real-time applications but for any application that needs to handle many I/O requests. By the same token, it is not the best approach for CPU-intensive computing.

WebSocket

WebSocket is a protocol that establishes persistent bidirectional communication between parties. It provides a mechanism for browser-based applications that need two-way communication with servers that do not rely on opening multiple HTTP connections, which makes it a very useful tool in the development of real-time applications.

It presents an API that simplifies interaction with the protocol, but it does not include some common high-level mechanisms such as reconnection, authentication, and many others. However, it is possible to implement them manually or through a library. Also, the protocol offers an encrypted option for the exchange of packets, and its connection follows a three-phase lifecycle.

On the other hand, it's important to keep in mind that scaling an app with WebSockets horizontally requires extra steps, including adding more pieces to the architecture of the app and extra configuration. Those steps should be accounted for in the design process; otherwise, they can turn into unfavorable scenarios in the future.

Finally, regarding my use case: the rehabilitation game consists of capturing human movement and using it to control Chrome's running-dinosaur game. In that context, the open WebSocket connection lets the backend and frontend exchange the movements, translated into data, without opening a new HTTPS connection each time a movement is detected. As for Node.js, its characteristics enable fast processing of the instructions, which allows the real-time communication.

Read More
Exporting a web page as PDF or PNG from Node.js

This blog post was first published on the thedreaming.org blog.


I’m working on a reporting engine right now, and one of the requirements I have is to be able to export reports as PDF files, and send PDF reports via email. We also have a management page where you can see all the dashboards, and we’d like to render a PNG thumbnail of the dashboard to show there. Our back-end is all Node.js based, and you’re probably aware that Node is not known for its graphics prowess. All of our reports are currently being generated by a React app, written with Gatsby.js, so what we’d like to do is load up a report in a headless browser of some sort, and then either “print” that to PDF or take a PNG image. We’re going to look at doing exactly this using Playwright and Browserless.

The basic idea here is we want an API endpoint we can call into as an authenticated user, and get it to render a PDF of a web page for us. To do this we’re going to use two tools. The first is playwright, which is a library from Microsoft for controlling a headless Chrome or Firefox browser. The second is browserless, a microservice which hosts one or more headless Chrome browsers for us.

If you have a very simple use case, you might actually be able to do this entirely with browserless via its APIs. There’s a /pdf REST API which would do pretty much what we want here. We can pass in cookies and custom headers to authenticate our user to the reporting backend, and we can set the width of the viewport to something reasonable (since of course our analytics page is responsive, and we want the desktop version in our PDF reports, not the mobile version).
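For reference, a call to that endpoint might look something like this (the token and URL are placeholders, and the exact options depend on your browserless version, so check its docs):

curl -X POST \
  "https://chrome.browserless.io/pdf?token=YOUR-API-TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"url": "https://example.com/report", "options": {"printBackground": true}}' \
  --output report.pdf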

But, playwright offers us two nice features that make it desirable here. First it has a much more expressive API. It’s easy, for example, to load a page, wait for a certain selector to be present (or not be present) to signal that the page is loaded, and then generate a PDF or screenshot. Second, when you npm install playwright, it will download a copy of Chrome and Firefox automatically, so when we’re running this locally in development mode we don’t even need the browserless container running.

Setting up and configuring playwright

In our backend analytics project, we’re going to start out by installing playwright:

$ npm install playwright

You’ll notice the first time you do this, it downloads a few hundred megs of browser binaries and stores them (on my mac) in ~/Library/Caches/ms-playwright. Later on, when we deploy our reporting microservice in a docker container, this is something we do not want to happen, because we don’t need a few hundred megs of browser binaries - that’s what browserless is for. So, in our backend project’s Dockerfile, we’ll modify our npm ci command:

RUN PLAYWRIGHT_SKIP_BROWSER_DOWNLOAD=1 npm ci && \
    rm -rf ~/.npm ~/.cache

If you’re not using docker, then however you run npm install in your backend, you want to add this PLAYWRIGHT_SKIP_BROWSER_DOWNLOAD environment variable to prevent all the browser binaries from being installed.

We do want playwright to use these browser binaries when it runs locally, but in production we want to use browserless. To control this, we’re going to set up an environment variable that controls playwright’s configuration:

import * as pw from "playwright";

// Set BROWSERLESS_URL environment variable to connect to browserless.
// e.g. BROWSERLESS_URL=wss://chrome.browserless.io?token=YOUR-API-TOKEN
const BROWSERLESS_URL = process.env.BROWSERLESS_URL;

let browser: pw.Browser | undefined;

export async function getBrowser() {
  if (!browser) {
    if (BROWSERLESS_URL) {
      // Connect to the browserless server.
      browser = await pw.chromium.connect({
        wsEndpoint: BROWSERLESS_URL,
      });
    } else {
      // In dev mode, just run a local copy of Chrome (headed here, so you can watch it work).
      browser = await pw.chromium.launch({
        headless: false,
      });
    }

    // If the browser disconnects, create a new browser next time.
    browser.on("disconnected", () => {
      browser = undefined;
    });
  }

  return browser;
}

This creates a shared “browser” instance which we’ll use across multiple clients. Each client will create a “context”, which is sort of like an incognito window for that client. Contexts are cheap and lots can exist in parallel, whereas browser instances are expensive, so this lets us use a single instance of Chrome to serve many users.

We’re also going to write a little helper function here to make it easier to make sure we always release a browser context when we’re done with it:

async function withBrowserContext<T = void>(
  fn: (browserContext: pw.BrowserContext) => Promise<T>
) {
  const browser = await getBrowser();
  const browserContext = await browser.newContext();

  try {
    return await fn(browserContext);
  } finally {
    // Close the browser context when we're done with
    // it. Otherwise browserless will keep it open forever.
    await browserContext.close();
  }
}

The idea here is you call this function, and pass in a function fn that accepts a browser context as a parameter. This lets us write code like:

await withBrowserContext((context) => {
  // Do stuff here!
});

And then we don’t have to worry about cleaning up the context if we throw an exception or otherwise exit our function in an “unexpected” way.

Generating the PDF or PNG

Now we get to the meaty part. We want an express endpoint which actually generates the PDF or the PNG. Here’s the code, with lots of comments. Obviously this is written with my “reporting” example in mind, so you’ll need to make a few small changes if you want to give this a try; notably you’ll probably want to change the express endpoint, and change the dashboardUrl to point to whatever page you want to make a PDF or PNG of.

// We set up an environment variable called REPORTING_BASE_URL
// which will point to our front end on localhost when running
// this locally, or to the real frontend in production.
const REPORTING_BASE_URL =
  process.env.REPORTING_BASE_URL || "http://localhost:8000";

// Helper function for getting numbers out of query parameters.
function toNumber(val: string | undefined, def: number) {
  const result = val ? parseInt(val, 10) : def;
  return isNaN(result) ? def : result;
}

// Our express endpoint
app.post("/api/reporting/export/:dashboardId", (req, res, next) => {
  Promise.resolve()
    .then(async () => {
      const type = req.query.type || "pdf";
      const width = toNumber(req.query.width, 1200);
      const height = toNumber(req.query.height, 800);

      // Figure out what account and dashboard we want to show in the PDF.
      // Probably want to do some validation on these in a real system, although
      // since we're passing them as query parameters to another URL, there's
      // not much chance of an injection attack here...
      const qs = new URLSearchParams({ dashboardId: req.params.dashboardId });

      // If you're not building a reporting engine, change this
      // URL to whatever you want to make into a PDF/PNG. :)
      const dashboardUrl = `${REPORTING_BASE_URL}/dashboards/dashboard?${qs.toString()}`;

      await withBrowserContext(async (browserContext) => {
        // Set up authentication in the new browser context.
        await setupAuth(req, browserContext);

        // Create a new tab in Chrome.
        const page = await browserContext.newPage();
        await page.setViewportSize({ width, height });

        // Load our dashboard page.
        await page.goto(dashboardUrl);

        // Since this is a client-side app, we need to
        // wait for our dashboard to actually be visible.
        await page.waitForSelector("css=.dashboard-page");

        // And then wait until all the loading spinners are done spinning.
        await page.waitForFunction(
          '!document.querySelector(".loading-spinner")'
        );

        if (type === "pdf") {
          // Generate the PDF
          const pdfBuffer = await page.pdf();

          // Now that we have the PDF, we could email it to a user, or do whatever
          // we want with it. Here we'll just return it back to the client,
          // so the client can save it.
          res.setHeader(
            "content-disposition",
            'attachment; filename="report.pdf"'
          );
          res.setHeader("content-type", "application/pdf");
          res.send(pdfBuffer);
        } else {
          // Generate a PNG
          const image = await page.screenshot();
          res.setHeader("content-type", "image/png");
          res.send(image);
        }
      });
    })
    .catch((err) => next(err));
});

As the comments say, this creates a new “browser context” and uses the context to visit our page and take a snapshot.

Authentication

There’s one bit here I kind of glossed over:

// Set up authentication in the new browser context.
await setupAuth(req, browserContext);

Remember that browserless is really just a headless Chrome browser. The problem here is we need to authenticate our user twice; the user needs to authenticate to this backend express endpoint, but then this is going to launch a Chrome browser and navigate to a webpage, so we need to authenticate the user in that new browser session, too (since obviously not just any user off the Internet is allowed to look at our super secret reporting dashboards). We could visit a login page and log the user in, but it’s a lot faster to just set the headers and/or cookies we need to access the page.

What exactly this setupAuth() function does is going to depend on your use case. Maybe you’ll call browserContext.setExtraHTTPHeaders() to set a JWT token. In my case, my client is authenticating with a session cookie. Unfortunately we can’t use setExtraHTTPHeaders() to just set the session cookie, because Playwright generates its own cookie header and we can’t override it. So instead, I’ll parse the cookie that was sent to us and then forward it on to the browser context via browserContext.addCookies().

import * as cookieParse from "cookie-parse";
import { IncomingMessage } from "http";
import { URL } from "url";

// Playwright doesn't let us just set the "cookie"
// header directly. We need to actually create a
// cookie in the browser. In order to make sure
// the browser sends this cookie, we need to set
// the domain of the cookie to be the same as the
// domain of the page we are loading.
const REPORTING_DOMAIN = new URL(REPORTING_BASE_URL).hostname;

async function setupAuth(
  req: IncomingMessage,
  browserContext: pw.BrowserContext
) {
  // Pass through the session cookie from the client.
  const cookie = req.headers.cookie;
  const cookies = cookie ? cookieParse.parse(cookie) : undefined;

  if (cookies && cookies["session"]) {
    console.log(`Adding session cookie: ${cookies["session"]}`);
    await browserContext.addCookies([
      {
        name: "session",
        value: cookies["session"],
        httpOnly: true,
        domain: REPORTING_DOMAIN,
        path: "/",
      },
    ]);
  }
}

We actually don’t have to worry about the case where the session cookie isn’t set - the client will just get a screenshot of whatever they would have gotten if they weren’t logged in!

If we log in to our UI, and then visit http://localhost:8000/api/reporting/export/1234, we should now see a screenshot of our beautiful reporting dashboard.

Cleaning up site headers

If you actually give this a try, you might notice that your PDF has all your menus and navigation in it. Fortunately, this is easy to fix with a little CSS. Playwright/browserless will render the PDF with the CSS media set to “print”, exactly as if you were trying to print this page from a browser. So all we need to do is set up some CSS to hide our site’s navigation for that media type:

@media print {
  .menu,
  .sidebar {
    display: none;
  }
}

An alternative here might be to pass in an extra query parameter like bare=true, although the CSS version is nice because then if someone tries to actually print our page from their browser, it’ll work.

Setting up Browserless

So far we’ve been letting Playwright launch a Chrome instance for us. This works fine when we’re debugging locally, but in production, we’re going to want to run that Chrome instance as a microservice in another container, and for that we’re going to use browserless. How you deploy browserless in your production system is outside the scope of this article, just because there’s so many different ways you could do it, but let’s use docker-compose to run a local copy of browserless, and get our example above to connect to it.

First, our docker-compose.yml file:

version: "3.6"
services:
  browserless:
    image: browserless/chrome:latest
    ports:
      - "3100:3000"
    environment:
      - MAX_CONCURRENT_SESSIONS=10
      - TOKEN=2cbc5771-38f2-4dcf-8774-50ad51a971b8

Now we can run docker-compose up -d to run a copy of browserless, and it will be available on port 3100, and set up a token so not just anyone can access it.

Now here I’m going to run into a little problem. The UI I want to take PDFs and screenshots from is a Gatsby app running on localhost:8000 on my Mac. The problem is, this port is not going to be visible from inside my docker VM. There’s a variety of ways around this problem, but here we’ll use ngrok to set up a publicly accessible tunnel to localhost, which will be visible inside the docker container:

$ ngrok http 8000
Forwarding                    http://c3b3d3d435f1.ngrok.io -> localhost:8000
Forwarding                    https://c3b3d3d435f1.ngrok.io -> localhost:8000

Now we can run our reporting engine with:

$ BROWSERLESS_URL=ws://localhost:3100?token=2cbc5771-38f2-4dcf-8774-50ad51a971b8 \
    REPORTING_BASE_URL=https://c3b3d3d435f1.ngrok.io \
    node ./dist/bin/server.js

And now if we try to hit up our endpoint, we should once again see a screenshot of our app!

Hopefully this was helpful! If you try this out, or have a similar (or better!) solution, please let us know!

Read More
10 Best Cross-Platform App Frameworks to Consider for Your App Development

Mobile app development is time-intensive and complex. For developers, it’s a challenge to build an app that’s efficient and can also run on several platforms. Cross-platform app development helps organizations reach their target audience better.

Cross-platform development is an approach through which a mobile application runs on various platforms. Developers write code just once and run it on any platform of choice. Cross-platform application development has its own merits, which play a critical role in its popularity today.

As its reach grew, several development tools and frameworks started coming out in the market. Gradually, and then suddenly, mobile app developers, including Node.js development companies, started trying their hands at this one-of-a-kind technology.

The Pros Of Cross-platform Apps

1. Reusable code. Rather than writing fresh code for each platform, developers can reuse the same code across platforms. This cuts down on repetitive tasks, eliminating tedious work. This is not a new concept, however: it has been used in software development for several years now, and the benefits of code reuse are well established.

2. Faster development time. Developing cross-platform apps is quicker when a single script is deployed, and increased development speed gets the product to the marketplace sooner. The time saved can be allocated to working on code for a brand-new app. The situation is a win-win for everyone involved: developers, marketers, and users.

3. Seamless implementation. Today, many technologies offer cross-platform solutions, making it easy for developers to make changes. With a tool like Appcelerator, for instance, code can be written in HTML5 and converted for various platforms. This makes app development faster and also makes it easier to sync updates across different mobile devices.

4. Cost control. Thanks to cross-platform mobile application development, and the services of developers such as NodeJS development services, companies now only need a one-time investment to have their app developed, in contrast to earlier times when they had to spend heavily on various technologies and tools. They need not spend on developing separate apps for every platform. Furthermore, the same team can work on all the platforms.

Top Ten Cross-Platform App Frameworks 2020

As technology continues to evolve, the IT field and business organizations need to keep up with the changes to remain relevant. When it comes to developing cross-platform apps, there are numerous frameworks to choose from. Here, I have gathered an in-depth view of the top cross-platform app frameworks that are highly used by service providers.

1. React Native


When it comes to cross-platform app development, it’s hard to leave out React Native. Built on JavaScript, it’s great for writing real code and giving mobile apps a native feel.

Features

  • Huge community support, which keeps boosting it by introducing and improving features and fixing bugs.
  • One-time coding instantly reduces development time while keeping costs at their lowest.
  • Developers don’t need to code the same application twice for different platforms.
  • To a large extent it focuses on the UI, rendering a highly responsive interface.

2. Ionic


One of the most popular and astonishing cross-platform app frameworks, Ionic is AngularJS-based. Developers can combine several languages, build a flawlessly creative UI, and add user-friendly features.

Features

  • Based on a Sass UI framework designed especially for mobile operating systems, it provides several UI components for developing robust apps.
  • A front-end, open-source framework, which means it allows alterations to the code structure to suit every developer and saves time.
  • Being AngularJS-based, it offers HTML syntax extensions with ease.
  • Provides a native-like feel to the app, making it a favorite among developers.
  • Utilizes Cordova plugins, allowing access to a device’s built-in features, such as the camera, GPS, and audio recorder.

3. Xamarin


Xamarin is considerably different from the other frameworks discussed so far. It’s a simplified framework for developing Windows, iOS, and Android apps with the help of .NET and C#, rather than HTML and JS libraries. Developers can reuse up to 90 percent of the code to build apps for different platforms.

Features

  • Apps are built using C#, a modern language with advantages over Java and Objective-C.
  • Reduces development cost and time, since it supports WORA (write once, run anywhere) and has a humongous collection of class libraries.
  • Has astounding native UIs and controls, enabling developers to design a native-like application.

4. NodeJS


For developing cross-platform apps, this is a wonderful framework. Essentially, it’s a JavaScript runtime environment built on Chrome’s V8 JavaScript engine. An open-source environment, it supports the development of scalable, server-side networking apps. A NodeJS development company can build highly effective solutions using its numerous rich libraries of JavaScript modules, which simplify web app development.

Features

  • Amazingly fast at executing code, because it’s built on Chrome’s V8 engine.
  • Apps don’t buffer data; instead, they output it in chunks.
  • Delivers smooth, well-functioning apps, using a single-threaded model with event looping.

5. NativeScript


A free, terrific cross-platform framework based on JavaScript. It’s fair to say that it’s the preferred option among developers looking for WORA functionality. It moreover offers all native APIs and lets developers reuse existing plugins straight from npm in their projects.

Features

  • Renders accessible, beautiful, platform-native user interfaces. Developers only have to define them once, and they run everywhere.
  • Gives developers a complete web resource packed with plugins for all sorts of solutions, which eliminates the need for third-party solutions.
  • Provides the freedom to access native APIs seamlessly, meaning developers need no further knowledge of native development languages.
  • Uses TypeScript and Angular for programming.

6. Electron


Electron is a wonderful framework for building desktop apps with today’s emerging web technologies: HTML, CSS, and JavaScript. The framework handles all the difficult aspects so you can concentrate on the core of the app and create a revolutionary design. Designed as an open-source framework, Electron combines the best web technologies today and is also cross-platform, meaning it’s compatible with Windows, Mac, and Linux.

It has automatic updates, notifications, native menus, debugging, crash reporting, and profiling as well.

Features

  • Helps in developing feature-loaded, high-quality, cross-platform desktop apps.
  • Support for the Chromium engine.
  • Its app structure and business logic.
  • Secure access to storage.
  • High-quality, hardware-level APIs.
  • Seamless management system.
  • Support for different UI/UX tools.
  • Compatibility with different libraries and frameworks.

7. Flutter


Flutter, an impressive cross-platform app framework, was introduced by Google in 2017. It’s a software development kit designed to help with iOS and Android app development.

Features

  • There’s no need to manually update the UI, since Flutter has a reactive framework: developers only have to update variables, and the UI changes become visible afterwards.
  • The perfect choice for developing an MVP (Minimum Viable Product), since it enables a fast and cost-efficient development process.
  • Offers a portable, GPU-rendering UI, giving it the power to work on the latest interfaces.

8. Corona Software Development Kit


The framework lets programmers develop 2D mobile apps for all major platforms. Furthermore, it promises up to ten times faster mobile and game application development, delivering remarkable results. The focus of the framework is on the key elements of development, namely portability, extensibility, speed, scalability, and ease of use.

Features

  • Responds to changes made in code almost instantaneously, providing a real-time preview of app performance, just as it would appear on a real device.
  • Gives developers the ability to work with sprites, music, audio, animations, and so on through more than 100 APIs.
  • Supports nearly 200 plugins, including analytics, in-app advertising, hardware features, and media.

9. PhoneGap Framework


A cross-platform framework that’s flawless for developing mobile apps using HTML5, JavaScript, and CSS. Furthermore, it has a cloud solution that gives developers the option to share apps during the development process to gather feedback from other developers.

Features

  • Considered a great cross-platform framework, since it enables creating cross-platform apps with existing web technologies.
  • Follows a pluggable architecture, which means access to native device APIs is possible and can be extended modularly.
  • Supports the use of a single code base to build apps for various platforms.

10. Sencha Touch Framework


The framework is great for developing cross-platform, web-based applications, which are typically used to create efficient apps with hardware acceleration techniques. With Sencha Touch, developers can build securely integrated, well-tested UI libraries and components.

Features

  • Known for providing built-in, native-looking themes for all major platforms.
  • Its Cordova support is one of its most celebrated features, providing native API access along with packaging.
  • Created with an effective, backend-agnostic data package.
  • Packed with over 50 customizable, built-in widgets.

Conclusion

The top ten frameworks mentioned here are just some of the many you can choose from to develop cross-platform apps. With the continuous evolution of technology and constant updates, new frameworks will keep emerging. In the meantime, for all your cross-platform app development requirements, the ten mentioned here are among the best available today.

Read More
Node Is Simple — Part 5

tl;dr: This is the fifth article of the Node is Simple article series. In this article series, I will be discussing how to create a simple and secure NodeJS, Express, MongoDB web application.

To follow the past tutorials:

Hello, hello, my dear fellow developers, glad you could make it to my article where I discuss developing web applications with NodeJS. As I promised in the previous article, in this article I will tell you how to use custom middleware with Express.

So what is this Middleware?

That’s the question, isn’t it? As I told you many times, Express is great with middleware. As discussed in the Express documentation,

Middleware functions are functions that have access to the request object (req), the response object (res), and the next middleware function in the application’s request-response cycle. The next middleware function is commonly denoted by a variable named next.

Express is nothing without middleware. So I am going to implement a simple error handling middleware. As the Express documentation describes,

app.use(function (err, req, res, next) {  
  console.error(err.stack)  
  res.status(500).send('Something broke!')  
})

this is how we create a simple middleware to log errors. But in this tutorial, I will extend this middleware to return specific error messages and HTTP error codes in the response and log the error messages in the console.

So let’s get to it, shall we?

First, let’s create some error classes to represent errors that we are going to get from the web application.

Let’s create /errors/index.js file.

module.exports = {
  AccessDeniedError: class AccessDeniedError {
    constructor(message) {
      this.message = message;
    }
  },
  AuthenticationError: class AuthenticationError {
    constructor(message) {
      this.message = message;
    }
  },
  ValidationError: class ValidationError {
    constructor(message) {
      this.message = message;
    }
  },
  NotFoundError: class NotFoundError {
    constructor(message) {
      this.message = message;
    }
  }
};
/errors/index.js file (Contains error classes)

Let’s see what’s happening here. Here, we are declaring four classes to describe errors we encounter in web applications.

  1. Access Denied Error (HTTP 403)
  2. Authentication Error (HTTP 401)
  3. Not Found Error (HTTP 404)
  4. Validation Error (HTTP 400)

Of course, there is another famous HTTP error, Internal Server Error (HTTP 500), but since it is a very generic error, we are not declaring a specific class for it; it will be the default error.

Then let’s create the error handling middleware. Let’s create /middleware/error-handling.js file.

const {
  ValidationError,
  AuthenticationError,
  AccessDeniedError,
  NotFoundError
} = require("../errors");
const chalk = require("chalk");

const errorLogger = (err, req, res, next) => {
  if (err.message) {
    console.log(chalk.red(err.message));
  }
  if (err.stack) {
    console.log(chalk.red(err.stack));
  }
  next(err);
};

const authenticationErrorHandler = (err, req, res, next) => {
  if (err instanceof AuthenticationError)
    return res.status(401).send({ message: err.message });
  next(err);
};

const validationErrorHandler = (err, req, res, next) => {
  if (err instanceof ValidationError)
    return res.status(400).send({ message: err.message });
  next(err);
};

const accessDeniedErrorHandler = (err, req, res, next) => {
  if (err instanceof AccessDeniedError)
    return res.status(403).send({ message: err.message });
  next(err);
};

const notFoundErrorHandler = (err, req, res, next) => {
  if (err instanceof NotFoundError)
    return res.status(404).send({ message: err.message });
  next(err);
};

const genericErrorHandler = (err, req, res, next) => {
  res.status(500).send({ message: err.message });
  next();
};

const ErrorHandlingMiddleware = app => {
  app.use([
    errorLogger,
    authenticationErrorHandler,
    validationErrorHandler,
    accessDeniedErrorHandler,
    notFoundErrorHandler,
    genericErrorHandler
  ]);
};

module.exports = ErrorHandlingMiddleware;
/middleware/error-handling.js file (Contains the error handling middleware configurations)


Well, the code is somewhat complicated, isn’t it? Fear not, I will discuss what is happening here. First, we need to know how Express middleware works. In Express, if you provide an error-first callback, its signature marks it as error-handling middleware.

Error-handling middleware always takes four arguments. You must provide four arguments to identify it as an error-handling middleware function. Even if you don’t need to use the next object, you must specify it to maintain the signature. Otherwise, the next object will be interpreted as regular middleware and will fail to handle errors.

(from https://expressjs.com/en/guide/using-middleware.html)

And another thing to mention is that the middleware can be defined as arrays. The array order will be preserved and the middleware will be applied in that order. So according to our code,

1. errorLogger //Logging errors  
2. authenticationErrorHandler //return authentication error  
3. validationErrorHandler //return validation error  
4. accessDeniedErrorHandler //return access denied error  
5. notFoundErrorHandler //return not found error  
6. genericErrorHandler //return internal server error

these middleware functions are called in this particular order. In our web application, when an error is thrown, it is first logged in the console and then passed to authenticationErrorHandler via next(err). If the error is an instance of the AuthenticationError class, the handler ends the request-response cycle by returning the relevant error response. If not, the error is passed on to the next error-handling middleware, and so on down the middleware stack, until it finally reaches genericErrorHandler and the request-response cycle ends. It’s as simple as that.

Now let’s bind this middleware as an application-level middleware. Let’s update /index.js file.

const express = require("express");
const chalk = require("chalk");
const http = require("http");
const https = require("https");
const config = require("./config");

const HTTP_PORT = config.HTTP_PORT;
const HTTPS_PORT = config.HTTPS_PORT;
const SERVER_CERT = config.SERVER_CERT;
const SERVER_KEY = config.SERVER_KEY;

const app = express();
const Middleware = require("./middleware");
const ErrorHandlingMiddleware = require("./middleware/error-handling");
const MainController = require("./controllers");

Middleware(app);
app.use("", MainController);
ErrorHandlingMiddleware(app);
app.set("port", HTTPS_PORT);

/**
 * Create HTTPS Server
 */

const server = https.createServer(
  {
    key: SERVER_KEY,
    cert: SERVER_CERT
  },
  app
);

const onError = error => {
  if (error.syscall !== "listen") {
    throw error;
  }

  const bind =
    typeof HTTPS_PORT === "string"
      ? "Pipe " + HTTPS_PORT
      : "Port " + HTTPS_PORT;

  switch (error.code) {
    case "EACCES":
      console.error(chalk.red(`[-] ${bind} requires elevated privileges`));
      process.exit(1);
      break;
    case "EADDRINUSE":
      console.error(chalk.red(`[-] ${bind} is already in use`));
      process.exit(1);
      break;
    default:
      throw error;
  }
};

const onListening = () => {
  const addr = server.address();
  const bind = typeof addr === "string" ? `pipe ${addr}` : `port ${addr.port}`;
  console.log(chalk.yellow(`[!] Listening on HTTPS ${bind}`));
};

server.listen(HTTPS_PORT);
server.on("error", onError);
server.on("listening", onListening);

/**
 * Create HTTP Server (HTTP requests will be 301 redirected to HTTPS)
 */
http
  .createServer((req, res) => {
    res.writeHead(301, {
      Location:
        "https://" +
        req.headers["host"].replace(
          HTTP_PORT.toString(),
          HTTPS_PORT.toString()
        ) +
        req.url
    });
    res.end();
  })
  .listen(HTTP_PORT)
  .on("error", onError)
  .on("listening", () =>
    console.log(chalk.yellow(`[!] Listening on HTTP port ${HTTP_PORT}`))
  );

module.exports = app;
/index.js file (Updated to reflect the binding of error handling middleware)

Now I have something to clarify. As you can see on line 19, I have added ErrorHandlingMiddleware(app) there. You might wonder whether there is a specific reason behind that. Actually, there is.

Middleware(app);  
app.use("", MainController);  
ErrorHandlingMiddleware(app);  
app.set("port", HTTPS_PORT);

As you may remember, Express processes middleware in the order we bind it to the app. If we bound the error-handling middleware before binding the Express router, the Express app would not catch the errors thrown in the routes. So we first bind the Express router and then bind the error-handling middleware.

Now if you start the web application as usual,

$ pm2 start 

and do some CRUD operations, you won’t see any errors returned, since we haven’t added any error handling in our /services/index.js file. So I will show you an example by explicitly throwing an error.

Let’s add this error endpoint. Every time you hit it, it will throw you a not found error. Let’s update /controllers/index.js file.

GET /error

const { NotFoundError } = require("../errors");

/**
 * @route GET /error
 * @desc Return an example error
 * @access Public
 */
router.get(
  "/error",
  asyncWrapper(async () => {
    throw new NotFoundError("Sorry content not found");
  })
);
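Note that the asyncWrapper helper used above is not defined in this excerpt. A minimal sketch of what such a wrapper typically looks like (the name and shape are assumptions based on how it is used here):

const asyncWrapper = (fn) => (req, res, next) => {
  // Resolve the handler's promise and forward any rejection
  // to the error-handling middleware via next(err).
  Promise.resolve(fn(req, res, next)).catch(next);
};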


Figure 1: Request and response of custom error endpoint

As you can see in figure 1, the response status is 404 and the message says “Sorry content not found”. So it works as it should.

Now let’s see how our console looks. Now type,

$ pm2 logs

in your terminal and see the console log.


Figure 2: console log

Bonus super-secret round

So I’m going to tell you a super-secret about how to view colored logs in PM2. You just have to add a single line to the /ecosystem.config.js file.

const fs = require("fs");

const SERVER_CERT = fs.readFileSync(__dirname + "/config/server.cert", "utf8");
const SERVER_KEY = fs.readFileSync(__dirname + "/config/server.key", "utf8");

module.exports = {
  apps: [
    {
      name: "node-is-simple",
      script: "./index.js",
      watch: true,
      args: ["--color"],
      env: {
        NODE_ENV: "development",
        SERVER_CERT,
        SERVER_KEY,
        HTTP_PORT: 8080,
        HTTPS_PORT: 8081,
        MONGO_URI: "mongodb://localhost/students"
      }
    }
  ]
};
ecosystem.config.js file (Reflects the update to view colored console output)


All you have to do is add line 12: args: ["--color"].

Bonus Bonus round

Now I’m going to tell you about two additional Express middleware libraries which are going to save your life 🤣.

  1. Morgan (HTTP request logger middleware for node.js)
  2. CORS (CORS is a node.js package for providing a Connect/Express middleware that can be used to enable CORS with various options.)

I guarantee that these two libraries will save you from certain pitfalls and you’ll thank them pretty much all the time.

This is a good article on Morgan.

Getting Started With morgan: the Node.js Logger Middleware

This is a good article on CORS.

Using CORS in Express

So let’s update our /middleware/common.js file.

const bodyParser = require("body-parser");
const helmet = require("helmet");
const morgan = require("morgan");
const cors = require("cors");

module.exports = app => {
  app.use(bodyParser.json());
  app.use(morgan("dev"));
  app.use(cors());
  app.use(helmet());
};
/middleware/common.js file (Updated to reflect the new middleware CORS and Morgan)

Then let’s install these two packages.

$ npm install cors morgan

Let’s see how Morgan shows the HTTP requests in our console. We have specified “dev” as the logging format for Morgan. You can find more details on that in the documentation.


Figure 3: Morgan request logging in the console
GET / 200 3.710 ms - 39

Let’s understand what this means.

:method :url :status :response-time ms - :res[content-length]

This is what the documentation says. Let’s break it down.

  • :method = GET (the HTTP method of the request)
  • :url = / (the URL of the request; this uses req.originalUrl if it exists, otherwise req.url)
  • :status = 200 (the status code of the response)
  • :response-time = 3.710 (the time between the request coming into Morgan and the response headers being written, in milliseconds)
  • :res[content-length] = 39 (the content-length of the response; if the content-length is not present, the value is displayed as "-" in the log)

This is it. Now if we move on to what the CORS module does: it ain’t much, but it’s honest work 😁. It will save you from Cross-Origin Resource Sharing issues. If you have worked with APIs and front-ends, the CORS error is something you’ve seen at least once in your lifetime.


So the CORS module will allow you to add CORS headers with ease. I know this article was a little longer because I had to share some super-secrets with you. So this is it for this article, and I will tell you how to secure our endpoints with JWT authentication in the next article. As usual, you can find the updated code in the GitHub repository. (Look for the commit message “Tutorial 5 checkpoint”.)

Niweera/node-is-simple

So until we meet again, happy coding.

Read More
An Open Source Maintainer's Guide to Publishing npm Packages

This article was first published on DEV.TO


The JavaScript community is built on Open Source and being able to give back to the community feels most gratifying. However, publishing a package for the first time might feel rather daunting, and publishing a release candidate may be a bit scary even for experienced authors. I hope to give some insight and helpful tips, especially for new authors.

I have owned two open source libraries: a tiny utility library for DraftJS called draftjs-md-converter and a React Native security library called react-native-app-auth. I am certainly not as heavily involved in Open Source as some, but I have had the sole responsibility of publishing new versions of these libraries for several years so I think I have enough experience to justify this blog post! I remember how scary it was to publish a library for the first time, and I still remember how apprehensive I felt publishing a release candidate. The purpose of this writeup is to produce a guide on how to publish packages: both a comprehensive guide for new authors, and a sanity check for future me.

I will cover the following:

  • Setting up your repo for publishing
  • Publishing a package (initial release)
  • Versioning
  • Publishing a package (subsequent releases)
  • Publishing alpha / beta / release candidate versions

You can use npm or yarn, it's completely up to you. The publish commands are identical. I will be using npm throughout.

Setting up your repo for publishing

Before you're ready to run the publish command, you should check that your package is in a good state to be published. Here are a few things you might want to consider:

Check the package name

The name field in your package.json will be the name of your published package. So for instance if you name your package package-name, users will import it with

import ... from "package-name";

The name needs to be unique, so make sure you check the name is available on https://www.npmjs.com/ or your publish command will fail.

Set the initial version

The version attribute in your package.json will determine the version of the package when published. For your initial release you might use:

{
  "version": "0.0.1"
}

or

{
  "version": "1.0.0"
}

NPM packages use semver for versioning, which means the version consists of 3 numbers: the major, minor and patch version. We will talk more about versioning in the "Versioning" section.

Make sure your repository is not private

In the package.json you may have an attribute "private": true. It is a built-in protection so you don't accidentally publish something that's meant to be private. It is generally good practice to use this if you're building something that is not meant to be open source, like a personal or client project. However, if you're about to publish the repo as a package, this line should be removed.

Add a license

Ensure you have set the license in your package.json. This is to let people know how you are permitting them to use your package. The most common licenses are "MIT" and "Apache-2.0". They are both permissive, allowing users to distribute, modify, or otherwise use the software for any purpose. You can read more about the differences here. I have always used the "MIT" license.

Check the entry point

The main field in your package.json defines the main entry file for the package. This is the file users will access when importing your package. It'll usually be something like ./index.js or ./src/index.js depending on the location of your entry file.

Restrict the files you want to publish

By default, publishing a package will publish everything in the directory. Sometimes you might not want to do that, e.g. if your repository includes an example folder or if you have a build step and only want to publish the built bundle. For that purpose, there is a files field in the package.json. If omitted, the field defaults to ["*"], but when set, it includes only the files and directories listed in the array. Note that certain files are always included, even if not listed: package.json, README, CHANGES / CHANGELOG / HISTORY, LICENSE / LICENCE, NOTICE, and the file in the "main" field.
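Putting the fields from the last few sections together, a minimal package.json ready for publishing might look something like this (the name and paths are just placeholders):

{
  "name": "package-name",
  "version": "1.0.0",
  "license": "MIT",
  "main": "dist/index.js",
  "files": ["dist"]
}

With this configuration, only the dist directory (plus the always-included files) ends up in the published package.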

Add a build step

Sometimes, you may need a build step. For example, if you've written your package using Flow, TypeScript or cutting-edge JavaScript features that aren't implemented in all browsers, you may want to compile/transpile your package to vanilla JavaScript that anyone could use. For this you can use a prepublish script like so:

{
  "scripts": {
    "prepublish": "npm run your-build-script"
  }
}

This will be run automatically when you run the publish command. For example in this package I use the prepublish script to rebuild the app in the dist directory. Notice also that the main field in this package.json is set to "main": "dist/index.js" since I want the users to access the built file.

There are more built in scripts for various occasions, but this is the only one I've had to use when publishing packages. You can read more about other available scripts here.

Publishing a package (initial release)

Clone your repo, and make sure you're on the latest master branch (or whatever your main branch is) with no uncommitted changes.

Create an account on https://www.npmjs.com/ if you don't have one already, and use these credentials to log in on your terminal:

npm login

Finally, in your project repo, run:

npm publish

If you've set up two factor authentication, you'll get a prompt for it in your terminal before the publish is completed. Once the command has finished successfully, you should be able to instantly see your package at https://www.npmjs.com/package/package-name where package-name is the name of your package set in the package.json, and people will be able to install your package with:

npm install package-name

Versioning

Publishing subsequent versions of the package involves more thought, because now we need to consider versioning. As mentioned above, npm packages are versioned using semver. This means that there are 3 types of releases: major, minor and patch, and you should use these to communicate the types of changes in your library.

  • Major changes include anything that is incompatible with the previous versions
  • Minor changes are usually new features and bigger bug fixes
  • Patch changes are tiny bug fixes or additive changes

One thing to note is that the semver naming is a bit misleading, specifically in "major" - a major release doesn't necessarily mean that a lot has changed. It means that when going from the previous to the current version, there is a breaking change, and that the users of your library might need to change something in their code in order to accommodate the latest version. For instance, changing an exported function name or parameter order would be considered a major change. A lot of maintainers tend to collect breaking changes and release them all together to minimise how often the major version is incremented.

The reason you should only do breaking changes in major releases is because the users of your library may opt in to all future patch and minor versions silently, so the next time they run npm install you might end up breaking their build.

In the package.json, this is communicated with ~ and ^ where ~ opts in for all future patch releases and ^ opts in for all future patch and minor releases:

~1.2.3 will match all 1.2.x versions but will miss 1.3.0

^1.2.3 will match any 1.x.x release including 1.3.0, but will hold off on 2.0.0.
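From the consumer's side, these ranges live in the dependencies section of their package.json. A quick sketch (the package names are placeholders):

{
  "dependencies": {
    "some-lib": "^1.2.3",
    "other-lib": "~1.2.3",
    "pinned-lib": "1.2.3"
  }
}

The first entry accepts future minor and patch releases, the second only patches, and the last is pinned to an exact version.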

Side note: when the major version is 0, the package is considered unstable, so ^ acts the same as ~, and for all releases before major 1 you may publish breaking changes in minor releases.

There is no option to opt in for all future major releases, because they are expected to be breaking and thus should be manual upgrades. Source.

Publishing a package (subsequent releases)

This is how you do releases after the initial one. As before, you should ensure you're on the master branch with no uncommitted changes. Consider what type of release this is: major, minor or patch. Now run the command to increment the version in your working directory, using major, minor or patch depending on the type of the change:

npm version minor

This is a convenience method that does a couple of things:

  • Increments the version in your package.json based on the type of the change
  • Commits this version bump
  • Creates a tag for the current release in your .git repo

Now all that's left to do is to run the publish command as before:

npm publish

It is important to make sure you do the version bump before the publish. If you try to publish the same version of the library twice, the command will fail.

Finally, you'll want to push the version bump commit and the tag:

git push
git push --tags

Notice that you'll have to push the tags separately using the --tags flag.

If you're using GitHub to host your repository, the tags will show up under the Releases tab on your repository, e.g. https://github.com/kadikraman/draftjs-md-converter/releases

It is good practice to write some release notes against the tag. I usually also use this space to thank any contributors!

Publishing an alpha / beta / release candidate

An alpha / beta / release candidate is usually an early access version of a major release. When you're about to do a major change, you might want to give your users a chance to try out the new release first on an opt-in basis but with the expectation that it may be unstable.

Basically, what we want to achieve is to publish a new version "silently", so that only interested users opt in to it.

Let's look at how you might create a release candidate. Suppose you are currently on v3.4.5 and you want to do a release candidate for v4.

First, we increment the version. The npm version command allows you to provide your own version number. In this case we're going to say it's the first release candidate for v4:

npm version 4.0.0-rc1

As before, this does the version bump, commits the change, and creates the tag. Now to publish the package. This is the most important step, since you want to ensure none of your users get this release candidate version by default.

The publish command has a --tag argument, which defaults to "latest". This means that whenever someone npm installs your library, they get the last publish that has the "latest" tag. So all we need to do is provide a different tag. This can be anything, really; I have usually used "next".

npm publish --tag next

This will publish your package under the next tag, so the users who do npm install package-name will still get v3.4.5, but users who install npm install package-name@next or npm install package-name@4.0.0-rc1 will get your release candidate. Note that if you publish another release candidate with the "next" tag, anyone installing npm install package-name@next will get the latest version.
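If you ever need to check which version each tag currently points to, npm has a dist-tag subcommand (package-name is a placeholder):

npm dist-tag ls package-name

This prints each tag with its version, e.g. latest: 3.4.5 and next: 4.0.0-rc1, and npm dist-tag add package-name@4.0.0 latest can move a tag by hand if you ever need to.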

When you're ready to do the next major release, you can run npm version major (this will bump the version to 4.0.0) and npm publish as normal.
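As a small aside, npm can also compute prerelease versions for you instead of you typing them by hand. A sketch of that flow (note that npm generates the strictly semver-style dotted suffix, e.g. rc.0 rather than rc1):

npm version premajor --preid rc     # 3.4.5 -> 4.0.0-rc.0
npm version prerelease --preid rc   # 4.0.0-rc.0 -> 4.0.0-rc.1

Either spelling works for publishing; the important part is the --tag argument on the publish command.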

Conclusion

That's pretty much it! Publishing packages can be a bit scary the first few times, but like anything it gets easier the more you do it.

Thank you for contributing to Open Source!

Read More
Working with the Slack API in Node.js

This blog post was first published in The Code Barbarian Blog.


Integrating with the Slack API is becoming an increasingly common task. Slack is the de facto communication platform for many companies, so there's a lot of demand for getting data from Node.js apps to Slack in realtime. In this article, I'll explain the basics of how to send a Slack message from Node.js.

Getting Started

There are two npm modules that I recommend for working with the Slack API: the official @slack/web-api module and the slack module written primarily by Brian LeRoux of WTFJS fame. I've also used node-slack in the past, but that module is fairly out of date. However, both @slack/web-api and slack are fairly thin wrappers around the Slack API, so, for the purposes of this article, we'll just use axios.

In order to make an API request to Slack, you need a Slack token. There are several types of Slack token, each with its own permissions. For example, an OAuth token lets you post messages on behalf of a user. But, for the purposes of this article, I'll be using bot user tokens. Bot user tokens let you post messages as a bot user, with a custom name and avatar, as opposed to as an existing user.

In order to get a bot user token, you first need to create a new Slack app. Make sure you set "Development Slack Workspace" to the Slack Workspace you want to use.

Next, go through the steps to create a bot user. You'll have to add scopes (permissions) to your bot user, and then install your app. First, make sure you add the following scopes to your bot user:

  • chat:write
  • chat:write.customize
  • chat:write.public

Once you've added the above scopes, click "Install App" to install this new Slack app to your workspace. After installing, Slack will give you a bot user token.


Making Your First Request

To send a message to a Slack channel, you need to make an HTTP POST request to the https://slack.com/api/chat.postMessage endpoint with your Slack token in the authorization header. The POST request body must contain the channel you want to post to, and the text of your message.

Below is how you can send a 'Hello, World' message with Axios. Note that you must have a #test channel in your Slack workspace for the below request to work.

const axios = require('axios');

const slackToken = 'xoxb-YOUR-TOKEN_HERE';

run().catch(err => console.log(err));

async function run() {
  const url = 'https://slack.com/api/chat.postMessage';
  const res = await axios.post(url, {
    channel: '#test',
    text: 'Hello, World!'
  }, { headers: { authorization: `Bearer ${slackToken}` } });

  console.log('Done', res.data);
}

If the request is successful, the response will look something like what you see below:

{
  ok: true,
  channel: 'C0179PL5K8E',
  ts: '1595354927.001300',
  message: {
    bot_id: 'B017GED1UEN',
    type: 'message',
    text: 'Hello, World!',
    user: 'U0171MZ51E3',
    ts: '1595354927.001300',
    team: 'T2CA1AURM',
    bot_profile: {
      id: 'B017GED1UEN',
      deleted: false,
      name: 'My Test App',
      updated: 1595353545,
      app_id: 'A017NKGAKHA',
      icons: [Object],
      team_id: 'T2CA1AURM'
    }
  }
}
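One gotcha worth guarding against: the Slack Web API reports most failures (unknown channel, missing scope, and so on) with an HTTP 200 response and ok: false in the body, so Axios won't throw for them. A small check like this after the request surfaces those errors:

if (!res.data.ok) {
  // Slack signals most failures with HTTP 200 and `ok: false`
  throw new Error(`Slack API error: ${res.data.error}`);
}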

And you should see the below message show up in your #test channel.

Because of the chat:write.customize scope, this bot can customize the name and avatar associated with this message. For example, you can set the username and icon_emoji properties to customize the user your messages come from, as shown below.

const res = await axios.post(url, {
  channel: '#test2',
  text: 'Hello, World!',
  username: 'Test App',
  icon_emoji: ':+1:'
}, { headers: { authorization: `Bearer ${slackToken}` } });

Below is how the above message looks in Slack.


More Sophisticated Features

You can send plain text messages, but Slack allows you to add formatting to messages using a custom formatting language called mrkdwn. Slack's mrkdwn language is conceptually similar to markdown. Most notably, mrkdwn supports code samples in much the same way markdown supports code snippets:

  • `foo` with backticks makes "foo" an inline code block.
  • Multi-line code blocks are fenced with three backticks: ```

But there are also several important differences:

  • *foo* makes foo bold in mrkdwn, as opposed to italic in markdown. To make text italic in mrkdwn, you should do _foo_.
  • To make the substring "Google" link to https://www.google.com/ in mrkdwn, you need to write <https://www.google.com/|Google>. Mrkdwn does not support custom HTML: any HTML is valid Markdown, but not mrkdwn.

For example, here's how you can send a message with some basic formatting, including a code block.

const url = 'https://slack.com/api/chat.postMessage';

// A sample function to show off in the message; any function works here
function hello() {
  return 'Hello, World!';
}

const text = `
Check out this *cool* function!

\`\`\`
${hello.toString()}
\`\`\`
`;

const res = await axios.post(url, {
  channel: '#test',
  text,
  username: 'Test App',
  icon_emoji: ':+1:'
}, { headers: { authorization: `Bearer ${slackToken}` } });

Here's what the above message looks like:


Slack also supports more sophisticated layouts via blocks. Instead of specifying a message's text, you can specify a blocks array. There are several different types of blocks, including a generic section block and an image block for pictures.

For example, below is a simplified example of sending a notification that a new order came in to an eCommerce platform, including a QR code image.

const res = await axios.post(url, {
  channel: '#test',
  blocks: [
    {
      type: 'section',
      text: { type: 'mrkdwn', text: 'New order!' },
      fields: [
        { type: 'mrkdwn', text: '*Name*\nJohn Smith' },
        { type: 'mrkdwn', text: '*Amount*\n$8.50' },
      ]
    },
    {
      type: 'image',
      image_url: 'https://upload.wikimedia.org/wikipedia/commons/thumb/d/d0/QR_code_for_mobile_English_Wikipedia.svg/1200px-QR_code_for_mobile_English_Wikipedia.svg.png',
      alt_text: 'qrcode'
    }
  ],
  username: 'Test App',
  icon_emoji: ':+1:'
}, { headers: { authorization: `Bearer ${slackToken}` } });

Below is what the above message looks like in Slack.


Moving On

Once you get a bot token, working with the Slack API from Node.js is fairly straightforward. You can send neatly formatted messages to Slack to notify people of events they might be interested in, whether they are coworkers at your company or clients using one of your products.

Read More
Node Is Simple - Part 4

tl;dr: This is the fourth article of the Node is Simple article series. In this article series, I will be discussing how to create a simple and secure NodeJS, Express, MongoDB web application.

To catch up on the past tutorials, see Node is Simple Part 1, Node is Simple Part 2 and Node is Simple Part 3.

Hello fellow developers, now that you’ve come across this article series, let’s get to it, shall we? In the past articles, I have discussed how to create simple CRUD endpoints with MongoDB as the database, and how to use Postman to test the endpoints. So in this tutorial, I will discuss how to upload files to MongoDB using MongoDB GridFS and view (or stream) them.

So what is this GridFS?

We can upload files to a folder easily with the Multer middleware. This is a good reference for that.

Uploading Files and Serve Directory Listing Using NodeJS

But I am going to discuss how to use MongoDB as the file storage. However, there is a little hiccup: a single document in MongoDB's BSON format can only be 16MB in size. To overcome this limit, there is a feature called GridFS.

Instead of storing a file in a single document, GridFS divides the file into parts, or chunks [1], and stores each chunk as a separate document.

Enough with the theory

Yes, let’s move on to the implementation. First, we have to create a GridFS driver. Let’s create a gridfs-service.js file inside the /database directory.

const mongoose = require("mongoose");
const config = require("../config");
const dbPath = config.MONGO_URI;
const chalk = require("chalk");

const GridFsStorage = require("multer-gridfs-storage");
const Grid = require("gridfs-stream");
Grid.mongo = mongoose.mongo;

const conn = mongoose.createConnection(dbPath, {
  useNewUrlParser: true,
  useUnifiedTopology: true,
  useCreateIndex: true,
  useFindAndModify: false
});

conn.on("error", () => {
  console.log(chalk.red("[-] Error occurred from the database"));
});

let gfs, gridFSBucket;

conn.once("open", () => {
  gridFSBucket = new mongoose.mongo.GridFSBucket(conn.db, {
    bucketName: "file_uploads"
  });
  // Init stream
  gfs = Grid(conn.db);
  gfs.collection("file_uploads");
  console.log(
    chalk.yellow(
      "[!] The database connection opened successfully in GridFS service"
    )
  );
});

const getGridFSFiles = id => {
  return new Promise((resolve, reject) => {
    gfs.files.findOne({ _id: mongoose.Types.ObjectId(id) }, (err, files) => {
      if (err) reject(err);
      // Check if files
      if (!files || files.length === 0) {
        resolve(null);
      } else {
        resolve(files);
      }
    });
  });
};

const createGridFSReadStream = id => {
  return gridFSBucket.openDownloadStream(mongoose.Types.ObjectId(id));
};

const storage = new GridFsStorage({
  url: dbPath,
  cache: true,
  options: { useUnifiedTopology: true },
  file: (req, file) => {
    return new Promise(resolve => {
      const fileInfo = {
        filename: file.originalname,
        bucketName: "file_uploads"
      };
      resolve(fileInfo);
    });
  }
});

storage.on("connection", () => {
  console.log(chalk.yellow("[!] Successfully accessed the GridFS database"));
});

storage.on("connectionFailed", err => {
  console.log(chalk.red(err.message));
});

module.exports = mongoose;
module.exports.storage = storage;
module.exports.getGridFSFiles = getGridFSFiles;
module.exports.createGridFSReadStream = createGridFSReadStream;
/database/gridfs-service.js file (Contains the GridFS driver configurations)

As you can see in the code above, the “bucketName” is set to “file_uploads”. This means the collection that we are using for storing the files is named file_uploads. After running the application, if you go to MongoDB Compass, you can see that two new collections have been created.

file_uploads.chunks //contains the file chunks (one file is divided into chunks of 255 kB each)

file_uploads.files //contains the metadata of the file (such as length, chunkSize, uploadDate, filename, md5 hash, and the contentType)
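If you'd rather peek at these collections from the mongo shell instead of Compass, a quick query against the metadata collection would look like this (getCollection is needed because of the dot in the collection name):

db.getCollection("file_uploads.files").findOne()

It returns the same metadata document you will see in Compass later on.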

Now we need to install several packages for this. Let’s install them.

$ npm install multer multer-gridfs-storage gridfs-stream

Multer is the middleware that handles multipart/form-data. This is a good reference if you are not familiar with it.

Understanding HTML Form Encoding: URL Encoded and Multipart Forms

The other two packages are for the file upload handling with MongoDB GridFS.

After creating this file, let’s create the GridFS middleware. It is really easy to integrate with Express since Express is handy with middleware. Let’s create /middleware/gridfs-middleware.js file.

const multer = require("multer");
const { storage } = require("../database/gridfs-service");

const upload = multer({
  storage
});

module.exports = function GridFSMiddleware() {
  return upload.single("image");
};
/middleware/gridfs-middleware.js file (Contains GridFS middleware configuration)

There is something to mention here. In upload.single("image") above, I set “image” as the name of the form field. You can set it to anything you like, but you have to remember it for later use. FYI, there are several methods of uploading files.

upload.single("field_name"); //for uploading a single file

upload.array("field_name"); //for uploading an array of files

upload.fields([{name: 'avatar'}, {name: 'gallery'}]); //for uploading an array of files with multiple field names

upload.none(); //not uploading any files but contains text fields as multipart form data

Here I used upload.single(“field_name”) because I want to upload only one image (probably I’ll change this to uploading the student’s profile picture). However, the things I’ve described in the above code block are not something I brewed up myself 🤣. They are all described beautifully in the multer documentation.

Now we can set up the controller to upload files to MongoDB GridFS. Let’s update the /controllers/index.js file.

const GridFSMiddleware = require("../middleware/gridfs-middleware");
/** @route  POST /image
 *  @desc   Upload profile image
 *  @access Public
 */
router.post(
  "/image",
  [GridFSMiddleware()],
  asyncWrapper(async (req, res) => {
    const { originalname, mimetype, id, size } = req.file;
    res.send({ originalname, mimetype, id, size });
  })
);

Now let’s try to upload an image, shall we? Let’s start the web application as usual.

$ pm2 start
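In the figures below I'll use Postman, but if you prefer the command line, a curl sketch would look something like this (assuming the app serves HTTPS on port 3000, as in the earlier parts of this series; adjust the host, port, and file path to your setup):

$ curl -k -F "image=@/path/to/photo.png" https://localhost:3000/image

The -F flag sends the file as multipart/form-data under the image field, which is exactly what our middleware expects, and -k skips certificate validation for the self-signed certificate.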


Figure 1: POST request containing the image file

As in figure 1, you have to select the request body type as form-data, and then set the key as image. Now select the key type as “File” (as seen in figure 2).


Figure 2: Selecting the key type of the form data


Figure 3: Response from uploading the image to the DB

Now, if you see the response as in figure 3, it means everything worked out. Now let’s see how MongoDB Compass shows the uploaded file.


Figure 4: file_uploads.files collection (Contains the metadata of the uploaded file)


Figure 5: file_uploads.chunks collection (Contains the file chunks of the uploaded file)

As you can see in figure 4, the file metadata is shown. In figure 5, you can see only one chunk. That is because the image we uploaded is less than 255kB in size.

Now that we have uploaded the file, how do we view it? We cannot see the image from the DB, now can we? So let’s implement a way to view the uploaded image.

Now let’s update the /controllers/index.js file.

const { getGridFSFiles } = require("../database/gridfs-service");
const { createGridFSReadStream } = require("../database/gridfs-service");
/** @route   GET /image/:id
 *  @desc    View profile picture
 *  @access  Public
 */
router.get(
  "/image/:id",
  asyncWrapper(async (req, res) => {
    const image = await getGridFSFiles(req.params.id);
    if (!image) {
      // Return early so we don't fall through and try to stream a missing image
      return res.status(404).send({ message: "Image not found" });
    }
    res.setHeader("content-type", image.contentType);
    const readStream = createGridFSReadStream(req.params.id);
    readStream.pipe(res);
  })
);

What we are doing here is: we get the image id from the request and check for the image in the DB, and if the image exists, we stream it. If the image is not found, we return a 404 and a not-found message. It is as simple as that.

Now let’s try that out.


Figure 6: GET request to view the image.

Remember, the image id is what was returned as the id in figure 3.


Figure 7: Image is streamed (Image is from [https://nodejs.org/en/about/resources/](https://nodejs.org/en/about/resources/))

If we send the GET request from a web browser, we’ll see the image in the browser. (If you get a “Your connection is not private” message, ignore it.) It will be the same as in figure 7.

So that was a lot of work, wasn’t it? But totally worth it. Now you can save your precious images inside a MongoDB database and view them from a browser. Go show off that talent to your friends and impress them 😁.

This is the full /controllers/index.js file for your reference.

/controllers/index.js file (Updated to reflect the view and upload image endpoints)

So, this is the end of this tutorial, and in the next one I will tell you how to create and use custom middleware with Express. (Remember, I told you Express is great with middleware.) As usual, you can see the whole code behind this tutorial in my GitHub repository. (Check for the commit message “Tutorial 4 checkpoint”.)

Node is Simple

So until we meet again, happy coding…

Read More