Friday, 25 January 2019

Adding Code Coverage To Your NodeJS App Using Istanbul.

Writing unit test cases alone is not enough to maintain the quality of an application.

How do we make sure that every scenario has been covered? That every branch and condition has been exercised by the unit tests? And when new features or functionality are added, are they covered by unit test cases?

Code/test coverage is the answer to all these questions.

It not only helps in maintaining the quality of the application, but also gives developers a deeper understanding of the code. Most of the time, we skip certain conditions and scenarios, assuming that our code will execute as intended, but it may behave differently for certain inputs.

What can we do to avoid these situations and make sure that our code is thoroughly tested? There are code coverage tools that we can integrate with our test script.

Then, when the test cases execute, the tool records the execution of our code files, checks which code/functions/blocks were not executed during unit testing, and generates a coverage report.

We can also set thresholds/limits for the code coverage tool, specifying what percentage of functions/lines/statements needs to be covered when the unit tests run, and fail the build if coverage falls below those limits.

Let's start by adding coverage to our library. We will be using Istanbul.

We will add it as a dev dependency to our library:


npm install -D istanbul

After adding it, we will modify the test script in package.json to record coverage.

"scripts": {
          "test" : "istanbul cover -x '*.test.js' node_modules/mocha/bin/_mocha -- -R spec src/api.test.js"
     }

In the above test script, we have added "istanbul cover -x '*.test.js'", which tells Istanbul to record coverage on all files except those ending in .test.js. After that we have provided the path to the mocha executable and specified the reporter as "spec", which is the most commonly used reporter.

See Mocha's available reporters.
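For reference, here is a minimal, hypothetical example of the kind of module and test this script would run; the file names src/api.js and src/api.test.js and the add function are illustrative, not from the original project:

// src/api.js - a tiny module under test
function add(a, b) {
    if (typeof a !== 'number' || typeof b !== 'number') {
        throw new TypeError('both arguments must be numbers');
    }
    return a + b;
}

module.exports = { add };

// src/api.test.js - mocha tests exercising both branches of add()
const assert = require('assert');
const { add } = require('./api');

describe('add', () => {
    it('adds two numbers', () => {
        assert.strictEqual(add(2, 3), 5);
    });

    it('throws on non-numeric input', () => {
        assert.throws(() => add('2', 3), TypeError);
    });
});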

So far, we have added code coverage recording to our library/project.

Let's run the test script: npm run test

We will get output similar to the one below, showing the coverage report:
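The summary looks something like this (the numbers are illustrative):

=============================== Coverage summary ===============================
Statements   : 100% ( 8/8 )
Branches     : 100% ( 2/2 )
Functions    : 100% ( 2/2 )
Lines        : 100% ( 8/8 )
================================================================================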


Also, a new directory named coverage has been created; it contains the lcov report and the coverage.json file.

The percentages can vary based on the test cases you have written; in my example, I have covered all the scenarios in unit testing.

Note: Do add the coverage directory to the .gitignore file to avoid committing it to source control.

Viewing the Coverage Report In The Browser:

The generated coverage report can be viewed in the browser: navigate to the lcov-report directory under the coverage directory and open the index.html file in your browser.

In the left panel of the report, it shows the number of times each function or statement has been executed. It also shows a red cross before any function/statement that has not been executed.

Setting the threshold/limit for the code coverage:

The above output shows the standard coverage report. But how can we ensure that whenever a new feature is added or existing functionality is modified, unit tests are written for it, so that existing functionality doesn't break?

We can set thresholds/limits for our coverage report and, before pushing to source control, validate whether they have been met; if not, we can block the code from being pushed.

Istanbul has a built-in command called check-coverage, which we can use to set limits for the different metrics.

Let's add another command to the scripts section of package.json:

"scripts": {
"check-coverage":"istanbul check-coverage --statements -100 --branches - 100 --functions 100 --lines 100",
"test":"istanbul cover -x '*.test.js' node_modules/mocha/bin/_mocha -- -R spec src/api.test.js"
}

We have added a check-coverage command to the existing scripts section of package.json and specified the limit percentages that we expect. The threshold limits can be set as required.

Let's run the command to test whether our coverage report meets the threshold criteria.

npm run check-coverage



If everything works fine, the command completes without reporting any coverage errors.

Now, let's try adding a dummy function to our code and re-generate the coverage report.

Example function:

function dummy() {
    console.log("not in use");
}

1. Re-generate the coverage report: npm run test



We can see that the percentages have dropped from 100%.

2. Check coverage: npm run check-coverage



We can see the coverage validation failed because it didn't meet the set thresholds, which simply means that the newly added code has not been covered by unit tests. We can add a test case for the dummy function and re-run the coverage.
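As a rough sketch, a test for the dummy function could look like this (assuming dummy is exported from src/api.js, the same illustrative module as before):

// src/api.test.js - additional test covering dummy()
const { dummy } = require('./api');

describe('dummy', () => {
    it('executes without throwing', () => {
        dummy(); // simply invoking it marks its lines as covered
    });
});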

Now, we can add it to the git hooks to prevent committing code until the thresholds have been met.

Adding Git Hooks:

We will use the ghooks npm module to add git hooks to our library. We will install it as a dev dependency.

npm install -D ghooks

Add a config section in the package.json file, and under config add a "ghooks" node with a "pre-commit" sub-node:

"config": {
     "ghooks": {
                    "pre-commit": "npm run test && npm run check-coverage"
     }
}

Once this is done, whenever we try to commit code, it will first run the test script, which in turn generates the coverage report, and then check whether our coverage thresholds have been met.

In this way, we can ensure that whatever code is pushed has been unit tested.

In the next article, we will integrate the code coverage reporting service codecov.io, which will take reporting to the next level.


Link to my GitHub Repository

Saturday, 23 June 2018

In Node.js, Update A Nested-Level JSON Key's Value With A New Value.

Most of the time, while working with JSON, we come across a scenario where we would like to update an existing JSON key with a new value and return the updated JSON. The key could exist at the nth level, and the value can be in any form (JSON, string, number, ...).

Directly updating a root-level node is not that difficult.

The scenario becomes more complex when we need to update an nth-level node with a new value.

I have written a basic API (with Node.js + Ramda) that can update a node at any level and return the updated JSON.

Here are the git repositories for it:

1. https://github.com/UtkarshYeolekar/update-jsonkey (Node.js + Ramda)

2. https://github.com/UtkarshYeolekar/update-jsonkey-express/ (Node.js + Express + Ramda)

Let me explain this with an example:

Suppose we have the following JSON structure:

```
{
    "testing": {
        "test1": {
            "a": 11,
            "b": 232
        },
        "test2": {
            "xy": 233,
            "zz": "abc xyz",
            "json": {
                "msm": "sds",
                "abc": "weuewoew"
            }
        }
    }
}
```

Example 1:

Now suppose we need to update the value of the key "abc", which is not a direct child of the key "test2". We need to traverse down to the "json" node and then update the value of "abc".

The key path is testing -> test2 -> json -> abc; we need to walk this full path to update the "abc" node.

To update the node "abc", the API call would be:

Function prototype: api.updateJson("keyPathFromRoot", newValue, existingJson)

api.updateJson("/testing/test2/json/abc","newvalue", json)


Example 2:

Now, suppose we need to update the node "test2" with a new JSON value.

let newValue =
{
  "key1" : "value1",
  "key2" : "value2"
}


The API call would be:

api.updateJson("/testing/test2/",newValue, json)

Note that the key path here only goes down to the node "test2". The key path always runs from the root to the child node that we need to update.
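Under the hood, the core idea can be sketched with Ramda's assocPath, which immutably sets a value at a given path. This is a simplified illustration, not the exact implementation from the repositories:

const R = require('ramda');

// Simplified sketch: split the "/"-separated key path into segments and
// let R.assocPath return a copy of the object with the value set there.
const updateJson = (keyPath, newValue, json) => {
    const segments = keyPath.split('/').filter(Boolean);
    return R.assocPath(segments, newValue, json);
};

// Example with a trimmed-down version of the structure above:
const json = { testing: { test2: { json: { abc: 'weuewoew' } } } };
const updated = updateJson('/testing/test2/json/abc', 'newvalue', json);
console.log(updated.testing.test2.json.abc); // -> newvalue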

Both git repositories contain enough documentation to get started. Here is the link to the Readme.md file.


Hope it helps.

Sunday, 15 April 2018

Sharing Host Directory/Folder with the Docker Container.

In this blog, we are going to learn how we can mount an existing host folder/directory into a docker container.

Imagine a scenario where you need to share local files with a docker container, and whenever you modify the files or folders on your host machine, i.e. outside the container, you need the changes to be reflected in the container as well.

This is possible by mounting a host directory into the docker container. Let's check out the steps for it.

In this example, I am using the boot2docker VM. So the host here is the boot2docker VM, not the machine it is running on. But as boot2docker is a Linux VM running on VirtualBox, we have the facility of having some folders from the machine mounted into the VM as host folders.

We can check this by going to Oracle VirtualBox -> boot2docker VM -> Settings -> Shared Folders. Here you can see that c/users is already mounted.



To mount host folders other than c/users, we first need to share them with the VM; only then can we mount them into the container. For this session, we will use the already shared folder.

Let's start mounting the host folder:

1. Let's first create a folder under the c:/users directory for hosting our code files. I have created a folder named "terraform" under c:/users, which consists of some JavaScript and JSON files.

2. Now, let's mount the terraform folder into the container at the /home/app/config path.
 
Command: docker run -v "hostFolder:folderInContainer" imageName

docker run -it -v "/c/Users/terraform:/home/app/config" terraform /bin/ash

Here I am mounting the terraform host folder at the /home/app/config directory in the container, so all the contents of the terraform directory will be listed under the config folder in the container.

3. Let's check whether our files/content exist in the config folder: just "cd" into the config folder and execute the "ls" command to list the files.


As we can see in the above screenshot, a couple of JSON and JavaScript files are listed there.

The good thing about this is that whenever we make changes to the host folder outside of the container, they are automatically synced/reflected inside the container. Try it yourself by adding a couple of files and folders to the host folder and then listing the files in the container; you will see the changes reflected there as well.
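If you want to watch the sync from inside the container, a tiny hypothetical Node.js helper can re-list the mounted folder every few seconds (assuming Node.js is available in the image and the mount path used above):

// list-config.js - hypothetical helper: periodically lists the mounted folder
const fs = require('fs');

setInterval(() => {
    const files = fs.readdirSync('/home/app/config');
    console.log('config contents:', files.join(', '));
}, 5000);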

Thanks for reading this blog.



Friday, 26 January 2018

Running gcloud/kubectl commands in a docker container.

In this blog, we will see how we can authenticate with the Google Cloud console from a docker container using a service account.

I had a scenario where I needed to run some gcloud commands from a docker container as a prerequisite for running kubectl commands.

Example: initializing the .kube folder with the config file (Google Cloud cluster config).

Steps:

1. Create a service account with the privileges you require for calling the Google APIs.
2. Download the service account JSON file to your local machine.
3. Create a dockerfile that includes the Google Cloud SDK and other components (kubectl, in my case).
4. Pass the service account information to the docker container using environment variables.
5. Create the service account JSON file on the fly in the docker container using the provided environment variable values.
6. Run the gcloud auth activate-service-account command and pass the service account JSON file to it.

In Brief: 

The first 3 steps are simple and a lot of documentation is available for them. I will start with the fourth one.

Service account information should not be copied directly into the image. It must be passed in through secrets or environment variables. This makes it more secure and configurable.

We can write a shell script that creates the service account JSON file dynamically in the container using the environment variables. We can copy that shell script into the container and keep it as the entry point, or run it manually to generate the service account JSON file.

Here is the link for creating a JSON file dynamically inside the container.

Once the file is generated, we can use the following commands to activate the service account and perform other operations:

./secrets is the folder where the account.json file is generated from the environment variables.

1. gcloud auth activate-service-account --key-file ./secrets/account.json
2. gcloud --quiet config set project $project
3. gcloud --quiet config set compute/zone $zone
4. gcloud container clusters get-credentials $cluster_name --zone $zone --project $project

We can also wrap the above 4 gcloud commands in one shell script and run that script file instead of running the commands independently.

Let's name the file init.sh:

#!/bin/ash

sh ./generate.sh

gcloud auth activate-service-account --key-file ./secrets/account.json
gcloud --quiet config set project $project
gcloud --quiet config set compute/zone $zone
gcloud container clusters get-credentials $cluster_name --zone $zone --project $project

Here, sh ./generate.sh generates the service account JSON file in the secrets folder.

Now, let's just run the init file, and we are done.

sh ./init.sh

In the next blog, I will show you how we can provision a Google Container Engine cluster using Terraform.

How to create/generate a JSON file dynamically using a shell script.

In this post, we will see how we can dynamically generate/create a JSON file using a shell script.

Some days back, I had a scenario where I needed to generate a JSON file in a docker container using environment variables, whose values were passed into the container through an environment file.

We will start by writing a shell/bash script. Let's name it generate.sh:

#!/bin/ash

cat > /home/app/secrets/account.json << EOF
{
  "type": "$type",
  "project_id": "$project_id",
  "private_key": "$private_key",
  "client_email": "$client_email",
  "client_id": "$client_id",
  "auth_uri": "$auth_uri",
  "token_uri": "$token_uri"
}
EOF

Now we can save this file. Here, $type, $project_id, $private_key, and so on are the environment variables. Note that each value is wrapped in double quotes so that the generated file remains valid JSON when the variables hold plain, unquoted values.
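For illustration, the variable values could come from an env file passed to docker run via the --env-file flag; the file name, values, and image name below are made up. Note that a real private_key spans multiple lines, which plain env files don't handle well, so a secret is a better fit for that one:

# env.list - illustrative values only
type=service_account
project_id=my-gcp-project
client_email=my-sa@my-gcp-project.iam.gserviceaccount.com
client_id=1234567890
auth_uri=https://accounts.google.com/o/oauth2/auth
token_uri=https://oauth2.googleapis.com/token

docker run --env-file ./env.list myimage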

Now we can run this shell script by executing the following command in the shell.

sh generate.sh

This will generate the JSON file in the /home/app/secrets/ folder.

In the shebang, I have used #!/bin/ash as I was using an Alpine docker image.

Sunday, 13 August 2017

Authenticating With The Docker Hub V2 API

This example is about authenticating with the Docker Hub v2 API and then getting the information/tags of a private repository.

Recently, I got this task and was looking for an example. Public repositories can be accessed directly using the v2 API, but for private ones, authentication is required.

I read their documentation and then tried it out using the Postman client first; once that was working, I wrote some sample code.

First, let's go with the Postman client:

There are two steps in it:

1. Getting the Auth Token by passing the username and password (POST)
2. Using that Auth token, query the Docker Hub v2 API (GET)


1. Getting the Auth Token: To get the auth token, we need to send a POST request to https://hub.docker.com/v2/users/login/ with the username and password in the body. In return, it gives back the auth token.


Post request to get the auth token.

2. Using the Auth token to query the Docker Hub v2 API: Using the above auth token, we can query the v2 API to get the private repository's tag info. We need to pass the auth token in the headers.

Repository endpoint: https://hub.docker.com/v2/repositories/username/private-repo/tags



This will in turn return the tags of the repository.

In Node.js:

1. Getting the auth token:

let dockerConfig = require('./config.js').dockerConfig,
    rp = require('request-promise'),
    _ = require('lodash'),
    R = require('ramda');


let getAuthToken = (username, password) => {

    let options = {
        method: 'POST',
        uri: `${dockerConfig.loginEndpoint}`,
        body: {
            "username": `${username}`,
            "password": `${password}`
        },
        json: true
    }
    return rp(options)

}

2. Getting the private repository tags using the token:

let getImageTags = (username, repository, authtoken) => {
    let options = {
        method: 'GET',
        uri: `${dockerConfig.repositoryEndPoint}/${username}/${repository}/tags`,
        headers: {
            Authorization: `Bearer ${authtoken}`
        },
        json: true
    }
    return rp(options);
}

Both functions return promises; we can call them like this:

getAuthToken(config.username, config.password)
    .then((tokenInfo) => {
        console.log("token received");
        return getImageTags(config.username, config.repository, tokenInfo.token)
    })
    .then((tags) => {
        if (!_.isUndefined(tags) && !_.isNull(tags) && tags.count > 0) {
            let result = tags.results.map((tag) => (R.pick(["name"], tag)));
            console.log(result);
        }
        else
            console.log("No tags found");
    })
    .catch((err) => {
        console.error("Error Occurred ", err.message);
    });


3. The config file looks like this. You will need to update it with your Docker Hub info:

module.exports = {
    dockerConfig: {
        loginEndpoint: "https://hub.docker.com/v2/users/login/",
        username: "username",
        password: "password",
        repository: "private_repo",
        repositoryEndPoint: "https://hub.docker.com/v2/repositories",
        tagsEndPoint: "tags"
    }
};

Here is the full working example; you can just clone it and start working:
https://github.com/UtkarshYeolekar/docker-auth-example

Hope it helps, Thanks!


Sunday, 16 July 2017

Debugging a Kubernetes Pod (Node.js Application)

Debugging a Node.js application is very easy when it is running locally, but when it is deployed on Kubernetes, it requires a lot more effort.

Every time you find a bug, you re-build your image, re-deploy your pod, and start debugging again.

In this approach, we will attach a debugger to a running pod (Node.js instance) in Kubernetes and debug our application using the Chrome DevTools.

We will update our instance image with a bash script that checks whether to run the application in debug mode or normal mode. The script checks whether the environment variable "DEBUG_MODE" is defined; if not, it runs the application in normal mode. We will pass that environment variable through the deployment yaml/json file.

The main advantage of using a bash script is that once you have finished debugging and want to start the pod in normal mode, you just remove the environment variable from the yaml and restart the pod, and it runs normally again. This saves the time spent updating code and re-building the image.

Let's start with the implementation :

1. Bash Script
2. Update Dockerfile.
3. Create Pod with the newly created image.
4. Port-Forward the pod.

1. Creating the Bash Script:

I am using node:alpine as the base image; it is pretty lightweight, so the shell is /bin/ash instead of /bin/bash. Do change the first line of the script based on the base image you are using.

In this script, I am using an optional "DEBUG_FILE" variable, which allows us to provide the path of the file to debug.

The script is pretty simple: it first checks whether "DEBUG_MODE" is defined (it doesn't check for any particular value), and if it is defined, it attaches the Chrome DevTools inspector (node --debug-brk --inspect app.js).

Note: Update your startup file name in place of app.js in the bash script.

#!/bin/ash
echo "
check-mode.sh checks whether debugging is ON or not while initiating a container.
It accepts two environment variables:
a. DEBUG_MODE (mandatory for debugging)
b. DEBUG_FILE (optional file path for debugging)
Example : docker run -it -e DEBUG_MODE=debug -e DEBUG_FILE=app.js 'imagename' /bin/ash
Example : kubectl --namespace=app-debug port-forward backend-0 9229:9229"

if [ -z "$DEBUG_MODE" ]
then
    echo "DEBUG_MODE is not defined, initiating without debugging.."
    node app.js
else
    echo
    echo "---- 1. Environment Variable DEBUG_MODE is Defined ----"
    echo "---- 2. Checking whether Environment Variable DEBUG_FILE is defined and the file exists at that path ----"

    if [ ! -z "$DEBUG_FILE" ] && [ -f "$DEBUG_FILE" ]
    then
        echo "---- 3. Environment Variable DEBUG_FILE is defined and the file exists ----"
        echo
        node --debug-brk --inspect "$DEBUG_FILE"
    else
        echo "---- 3. DEBUG_FILE or the file path doesn't exist ----"
        echo "---- 4. Debugging the default entry point app.js ----"
        echo
        node --debug-brk --inspect app.js
    fi
fi


2. Update the docker file:

FROM node:6.10.3-alpine

ENV NODE_ENV=development app="/home/app"

RUN mkdir "/home/app"

WORKDIR "$app"

RUN npm install --production

COPY "app.js" "$app"

COPY "check-mode.sh" "$app"

EXPOSE 3000

RUN chmod +x $app/check-mode.sh

ENTRYPOINT  $app/check-mode.sh


3. Create Pod with the newly created image:

After the new image is successfully built using the above docker file, we can create a new pod on Kubernetes with the newly created image. Also make sure to pass the "DEBUG_MODE" environment variable in the pod yaml/json. The value of the variable doesn't matter right now, as the script only checks whether it is defined.
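For reference, a minimal sketch of the relevant part of the pod yaml might look like this (the pod and image names are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: backend-0                      # placeholder pod name
spec:
  containers:
    - name: backend
      image: your-registry/app:debug   # placeholder image
      env:
        - name: DEBUG_MODE             # presence alone enables debug mode in check-mode.sh
          value: "debug"
        - name: DEBUG_FILE             # optional startup file to debug
          value: "app.js"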

After the pod is created, you can see in the logs that the debugger is listening on some port; generally the default port is 9229, but it can vary.

Here is the docker run output:

docker run -it -e DEBUG_MODE=debug -e DEBUG_FILE=app.js 30657b10fb02 /bin/ash
Externally, I have passed the environment variables using -e.

Here is the Kubernetes pod output:

The pod logs output shows that the debugger is running on port 9229.



The environment variable declared in the pod yaml/json.

Now, in the final step, we will port-forward it locally using the kubectl command line and attach it to chrome://inspect.


4. Port-Forward the pod :

To attach the running debugger to the local chrome://inspect, we need to port-forward it to the local machine.

Using kubectl, we can port-forward the running pod to the local machine.

Command: kubectl --namespace="your namespace name" port-forward "pod name" "local port:debugger port in the pod"

Example: kubectl --namespace=default port-forward testenv-0 9229:9229

Here is the output you will get after port-forwarding :
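If it succeeds, kubectl keeps the session open and prints something like this (the ports depend on your setup):

Forwarding from 127.0.0.1:9229 -> 9229
Forwarding from [::1]:9229 -> 9229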


After the port-forwarding succeeds, we can open the Chrome DevTools to start debugging:

a. Type chrome://inspect in a new browser tab.
b. Under the remote targets, you will see the startup file of your pod.



Now, after your debugging is complete, we can just remove the environment variable from the pod yaml/json and restart the pod. It will run as a normal instance.

This is a one-time investment: anytime you want to attach a debugger to a pod, just update the environment variable. You don't need to rebuild your image and re-deploy.

Note: If you ever face an issue copying the bash script file while building the docker image, open the bash script in the Sublime Text editor, go to View -> Line Endings -> Unix, and save the file again.