Saturday, 22 June 2019

How To Define Custom Static & Dynamic Roles In LoopBack 3.

There are many use cases where we need custom roles in our application, beyond the built-in ones ($authenticated, $owner, $everyone).

LoopBack 3 provides the flexibility to define custom roles based on our application requirements. It divides custom roles into two categories: static and dynamic.


A beginner who has just started looking into LoopBack often struggles to figure out when to use a static role versus a dynamic one, how they differ, and where to start.


In this article, I will explain each of them with common scenarios and examples.


Static Roles : Suppose you own a restaurant that has both managers and waiters. The manager's role is mostly administrative, while the waiter's role is mainly to deal with customer orders.


The manager also decides the rate of any specific dish in any particular season, as well as the menu list, based on the season and availability. No waiter can modify the rates.


So now we have a predefined role and its responsibilities: only a person with the manager role can do the things mentioned above. This is pretty straightforward. The manager is a user, but with some administrative responsibilities.

So, how will we assign these manager responsibilities to a user in LoopBack?


1. We create a user in the User model.

2. We define a new role called "manager" (Role model).

3. We map the newly created user to the newly created role (RoleMapping model).

All of the above are built-in models; we just need to create entries in them. Now we have defined the role, but we have not yet defined its responsibilities.
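
For illustration, here is a minimal boot script sketch that creates the user, the role, and the mapping. The email, password, and script name are placeholders, not values from any real application.

// server/boot/create-manager.js -- a minimal sketch; email/password are placeholders
module.exports = function(app) {
  var User = app.models.User;
  var Role = app.models.Role;
  var RoleMapping = app.models.RoleMapping;

  // 1. Create the user
  User.create({email: 'manager@example.com', password: 'secret'}, function(err, user) {
    if (err) return console.error(err);
    // 2. Create the role
    Role.create({name: 'manager'}, function(err, role) {
      if (err) return console.error(err);
      // 3. Map the user to the role (RoleMapping entry)
      role.principals.create({
        principalType: RoleMapping.USER,
        principalId: user.id
      }, function(err) {
        if (err) return console.error(err);
        console.log('User %s is now a manager', user.email);
      });
    });
  });
};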


Suppose we have custom models called RateList and MenuList. We can define ACLs on them so that only a principal of type "ROLE" with principal ID "manager" can perform create/update operations, while everyone else can only read/view them.


Sample ACLs:


    {
      "accessType": "*",
      "principalType": "ROLE",
      "principalId": "$everyone",
      "permission": "DENY"
    },
    {
      "accessType": "READ",
      "principalType": "ROLE",
      "principalId": "$everyone",
      "permission": "ALLOW"
    },
    {
      "accessType": "EXECUTE",
      "principalType": "ROLE",
      "principalId": "manager",
      "permission": "ALLOW",
      "property": "create"
    },
    {
      "accessType": "WRITE",
      "principalType": "ROLE",
      "principalId": "manager",
      "permission": "ALLOW"
    }
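
These ACL entries live in the "acls" array of the model definition file. As a rough sketch (the model and property names here are assumptions), a trimmed common/models/rate-list.json could look like this (only two of the entries are repeated here for brevity):

    {
      "name": "RateList",
      "base": "PersistedModel",
      "properties": {
        "dishName": "string",
        "rate": "number"
      },
      "acls": [
        {
          "accessType": "*",
          "principalType": "ROLE",
          "principalId": "$everyone",
          "permission": "DENY"
        },
        {
          "accessType": "WRITE",
          "principalType": "ROLE",
          "principalId": "manager",
          "permission": "ALLOW"
        }
      ]
    }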


So a static role is one where you do not need to query multiple custom models to decide whether to allow the request or not. It is pretty straightforward: you have the user and its role mapping, and that is enough.


It is generally used for implementing restrictions at a high level. What I mean by that is: suppose in an IT park, where a lot of organizations have offices, a person is only allowed into the campus if he has a company ID card. So we only do high-level checks here:


a. The company should have an office in the campus.

b. The person should be a valid employee of that company.

Now, what if he swipes the card at the entry gate of a restricted area? We need to check whether his card is still valid or expired, and whether he has enough permission. To check this we need to query our models: is this card ID still valid, and does he/she have access to that gate/room? This is where dynamic roles come in: you cannot simply check that the user belongs to a role and let him through. You need to take the decision based on other data, and only then allow or reject the request.


Let's discuss this in detail with a different example.


Dynamic Roles : Suppose you have a leave management system, where every employee of your company can submit a leave request, and only that employee's manager or the HR manager can approve/reject the request.


Following so far?


So far we have two models:


A. User model, consisting of all employees, each with an assigned manager.

B. Timeoff model, consisting of all time-off requests.

Roles: HR Manager (static role) and employee (static role) can create time-off requests.


Suppose employee "B" is the manager of "C" & "D", and employee "A" is the manager of "B", i.e. C & D report to B, B reports to A, and "H" is the HR manager.


Now suppose employee "C" has applied for a leave, and we want only his manager or HR to be able to approve his time off.


A static role won't work in this case: just by checking a "manager" role, we cannot tell whether the current user is actually C's manager. We need to query the User model with the current user ID to find out whether he is actually the manager of "C", and only then allow him to update/approve the request.


Overall, I mean that every time we need to take the decision dynamically: based on the request context (C, B, or D), find out whether the current user is the requester's manager, or the HR manager, and only then allow the request to execute. This is what dynamic roles are for.


Dynamic roles are defined by writing a role resolver and registering it in a boot script.


Example: sample code for role-resolver.js (inside the boot directory).


module.exports = function(app) {
  var Role = app.models.Role;

  // Register a dynamic role called "approver"
  Role.registerResolver('approver', function(role, context, cb) {
    // Only resolve this role for the Timeoff model
    if (context.modelName !== 'Timeoff') {
      return cb(null, false);
    }

    // Reject anonymous requests
    var managerId = context.accessToken.userId;
    if (!managerId) {
      return cb(null, false);
    }

    // Load the time-off request being accessed
    context.model.findById(context.modelId, function(err, timeoff) {
      if (err) return cb(err);
      if (!timeoff) return cb(new Error('Timeoff not found'));

      // Grant the role only if the current user is the manager of the
      // employee who created the request
      var User = app.models.User;
      User.count({
        userId: timeoff.empId,
        managerId: managerId
      }, function(err, count) {
        if (err) return cb(err);
        return cb(null, count > 0);
      });
    });
  });
};

Then, on the Timeoff model, we can apply the corresponding ACL:


    {
      "accessType": "WRITE",
      "principalType": "ROLE",
      "principalId": "approver",
      "permission": "ALLOW",
      "property": "approve"
    }

I have applied the ACL on the remote method "approve". If we had applied it to all WRITE methods instead, then only the manager could update the item (change the date/reason), and not even the employee who created it. That is why a custom remote endpoint is created.
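
For completeness, here is a rough sketch of what such an "approve" remote method could look like on the Timeoff model. The "status" property and the file layout are assumptions, not the exact code of the original application.

// common/models/timeoff.js -- a minimal sketch; the "status" property is an assumption
module.exports = function(Timeoff) {
  Timeoff.approve = function(id, cb) {
    Timeoff.findById(id, function(err, timeoff) {
      if (err) return cb(err);
      if (!timeoff) return cb(new Error('Timeoff not found'));
      // Mark the request as approved
      timeoff.updateAttribute('status', 'approved', cb);
    });
  };

  Timeoff.remoteMethod('approve', {
    accepts: {arg: 'id', type: 'string', required: true, http: {source: 'path'}},
    returns: {arg: 'timeoff', type: 'object', root: true},
    http: {path: '/:id/approve', verb: 'post'}
  });
};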


We can also create a belongsTo relation from the Timeoff model to the User model, and then add one more ACL to the Timeoff model: $owner for the rest of the WRITE methods, so that only the user/employee who created the request can modify it. No other employee should be able to modify someone else's time-off request accidentally.


    {
      "accessType": "WRITE",
      "principalType": "ROLE",
      "principalId": "$owner",
      "permission": "ALLOW"
    }
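
For $owner resolution to work, the Timeoff model needs the belongsTo relation to the user model mentioned above. A sketch of the "relations" section in timeoff.json (the relation and foreign-key names are assumptions):

    "relations": {
      "user": {
        "type": "belongsTo",
        "model": "User",
        "foreignKey": "userId"
      }
    }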


The whole idea of this blog is to give an understanding of static/dynamic roles in LoopBack 3 and how we can use them in our application. Hope this helps!


Friday, 25 January 2019

Adding Code Coverage To Your NodeJS App Using Istanbul.

Writing unit test cases is not enough to maintain the quality of an application.

How will we make sure that each scenario has been covered? That every branch and condition has been exercised in unit testing? What if new features or functionality have been added; are they covered by unit test cases?

Code/Test coverage is the answer to all the questions.

It not only helps in maintaining the quality of the application, but also helps developers get a deeper understanding of the code. Most of the time we skip certain conditions and scenarios assuming our code will execute as intended, but with certain inputs it can behave differently.

What can we do to avoid these situations and make sure our code is thoroughly tested? There are code coverage tools that we can integrate with the test script.

When the test cases execute, the tool records the execution of our code files, checks which code/functions/blocks were not executed during unit testing, and generates a coverage report.

We can also set thresholds/limits for the code coverage tool: what percentage of functions/lines/statements needs to be covered when the unit tests run, and below those limits we fail the build.

Let's start by adding coverage to our library; we will be using Istanbul.

We will add it as a dev dependency to our library:


npm install -D istanbul

After adding it, we modify the test script in package.json to add coverage.

"scripts": {
          "test" : "istanbul cover -x '*.test.js' node_modules/mocha/bin/_mocha -- -R spec src/api.test.js"
     }

In the above test script, we have added "istanbul cover -x '*.test.js'", which tells Istanbul to record code coverage on all files except those matching *.test.js. After that we provide the path to the mocha executable and specify the "spec" reporter, which is the most commonly used one.

The list of available Mocha reporters can be found in the Mocha documentation.

So far we have added code coverage recording to our library/project.
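
For context, a hypothetical src/api.test.js might look like this; the api module and its add function are placeholders, not the actual library under test:

// src/api.test.js -- a hypothetical mocha test; api.add is a placeholder
const assert = require('assert');
const api = require('./api');

describe('api', function() {
  it('adds two numbers', function() {
    assert.strictEqual(api.add(2, 3), 5);
  });
});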

Let's run the test script: npm run test

We will get output similar to the one below, showing the coverage report:


Also, a new directory named coverage has been created; it contains the lcov report and the coverage.json file.





The percentages can vary based on the test cases written; in my example I have covered all the scenarios in unit testing.

Note: Do add the coverage directory to the .gitignore file to avoid committing it to source control.

Viewing the Coverage Report In Browser:

The generated coverage report can be viewed in the browser: navigate to the lcov-report directory under the coverage directory and open the index.html file in the browser.

In the left panel of the report, it shows the number of times each function or statement has been executed. It also shows a red cross before any function/statement that has not been executed.




Setting the threshold/limit for the code coverage:

The above output shows the standard coverage report. But how can we ensure that whenever a new feature is added or existing functionality is modified, unit tests are written for it, to avoid breaking existing functionality?

We can set thresholds/limits for our coverage and, before pushing to source control, validate whether the thresholds have been met; if not, we can block the code from being pushed.

Istanbul has a built-in check-coverage command, which we can use to set limits for the different metrics.

Let's add another command to the scripts section of package.json:

"scripts": {
"check-coverage":"istanbul check-coverage --statements -100 --branches - 100 --functions 100 --lines 100",
"test":"istanbul cover -x '*.test.js' node_modules/mocha/bin/_mocha -- -R spec src/api.test.js"
}

We have added a check-coverage command to the scripts section of package.json and specified the percentage limits we expect. We can set the threshold limits as required.

Let's run the command to test whether our coverage report meets the threshold criteria.

npm run check-coverage



If everything works fine, the output will be similar to the one above.

Now, let's try adding a dummy function to our code and re-generate the code coverage report.

Example function:

function dummy() {
  console.log("not in use");
}

1. Re-generate coverage report: npm run test



We can see that the percentages have dropped from 100%.

2. Check coverage: npm run check-coverage



We can see the coverage validation failed because it didn't meet the set thresholds, which simply means the newly added code has not been covered by unit tests. We can add a test case for the dummy function and re-run the coverage.

Now we can add this to the git hooks to prevent committing code until the thresholds have been met.

Adding Git Hooks:

We will use the ghooks npm module to add git hooks to our library. We will install it as a dev dependency.

npm install -D ghooks

Add a config section to the package.json file, and under it add a "ghooks" node with a "pre-commit" sub-node:

"config": {
     "ghooks": {
                    "pre-commit": "npm run test && npm run check-coverage"
     }
}

Once this is done, whenever we try to commit code, it will first run the test script, which generates the coverage report, and then check whether our coverage thresholds have been met.

In this way, we can ensure that whatever code is pushed has been unit tested.

In the next article, we will integrate the code coverage reporting service codecov.io, which will take reporting to the next level.


Link to my GitHub Repository

Saturday, 23 June 2018

In Node JS, Update A Nested Level JSON Key Value With The New Value.

Most of the time while working with JSON, we come across a scenario where we would like to update an existing JSON key with a new value and return the updated JSON. The key could exist at the nth level, and the value can be in any form (JSON, string, number, ...).

Directly updating the root node is not that difficult.

The scenario becomes more complex when we need to update the nth-level node's value with a new one.

I have written a basic API (with Node.js + Ramda) that can update a node at any level and return the updated JSON.

Here are the git repositories for it:

1. https://github.com/UtkarshYeolekar/update-jsonkey  (node js + ramda)

2. https://github.com/UtkarshYeolekar/update-jsonkey-express/ (node js + express + ramda)

Let me explain this with an example:

Suppose we have the following JSON structure:

```
{
"testing":{
        "test1":{
            "a":11,
            "b":232
        },
        "test2":{
            "xy":233,
            "zz":"abc xyz",
            "json":{
                "msm":"sds",
                "abc":"weuewoew"
                }
            }
    }
  }
```

Example 1:

Now suppose we need to update the value of the key "abc", which is not a direct child of the key "test2". We will need to traverse down to the "json" node and then update the value of "abc".

The key path is testing->test2->json->abc; we need to walk this full path to update the "abc" node.

To update the "abc" node, the API call would be:

Function Prototype : api.updateJson("keypathfromroot", "new value", existing json)

api.updateJson("/testing/test2/json/abc","newvalue", json)
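
Under the hood, an implementation along these lines is possible with Ramda's lensPath and set. This is only a sketch of the idea, not the exact code from the repository:

// A sketch of updateJson using Ramda lenses; the repository code may differ.
const R = require('ramda');

function updateJson(keyPath, newValue, json) {
  // "/testing/test2/json/abc" -> ['testing', 'test2', 'json', 'abc']
  const pathParts = keyPath.split('/').filter(Boolean);
  // R.set returns a new copy of the object with the value at that path replaced
  return R.set(R.lensPath(pathParts), newValue, json);
}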


Example 2:

Now suppose we need to update the node "test2" with a new JSON value.

let newValue =
{
  "key1" : "value1",
  "key2" : "value2"
}


The API call would be :

api.updateJson("/testing/test2/",newValue, json)

Note that the key path here only goes down to the node "test2". The key path is always from the root to the child node that we need to update.

Both git repositories contain enough documentation to get started. Here is the link to the Readme.md file.


Hope it helps.

Sunday, 15 April 2018

Sharing Host Directory/Folder with the Docker Container.

In this blog, we are going to learn how we can mount an existing host folder/directory into a Docker container.

Imagine a scenario where you need to share local files with a Docker container, and whenever you modify the files or folders on your host machine, i.e. outside the container, you need the changes to be reflected inside the container as well.

This is possible by mounting a host directory into the Docker container. Let's check out the steps for it.

In this example I am using the boot2docker VM. So the host here is the boot2docker VM, not the machine it is running on. But since boot2docker is a Linux VM running on VirtualBox, we have the facility of having some folders from the machine mounted on the VM as host folders.

We can check this by going to Oracle VirtualBox -> boot2docker VM -> Settings -> Shared Folders. There you can see that c/Users is already mounted.



To mount host folders other than c/Users, we first need to share them with the VM; only then can we mount them into the container. For this session we will use the already shared folder.

Let's start mounting the host folder:

1. Let's first create a folder under the C:/Users directory for hosting our code files. I have created a folder named "terraform" under C:/Users, which consists of some JavaScript and JSON files.

2. Now, let's mount the terraform folder into the container at the /home/app/config path.
 
Command: docker run -v "hostFolder:folderInContainer" imageName

docker run -it -v "/c/Users/terraform:/home/app/config" terraform /bin/ash

Here I am mounting the terraform host folder at the /home/app/config directory in the container, so all the contents of the terraform directory will be listed under the config folder in the container.

3. Let's check whether our files/content exist in the config folder: just "cd" into the config folder and execute the "ls" command to list the files.


As we can see in the above screenshot, a couple of JSON and JavaScript files are listed there.

The good thing about this is that whenever we make changes to the host folder outside the container, they are automatically synced/reflected inside the container. Try it yourself by adding a couple of files and folders to the host folder and then listing the files inside the container; you will see the changes reflected there as well.
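
For example (assuming the container from the previous step is still running; the file name below is just a placeholder):

# on the host (boot2docker shell), add a new file to the shared folder
echo '{}' > /c/Users/terraform/new-config.json

# inside the container, the file shows up immediately
ls /home/app/config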

Thanks for reading this blog.



Friday, 26 January 2018

Running gcloud/kubectl commands in docker container.

In this blog, we will see how we can authenticate with the Google Cloud console from a Docker container using a service account.

I had a scenario where I needed to run some gcloud commands from a Docker container as a prerequisite for running kubectl commands.

Example: initialize the .kube folder with the config file (Google Cloud cluster config).

Steps:

1. Create a service account with the privileges you require for calling the Google APIs.
2. Download the service account JSON file to the local machine.
3. Create a Dockerfile which includes the Google Cloud SDK and other components, like kubectl in my case.
4. Pass the service account information to the Docker container using environment variables.
5. Create the service account JSON file on the fly in the Docker container using the provided environment variable values.
6. Run the gcloud auth service account command and pass the service account JSON file to it.

In Brief: 

The first 3 steps are simple and a lot of documentation is available for them, so I will start with the fourth one.
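
(For reference, a minimal Dockerfile for step 3 could look something like the sketch below. The base image, file names, and paths are assumptions, not the exact Dockerfile used here.)

# Dockerfile -- a rough sketch for step 3
FROM google/cloud-sdk:alpine

# kubectl is installed as a gcloud component
RUN gcloud components install kubectl

WORKDIR /home/app
COPY generate.sh init.sh ./
RUN mkdir -p ./secrets

ENTRYPOINT ["/bin/ash"]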

The service account information should not be copied directly into the image. It must be passed through secrets or environment variables, which makes it more secure and configurable.
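
For example, if the values are kept in an env file (the file and image names below are placeholders):

docker run -it --env-file ./gcloud.env my-gcloud-image /bin/ash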

We can write a shell script which creates the service account JSON file dynamically in the container using the environment variables. We can copy that shell script into the container and keep it as an entry point, or run it manually to generate the service account JSON file.

Here is the link for creating a JSON file dynamically inside the container.

Once the file is generated, we can use the following commands to activate the service account and perform other operations.

./secrets is the folder where the account.json file is generated from the environment variables.

1. gcloud auth activate-service-account --key-file ./secrets/account.json
2. gcloud --quiet config set project $project
3. gcloud --quiet config set compute/zone $zone
4. gcloud container clusters get-credentials $cluster_name --zone $zone --project $project

We can also wrap the above 4 gcloud commands in one shell script and run that script file instead of running the commands independently.

Let's name the file init.sh

#!/bin/ash

sh ./generate.sh

gcloud auth activate-service-account --key-file ./secrets/account.json
gcloud --quiet config set project $project
gcloud --quiet config set compute/zone $zone
gcloud container clusters get-credentials $cluster_name --zone $zone --project $project

Here, sh ./generate.sh will generate the service account JSON file in the secrets folder.

Now, let's just run the init file, and we are done.

sh ./init.sh

In the next blog, I will show you how we can provision a Google Container Engine cluster using Terraform.

How to create/generate a JSON file dynamically using shell script.

In this post, we will see how we can dynamically generate/create a JSON file using a shell script.

Some days back, I had a scenario where I needed to generate a JSON file in a Docker container using environment variables, whose values are passed into the container through an environment file.

We will start by writing a shell script. Let's name it generate.sh.

#!/bin/ash

# Write the service account JSON using the values supplied as environment variables.
# The string values are quoted so that the generated file is valid JSON.
cat > /home/app/secrets/account.json << EOF
{
  "type": "$type",
  "project_id": "$project_id",
  "private_key": "$private_key",
  "client_email": "$client_email",
  "client_id": "$client_id",
  "auth_uri": "$auth_uri",
  "token_uri": "$token_uri"
}
EOF

Now we can save this file. Here $type, $project_id, $private_key, etc. are the environment variables.
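
For example, the environment file passed to the container could look like this (all values below are placeholders):

# gcloud.env -- placeholder values, passed with `docker run --env-file gcloud.env ...`
type=service_account
project_id=my-gcp-project
private_key=-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n
client_email=my-sa@my-gcp-project.iam.gserviceaccount.com
client_id=123456789012345678901
auth_uri=https://accounts.google.com/o/oauth2/auth
token_uri=https://oauth2.googleapis.com/token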

Now we can run this shell script by executing the following command:

sh generate.sh

This will generate the JSON file in the /home/app/secrets/ folder.

In the shebang I have used #!/bin/ash, as I was using an Alpine Docker image.





Sunday, 13 August 2017

Authenticating With The Docker Hub V2 Api

This example is about authenticating with the Docker Hub v2 API and then getting the information/tags of a private repository.

Recently I got this task and was looking for an example. A public repository can be accessed directly using the v2 API, but for a private one, authentication is required.

I read the documentation, tried the calls with the Postman client first, and once that was working, wrote some sample code.

First, let's go with the Postman client:

There are two steps:

1. Get the auth token by passing the username and password (POST).
2. Using that auth token, query the Docker Hub v2 API (GET).


1. Getting the Auth Token : To get the auth token we send a POST request to https://hub.docker.com/v2/users/login/ with the username and password in the body, and it returns the auth token.


Post request to get the auth token.

2. Using that Auth token, query the Docker Hub v2 API : Using the above auth token, we can query the v2 API to get the private repository's tag information. We need to pass the auth token in the headers.

Repository endpoint : https://hub.docker.com/v2/repositories/username/private-repo/tags



This will in turn return the tags of the repository.
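
The same two calls can also be reproduced with curl (username, password, repository, and token below are placeholders), mirroring the header format used in the Node.js code that follows:

curl -s -X POST -H "Content-Type: application/json" \
  -d '{"username": "myuser", "password": "mypassword"}' \
  https://hub.docker.com/v2/users/login/

curl -s -H "Authorization: Bearer <token>" \
  https://hub.docker.com/v2/repositories/myuser/private-repo/tags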

In Node.js:

1. Getting the auth token:

let dockerConfig = require('./config.js').dockerConfig,
    rp = require('request-promise'),
    _ = require('lodash'),
    R = require('ramda');


let getAuthToken = (username, password) => {

    let options = {
        method: 'POST',
        uri: `${dockerConfig.loginEndpoint}`,
        body: {
            "username": `${username}`,
            "password": `${password}`
        },
        json: true
    }
    return rp(options)

}

2. Getting the private repository tags using the token:

let getImageTags = (username, repository, authtoken) => {
        let options = {
            method: 'GET',
            uri: `${dockerConfig.repositoryEndPoint}/${username}/${repository}/tags`,
            headers: {
                Authorization: `Bearer ${authtoken}`
            },
            json: true
        }
        return rp(options);
    }

Both functions return promises; we can call them like this:

getAuthToken(config.username, config.password)
    .then((tokenInfo) => {
        console.log("token recieved");
        return getImageTags(config.username, config.repository, tokenInfo.token)
    })
    .then((tags) => {
        if (!_.isUndefined(tags) && !_.isNull(tags) && tags.count > 0) {
            let result = tags.results.map((tag) => (R.pick(["name"], tag)));
            console.log(result);
        }
        else
            console.log("No tags found");
    })
    .catch((err) => {
        console.error("Error Occured ", err.message);
    });


3. The config file looks like this; you will need to update it with your Docker Hub info.

module.exports = {
    dockerConfig : {
                loginEndpoint : "https://hub.docker.com/v2/users/login/",
                username :"username",
                password:"password",
                repository : "private_repo",
                repositoryEndPoint : "https://hub.docker.com/v2/repositories",
                tagsEndPoint : "tags"
        }

};

Here is the full working example; you can just clone it and start working:
https://github.com/UtkarshYeolekar/docker-auth-example

Hope it helps, Thanks!