Tuesday, December 25, 2012

Backbone JS + Chrome App [manifest version 2]

Hello World!!, YAJSBP :)
QUOTE: Getting to know a library or a technology is pretty easy, but things get tricky when you try to apply what you felt you already knew :)
In this blog post I'll take you through some integration challenges I faced while developing a Chrome app with Backbone.js.

So, I got to know about this guy Backbone.js a year before, at jsFoo 2011.
The moment I noticed him, I visited his birthplace and got an overview of him.
Then I felt that I could claim I knew backbone.js. Later I realised it was too early to say that.
A year after that I decided to try this guy and started developing a chrome app [Features Recorder].
Then I faced the following small challenges [on integrating backbone with a chrome app] which I solved my own way :P

Problem 1: Chrome App manifest version 2 doesn't allow eval or new Function
Error: Uncaught Error: Code generation from strings disallowed for this context
Take a look at the Chrome app Content Security Policy

Why do we need this?
Underscore.js microtemplating uses the new Function syntax to compile templates.
Note: The problem might persist with other templating options too.

Solutions:
1. As mentioned in the CSP page, relax the rule against eval and related functions
"content_security_policy": "script-src 'self' 'unsafe-eval'; object-src 'self'"
2. Since I only used underscore.js microtemplates, I pre-compiled my templates offline [not in the app]:
var source = _.template("My template code").source;
//assigned the source to the template property of the view
//This solved the problem
//Ex:
Backbone.View.extend({
 initialize: function () {},
 template : function(obj){ //Precompiled template code
    var __t,__p='',__j=Array.prototype.join,print=function(){__p+=__j.call(arguments,'');};
    with(obj||{}){
    __p+=''+
    ((__t=( title ))==null?'':__t)+
    '';
    }
    return __p;
 }
});
3. Refer to this blog

Problem 2: window.localStorage is disabled for offline packaged apps
Problem 3: chrome.storage is asynchronous [get & set are async, like ajax]

Error: "window.localStorage is not available in packaged apps. Use chrome.storage.local instead."
I planned to build an offline packaged app.
But this error stopped me, as I could no longer use the localStorage adapter for backbone.

Solution:
I built my own localStorage adapter for chrome.storage.
As the storage works asynchronously, callbacks for functions like model.save and create remained a must:
var options = {success: function () {  }};
model.save(null, options);
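A minimal sketch of the adapter idea (names like fakeStore and chromeSync are mine, not from the app): Backbone.sync can be swapped for a function that talks to an async key-value store and reports results only through success callbacks. Here chrome.storage.local is stood in for by an in-memory fake so the shape of the adapter is visible:

```javascript
// In-memory stand-in for chrome.storage.local (the real API also exposes
// get/set with callbacks; this fake invokes callbacks synchronously).
var fakeStore = {
  data: {},
  set: function (items, cb) { Object.assign(this.data, items); cb(); },
  get: function (key, cb) { var r = {}; r[key] = this.data[key]; cb(r); }
};

// A sync-style adapter: every operation finishes via a callback,
// never via a return value, because the backing store is async.
function chromeSync(method, model, options) {
  if (method === 'create' || method === 'update') {
    var items = {};
    items[model.id] = model.attributes;
    fakeStore.set(items, function () { options.success(model.attributes); });
  } else if (method === 'read') {
    fakeStore.get(model.id, function (result) { options.success(result[model.id]); });
  }
}

// Usage: callbacks are mandatory, mirroring model.save(attrs, {success: ...})
var saved;
chromeSync('create', { id: 'm1', attributes: { title: 'hello' } }, {
  success: function () {
    chromeSync('read', { id: 'm1' }, { success: function (attrs) { saved = attrs; } });
  }
});
```

The real chrome.storage callbacks fire asynchronously, so in the actual app nothing after the call may assume the data is already stored.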

Thursday, November 29, 2012

Asynchronous Javascript & jQuery Deferred Why?

Using jQuery Deferred
In this post I'll try to give an overview of promises and deferreds, and further, how to use them in nodejs.

Define Deferred?
Google Define: Put off (an action or event) to a later time; postpone

Deferred in javascript?
Javascript being an event-driven scripting [and a developing programming] language, there is an obvious need for developers to defer things while awaiting events.
For ex:
window.addEventListener("load", function () { console.log("Defer my execution till window load"); }, false);
$.ajax({ url: 'data.html',
     success: function () {
       console.log("Defer my execution till ajax load is successful")
     },
     error: function () {
       console.log("Defer my execution till ajax load fails")
     }
});

So? This is nothing unusual!!!
Ya, obviously it is nothing unusual and obviously a beginner-level fact.
Deferred: as a programming paradigm, it is a design pattern that solves the problem of code neatness and maintainability while travelling along the lanes of languages like javascript.

Why does code neatness get spoiled?
3 basic scenarios on which I feel so :D are
1. Duplicate code in both success and error callbacks [Ex: hiding the loader animation in ajax]
2. Starting a deferred event after the success of another deferred event [Means: callbacks inside callbacks inside .. recursion :P]
3. Awaiting multiple events to complete successfully before proceeding with some other execution [Ex: Submit button clicked, Animation completed, Validation done -> Proceed to Sign Up]

Beginner's Solution for all the above:

Scenario:1
//Assume the beginner knows a bit of refactoring :P
function common() {
 //Common Function after ajax call
}
$.ajax({ url: 'data.html',
     success: function (data) {
       common();
       console.log("Defer my execution till ajax load is successful")
     },
     error: function (data) {
       common();
       console.log("Defer my execution till ajax load fails")
     }
});

Scenario:2
$.ajax({ url: 'data.html',
     success: function (data) {
       $.ajax({ url: 'data1.html',
         data: data,
         success: function (data1) {
               $.ajax( ..... //so on and it is enough :P
             }
             });
     } 
    });



Scenario:3
//I Couldn't think of any better worse code sample :P
var i = 0;
function incr() {
 i++;
 if( i == 2 ) {
  doSubmitForm();
 }
}
$("#submit-button").on('click', function () { 
 //do some validation and increment a global var 
});
$("#submit-button").on('click', function () { 
 //do some animation and increment a global var 
});

And hence we are done with the code samples. Now let's go with Deferred and solve all the above 3.

Scenario:1
var ajaxCall = $.ajax(myResourceURL); // Returns a deferred
ajaxCall.done(function () { /* Call me on success */ });
ajaxCall.fail(function () { /* Call me on failure */ });
ajaxCall.always(function () { /* Call me always. Plz shut the loader div :P */ });


Scenario:2
var one = $.ajax( url ),
    two = one.pipe(function( data ) {
      return $.ajax( url2, {data: data} );
    }),
    three = two.pipe(function( data ) {
      return $.ajax( url3, { data: { data: data } } );
    });
three.done(function( data ) {
  // data retrieved from url3, as provided by the chained requests
});

Scenario:3
var validate = new $.Deferred(),
    animate = new $.Deferred(),
    proceed;
validate.done(function () {  });
animate.done(function () { });
proceed = $.when(validate, animate)
           .then(proceedSubmit, dont, notifySomething);
$("#submit-button").on('click', function () { /*do some validation and you can even reject*/ validate.resolve(); });
$("#submit-button").on('click', function () { /*do some animation and resolve or reject */ animate.resolve(); });

PS: Refer when, pipe and then
Note:
1. How does promise differ from deferred?
A promise can't be resolved or rejected by another source, but a deferred can be. [Ex: In listing 3, validate and animate are resolved from the click event's callback]
2. How do I pass params to done, fail, always callbacks or change the context of execution?
Check resolveWith and rejectWith
Also you can do deferred.resolve.call(context, param1, param2, ...) in case you wanna get rid of array processing
3. Are we by any chance restricted to one call back per promise or deferred state (done, fail, always)?
Nope (at least in jQuery Deferred): you can add as many listeners [same as event listeners] for any of the promise states. All such callbacks will be triggered when the promise resolves to the appropriate state. This feature really helps in decoupling the actions we used to cram into the only available success or error callback in response to an event [like ajax load].
4. What if I attach a callback for a deferred which is already resolved?
The callback will be triggered immediately after the attachment, provided the state of the promise matches.
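To make notes 3 and 4 concrete, here is a tiny hand-rolled deferred (a sketch of the behaviour, not jQuery's actual implementation): it accepts any number of done callbacks, and a callback attached after resolution fires immediately:

```javascript
// Minimal deferred sketch: multiple listeners + late attachment.
function makeDeferred() {
  var state = 'pending', value, callbacks = [];
  return {
    done: function (cb) {
      if (state === 'resolved') cb(value);   // note 4: already resolved → fire now
      else callbacks.push(cb);               // note 3: any number of listeners
      return this;
    },
    resolve: function (v) {
      if (state !== 'pending') return;       // a deferred settles only once
      state = 'resolved'; value = v;
      callbacks.forEach(function (cb) { cb(v); });
    }
  };
}

var d = makeDeferred(), log = [];
d.done(function (v) { log.push('first:' + v); });
d.done(function (v) { log.push('second:' + v); });  // note 3: second listener
d.resolve(42);
d.done(function (v) { log.push('late:' + v); });    // note 4: fires immediately
// log is now ['first:42', 'second:42', 'late:42']
```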

Let's see how we can use jQuery Deferred in nodejs.
As mentioned in note 1: we can't use a promise in node unless the module supports resolve & reject within itself.
var $ = require('jquery'),
    fileDeferred = new $.Deferred(),
    fs = require('fs');

fileDeferred.done(function (data) {  });
fileDeferred.fail(function (err) { });
fileDeferred.always(function () { });

fs.readFile(filename, 'utf8', function (err, data) {
 if(err) {
  fileDeferred.reject.call(null, err);
 } else {
  fileDeferred.resolve.call(null, data);
 }
});
If you do lots of async programming, do consider having a look at asyncjs
I hope you will be able to see the difference between the normal way of programming and the deferred way, and how the deferred way helps you maintain code neatness. If you don't program the deferred way, I think it's time to give it a try :)
Loads of thanks to @trevorburnham for Async Javascript
This blog is almost a gist of what I learnt from his creation :)
I hope I did some justice and helped you understand the need for deferred.

Tuesday, October 2, 2012

Cassandra Composite Types - An Overview [with CQL & Cassandra-Cli examples]

Just felt like sharing what composite types are in cassandra and how we can make use of them.
After so many discussions on Stackoverflow and the PHPCassa forums, I hope I have got a clear picture of the topic.

What are composite types?
Composite types are like structures in C or C++

For ex:
The way we define a linked list [basic]
struct LLNode {
 int data;
 char name[20];
 struct LLNode *next;
}
Which means every member of this struct will have data of type int, a name of type char array and a next pointer of type LLNode
The struct is a composite of basic datatypes [not exactly in this case]
Also, you can't assign a value to any of these attributes when you define a struct

The same way Cassandra Composite Type is a dataType derived from existing basic supported dataTypes.

Why do we need this?
Cassandra initially had and still has the concept of super columns.
The basic use of them is maintaining an inverted index.

Consider a data model in cassandra
ColumnFamily: UserMaster
ColumnModel:
userID: {name, prevCompany, experience} 
Now, in case we need to support a query

Select name from UserMaster where prevCompany = xyz;

This is totally impossible until we have a secondary index created.

To overcome this issue, Cassandra gave developers an option of creating their own index using super columns
ColumnFamily: UserIndex
ColumnModel: 
Date: {company1: {rowID1, .. , rowIDN },..,companyN: {rowID1, .. , rowIDN } } //SuperColumn

Now we can answer the above query via
allUsers = UserIndex[Date][prevCompany];
for i in allUsers
 echo UserMaster[i][name];

But this is a bit messy, as every insert leads to two inserts with no transactional guarantees.
Also every read will result in a minimum of two reads across CFs.
Also, what if we have one more column of interest?
Say
Select name from UserMaster where prevCompany = xyz and experience = 2yrs;
And there are many other issues with super columns themselves.

To overcome this [inverted index problem], EdAnuff came up with the concept of composite types.

How would composite types save time?
Point to remember:
1. Composite types are type preserved
Ex:
CompositeType(ascii, int, ascii);
a:1:user1
a:10:user2
aa:2:user1
ab:0:user2

You can see the columns sorted first on component 1, then component 2 and then component 3.
And the sorting is based on the exact type of each component.
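The sort order above can be sketched with a comparator (my own illustration, not Cassandra code): each component is compared using its own type's ordering, falling through to the next component on ties:

```javascript
// Composite comparator sketch for CompositeType(ascii, int, ascii):
// compare component-by-component, each with its own type's ordering.
function compareComposite(a, b) {
  var pa = a.split(':'), pb = b.split(':');
  // component 1: ascii (string) comparison
  if (pa[0] !== pb[0]) return pa[0] < pb[0] ? -1 : 1;
  // component 2: numeric comparison (so 2 < 10, unlike ascii "10" < "2")
  var na = parseInt(pa[1], 10), nb = parseInt(pb[1], 10);
  if (na !== nb) return na - nb;
  // component 3: ascii comparison
  return pa[2] < pb[2] ? -1 : pa[2] > pb[2] ? 1 : 0;
}

var cols = ['ab:0:user2', 'a:10:user2', 'aa:2:user1', 'a:1:user1'];
cols.sort(compareComposite);
// → ['a:1:user1', 'a:10:user2', 'aa:2:user1', 'ab:0:user2']
```

Note how 'a:1:user1' sorts before 'a:10:user2' because component 2 is compared as a number, matching the example above.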

So, we model our data in the following way
ColumnFamily: UserMaster
ColumnModel:
rowID: {prevCompany:exp:userName} //Notice the column value in model 1 becomes the Column names

Sample:
20120901: {prevCompany1:2:user1 => {null}, prevCompany1:2:user2 => {null}, prevCompany1:3:user3 => {null}, ...}
Query:
select * from UserMaster where prevCompany = 'prevCompany1' and exp > 2
Result:
prevCompany1:3:user3 => ''

Note:
I have kept the column value null as we don't need anything in there.
Also I have kept user ID as the last component because our read pattern is
Given a company get all users with given experience range

Now we can query for
All employees whose prevCompany is 'x'
All employees whose prevCompany is 'x' and exp is > '2yrs'
All employees whose prevCompany is 'x' and exp is = '2yrs' and name = 'xyz'
and so on

Note:
Your query should always fetch a contiguous slice in case of composite types

Means:
Sample Data [assume]:
20120901: {y:2:123, y:2:124, y:2:125, y:3:123, y:3:126}

All employees whose prevCompany is 'y' and exp is >= '2yrs' and userName = '123' will not work
Why?
Filter1: prevCompany is 'y' => contiguous
{y:2:123, y:2:124, y:2:125, y:3:123, y:3:126}
Filter2: exp is >= 2yrs => still contiguous
{y:2:123, y:2:124, y:2:125, y:3:123, y:3:126}
Filter3: userName = '123' => non-contiguous
{y:2:123, y:3:123} [the matches are separated by y:2:124 and y:2:125]

All employees whose prevCompany is 'y' and exp is = '2yrs' and userName > '123' will work
Why?
Filter1: prevCompany is 'y' => contiguous
{y:2:123, y:2:124, y:2:125, y:3:123, y:3:126}
Filter2: exp is = 2yrs => still contiguous
{y:2:123, y:2:124, y:2:125}
Filter3: userName > '123' => still contiguous
{y:2:124, y:2:125}
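The contiguity rule can be checked mechanically (my own illustration, not Cassandra code): pick the columns matching the filters out of the sorted column list and see whether they form one unbroken run:

```javascript
// Check whether the columns matching a predicate form one contiguous
// run inside a sorted composite-column list.
function isContiguousSlice(sortedCols, predicate) {
  var idx = [];
  sortedCols.forEach(function (c, i) { if (predicate(c)) idx.push(i); });
  return idx.length === 0 || idx[idx.length - 1] - idx[0] === idx.length - 1;
}

var cols = ['y:2:123', 'y:2:124', 'y:2:125', 'y:3:123', 'y:3:126'];
function parts(c) { var p = c.split(':'); return { co: p[0], exp: +p[1], user: p[2] }; }

// exp >= 2 AND userName = '123': matches y:2:123 and y:3:123 → broken run
var broken = isContiguousSlice(cols, function (c) {
  return parts(c).exp >= 2 && parts(c).user === '123';
});
// exp = 2 AND userName > '123': matches y:2:124, y:2:125 → unbroken run
var ok = isContiguousSlice(cols, function (c) {
  return parts(c).exp === 2 && parts(c).user > '123';
});
// broken === false, ok === true
```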

Finally,
Composite types in cassandra are a nice concept, but the number of components is fixed when you define the column family.
Cassandra has support for Dynamic Composite Columns [to overcome the previous issue] but in my opinion it is not safe [as type safety is the tradeoff].
Syntax to Create composite type via cassandra-cli
create column family TestComposite 
 with comparator = 'CompositeType(UTF8Type, UTF8Type, LongType)'
 and key_validation_class = 'UTF8Type'
 and default_validation_class = 'UTF8Type';
 
Remember you can't query composite types via cassandra-cli [unless it is a point query]

Syntax to create composite type via CQL
CREATE TABLE UserMaster (
   day ascii,
   preCompany ascii,
   experience int,
   userName ascii,
   PRIMARY KEY (day, preCompany, experience, userName)
 );
 
You can only query based on the components listed in the PRIMARY KEY field above
Check this post on SO
Specifying composite fields via CQL is a bit different; check this detailed post to get a clear picture. Remember both ways use the same storage pattern

Hope the whole blogpost made some sense about composite columns

Friday, July 27, 2012

Varnish Cache Purge, Ban and Ban Lurker

Let's walk through some basics of varnish before understanding purge and ban

Varnish?
From DOC: Varnish is a web application accelerator.
Varnish can cache & serve all your static properties [css, js, images, parsed PHP pages, HTML].
It reduces load on the webservers even under high traffic.
It can even act as a load balancer [provided with proper director configurations].

Varnish uses VCL [Varnish Configuration Language] to override the defaults and tweak varnish based on usecase

Varnish caches contents [cache object] against a key.
By Default the key is Hash(HostName and Request URL)
We can override the defaults by editing vcl_hash sub-routine in vcl file

Do cache objects live long in varnish?
In varnish every cache object is stored with a TTL value.
Every object will be auto-magically removed from the cache once it reaches expiry.
TTL can be configured globally as a default while starting varnishd with the -t option.
It can also be overridden in VCL using the beresp.ttl value.

What if I had to manually invalidate a cache object?
There comes purge and ban as savior :)

Purge?
Invalidates [removes] a specified cache object actively.
Method?
curl -X PURGE url
Means?
Hit varnish with the request method set to PURGE. You can use any equivalent of curl.
Can anybody purge my contents?
Use the acl purge {} directive to allow the IPs/IP classes from which purge requests can be sent.

BAN?
Invalidates cache objects passively. Supports regex.
Consider ban as a filter over already available cache objects.
Method?
ban req.http.host == "example.com" && req.url ~ "\.png$"
Means?
filter all png objects from example.com
The above code should be placed in vcl_recv
The authentication mechanism is the same as for purge

Purge vs Ban How do they differ?
Purge:
Invalidates the cached object actively [sets the TTL of the object to 0 and removes it the moment the purge request is sent]
Ban:
A ban is a filter maintained by varnish, not a command.
It is always applied before delivering an object from the cache.
There might be multiple bans in the same varnish instance.
A ban is applicable only to the contents that were present by the time it was created.
It will never prevent new objects from being cached or delivered.
Too many ban lists per instance will consume too much CPU.
Long-lived cache objects [assume infinite TTL] with no hits will remain untouched by bans and consume memory.

Why CPU & Memory consuming?
Every request might need to be matched against multiple ban lists before being served.
Matching here means a regex match, hence it is going to consume CPU.
Consider heavy-traffic systems: the frequency of request-to-ban-filter checks might be a concern.
A ban is clearly a filter.
It will take care of removing objects that are actively getting hits and that match the ban list.
But it will not take care of idle cache objects with high TTL values, even if they match the ban.
Hence the memory consumed by them is never released till their TTL expires, although we have already invalidated them.

How to overcome this?
Use Ban Lurker.

What problem does this solve?
1. Banned objects can be discarded in the background.
2. The size of the ban list can be reduced.
The ban lurker is a varnish process that actively walks the cache and invalidates objects against the ban list.
It is off by default [enable it: param.set ban_lurker_sleep 0.1].
Read more about the ban lurker here

Final Point:
Purge will not refresh the invalidated object from the backend. That will happen only on the next cache miss.
In case you want to force a cache miss and refresh content from the backend, you need to set
req.hash_always_miss to true
In that case varnish will miss the current object in the cache, thus forcing a fetch from the backend.

Wednesday, July 11, 2012

npm - package manager for node, and an overview of package.json

npm: node package manager [!an acronym :P]

Till the time I got to know about this guy, uploading code across servers used to be a tough task.
He is the one to look out for in case you are developing an application in node and hosting it across servers.
package.json is his weapon ;)
In this article I'm just planning to touch npm basics and the use of package.json, and it is tl;dr ;)

What is npm?
From here : npm is a package manager for node. You can use it to install and publish your node programs. It manages dependencies and does other cool stuff.
Basically node uses commonjs style module system.
Every module is an independent piece of javascript code which can be plugged in and out of the core of your application.
Modules can be custom built or built for a generic purpose like redis, mysql, async, log4js.
And it is always good to check whether any pre-built module is available for our need before we start building our own.
npm does it for you like a charm :)

How do I install npm?
npm by default is shipped along with node.
So, zero step installation.
sudo apt-get install node
npm help
To search for a package, just issue the following command
Ex:
#npm search keywords
npm search redis 
Locate yours and install it via
npm install package-name
Where will my installed packages go?
npm installation can be done in two modes local [Default] or global
Local:
npm install redis would follow
if(cwd == node_modules)
  install in ./redis directory
else
  install in ./node_modules/redis directory
Global:
npm install -g redis would follow
prefix/lib/node_modules
So, that is it?
Wait, we still have our main picture :) npm help or npm help action would help a lot

package.json => It should be pure JSON, not a javascript object
Actually package.json has many, many options.
I suggest having npm help json or this as a reference
Check out node_redis, async

Let us consider async's package.json for example
{ "name": "async"
, "description": "Higher-order functions and common patterns for asynchronous code"
, "main": "./index"
, "author": "Caolan McMahon"
, "version": "0.1.22"
, "repository" :
  { "type" : "git"
  , "url" : "http://github.com/caolan/async.git"
  }
, "bugs" : { "url" : "http://github.com/caolan/async/issues" }
, "licenses" :
  [ { "type" : "MIT"
    , "url" : "http://github.com/caolan/async/raw/master/LICENSE"
    }
  ]
, "devDependencies":
  { "nodeunit": ">0.0.0"
  , "uglify-js": "1.2.x"
  , "nodelint": ">0.0.0"
  }
}

Try
npm search async

NAME(name)            DESCRIPTION(description)                                          AUTHOR(author)    DATE              KEYWORDS(keywords)
async                 Higher-order functions and common patterns for asynchronous code =caolan            2012-07-03 12:17

Some important candidates I use:
"name" => unique & represents your module in the npm global repo
"devDependencies" => will only be installed if "npm install --dev" is done
"repository" => where to look for the source code of your module, in case of an npm-published module
"version" => very important param. Should be in x.y.z format. Used in the hash to locate your module in the global node repo.
"dependencies" => what are all the modules your module depends on?
"scripts" => "start" => what should happen when you hit "npm start" in your application folder
          => "test" => what should happen when you hit "npm test" in your application folder
[Check Acquiring Fame]

Why dependencies?
Basically any module we use in an application may have dependencies of its own
For ex: node_redis has a hiredis dependency

It would be difficult for a programmer to resolve those dependencies manually
Hence, to make our life easier, on
npm install redis
npm will pick up the internal dependencies of the module from its package.json and resolve them.
This process continues until the dependency tree is satisfied [i.e. recursively].
Check out this SO post :)

Why scripts => start/test?
We follow different procedures to deploy different services
For Ex:
nohup node index.js &
forever start index.js
node index.js
Hence it would be hard to remember which matches what.
So, specifying the startup script in your package.json makes your life as easy as
npm start
The same applies to test.
Hence the command to start a node application will be the same across your environments
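For illustration, a package.json carrying such scripts might look like this (the module name and command values are placeholders of mine, not from any real project):

```json
{
  "name": "my-app",
  "version": "0.0.1",
  "scripts": {
    "start": "forever start index.js",
    "test": "nodeunit test/"
  }
}
```

With this in place, npm start and npm test run the same commands on every environment.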

Why version?
npm indexes your module based on a hash of (name + version) in order to resolve version-based dependencies
For Ex:
In the dependencies section I can specify
"*" => anything is ok for me
">0.6.7" => anything greater than 0.6.7 is ok for me
"~0.6.0" => anything >= 0.6.0 and < 0.7.0 is ok for me
"0.6.7" => I need exactly 0.6.7
So, to handle all such dependencies npm indexes the version along with the name of the module
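A rough sketch of how such range specifiers can be matched (hand-rolled for illustration only; real npm uses the full semver implementation, which handles many more forms):

```javascript
// Toy matcher for a few npm-style range specifiers, assuming x.y.z versions.
function parse(v) { return v.split('.').map(Number); }
function cmp(a, b) {
  var pa = parse(a), pb = parse(b);
  for (var i = 0; i < 3; i++) {
    if (pa[i] !== pb[i]) return pa[i] - pb[i];
  }
  return 0;
}
function satisfies(version, range) {
  if (range === '*') return true;                    // anything is ok
  if (range[0] === '>') return cmp(version, range.slice(1)) > 0;
  if (range[0] === '~') {                            // ~x.y.z: >= x.y.z, < x.(y+1).0
    var base = parse(range.slice(1));
    var upper = base[0] + '.' + (base[1] + 1) + '.0';
    return cmp(version, range.slice(1)) >= 0 && cmp(version, upper) < 0;
  }
  return cmp(version, range) === 0;                  // exact match
}

// satisfies('0.6.9', '~0.6.0') → true, satisfies('0.7.0', '~0.6.0') → false
```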

How do we build our projects?
Basically we rely a lot on the dependencies attribute.
The local node framework we designed for our system allows developers to work independently on their modules.
The modules people work on are hosted independently on git, and they are just added as dependencies in the package.json of the whole application.
For deployment we just push the package.json to our servers and run
npm install && npm start :)
All our developers' custom-built modules [from git] and their internal dependencies [from the npm global repo] are resolved recursively by npm.
So, we hardly push any code to live servers; npm takes care of building the whole application in no time :)
Uploading code with resolved dependencies means a hell of a lot of code and binaries.
Sometimes modules can have compile-environment dependencies.
Also our modules internally depend on different versions of the same module, so we didn't want to go with a global installation.
Every local module gets its dependencies resolved at its own level, hence no need to worry about version clashes. [We optimize our package.json a bit though]
So, we left it to npm + package.json to do our task :P They are doing a really great job :)

Wednesday, May 9, 2012

Javascript - A Reference for Beginners

Javascript Resources for beginners

Javascript is one of the best programming languages that I have come across as a programmer.
Once I got a chance to learn the good parts of javascript from Crockford via this video,
I got interested in exploring all those good parts in full.
The first thing that impressed me in Crockford's presentation was
"Javascript is the only programming language which people dare to use before learning it"
Till that point I belonged to the same category that Crockford mentioned in the above quote.

So, I began to spend some valuable time with the language to come out of that crowd

As a first thing I read a really nice book, "Pro Javascript Techniques".
Then I got a chance to attend a javascript conference, jsFoo, organised by HasGeek.
There were really nice lightning talks on advanced javascript techniques which I had no knowledge about till that point.
I just observed the talks and grasped as much as I could.

After day 1 of that conference my exploration of javascript techniques started.
Let me take you through the resources which gave me a clear picture of javascript and its awesome parts.

When we start off with any programming language, it is necessary to know all the datatypes and the character set it supports.
For javascript I followed this post on Oreilly.com

Is javascript pass by value or reference? Check out my blog post ;)

Javascript is one of the languages (might be the only one too ;)) where the point of declaration of a function matters for execution order.
You can understand the reason for the above fact in this nice blog

Understand lexical scoping before writing nested functions in javascript. This article might be helpful

To gain in-depth knowledge of closures and prototype chaining, once you are aware of lexical scoping, follow this awesome post.
Also beware of this anti-pattern while using closures.

A really nice blog on when and when not to use the new operator [Don't dare miss the comments on the post @yuiblog]

Then go ahead with what we call constructors in javascript

Once you understand the concept of constructors and functions, you can go ahead with a bunch of design patterns neatly explained

Want to test yourself on the above said concepts??
Try this awesome piece

Other interesting concepts like

One other blog post on javascript resources online

In this whole blog I never concentrated on Javascript's interaction with the DOM. It is a very vaaaast playground :)

Hope all these links make some sense to you :)

Wednesday, April 18, 2012

Ubuntu 11.10 Bluetooth Issue & Exploration


Command Line Bluetooth Transfer
This blog talks about the way to mount a bluetooth-enabled mobile phone for file transfer via obexfs,
and also a very nice way to transfer files without a GUI using sdptool & ussp-push

Don't forget the power of sudo whenever you get any permission denied error ;)

Ever since I upgraded my ubuntu version from 11.04 to 11.10,
I never succeeded in sending files via bluetooth to my mobile phone.
And finally the day came:
I Googled for the error that I used to get while trying to send files via bluetooth,
"Permission Denied (13)", and landed Here
From there I got redirected to Launchpad, where I found many people reporting the same issue.
I got a chance to know a decent number of distinct phone models too ;)
And I got him & HEEEE was the guy whom I was looking for.

Also, the issue with 11.10 can be resolved by upgrading your blueman service via ppa

I just tried his way
$ sudo apt-get install obexfs
$ hcitool scan
Scanning ...
  3C:8B:FE:F6:1B:3A Tamilmani
$ obexfs -b 3C:8B:FE:F6:1B:3A /media/tamil
#Once I completed my transfer I unmounted it via
$ fusermount -u /media/tamil
It worked like a charm :) my phone got mounted & I was able to transfer data
From man pages
fusermount : mount/unmount fuse filesystem [-u==unmount in the above example]
obexfs : mount obexFTP(Object Exchange FTP) capable devices
obexfs -b[bluetooth]
Learn about FUSE Here
From now my exploration started..
What is this hcitool do?
From man pages : Configure Bluetooth Connections & send some special Commands to bluetooth devices
Some Interesting Commands that we can send via hcitool:
scan: Scans all available and visible bluetooth devices
dev : Display local device [the host where you run hcitool from]
name [addr] : Displays the name of the specified device[addr]
info [addr] : Print device name, version  and  supported  features
and many more
[addr refers to BADDR of device -> unique radio frequency identifier, ex: 3C:8B:FE:F6:1B:3A]

What more ?
I learnt how to transfer data without a GUI :P
$ sdptool browse 3C:8B:FE:F6:1B:3A
Service Name: Object Exchange
Service RecHandle: 0x10005
Service Class ID List:
  "OBEX Object Push" (0x1105)
Protocol Descriptor List:
  "L2CAP" (0x0100)
  "RFCOMM" (0x0003)
  Channel: 5
   "OBEX" (0x0008)
Profile Descriptor List:
  "OBEX Object Push" (0x1105)
    Version: 0x0100
$ sudo apt-get install ussp-push
$ ussp-push 3C:8B:FE:F6:1B:3A@5 /home/tamil/image.jpg image.jpg
A Brief explanation of the above
sdp -> Service Discovery Protocol
 sdptool -> Tool to send sdp queries to bluetooth devices
 
 Available commands:
  search : search for a service
  browse : browse all available services
  add    : add a service to sdpd
  and many more
So, using sdptool I figured out a list of services available in my destination bluetooth device
Out of all the services, I found the channel [5 -> highlighted in the output] of OBEX Object Push service
then using
ussp-push -> program that can be used to send files using OBEX (OBject EXchange) protocol over Bluetooth
I did transfer my file from my machine to my phone
$ ussp-push [addr]@channel src dst #dst is just a filename not a path
 

Friday, April 13, 2012

Interesting Javascript Facts

Wishes to the web development world :)
In the past 2 to 3 months I have had the chance to get to know loads & loads of innovative JS libraries.
All of them made me wonder "Is there anything that we can't do in JS?"
I wish to share some facts which drove me mad about this language & they are very beginnerish ;)

What is {} + []?
As soon as I saw this question my reply was "[object Object]"
But the real answer was 0 :O
I couldn't make any good guess on this, so I posted it @stackoverflow
And I myself couldn't believe I missed it :P
{} => empty block scope & +[] => 0
So, cometh thy answer :)
Some more of the same kind
{} + {} = NaN
[] + [] = ""
[] + {} = "[object Object]"
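These can be checked directly (a quick sketch; eval is used only because it parses the snippet in statement position, the way a console does, where the leading {} is a block, not an object):

```javascript
// In statement position the leading {} is parsed as an empty block,
// so `{} + []` is really the unary expression `+[]`, i.e. 0.
var a = eval('{} + []');   // block, then +[]  → 0
var b = eval('{} + {}');   // block, then +{}  → NaN
var c = [] + [];           // '' + ''          → ""
var d = [] + {};           // '' + '[object Object]'
// a === 0, Number.isNaN(b) is true, c === "", d === "[object Object]"
```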

Why obj === +obj?
I found this in the source of Backbone.js.
I was wondering why they would check the JS number type in such a way.
Once again I got back to Stackoverflow.
The reply once again stunned me :D
It was to save the network bandwidth of the end user :)
After compressing the code:
typeof(obj)=='number' [21 bytes]
obj===+obj [10 bytes]
So, 11 bytes saved :D
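The trick works because unary + coerces its operand to a number, and strict equality with the coerced value then holds only for actual numbers (a quick illustration of mine):

```javascript
// obj === +obj is true only when obj is already a number:
// unary + converts the operand to a number, and === also checks the type.
var n = 42, s = '42', arr = [1];
var isNum1 = (n === +n);      // +42 → 42, same value & type  → true
var isNum2 = (s === +s);      // +'42' → 42, but '42' !== 42  → false
var isNum3 = (arr === +arr);  // +[1] → 1, array !== 1        → false
```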

So why is this: $.each("Boolean Number String Function Array Date RegExp Object".split(" "), function () {})
This one is from jQuery :)
The moment I saw this, as a javascript kid, I wondered:
why not ["Boolean", "Number", "String", "Function", "Array", "Date", "RegExp", "Object"]???
But this time I didn't seek Stackoverflow's help :P
After compressing the code:
["Boolean","Number","String","Function","Array","Date","RegExp","Object"] (74 Bytes)
"Boolean Number String Function Array Date RegExp Object".split(" ") (69 Bytes)
So we saved 5 bytes :)

And finally, one Crockford fact:
he is the guy who renamed logical && and || meaningfully :P
because both logical operators don't give you back the boolean result of the evaluation.
Instead
&& => Guard Operator
Reason:
Statement: expr1 && expr2
Return val: expr1 if expr1 evaluates to falsy, else expr2
Ex:
     var a = 1 && 10, b, c = b && a;
     console.log(a); // Will be 10 rather than true
     console.log(c); // Will be undefined rather than false
|| => Default Operator
Reason:
Statement: expr1 || expr2
Return val: expr1 if expr1 evaluates to truthy, else expr2
Ex:
     var a = 10 || 1, b, c = b || a;
     console.log(a); // Will be 10 rather than true
     console.log(c); // Will be 10 rather than true

The Default Operator is most widely used to handle browser-level inconsistencies, like
evt = event || window.event;
src = evt.target || evt.srcElement;

Also the Guard Operator:
body = document && document.body; //If document exists, return document body
complexBody = document && (document.body || document.getElementsByTagName('body')[0]); //Using both guard and default

Enjoy JavaScript :) It has more crazy-useful parts than any other language does :D

Wednesday, March 14, 2012

Cassandra PHPCassa & Composite Types

This post has been updated to support phpcassa 1.0.a.1

Cassandra Composite Type using PHPCassa


phpcassa 1.0.a.1 uses PHP namespaces, which are supported in PHP >= 5.3.0.
Make sure you have the relevant package.
The script mentioned below is a copy of the PHPCassa Composite Example.

I will explain it step by step

(1) Creating Keyspace using PHPCassa
        Name => "Keyspace1"
        Replication Factor => 1
        Placement Strategy => Simple Strategy
(2) Creating Column Family with Composite Keys using PHPCassa
        Name => "Composites"
        Column Comparator => CompositeType of LongType, AsciiType (Ex: 1:example)
        Row Key Validation => CompositeType of AsciiType, LongType (Ex: example:1)
        Sample Row:
                'example':1 => { 1:'columnName' => "value", 1:'d' => "Hai", 2:'b' => "Fine", 112:'a' => "Sorry" }
        Columns are sorted based on the component types as shown above:
        112 > 2 as LongType, but "112" < "2" as AsciiType.
        Cassandra properly honors the types mentioned in the column family definition.
        I have used '' to denote ASCII components. Ignore the quotes as values.
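This ordering difference is language-neutral; as a quick aside, the same comparisons can be checked in JavaScript (an illustration only, not phpcassa code):

```javascript
// LongType-style: numeric comparison
console.log(112 > 2);     // true

// AsciiType-style: byte-by-byte string comparison, where "1" < "2"
console.log("112" < "2"); // true
```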
require_once(__DIR__.'/../lib/autoload.php');

use phpcassa\Connection\ConnectionPool;
use phpcassa\ColumnFamily;
use phpcassa\ColumnSlice;
use phpcassa\SystemManager;
use phpcassa\Schema\StrategyClass;

// Create a new keyspace and column family
$sys = new SystemManager('127.0.0.1');
$sys->create_keyspace('Keyspace1', array( // (1)
    "strategy_class" => StrategyClass::SIMPLE_STRATEGY,
    "strategy_options" => array('replication_factor' => '1')
));

// Use composites for column names and row keys
$sys->create_column_family('Keyspace1', 'Composites', array( //(2)
    "comparator_type" => "CompositeType(LongType, AsciiType)",
    "key_validation_class" => "CompositeType(AsciiType, LongType)"
));


Start a connection pool and create an instance of the Composites column family:
$pool = new ConnectionPool('Keyspace1', array('127.0.0.1'));
$cf = new ColumnFamily($pool, 'Composites');
Specifying Row Keys and Column Keys
Both our row key [key_validation_class] and our column key [comparator] are composite types.
That means each key has components in it, and the types of the components may differ.
So, we can't specify a key as a single entity; that might violate the data types the Cassandra cluster expects.
For ex: in the case of our row keys, component 1 is Ascii & component 2 is Long.
When a write or read request is sent to Cassandra, the type of each component must be properly maintained.
Specifying "key:1" won't work and would result in a Cassandra exception.

Hence we keep the components of a key in a PHP array and set insert_format & return_format to the array format.
Ex: $key1 = array("key", 1); //Ascii, Long
Other available formats for insert and return are
  • DICTIONARY // Here, array keys correspond to column names. Since our column names have components, we can't use this
  • OBJECT // This is almost the same as what thrift returns
For columns, each column name corresponds to a value, hence a column is array( array( components ), value ).
The array inside an array is required because PHP associative arrays don't support anything other than string or integer keys.
As we need to preserve the types, we can't specify "columnKey" => value anymore.
Hence we map each column into an array as array(key, value), where the key itself is an array(components).
// Make it easier to work with non-scalar types
$cf->insert_format = ColumnFamily::ARRAY_FORMAT;
$cf->return_format = ColumnFamily::ARRAY_FORMAT;

// Composite row keys
$key1 = array("key", 1);
$key2 = array("key", 2);

$columns = array(
    array(array(0, "a"), "val0a"),

    array(array(1, "a"), "val1a"),
    array(array(1, "b"), "val1b"),
    array(array(1, "c"), "val1c"),

    array(array(2, "a"), "val2a"),

    array(array(3, "a"), "val3a")
);

$cf->insert($key1, $columns);
$cf->insert($key2, $columns);

Then we fetch the data:
(1) Get all the columns corresponding to a key
(2) insert and return format is ARRAY_FORMAT, so we access via index
(3) Should output an array of the components of the column name
//Constructor of ColumnSlice
__construct( mixed $start = "", mixed $finish = "", integer $count = phpcassa\ColumnSlice::DEFAULT_COLUMN_COUNT, boolean $reversed = False )

(4) ColumnSlice => ColumnSlice(array(1), array(1))
  1. $start => an array, meaning a composite type
    Ex: array(component, array(component, INCLUSIVE_FLAG), ...) // the inner array is component-specific and required only if you wish to override INCLUSIVE_FLAG
  2. $finish => same as $start
So, we ask for all columns whose first component [note the array, because of the composite type] has a value from 1 to 1.
That indirectly means: all columns with first component 1.
(5) $start => "" means the beginning of the row, and
array(1, array("c", false)) means everything less than 1:c, as per the sorting I mentioned in the beginning.
(6) Selects all values whose first component is between 0 and 2, exclusive on both ends
(7) Same as (6), but in reverse (notice $reversed set to True)
// Fetch a user record
$row = $cf->get($key1); //(1)
$col1 = $row[0];
list($name, $value) = $col1; //(2)
echo "Column name: ";
print_r($name); //(3)
echo "Column value: ";
print_r($value);
echo "\n\n";

// Fetch columns with a first component of 1
$slice = new ColumnSlice(array(1), array(1)); // (4)
$columns = $cf->get($key1, $slice);
foreach($columns as $column) {
    list($name, $value) = $column;
    var_dump($name); 
    echo "$value, ";
}
echo "\n\n";

// Fetch everything before (1, c), exclusive
$inclusive = False;
$slice = new ColumnSlice('', array(1, array("c", $inclusive))); // (5)
$columns = $cf->get($key1, $slice);
foreach($columns as $column) {
    list($name, $value) = $column;
    echo "$value, ";
}
echo "\n\n";

// Fetch everything between 0 and 2, exclusive on both ends
$slice = new ColumnSlice( // (6)
    $start = array(array(0, False)),
    $end   = array(array(2, False))
);
$columns = $cf->get($key1, $slice);
foreach($columns as $column) {
    list($name, $value) = $column;
    echo "$value, ";
}
echo "\n\n";

// Do the same thing in reverse
$slice = new ColumnSlice(    //(7)
    $start = array(array(2, False)),
    $end   = array(array(0, False)),
    $count = 10,
    $reversed = True
);
$columns = $cf->get($key1, $slice);
foreach($columns as $column) {
    list($name, $value) = $column;
    echo "$value, ";
}
echo "\n\n";

// Clear out the column family
$cf->truncate();

// Destroy our schema
$sys->drop_keyspace("Keyspace1");

// Close our connections
$pool->close();
$sys->close();
Actually, this version of PHPCassa is an awesome revamp of its earlier version.
  • It comes with Thrift 0.8 support
  • Composite type support [no more serialize or unserialize required ;)]
  • Full support for batch mutate
  • Implementation using namespaces
  • An all-new API reference
  • And complete examples
Awesome work by Tyler Hobbs :)
Hope this helps :)

Javascript ASI and join vs concat(+)

Javascript Automatic Semicolon Insertion


I came across a nice implication of Automatic Semicolon Insertion while developing an API in JavaScript.

I'll let you guess first, as usual. Try the following:
function asi() {
    var a = 10,
    b = 20
    c = 30;
    this.log = function () {
        console.log(a,b,c);
    };
    this.set = function (A,B,C) {
        a=A;
        b=B;
        c=C;        
    }
}

var a = new asi();
a.log();
var b = new asi();
b.log();
a.set(11,21,31);
b.log();
b.set('This', 'is', 'wrong');
a.log();

//Expected output
10 20 30
10 20 30
10 20 30
11 21 31

//What happened??
10 20 30
10 20 30
10 20 31
11 21 wrong

How come?
First thing to note:
Look closely: on line 3 a comma is missing. So now the parser will decide what to do :P
Remember:
Whenever a statement misses a semicolon, and the statement following it makes sense together with the former,
the JS engine will not insert a semicolon; instead it parses them as a single statement. [Refer My Prev Post]
Implication:
var a=10,b=20 remains an incomplete statement without a semicolon
var a=10,b=20c=30; is not a valid JavaScript statement, so ASI makes it var a=10,b=20;c=30; [the converse of the above]
Finally:
The variable c is assigned before it is declared in the scope of function asi
Remember:
If a variable is assigned before being declared in a function scope &&
the variable is not declared anywhere in the scope chain of the function,
then it becomes a property of the global scope, i.e. window
Implication:
Hence, the variable 'c' ends up in the global scope rather than in function asi()

That is all to say about it :)
Better not to "save" semicolons :P
Use them wherever needed,
so that you can trace back problems easily in case of unintentional errors like the above.

Next is about Join Vs Concat(+)


I was going through many of the tests regarding this at jsPerf.
I inferred that in all modern browsers, (+) concatenation is optimized really nicely [in some cases (+) was 100 times faster than join].

But I think the better criterion for choosing between them is the use case,
because both of them can do the job in under a millisecond.

The following is an example of why I feel join is safer than (+):
function whyJoin() {
    var a, b, c, delim = '&';
    return {
        setter: function (A, B, C) {
            a = A;
            b = B;
            c = C;            
        },
        concatMe: function () {
            return a+delim+b+delim+c;
        },
        joinMe: function () {
            return [a, b, c].join(delim);
        }        
    };
}

var test = whyJoin();
test.setter('This', 'is', 'Good');
var c = test.concatMe();
var j = test.joinMe();
console.log(c.split('&')); 
console.log(j.split('&'));
test.setter('This', 'is');
c = test.concatMe();
console.log("Doesnt look good", c);
j = test.joinMe();
console.log("Seems Fine", j); 
console.log(c.split('&')); 
console.log(j.split('&'));
test.setter('This', 'is', null);
c = test.concatMe();
console.log("Doesnt look good", c);
j = test.joinMe();
console.log("Seems Fine", j); 
console.log(c.split('&')); 
console.log(j.split('&'));
If you check your console, the following will be the output:
["This", "is", "Good"]
["This", "is", "Good"]
Doesnt look good This&is&undefined
Seems Fine This&is&
["This", "is", "undefined"]
["This", "is", ""]
Doesnt look good This&is&null
Seems Fine This&is&
["This", "is", "null"]
["This", "is", ""]

Hope you noticed: with concat(+), undefined or null is converted to its string equivalent and appended, which might be undesired in some cases.

Also, join keeps things clear & clean.
For ex: if the delimiter is the same across all concatenations, or the operands already exist as an array.

Concat(+) is really useful in many cases too.
For ex: if the concatenation is not based on delimiters & the number of concatenation operations is small:
var result = '<li>' + param + '</li>';
    In some cases I feel using both keeps things clear.
    For ex:
    for (/* some condition */) {
        result += [param, param, param].join('&');
    }
    

    But Google's optimization guide suggests creating a string builder for the above case.
    function StringBuilder() {
        this.needles = [];
    }
    StringBuilder.prototype.push = function (needle) {
        this.needles.push(needle);
    };
    StringBuilder.prototype.build = function () {
        var result = this.needles.join('');
        this.needles = [];
        return result;
    };
    var strBuilderInstance = new StringBuilder();
    for (/* some condition */) {
         strBuilderInstance.push([param, param, param].join('&'));
    }
    var result = strBuilderInstance.build();
    

    All the tests performed on jsPerf are performance tests :)
    Better to decide based on your use case, because "JavaScript is fast enough, but the DOM is taking all the time" -> Douglas Crockford

    Tuesday, March 13, 2012

    Is Javascript Pass By Reference or Pass By Value?

    JavaScript - Pass by Reference or Value?
    JavaScript types:
    string, number, boolean, null, undefined are JavaScript primitive types.
    functions, objects, arrays are JavaScript reference types.

    Difference?
    One of them is passed by value, the other by reference.

    I'm considering string from the primitive types and object from the reference types for this explanation.

    Try guessing the alerts in the following examples yourself before jumping to the answers.

    //Example 1
    function test(student) {
        student = 'XYZ';
        alert(student);
    }
    
    var a = 'ABC';
    test(a);
    alert(a); 
    
    //Example 2
    function test(student) {
        alert(student.name);
        student.marks = 10;
        student.name = 'XYZ';
    }
    
    var a = {name:'ABC'};
    test(a);
    alert(a.name);
    
    //Example 3
    function test() {
        var student;
        return {
         setter: function (a) {
             student = a;   
         },
         getter: function () {
             return student;
         },
         change: function () {
             student.name = 'XYZ';
         }
       }
    }
    
    
    var a = {name:'ABC'};
    var b = test();
    b.setter(a);
    a.name = 'DEF';
    alert(b.getter().name);
    b.change();
    alert(a.name);
    
    //Example 4
    function test() {
        var student;
        return {
         setter: function (a) {
             student = a;   
         },
         getter: function () {
             return student;
         },
         change: function () {
             student.name = 'XYZ';
         }
       }
    }
    
    var a = {name:'ABC'};
    var b = test();
    b.setter(a);
    a = {name:'DEF'};
    alert(b.getter().name);
    b.change();
    alert(a.name);
    

    Try reasoning out why, if you guessed wrong, before reading the explanation.

    //Example 1
    alert 1: XYZ
    alert 2: ABC
    
    //Example 2
    alert 1: ABC
    alert 2: XYZ
    
    //Example 3
    alert 1: DEF
    alert 2: XYZ
    
    //Example 4
    alert 1: ABC
    alert 2: DEF
    

    Reason

    Example 1:
    string is a primitive type & variables hold the values of primitive types.
    Primitives are passed by value in JavaScript.
    So, a change in 'student' will never affect 'a', and vice versa.

    Example 2:
    object is a reference type & variables hold a reference rather than the value for reference types.
    Reference types are passed by reference in JavaScript.
    Both 'a' & 'student' refer to the same object, hence a change via 'student' reflects on 'a'.
    **Remember: 'student' never points to 'a' itself; it follows the reference held by 'a' and refers to the same underlying object.

    Example 3:
    Reason: same as Example 2.
    I created this example to show that a change made via 'a' also reflects back in 'student'.

    Example 4:
    This example tests your real understanding.
    If you can reason this out now, then you can grade yourself as good on this topic.

    Although a's reference has changed to a new object, it didn't affect student's reference.
    Reason: the point to remember mentioned in Example 2.
    You have to understand this clearly :)

    Hope I have done some justice. The same can be shown for the other primitive and reference types :)

    You actually don't need function calls to prove these concepts.

    //Equivalent to Example 1
    var a = 10;
    var b = a;
    a = 20;
    console.log(a, b);
    
    //Equivalent to Example 2 & 3
    var a = {value:1};
    var b = a;
    a.name = 'xyz';
    b.value = 10;
    console.log(a, b, a===b);
    
    //Equivalent to Example 4
    var a = [1,2,3];
    var b = a;
    b[0] = 10;
    console.log(a, b, a===b);
    a = [5,6,7];
    console.log(a, b, a===b);
    

    But I used functions to explain the effects of 'passing' rather than 'assignment/copying' :)

    Tuesday, January 31, 2012

    CAPTCHA - A Revolution


    CAPTCHA = Completely Automated Public Turing test to tell Computers and Humans Apart

    What is a CAPTCHA?
    A system built by Luis von Ahn, Manuel Blum, Nicholas J. Hopper, and John Langford of CMU to make sure that the user active at the other end is a human and not a bot. This was initially done to prevent bots from entering Yahoo chat rooms and redirecting users to other sites.

    CAPTCHA - Reverse Turing Test:
    Yup, CAPTCHA is a reverse Turing test because it reverses the roles of computer and human. A computer is a device designed to perform what humans want it to. But in the case of CAPTCHA this is reversed: the test is completely automated, so the computer challenges you to perform some action to prove that you are a human.

    Initially [even now] a CAPTCHA was a distorted image with some characters in it, which makes bots' lives harder without troubling humans much.

    The next generation of CAPTCHA carried an audio link beside the distorted image to help visually challenged people.

    Although CAPTCHAs are automatically generated, they are breakable using techniques like OCR (Optical Character Recognition) or by understanding the underlying logic of the automation.

    And now, people have started their own implementations, including

    Mathematical CAPTCHA => What is 1 + 1?
    Image/Visual CAPTCHA => Who is Alice in the photo tagged with friends? [FB uses it to detect the legitimate user of an account]
    and so on

    But the real masterpiece is reCAPTCHA [powered by Google]

    [Image: a sample reCAPTCHA challenge showing the words "said allectst"]
    What is great about it?
    It is great because it knows the value of human time. A test that unites human power :)
    How?
    If you have noticed, any reCAPTCHA has two space-separated words.
    Consider the image shown above for example [said allectst].
    Where do these words come from?
    They come from the process of digitizing old text with OCR.
    Meaning?
    In order to generate a digital version [ex: pdf] of a book written before word processing tools came into existence, a technology is used which scans the book and takes a photocopy [image] of it. It then tries to recognize the characters using an image processing technique called OCR, and so digitizes the old text.
    What does this have to do with reCAPTCHA?
    OCR is an automated tool to recognize characters in an image. It is not guaranteed to recognize all characters without discrepancies. For ex., T can be interpreted as I depending on the font or the clarity of the image.

    So, what the reCAPTCHA people do is:
    Pick two words; one that was successfully recognized by OCR, said, and another that wasn't, allectst.
    Challenge the user with a CAPTCHA test.
    If the user answers the successfully recognized word [said] correctly, that confirms the user is a human. The other word is a kind of poll: the same unrecognized word is shown to a group of people [say 10].
    If out of 10, 7 [i.e., the majority] recognize allectst as allectst and the rest read it as alleestst, then the unrecognized word is taken to be allectst, as the majority agreed on it. And so a word in a book is digitized :)

    So, without your knowledge, you are helping digitize a book whenever you fill in a reCAPTCHA :) Be happy whenever you answer one, and proud to be united :)
    A book is being digitized whenever a user signs in to Facebook, Gmail, LinkedIn, etc.

    From the site
    About 200 million CAPTCHAs are solved by humans around the world every day. In each case, roughly ten seconds of human time are being spent. Individually, that's not a lot of time, but in aggregate these little puzzles consume more than 150,000 hours of work each day. What if we could make positive use of this human effort? reCAPTCHA does exactly that by channeling the effort spent solving CAPTCHAs online into "reading" books.

    visit this site to learn more and feel great :)

    Monday, January 9, 2012

    Nodejs Modules and Export Explained


    Enjoyed a week playing around with Node.js :) Let's share.

    Simple Node server [http://localhost:6666]:

    var http = require('http');

    var server = http.createServer(function (req, res) {
        // Do whatever you want
        res.writeHead(200, {'Content-Type': 'text/plain'});
        res.end('Running');
    }).listen(6666, function () {
        console.log('Node runs on port 6666');
    });

    How to install a node module?
    NPM is a powerful package manager for Node which you can use to install node modules.
    Ex:
    npm install redis

    The above command installs the redis module into the ./node_modules/redis directory.

    How to use a module?
    Use require() method

    Ex:
    require('redis')

    How will node resolve the modules?
    Consider

    /home/xxx/sample
     |___ index.js
     |___ node_modules
              |____ redis
                         |____ lib/
                            ...
              |____ my_module
                            |____ node_modules
                                  |____ first.js
                            |____ test.js


    //test.js
    var redis = require('redis');

    So Node will look for the redis module in

    1. my_module/node_modules/
    2. sample/node_modules/
    3. /home/xxx/node_modules/
    4. /home/node_modules/
    5. /node_modules/

    Also some global folders

    You can exploit this behaviour nicely if planned :)

    Will Node load the module every time I request it via require?
    No, it won't load the module every time you request it, unless the requested module resolves to a location different from the previously loaded one.
    Yes, Node caches the module.

    Cache means?? What gets cached?
    Uh uhhh... Let's get deeper into modules before answering that.

    How to write your own module?
    Simple... Let us write a module which performs lowercase-to-uppercase conversion.

    //simple.js
    var stringtoUpper = function (text) {
        return text.toUpperCase();
    };

    exports.toUpper = stringtoUpper;

    Done :)

    //test.js
    var utils = require('./simple');
    console.log(utils.toUpper('this is text'));

    //Execution
    node test.js
    >THIS IS TEXT

    What is that exports?
    It is whatever you wish to expose from a module to the module that requires it.
    It is shared between all requires of the current module.
    exports.toUpper is the same as module.exports.toUpper [exports === module.exports], which is a nice way to use it; module is the reference to the current module :)
    Also, you can name an export differently from its actual name, as in the above example: the actual function's name is stringtoUpper, but it is exported as toUpper.

    Consider the same simple.js, and instead of exports.toUpper = stringtoUpper, replace it with

    1.module.exports.toUpper = stringtoUpper;
    2.module.exports = stringtoUpper;
    3.module.exports = new stringtoUpper('this is a text');

    All three are different.

    module.exports is an object
    1 makes it {toUpper: [Function]}
    2 makes it [Function] // We can use this as a constructor function
    3 makes it an instance of the stringtoUpper class

    After var simple = require('./simple'); use the exports as follows:
    1 : simple.toUpper('This is a text')
    2 : simple('this is a text') or require('./simple')('this is a text') or new simple('this is a text')
    3 : You can access all public members. Here there are none.

    Methods 2 & 3 override the entire exports object, meaning whatever was attached earlier via exports will be overridden.
    Ex:
    module.exports.a = 10;
    module.exports.b = 20;
    module.exports = 20; // will override a & b
    Hope you see the reason: module.exports itself is an object, and members can be attached to it as module.exports.a and module.exports.b; but when module.exports = 20 happens, the whole object is replaced with the new value.
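    The override can be simulated outside Node by recreating the kind of module object Node hands each file (fakeModule is an illustrative stand-in, not real Node internals):

```javascript
// Each file in Node gets a module object roughly like this
var fakeModule = { exports: {} };

// Adding properties keeps the same exports object...
fakeModule.exports.a = 10;
fakeModule.exports.b = 20;

// ...but assigning to .exports rebinds it to a brand new value,
// discarding everything attached before
fakeModule.exports = 20;

console.log(fakeModule.exports); // 20 — a and b are gone
```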

    Now let's come back to caching.

    Node caches the exports of a module when it is loaded for the first time.

    1 & 2 are barely affected by caching: they just export functions, which can be called any number of times with different parameters.

    But 3 is different. The exported object is cached, so however many times you load the module after the first time, no new instantiation takes place [kind of a singleton], because you are exporting only one object of the class.

    So, if you are looking for a stateful implementation shared across modules, go for 3; else go for 1 or 2.
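    The caching behaviour behind pattern 3 can be sketched in a single file by mimicking Node's module cache (Counter and fakeRequire are illustrative names, not Node APIs):

```javascript
function Counter() { this.n = 0; }
Counter.prototype.incr = function () { return ++this.n; };

// Node keeps a cache keyed by the resolved filename; a require() after the
// first one returns the cached exports instead of re-running the module
var cache = {};
function fakeRequire(name, factory) {
  if (!(name in cache)) cache[name] = factory();
  return cache[name];
}

// Pattern 3: module.exports = new Counter(); — effectively a singleton
var a = fakeRequire('counter', function () { return new Counter(); });
var b = fakeRequire('counter', function () { return new Counter(); });
a.incr();
console.log(b.n); // 1 — both requires share one instance
```

This is why mutating the exported object in one module is visible everywhere else that requires it.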

    Happy Node :) Hope this helps :)