Monday, May 30, 2016

Ruminating on IoT datastores

The most popular data-store choice for storing high volumes of IoT sensor data is a NoSQL time-series database. 

The following link contains a good list of NoSQL time-series databases that can be used in an IoT project. We have worked with both OpenTSDB and KairosDB and found both of them to be enterprise grade. 
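
To give a feel for how simple ingestion is, here is a minimal sketch (in Python, using the requests library) of pushing a single data point into KairosDB over its REST API. The host, metric name, tags and value are placeholders for illustration, not taken from any particular project.

import time
import requests

# Hypothetical metric and tags; KairosDB expects at least one tag per metric.
datapoints = [{
    "name": "sensor.temperature",
    "datapoints": [[int(time.time() * 1000), 23.5]],  # [timestamp in ms, value]
    "tags": {"device": "thermostat-01", "site": "plant-a"},
}]

# KairosDB's REST endpoint for ingesting data points (default port 8080).
resp = requests.post("http://localhost:8080/api/v1/datapoints", json=datapoints)
resp.raise_for_status()  # KairosDB returns 204 No Content on success

OpenTSDB exposes a very similar HTTP endpoint (/api/put), so switching between the two mostly means changing the URL and payload shape.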



Tuesday, May 24, 2016

Ruminating on Power BI

We were building our dashboard using Power BI and were looking at the various options available to refresh the data.

The following link gives a good overview of the various data refresh options - https://powerbi.microsoft.com/en-us/documentation/powerbi-refresh-data/#databases-in-the-cloud

It is also possible to pump live streaming data into Power BI using its REST APIs - https://powerbi.microsoft.com/en-us/documentation/powerbi-developer-overview-of-power-bi-rest-api/
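
As a small illustration of the push model, the sketch below (Python, requests) adds rows to a push-enabled dataset through the Power BI REST "Add Rows" API. The access token, dataset ID, table name and row fields are placeholders - you would obtain the token from Azure AD and create the dataset beforehand.

import requests

# Placeholders: a real call needs an Azure AD access token and a
# push-enabled dataset/table created beforehand.
ACCESS_TOKEN = "<azure-ad-access-token>"
DATASET_ID = "<dataset-id>"
TABLE_NAME = "RealTimeData"

url = ("https://api.powerbi.com/v1.0/myorg/datasets/"
       + DATASET_ID + "/tables/" + TABLE_NAME + "/rows")

rows = {"rows": [{"deviceId": "sensor-01",
                  "temperature": 23.5,
                  "timestamp": "2016-05-24T10:00:00Z"}]}

resp = requests.post(url, json=rows,
                     headers={"Authorization": "Bearer " + ACCESS_TOKEN})
resp.raise_for_status()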

But we were a bit concerned about the 10 GB dataset size limit in the Power BI Pro version. Reza Rad has written an excellent article on this - http://www.radacad.com/step-beyond-the-10gb-limitation-of-power-bi

Essentially in Power BI, you have two options - either import the entire dataset into memory OR establish a live connection between Power BI and your data-source.

Power BI uses some nifty compression techniques for all data that is imported into it - Reza observed an 800 MB file compress down to an 8 MB Power BI file. Hence, for all practical purposes, the 10 GB limit should suffice for most use-cases.
If you are working with very large volumes of data (tens of GBs, TBs, PBs), then a live connection to the data source is the only option.

Some snippets from Reza's article:
"Live connection won’t import data into the model in Power BI. Live connection brings the metadata and data structure into Power BI, and then you can visualize data based on that. With every visualization, a query will be sent to the data source and brings the response.

Limitations of Live Connection - 
1. With Live connection, there won’t be any Data tab in Power BI to create calculated measures, columns or tables. You have to create all calculations at the data source level.
2. Multiple Data Sources is not supported.
3. No Power Q&A
4. Power Query still is available with Live Connection. This gives you ability to join tables, flatten them if you require, apply data transformation and prepare the data as you want. Power Query can also set the data types in a way that be more familiar for the Power BI model to understand.
5. You need to do proper index and query optimization at data-source."

Monday, May 23, 2016

API keys providing a false sense of security!

We have seen so many API implementations wherein an API key is the only thing used to secure APIs. API keys are typically long alphanumeric strings that give a false sense of security.

The entire onus of protecting that key and making only SSL requests lies with the API consumer. This is very concerning, since it rarely happens. We have decompiled Android APKs and found API keys stored in config files. We have also seen API keys checked into source-control systems such as GitHub :)

Kristopher Sandoval has written an excellent blog post on the prevalent practice of using API keys to secure APIs.

We must not rely solely on API keys to secure our APIs, but rather use open standards such as OAuth 2, OpenID Connect, etc. to secure access to them.

Snippets from the article (http://nordicapis.com/why-api-keys-are-not-enough/) -

"Most developers utilize API Keys as a method of authentication or authorization, but the API Key was only ever meant to serve as identification.
API Keys are best for two things: identification and analytics (API metrics).

If an API is limited specifically in functionality where “read” is the only possible command, an API Key can be an adequate solution. Without the need to edit, modify, or delete, security is a lower concern."
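
To move beyond a static key, the usual pattern is an OAuth 2 flow in which the client exchanges its credentials for a short-lived, scoped access token. Below is a minimal sketch of the client-credentials grant in Python; the token endpoint, client ID/secret, scope and API URL are all hypothetical placeholders.

import requests

# Hypothetical authorization server and credentials.
TOKEN_URL = "https://auth.example.com/oauth2/token"
CLIENT_ID = "my-service"
CLIENT_SECRET = "<client-secret>"

# 1. The client authenticates itself and asks for a short-lived access token.
token_resp = requests.post(TOKEN_URL,
                           data={"grant_type": "client_credentials",
                                 "scope": "orders:read"},
                           auth=(CLIENT_ID, CLIENT_SECRET))
token_resp.raise_for_status()
access_token = token_resp.json()["access_token"]

# 2. The token (not a static API key) is sent as a bearer credential; it
#    expires and can be scoped, revoked and audited on the server side.
api_resp = requests.get("https://api.example.com/orders",
                        headers={"Authorization": "Bearer " + access_token})
print(api_resp.status_code)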

Another great article by Nordic APIs covers the core concepts of authentication, authorization, federation and delegation - http://nordicapis.com/api-security-the-4-defenses-of-the-api-stronghold/
A follow-up article demonstrates how these four core concepts can be implemented using the OAuth and OpenID Connect protocols - http://nordicapis.com/api-security-oauth-openid-connect-depth/



Serverless Options for Mobile Apps

A lot of MBaaS platforms today provide mobile developers with tools that enable them to quickly roll out mobile apps without worrying about the backend.
In a traditional development project, we would first have to build the backend storage DB, develop the APIs and then build the mobile app.

But if you are looking for a quick go-to-market approach, then you can use the following options:

  • Google Firebase Platform - Developers can use the Firebase SDK and work directly with JSON objects. All data is stored on (and synced with) the server automatically, with no need to write any server-side code. REST APIs are also available to access the data from the server for other purposes (see the sketch after this list). 
  • AWS MBaaS: The AWS Mobile SDK provides libraries for working with DynamoDB (the AWS NoSQL store). The developer just uses the DynamoDB object mapper to map objects to table attributes. Again, there is no need to write server-side code - everything is handled automatically. 
  • Other open source MBaaS platforms such as BaasBox, Convertigo, etc. 
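
As a quick illustration of the Firebase option above, the data can also be read and written over plain REST without any SDK. The sketch below (Python, requests) assumes a hypothetical Firebase project whose database rules permit the access; in a real app you would append an auth token to each request.

import requests

# Hypothetical Firebase Realtime Database URL.
BASE_URL = "https://my-demo-project.firebaseio.com"

# POST appends the JSON object under /messages with an auto-generated key,
# much like the SDK's push() would do.
resp = requests.post(BASE_URL + "/messages.json",
                     json={"user": "alice", "text": "hello from REST"})
resp.raise_for_status()
print(resp.json())  # e.g. {"name": "-KI8..."} - the generated key

# GET reads the same data back as plain JSON.
print(requests.get(BASE_URL + "/messages.json").json())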

Open Source API Management Tools

For folks who are interested in setting up their own API management tools, given below are a few options:

HTTP proxy tools for capturing network traffic

In the past, we had used tools such as Fiddler and Wireshark to analyse the network traffic between clients and servers. But these tools need to be installed on the machine, and within corporate networks this would entail taking the proper InfoSec approvals.

If you are looking for a nifty network traffic capture tool that does not need installation - then 'TcpCatcher' is a good option. It is a simple jar file that can run on any machine having Java.

Whenever we are using such proxy tools, we have two options -
1. Change the client to point to the IP of the tool instead of the server; the tool then forwards the request to the server. (Explicit man-in-the-middle)
2. Configure the tool's IP as a proxy in your browser. (Implicit man-in-the-middle)
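
For scripted (non-browser) clients, the second option simply means pointing the HTTP library at the proxy. A minimal sketch in Python, assuming an intercepting proxy listening on 127.0.0.1:8080 and its exported root certificate saved locally (the paths and URL are placeholders):

import requests

# Route both HTTP and HTTPS traffic through the intercepting proxy.
proxies = {
    "http": "http://127.0.0.1:8080",
    "https": "http://127.0.0.1:8080",
}

# Trust the proxy's root certificate so HTTPS traffic can be decrypted
# and logged by the tool (otherwise certificate verification will fail).
resp = requests.get("https://api.example.com/ping",
                    proxies=proxies,
                    verify="/path/to/proxy-root-ca.pem")
print(resp.status_code)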

Update: 25May2016
The TcpCatcher jar started behaving strangely today with an alert stating - "This version of TcpCatcher has expired. Please download the latest version". We had the latest version, so it looks like this is a bug in the tool.

We moved on to the Burp Suite free edition. This tool is also available as a jar file and can run on any machine having Java. There is an excellent article by Oleg Nikiforov that explains how to set up the Burp proxy and use it to intercept all HTTP requests. You can also download its root certificate and install it on your machine or mobile phone to log all HTTPS traffic.
We could set up Burp in under 20 minutes to monitor all HTTPS traffic between our mobile apps and APIs.

Friday, May 20, 2016

Utilizing Azure AD for B2C mobile apps

We had successfully utilized Azure Active Directory for authentication of enterprise mobile apps. But can Azure AD be used for B2C apps? The answer is YES - Microsoft has released a preview version of Azure AD B2C that can be used for all customer-facing apps.

In a regular Azure AD tenant, each user has to sign in with a long user ID of the form {name}@{tenant}.onmicrosoft.com. This is not feasible for B2C apps; hence, in Azure AD B2C it is possible to log in with any email address - even plain usernames are supported. These accounts are called Local Accounts in Azure AD B2C. Social identity logins are also supported - e.g. Facebook, Google+, LinkedIn and Amazon.

For more details on Azure AD B2C please refer to the following links:

https://azure.microsoft.com/en-in/documentation/articles/active-directory-b2c-faqs/

https://azure.microsoft.com/en-in/services/active-directory-b2c/



Thursday, May 12, 2016

Fundamentals of NFC communication

NFC communication happens through the exchange of NDEF (NFC Data Exchange Format) messages. An NDEF message is a binary-format message consisting of a set of records, with each record containing a header and a payload.

The 'Beginning NFC' book on Safari is an excellent source for getting your basics right - https://www.safaribooksonline.com/library/view/beginning-nfc/9781449324094/ch04.html
I would highly recommend buying this book.

I always wanted to know the maximum length of an NDEF message, and it is answered in the above book as follows:

"In theory, there is no limit to the length of an NDEF message. In practice, the capabilities of your devices and tags define your limits. If you’re exchanging peer-to-peer messages between devices and no tags are involved, your NDEF messages are limited only by the computational capacity of your devices, and the patience of the person holding the two devices together. If you’re communicating between a device and a tag, however, your messages are limited by the tag’s memory capacity.

NDEF record payloads are limited in size to 2^32–1 bytes long, which is why the payload length field of the header is four bytes (or 32 bits).

It’s not a protocol designed for long exchanges because the devices need to be held literally in contact with each other."
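
To make the record structure concrete, here is a minimal sketch that encodes and decodes a one-record NDEF message using the Python 'ndeflib' package (an assumption on my part - any NDEF library would do):

import ndef  # the 'ndeflib' package

# Build a one-record NDEF message (a well-known Text record) and encode it
# into the binary wire format: a header plus payload for each record.
records = [ndef.TextRecord("Hello NFC", "en")]
octets = b"".join(ndef.message_encoder(records))
print(octets.hex())  # the raw bytes you would write to a tag or beam to a peer

# Decode the same bytes back into record objects.
for record in ndef.message_decoder(octets):
    print(record.type, record.text)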

Wednesday, May 11, 2016

Ruminating on JWT

JWT (JSON Web Token) has gained a lot of traction in the past couple of years and is slowly becoming the standard choice for all authentication and authorization communication.

The best way to learn about JWT is to head straight to their site - https://jwt.io/introduction/
I was impressed with the quality of the documentation. Core concepts are explained in simple and lucid language. It took me days to understand SAML, whereas I could grasp even the complex concepts of JWT in minutes :)
Also, we can store all authorization claims in the JWT payload, reducing the need to make another database call to check authorization access levels.
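
As a small illustration, the sketch below uses the PyJWT library to issue a signed token carrying role claims and to verify it on the API side. The secret, subject and role names are made up for the example.

import time
import jwt  # PyJWT

SECRET = "shared-signing-secret"  # placeholder; use a strong key in practice

# Issue a signed (not encrypted) token carrying authorization claims, so the
# API can check access levels without an extra database lookup.
token = jwt.encode(
    {"sub": "user-123",
     "roles": ["claims:read", "claims:approve"],
     "exp": int(time.time()) + 900},  # expires in 15 minutes
    SECRET, algorithm="HS256")

# On the API side: verify the signature and expiry, then read the claims.
claims = jwt.decode(token, SECRET, algorithms=["HS256"])
print(claims["roles"])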

But it is important to note that the JWT specification does not talk about encrypting the payload - that is out of scope of the specification. You can encrypt the payload if you want to, but you would need to control both the client and server code - i.e. the JWT encoding/decoding libraries.

Since the JWT payload is not encrypted, it is of the utmost importance that JWTs are passed over TLS (HTTPS). Eran Hammer has written a good blog post on the perils of using a bearer token without TLS. A bearer token is called so because the 'bearer' - i.e. whoever holds the token - is given all the rights that the token specifies. A good analogy is 'cash' - whoever has the cash can spend it, irrespective of who its rightful owner is.

Ruminating on Biometric security

Fingerprint scanners are becoming ubiquitous in many smartphones. There are also a few other pure software biometric solutions that are gaining traction in the market. Jotting down a few references.

http://www.eyeverify.com/  - EyeVerify maps the unique veins (blood vessels) and other micro-features in and around your eyes to create a digital key (eye-print) equivalent to a 50-character complex password. They claim to be more than 99.99% accurate, and it can work with any existing 1+ MP (megapixel) smart device camera!

Nuance Vocal Password: A user's voice is analyzed for hundreds of unique characteristics that are then compared to the voiceprint on file. 

Monday, May 09, 2016

Ruminating on browser fingerprinting

I was aware of sites using first-party and third-party cookies to track user activity on the web. But the use of browser fingerprinting to uniquely identify a user was quite intriguing.

The following sites can tell you the various unique characteristics of your browser environment that can be tracked by websites.

https://amiunique.org/

https://panopticlick.eff.org/

Browser fingerprinting entails collecting information like your user-agent, IP address, installed plug-ins and their version numbers, timezone, screen resolution, screen size/color depth, installed fonts, etc.
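
As a toy illustration of the idea, the sketch below hashes a handful of attributes that a server can observe on every request into a stable identifier. Real fingerprinting scripts additionally gather client-side signals (fonts, plugins, canvas rendering, screen size) via JavaScript; this server-only version is deliberately simplified.

import hashlib

def server_side_fingerprint(ip, headers):
    # Combine a few request attributes that rarely change for a given browser.
    parts = [
        ip,
        headers.get("User-Agent", ""),
        headers.get("Accept-Language", ""),
        headers.get("Accept-Encoding", ""),
    ]
    return hashlib.sha256("|".join(parts).encode("utf-8")).hexdigest()

print(server_side_fingerprint(
    "203.0.113.7",
    {"User-Agent": "Mozilla/5.0 ...", "Accept-Language": "en-US,en;q=0.8"}))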

Looks like the only way you can be absolutely sure that you are not being tracked is by using the Tor browser :)

Tuesday, May 03, 2016

Ruminating on User Journey Maps

Creating user journey maps is an integral part of any UX or design-thinking process. There are many ways in which you can create a user journey map. The link below serves as guidance on the different approaches one can take to illustrate customer journey maps.

http://www.joycehostyn.com/blog/2010/03/22/visualizing-the-customer-experience-using-customer-experience-journey-maps/

I liked the sample journey maps created by Lego and Starbucks.

  • Before creating a user journey map, you have to define personas - based on customer segmentation and personality types. 
  • Then identify the customer experience journeys that you want to illustrate for each persona - e.g. the transactional process of buying car insurance, the lifetime journey of an insurance customer, etc. 
  • Each journey is then broken down into various stages or phases that the customer goes through.
  • For each step, identify the customer's emotions (e.g. positive, negative, neutral) and think about improving the customer experience - making it a 'wow' moment. 

Joyce also has a great presentation on SlideShare that shows many examples of customer journey maps and how they can be used to create superior customer experiences. My personal favourite was the example below - a simple yet powerful tool for creating wow moments for your customers.


There is another great blog post by ThoughtWorks on facilitating collaborative design workshops.