Monday, April 14, 2014

Updating content in an iOS app

Any mobile app needs a design strategy for updating content from the server. We were exploring multiple options for retrieving new content and updating the local cache in our iOS app.
After considerable research, we found that iOS 7 provides a very neat design for background fetching of new content - one that uses silent push notifications to raise events in the client app. Even if the app is not running, it is launched in the background (with the UI invisible, rendered off-screen) to process the event. The following article gives a very good overview of the technique.

Some snippets from the above article:

A Remote Notification is really just a normal Push Notification with the content-available flag set. You might send a push with an alert message informing the user that something has happened, while you update the UI in the background. 
But Remote Notifications can also be silent, containing no alert message or sound, used only to update your app’s interface or trigger background work. You might then post a local notification when you’ve finished downloading or processing the new content. 

 iOS 7 adds a new application delegate method, which is called when a push notification with the content-available key is received. Again, the app is launched into the background and given 30 seconds to fetch new content and update its UI, before calling the completion handler.

How is the App launched in the background? 
If your app is currently suspended, the system will wake it before calling application:performFetchWithCompletionHandler:. If your app is not running, the system will launch it, calling the usual delegate methods, including application:didFinishLaunchingWithOptions:. You can think of it as the app running exactly the same way as if the user had launched it from Springboard, except the UI is invisible, rendered offscreen.
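
For reference, a minimal silent push payload looks something like the snippet below. The 'aps' dictionary carrying the content-available flag is the standard part; the 'content-id' key is just a hypothetical custom field that the app could use to decide what to fetch.

{
  "aps" : {
    "content-available" : 1
  },
  "content-id" : 42
}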

Thursday, March 20, 2014

Mobile Device Management Products

My team was helping our organization evaluate different Mobile Device Management (MDM) tools for an enterprise-level deployment. The following 2 articles are an excellent read for understanding the various features provided by MDM tools and how the products compare to each other.

http://www.computerworld.com/s/article/9245614/How_to_choose_the_right_enterprise_mobility_management_tool

http://www.computerworld.com/s/article/9238981/MDM_tools_Features_and_functions_compare

Monday, March 17, 2014

Ruminating on Distributed Logging

Of late, we have been experimenting with different frameworks available for distributed logging. I recollect that a decade back, I had written my own rudimentary distributed logging solution :)

To better appreciate the benefits of distributed log collection, it's important to visualize logs as streams and not files, as explained in this article.
The most promising frameworks we have experimented with are:

  1. Logstash: Logstash combined with ElasticSearch and Kibana gives us a cool OOTB solution. Also, Logstash runs on the Java platform (JVM) and was very easy to set up and start running. 
  2. Fluentd: Another cool framework for distributed logging. A good comparison between Logstash and Fluentd is available here.
  3. Splunk: The most popular commercial tool for log management and data analytics. 
  4. GrayLog: A new kid on the block. Uses ElasticSearch. Need to keep a watch on this. 
  5. Flume: Flume's main goal is to deliver data from applications to Apache Hadoop's HDFS. It has a simple and flexible architecture based on streaming data flows.
  6. Scribe: Scribe is written in C++ and uses Thrift for the protocol encoding. This project was released as open-source by Facebook.  

Sunday, March 16, 2014

Calling a secure HTTPS webservice from a Java client

Over the last decade, I have seen so many developers struggle with digital certificates when they have to call a secure webservice. A lot of confusion arises when a secure HTTPS webservice call is made from a servlet running in Tomcat. This is because the exception stack shows an SSLHandshakeException, and developers then keep fiddling with the Tomcat connector configuration as stated here.

But when we make a connection to a secure server, what we need is to trust the digital certificate of the server. If the digital certificate of the server has been signed by a trusted root authority such as 'Verisign' or 'eTrust', then the default Java trust store would automatically validate it. But if the server has a self-signed certificate, then we have to add the server's digital certificate to the trust store.

There are multiple ways of doing this. A long time ago, I had blogged about one option that entails setting the Java system properties. This can be done through code or by setting the Java properties of the JVM during startup. For example:


System.setProperty("javax.net.ssl.trustStore", trustFilename );
System.setProperty("javax.net.ssl.trustStorePassword", "changeit") ;



Different AppServers (WebSphere, Weblogic, etc.) may provide different ways to add certs to the trust store.

Another option is to create a cert-store (filename: jssecacerts) that contains the digital cert of the server and copy that cert-store file to the "$JAVA_HOME/jre/lib/security" folder. There is also a nifty program called InstallCert.java that downloads the certificate and creates the cert-store file. A good tutorial on the same is available here.
I have also created a mirror of InstallCert.java here. This program can be run without any dependencies on external libraries and I have found it to be very handy.

So what is the difference between setting the TrustStore system property and adding the jssecacerts file?
Well, the documentation of JSSE should help our understanding here. The TrustManager performs the following steps to search for trusted certs:
1.  system property javax.net.ssl.trustStore
2.  $JAVA_HOME/lib/security/jssecacerts
3. $JAVA_HOME/lib/security/cacerts (shipped by default)

It's important to note that if the TrustManager finds the jssecacerts file, then it would not read the cacerts file! Hence it may be a better option to add the server's digital cert to the cacerts keystore file. To add a certificate to a keystore, there is a nice GUI program called portecle. Alternatively, do it from the command prompt using the keytool command as stated here - a sample command is given below.
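
For example, importing a self-signed server certificate into the default cacerts keystore is a one-line keytool command (the alias and certificate file below are illustrative; 'changeit' is the default keystore password):

keytool -import -trustcacerts -alias myserver -file server.crt -keystore "$JAVA_HOME/jre/lib/security/cacerts" -storepass changeit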

Wednesday, February 19, 2014

MongoDB support for geo-location data

It was interesting to learn that FourSquare uses MongoDB to store all its geo-spatial data. It was also enlightening to see the JSON standards for expressing GPS coordinates and other geo-spatial data. The following links give quick information about GeoJSON and TopoJSON.

http://en.wikipedia.org/wiki/GeoJSON
http://en.wikipedia.org/wiki/TopoJSON
http://bost.ocks.org/mike/topology/
https://github.com/mbostock/topojson/blob/master/README.md

MongoDB supports the GeoJSON format and allows us to easily build location-aware applications. For example, using MongoDB's geospatial query operators, you can:
  • Query for locations contained entirely within a specified polygon.
  • Query for locations that intersect with a specified geometry. We can supply a point, line, or polygon, and any document that intersects with the supplied geometry will be returned.
  • Query for the points nearest to another point.
As you can see, these queries make it very easy to find documents (JSON objects) that are near a given point or that lie within a given polygon. A small sketch using the Java driver is given below.
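
The following is a minimal sketch using the MongoDB Java driver (legacy API). The collection name, the 'location' field - assumed to hold a GeoJSON point and to have a 2dsphere index - and the coordinates are all illustrative.

import java.util.Arrays;
import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.DBCursor;
import com.mongodb.DBObject;
import com.mongodb.MongoClient;

public class GeoQueryExample {
    public static void main(String[] args) throws Exception {
        MongoClient client = new MongoClient("localhost", 27017);
        DBCollection places = client.getDB("test").getCollection("places");

        // A GeoJSON point: coordinates are [longitude, latitude]
        BasicDBObject point = new BasicDBObject("type", "Point")
                .append("coordinates", Arrays.asList(-73.9857, 40.7484));

        // Find documents whose 'location' lies within 500 metres of the point
        // (requires a 2dsphere index on the 'location' field)
        BasicDBObject near = new BasicDBObject("$geometry", point).append("$maxDistance", 500);
        DBObject query = new BasicDBObject("location", new BasicDBObject("$near", near));

        DBCursor cursor = places.find(query);
        while (cursor.hasNext()) {
            System.out.println(cursor.next());
        }
        client.close();
    }
}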

Ruminating on SSL handshake using Netty

I was again amazed at the simplicity of using Netty for HTTPS (SSL) authentication. The following samples are a good starting point for anyone interested in writing an HTTP server supporting SSL.

http://netty.io/5.0/xref/io/netty/example/http/snoop/package-summary.html

Adding support for 2-way SSL authentication (aka mutual authentication) is also very simple. Here are some hints on how to get this done, followed by a small sketch.

http://stackoverflow.com/questions/9573894/set-up-netty-with-2-way-ssl-handsake-client-and-server-certificate
http://maxrohde.com/2013/09/07/setting-up-ssl-with-netty/
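
As a rough sketch (the keystore path and password are placeholders, not a definitive implementation), SSL support in Netty 4 boils down to creating an SSLEngine from a JDK SSLContext and putting Netty's SslHandler first in the pipeline. For 2-way SSL, you would additionally call setNeedClientAuth(true) on the engine and initialize the SSLContext with a TrustManager that trusts the client certificates.

import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.http.HttpServerCodec;
import io.netty.handler.ssl.SslHandler;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;
import java.io.FileInputStream;
import java.security.KeyStore;

public class HttpsServerInitializer extends ChannelInitializer<SocketChannel> {

    private final SSLContext sslContext;

    public HttpsServerInitializer() throws Exception {
        // Load the server's private key and certificate from a JKS keystore
        KeyStore keyStore = KeyStore.getInstance("JKS");
        keyStore.load(new FileInputStream("server-keystore.jks"), "changeit".toCharArray());

        KeyManagerFactory kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(keyStore, "changeit".toCharArray());

        sslContext = SSLContext.getInstance("TLS");
        sslContext.init(kmf.getKeyManagers(), null, null);
    }

    @Override
    protected void initChannel(SocketChannel ch) {
        SSLEngine engine = sslContext.createSSLEngine();
        engine.setUseClientMode(false);      // we are the server
        // engine.setNeedClientAuth(true);   // uncomment for 2-way (mutual) SSL authentication
        ch.pipeline().addLast("ssl", new SslHandler(engine));   // SSL handler must come first
        ch.pipeline().addLast("codec", new HttpServerCodec());
        // add your own HTTP request handler after the codec
    }
}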

Tuesday, February 18, 2014

Ruminating on IoT Security

The Internet of Things (IoT) is going to be the next big investment area for many organizations, as the value of real-time data keeps increasing in this hyper-competitive world.

Cisco's CEO states that IoT is going to be a $19 trillion opportunity. Coca-Cola has blocked 16 million MAC addresses that would be used for its smart vending machines. In the Healthcare space, increasing survival rates have resulted in an aging population. With age comes chronic illness, and to address this, Payers and Providers are investing in home-care appliances to monitor patient data - heart/pulse rate, temperature, glucose level, etc.

The next couple of years will see a plethora of smart devices connected to the internet, and this presents a very challenging security problem. Poorly secured smart devices have already been hacked and compromised. Fridges have been used to send spam messages. Smart TVs have been found spying on viewers' home networks.

One of the fundamental non-technical challenges is that manufacturers of IoT devices have very little incentive to invest in patching old sensors/devices for security. The supply chain runs from the MCU (micro-controller) manufacturer to the OEM, and these folks are busy releasing new versions of their products and do not have the time and energy to patch their old products for security. A very good article describing this is available here.

On the technology front, the challenge is that sensor devices have limited resources for implementing industry-standard encryption techniques. A typical sensor (MCU - Micro Controller Unit) would have just a 32 MHz processor and 256 KB of memory. Also, most sensor systems are proprietary and closed, making it difficult to patch security updates in an open way.

There is no easy solution to the above problem. The only hope is that the newer MCUs developed for IoT devices have enough processing power to support digital certificates. Using digital certificates enables us to satisfy all facets of enterprise security - a) Authentication b) Authorization c) Integrity d) Confidentiality e) Non-Repudiation.
Verizon has a comprehensive suite of products for enabling IoT security using PKI/digital certificate technologies. 

Monday, February 17, 2014

How are hash collisions handled in a HashMap/HashTable?

In my previous post, I had explained how the HashMap is the basic data structure used in many popular big data stores, including Hadoop HBase. But how is a HashMap actually implemented?

The following links give a very good explanation of the internal working of a HashMap.
http://java.dzone.com/articles/hashmap-internal
http://howtodoinjava.com/2012/10/09/how-hashmap-works-in-java/

Whenever a key/value pair is added to the HashMap data structure, the key is hashed. The hash of the key determines the 'bucket' (array index) in which the value is stored.
So what happens if there is a hash collision? The answer is that each bucket is actually implemented as a LinkedList, and the values are chained in that list. So for keys with the same hash, a linear search is done within the LinkedList, comparing keys using equals(). A simplified sketch is given below.
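
To make this concrete, here is a highly simplified, purely illustrative map that chains colliding entries in a LinkedList per bucket - the real java.util.HashMap is of course far more sophisticated (resizing, hash spreading, etc.):

import java.util.LinkedList;

public class SimpleHashMap<K, V> {

    private static class Entry<K, V> {
        final K key;
        V value;
        Entry(K key, V value) { this.key = key; this.value = value; }
    }

    // Each bucket holds a linked list of entries whose keys hash to the same index
    @SuppressWarnings("unchecked")
    private final LinkedList<Entry<K, V>>[] buckets = new LinkedList[16];

    private int indexFor(K key) {
        // The hash of the key decides which bucket (array index) is used
        return (key.hashCode() & 0x7fffffff) % buckets.length;
    }

    public void put(K key, V value) {
        int index = indexFor(key);
        if (buckets[index] == null) {
            buckets[index] = new LinkedList<Entry<K, V>>();
        }
        for (Entry<K, V> e : buckets[index]) {
            if (e.key.equals(key)) {   // same key -> overwrite the value
                e.value = value;
                return;
            }
        }
        buckets[index].add(new Entry<K, V>(key, value));   // hash collision -> chain a new entry
    }

    public V get(K key) {
        LinkedList<Entry<K, V>> bucket = buckets[indexFor(key)];
        if (bucket == null) {
            return null;
        }
        for (Entry<K, V> e : bucket) {   // linear search within the bucket
            if (e.key.equals(key)) {
                return e.value;
            }
        }
        return null;
    }
}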

HashSet vs HashMap

Today, one of my developers was using a HashMap to store a white-list of IP addresses. He was storing the IP address as both the key and the value.
I told him about the HashSet class that can also be used to store a white-list of elements; similar to a HashMap, a HashSet also provides O(1) time complexity for contains() lookups.

We were a bit intrigued about how a HashSet actually works, so we looked into the source code. Much to our surprise, a HashSet actually uses a HashMap internally. For every item we add to the set, it is added as a key to the internal HashMap, with a dummy object as the value, as given below.

// Dummy value to associate with an Object in the backing Map
private static final Object PRESENT = new Object();
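
The add() method then simply delegates to the backing map (paraphrased from the same JDK source):

public boolean add(E e) {
    // true only if the element was not already present as a key
    return map.put(e, PRESENT) == null;
}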

In theory, a Set and Map are separate concepts. A set cannot contain duplicates and is unordered, whereas a Map is a key/value pair collection.

Besides the HashSet, there are 2 more Set implementations that are useful. The first is TreeSet, which sorts the elements as you store them in the collection. The second is LinkedHashSet, which maintains the insertion order. 

Wednesday, February 12, 2014

Ruminating on Star Schema

Sharing a good YouTube video on the basic concepts of a Star Schema. Definitely worth a perusal for anyone wanting a primer on DW schemas.



The data in the dimension tables can also change, albeit less frequently. For example, a state changes its name, or a customer changes his last name. These are known as 'slowly changing dimensions'. To support slowly changing dimensions, we would need to add timestamp columns to the dimension table. A good video explaining these concepts is available below:


Ruminating on Column Oriented Data Stores

There is some confusion in the market regarding column-oriented databases (aka column-based). Due to the popularity of NoSQL stores such as HBase and Cassandra, many folks assume that column-oriented stores are only for unstructured big data. The fact is that column-oriented tables are available in both SQL (RDBMS) and NoSQL stores.

Let's look at SQL RDBMS first to understand the difference between row-based and column-based tables.
First and foremost, it is important to understand that whether the RDBMS is column or row-oriented is a physical storage implementation detail of the database. There is NO difference in the way we query against these databases using SQL or MDX (for multi-dimension querying on cubes).

In a row-based table, all the attributes (fields of a row) are stored side-by-side in a linear fashion. In a column-based table, all the data in a particular column is stored side-by-side. An excellent overview of this concept with illustrative examples is given on Wikipedia. The following illustration should clear the concept easily.


For any query, the most expensive operations are hard disk seeks. As you can see from the above diagram, certain queries run faster depending on the type of table storage. Column-oriented tables are more efficient when aggregates (or other calculations) need to be computed over many rows, but only for a notably smaller subset of all columns of data. Hence column-oriented databases are popular for OLAP workloads, whereas row-oriented databases are popular for OLTP workloads.
What's interesting to note is that with the advent of in-memory databases, disk seek time is no longer a constraint. But large DWs contain petabytes of data and all of it cannot be kept in memory.

A few years back, the world was divided between column-based databases and row-based databases. But slowly, almost all database vendors are giving the flexibility to store data in either row-based or column-based tables in the same database - for example, SAP HANA and Oracle 12c. All these databases are also adding in-memory capabilities, thus boosting performance many-fold.

Now in the case of NoSQL stores, the concept of column-oriented is more-or-less the same. But the difference is that stores such as HBase allow us to have a different number of columns for each row. More information on the internal schema of HBase can be found here.

Ruminating on BSON vs JSON

MongoDB uses BSON as the default format to store documents in collections. BSON is a binary-encoded serialization of JSON-like documents. But what are the advantages of BSON over the ubiquitous JSON format?

In terms of space (memory footprint), BSON is generally more efficient than JSON, but it need not always be so. For example, integers are stored as 32 (or 64) bit integers, so they don't need to be parsed to and from text. This uses more space than JSON for small integers.

The primary goal of BSON is to enable very fast traversability, i.e. faster processing. BSON adds extra information to documents, e.g. length prefixes, that makes them easy and fast to traverse. Due to this, the BSON parser can parse through the documents at blinding speed.

Thus BSON sacrifices space efficiency for processing efficiency (a.k.a traversability). This is a fair trade-off for MongoDB where speed is the primary concern. 

Ruminating on HBase datastore

While explaining the concept of HBase to my colleagues, I have observed that folks that do not have the baggage of traditional expertise on RDBMS/DW are able to understand the fundamental concepts of HBase much faster than others. Application developers have been using data structures (collections) such as HashTable, HashMap for decades and they are better able to understand HBase concepts.

Part of the confusion is due to the terminology used in describing HBase. It is categorized as a column-oriented NoSQL store and is different from a row-oriented traditional RDBMS. There are tons of articles that describe HBase as a data structure containing rows, column families, columns, cells, etc. In reality, HBase is nothing but a giant multidimensional sorted map (think nested HashMaps) that is distributed across nodes. A HashMap consists of a set of key-value pairs; a multidimensional map is one whose 'values' are themselves maps.

Each row in HBase is actually a key/value map. This map can have any number of keys (known as columns), each of which has a value. This 'value' can itself be a map that holds the version history of the cell, keyed by timestamp.

Each row in HBase can also store multiple key/value maps. In such cases, each key/value map is called a 'column family'. Each column family is typically stored in separate physical files or on separate disks. This concept was introduced to support use-cases where a row has two sets of data that are not generally accessed together. A conceptual sketch in plain Java collections is given below.
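
The following is a purely conceptual sketch (not how HBase is implemented internally) that models the row key -> column family -> column qualifier -> timestamp -> value nesting using ordinary sorted maps:

import java.util.TreeMap;

public class HBaseAsNestedMaps {

    public static void main(String[] args) {
        // Versions of a single cell, keyed by timestamp
        TreeMap<Long, String> versions = new TreeMap<>();
        versions.put(System.currentTimeMillis(), "john@example.com");

        // Column family 'info' -> column qualifier 'email' -> versioned values
        TreeMap<String, TreeMap<Long, String>> infoFamily = new TreeMap<>();
        infoFamily.put("email", versions);

        // Row -> column families
        TreeMap<String, TreeMap<String, TreeMap<Long, String>>> row = new TreeMap<>();
        row.put("info", infoFamily);

        // Table -> rows, sorted by row key
        TreeMap<String, TreeMap<String, TreeMap<String, TreeMap<Long, String>>>> table = new TreeMap<>();
        table.put("user1", row);

        // Reading a cell: row key -> column family -> column -> latest timestamped value
        String latestEmail = table.get("user1").get("info").get("email").lastEntry().getValue();
        System.out.println(latestEmail);   // prints john@example.com
    }
}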

The following article gives a good overview of the concepts we just discussed with JSON examples.
http://jimbojw.com/wiki/index.php?title=Understanding_Hbase_and_BigTable

Once we have a solid understanding of HBase concepts, it is useful to look at some common schema examples or use-cases where HBase datastore would be valuable. Link: http://hbase.apache.org/book/schema.casestudies.html

Looking at the use-cases in the above link, it is easy to understand that a lot of thought needs to go into designing a HBase schema. There are multiple ways to design a HBase schema, and the right one depends on the primary use-case, i.e. how the data is going to be accessed. This is in total contrast to traditional data warehouses, where the applications accessing the warehouse are free to define their own use-cases and the warehouse would support almost all of them within reasonable limits.

The following articles throw more light on the design constraints that should be considered while using HBase.
http://www.chrispwood.net/2013/09/hbase-schema-design-fundamentals.html
http://ianvarley.com/coding/HBaseSchema_HBaseCon2012.pdf

For folks who do not have a programming background but want to understand HBase in terms of table concepts, the following YouTube video would be useful.
http://www.youtube.com/watch?v=IumVWII3fRQ

Tuesday, February 11, 2014

Examples of Real Time Analytics in Healthcare

The Healthcare industry has always been striving to reduce the cost of care and improve the quality of care. To do this, Payers are focusing more on the prevention and wellness of members.

Digitization of clinical information and real-time decision making are important IT capabilities that are required today. Given below are some examples of real-time analytics in Healthcare.

  1. Hospital-acquired infections are very dangerous for premature infants. Monitors can detect patterns in infected premature babies up to 24 hours before any symptoms appear. This real-time data can be captured and run through a real-time analytics engine to identify such cases and ensure that adequate treatment is given ASAP. 
  2. Use real-time event processing as an early warning system, based on historical patterns or trends. For example, if a few members are exhibiting the behavior patterns of prior patients who relapsed into a critical condition, we can plan a targeted intervention for those members. 

Ruminating on Decision Trees

Decision trees are tree-like structures that can be used for decision making, classification of data, etc.
The following simple example (on the IBM SPSS Modeler Infocenter Site) shows a decision tree for making a car purchase.

Another example of a decision tree that can be used for classification is shown below. These diagrams are taken from the article available at - www.cse.msu.edu/~cse802/DecisionTrees.pdf



Any tree with a branching factor of 2 (only 2 branches per node) is called a "binary decision tree". Any tree with a variety of branching factors can be represented as an equivalent binary tree. For example, the binary tree below will evaluate to the same result as the first tree.


It is easy to see that such decision tree models can help us in segmentation - for example, segmentation of patients into high-risk and low-risk categories, or high-risk vs. low-risk credit.
An excellent whitepaper on Decision Trees by SAS is available here.
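
At its core, an evaluated decision tree is just a cascade of if/else splits. The toy example below (the attributes, thresholds and risk labels are entirely hypothetical) segments members into risk buckets:

public class RiskDecisionTree {

    // Each if/else corresponds to a split node in the tree; the returned strings are the leaf nodes
    public static String classify(int age, boolean smoker, double bmi) {
        if (age > 60) {                                       // first split on age
            return smoker ? "HIGH_RISK" : "MEDIUM_RISK";
        }
        return (bmi > 30.0) ? "MEDIUM_RISK" : "LOW_RISK";     // second split on BMI
    }

    public static void main(String[] args) {
        System.out.println(classify(65, true, 27.5));    // HIGH_RISK
        System.out.println(classify(40, false, 24.0));   // LOW_RISK
    }
}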

Decision trees can also be used in predictive modeling - this is known as Decision Tree Learning or Decision Tree Induction. Other names for such tree models are classification trees or regression trees, collectively known as Classification And Regression Trees (CART).
Essentially, "Decision Tree Learning" is a data mining technique in which a decision tree is constructed by slicing and dicing the data using statistical algorithms. Decision trees are produced by algorithms that identify various ways of splitting a data set into branch-like segments.
For example, Wikipedia has a good example of a decision tree that was constructed by looking at the historical data of Titanic survivors.
Decision Tree constructed through Data Mining of Titanic passengers.
Once such a decision tree model has been created, it can be exported as a standard PMML file. This PMML file can then be used in a real time scoring engine such as JPMML.

There is another open source project called 'OpenScoring' that uses JPMML behind the scenes and provides us with a REST API to score data against our model. A simple example (with probability prediction mode) for identifying a flower based on its attributes is illustrated here: https://github.com/jpmml/openscoring 

Decision Trees can also be modeled in Rule Engines. IBM iLog BRMS suite (WODM) supports the modeling of rules as a Decision Tree. 

When to use a Rule Engine?

The following article on JessRules.com is a good read before we jump into using a rule engine for each and every problem.


Earlier, I had also written another blog post that lists the simple steps one should take to understand what kind of data (logic) should be put in a rule engine. 

Friday, February 07, 2014

Ruminating on Netty Performance

Recently we again used Netty for building an Event Server from the ground up, and we were amazed at the performance of this library. The secret sauce of Netty is of course its implementation of the Reactor Pattern using Java NIO. More information about the Reactive design paradigm can be found @ http://www.reactivemanifesto.org/
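
For context, a minimal Netty 4 server bootstrap looks roughly like the sketch below (the port and handlers are placeholders): a small 'boss' event-loop group accepts connections and hands them off to a 'worker' group of NIO event loops - the Reactor pattern in action.

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class EventServer {

    public static void main(String[] args) throws Exception {
        EventLoopGroup bossGroup = new NioEventLoopGroup(1);   // accepts incoming connections
        EventLoopGroup workerGroup = new NioEventLoopGroup();  // handles I/O for accepted channels
        try {
            ServerBootstrap bootstrap = new ServerBootstrap();
            bootstrap.group(bossGroup, workerGroup)
                     .channel(NioServerSocketChannel.class)
                     .childHandler(new ChannelInitializer<SocketChannel>() {
                         @Override
                         protected void initChannel(SocketChannel ch) {
                             // add your protocol codecs and business handlers here
                         }
                     });
            ChannelFuture future = bootstrap.bind(8080).sync();
            future.channel().closeFuture().sync();   // block until the server channel is closed
        } finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }
}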

The following links bear testimony to the extreme performance of Netty.

http://netty.io/testimonials

http://www.infoq.com/news/2013/11/netty4-twitter (5x performance improvement)

http://yahooeng.tumblr.com/post/64758709722/making-storm-fly-with-netty (Netty used in Yahoo Storm)

Wednesday, February 05, 2014

Ruminating on Apple Passbook

The Apple 'Passbook' is an iOS application that allows users to store coupons, boarding passes, event tickets, gift cards (or any other card for that matter) on their phone. Hence it functions as a digital wallet.
Apple defines passes as - "Passes are a digital representation of information that might otherwise be printed on small pieces of paper or plastic. They let users take an action in the physical world, in the same way as boarding passes, membership cards, and coupons."

The Passbook application intelligently pops up the pass (2D barcode) on the lock screen at the right time and place - i.e. triggered either by location using GPS tracking or by time.

It's common sense to realize that the success of this app would largely depend on the number of partners who agree to publish their passes/cards in a format that is compatible with Apple Passbook. Many gift card companies have started adopting Passbook and are sending their coupons as Passbook attachments.

So what does a Passbook pass in digital format contain? The pass file is essentially a zip file with the *.pkpass extension. The zip file contains meta-data as JSON files and images as PNG files.
This format is an open format and hence any merchant or organization can adopt this standard and allow their customers to use the digital format of their coupon/pass/gift certificate/gift card, etc. 

Wednesday, January 29, 2014

A/B testing vs. Multivariate testing

The following articles give us a good overview of the differences between A/B testing and multivariate testing.

http://visualwebsiteoptimizer.com/split-testing-blog/difference-ab-testing-multivariate-testing/
http://searchengineland.com/landing-page-testing-choosing-between-ab-or-multivariate-approaches-27195

Snippets from the articles:

A/B testing consists of creating alternative pages for a specific page and showing each of them to a certain percentage of visitors. For example, if you create 4 different variations of a landing page, 20% of visitors to the website will see each version (4 variations + original). Cookies are used to maintain a consistent user experience—if a visitor sees one version, they will see it again and again when visiting the website as long as the cookies are not deleted.

Rather than testing different versions of web pages, as we do with A/B tests, Multivariate tests experiment with elements inside one specific page. Basically, we define elements inside a page (e.g. a picture, a text or a button) and provide different alternatives of each element. The testing tool will show each element combined with all other elements to visitors. The resulting combinations are derived from the number of elements multiplied by the number of element variations. Just as with A/B testing, however, each visitor sees only one particular combination of elements regardless of how many times they view a page. 

Hence A/B testing is best for testing radically different ideas for conversion rate optimization. Multivariate testing is more granular and is best for optimizing and refining an existing landing page or homepage without a significant investment in redesign. 

Tuesday, January 28, 2014

Ruminating on Information Architecture

Information Architecture is both an art and a science of organizing and structuring your content in such a way that it is very intuitive for the end-user to navigate. As such, Information Architecture is a subset of the User Experience Design field.

The output of Information Architecture is typically a set of wire-frames that depict the design. It may also define a taxonomy to classify information (e.g. content, products) using a tree-structured hierarchy.
Other outputs of IA include site maps, annotated page layouts, page templates, personas, and storyboards.

To put it in other words, IA answers the following questions:
  • How do you categorize and structure information?
  • How do you label information? e.g. a 'Contact Us' label would hold all details on contact info.
  • How do users navigate through the information?
  • How do users search for information?
Content cannot be structured in isolation; it depends on the 'users' and the 'context', as depicted in this Venn diagram from the famous book by Rosenfeld and Morville.


Jotting down a few links on IA that are worth a perusal.

http://www.theguardian.com/help/insideguardian/2010/feb/02/what-is-information-architecture
http://boxesandarrows.com/category/design-principles/
http://docstore.mik.ua/orelly/web2/infoarch/index.htm
http://www.steptwo.com.au/papers/kmc_whatisinfoarch

Monday, January 27, 2014

Ruminating on Liferay CMS

It had been a few years since I had last used Liferay portal. Today my team members demonstrated the simplicity of the CMS (content management system) features of Liferay 6.2 and I was mighty impressed with what I saw.

First and foremost, Liferay's MVC portlet concept makes writing custom portlets so easy. I still remember the struggles we had to go through to code against the initial portlet API a decade ago. But now in Liferay, anyone who understands the MVC design pattern can produce portlets with blinding speed.

Next, I found the CMS of Liferay very simple to understand and intuitive to use. Liferay makes it easy to quickly create and add a web page. Basic content management capabilities are easy to add using the Web Content Display application plugin. A 'Web Content' UI component can have a structure and a template. The structure contains the data and meta-data of the content. The templates are instructions for how to display structures, written in the FreeMarker or Velocity template language.

We also have page templates and site templates. There is first class support for themes at the page level as well as the site level.

Document management capabilities are integrated with the core product. Liferay’s content management system manages all file types including images, documents, videos, and web content. 

Thursday, January 23, 2014

Three Rules for Making a Company Truly Great

What do Custodians do?

A custodian is responsible for the safekeeping and safeguarding of investments & securities on behalf of the owners. Owners could be mutual funds, HNIs, etc.

Securities which are in paper form are kept in safe custody of a custodian and securities which are in dematerialized electronic form are kept with a Depository Participant, who acts on the advice of custodian.

The 1940 Investment Company Act mandates that the fund investment adviser and fund assets be kept separate, which necessitates the role of the third-party custodian. It's important to note that the custodian provides safekeeping of securities but has no role in portfolio management.

Custodians perform important back office operations such as settlement of trades, accounting, collection of dividends and interest, etc. This ensures that an accurate record of all trades and cash flows is maintained by the custodian, which helps in preventing fraud. 

Thursday, January 16, 2014

Advanced search options in Windows Search

I had always been a fan of Google Desktop and its ability to surface any information hidden under folders. Recently my friend mentioned that the default search available on Windows has also matured, and there are a lot of options available for geeks like us :)

The following MSDN site gives the various options that are available for searching your computer for files, once you open the search window using the 'Win+F' key.

http://msdn.microsoft.com/en-us/library/aa965711(VS.85).aspx

Monday, November 18, 2013

Twitter bootstrap has come a long way!

The latest version of bootstrap (version 3) really rocks! I was mighty impressed with the default JavaScript controls available and the ease with which we can build responsive web designs.

For beginners, there is an excellent tutorial on bootstrap by Syed Rahman at:
http://www.sitepoint.com/twitter-bootstrap-tutorial-handling-complex-designs/

Syed has also authored other interesting articles on bootstrap and they are available here:
http://www.sitepoint.com/author/sfrahman/


Aah! It's Lorem ipsum

It's interesting to learn something new everyday. Today I learned about 'Lorem ipsum'.
I had seen such text so many times and often mistakenly thought that the browser was doing some language translation :)
You can also generate your Lorem Ipsum on this site:

Ruminating on Agile and Fixed Price Contracts

Fixed Price Contracts have become a necessary evil for all agile practitioners to deal with. Over the last few years, I have been seeing more and more projects being awarded based on fixed-bid contracts. In a fixed price contract, 3 things are fixed - time, scope and price.

The dangers of fixed price contracts are known to all. But customers still want to go with fixed price because they want to reduce financial risk, choose a vendor based on price, support annual budgeting, etc.

But unfortunately, fixed-bid contracts have the highest risk of failure. In fact, Martin Fowler calls it the 'Fixed Scope Mirage'. Martin suggests using a fixed price, but keeping the scope negotiable. He calls out a real-life case study that worked for him.

In this article, Scott Ambler raises questions on the ethics behind fixed price contracts and also elaborates on their dire consequences. I have personally seen tens of projects compromise on quality to meet the unrealistic deadlines of fixed price projects. The same story gets repeated again and again - the project slips, pressure is put on developers to still deliver on time and on budget, they begin to cut corners, and quality is the first victim.

This InfoQ article gives some ideas on how to structure Agile fixed-bid contracts. Cognizant has another idea, described in a thought paper called 60/40/20, that can be used on Agile projects.

IMHO, any kind of fixed price contract would only work if there is a good degree of trust between the customer and the IT vendor. One of the fundamental values of the Agile manifesto is "customer collaboration over contract negotiation".

Wednesday, November 13, 2013

Beautiful REST API documentation

InfoQ has published a list of tools that can be used for creating some cool documentation for our Web APIs.

I was particularly impressed with Swagger and the demo available here.
RAML from MuleSoft was also quite interesting. The list of projects available for RAML makes it a serious candidate for documenting our REST APIs. 

Thursday, October 24, 2013

Ruminating on HIE (Health Information Exchange)

The healthit.gov site gives a very good understanding of HIE. This is not to be confused with HIX (Health Insurance Exchange). Snippet from the site:

The term "health information exchange" (HIE) actually encompasses two related concepts: 
Verb: The electronic sharing of health-related information among organizations 
Noun: An organization that provides services to enable the electronic sharing of health-related information

Today, most organizations are leveraging HL7 as the standard for HIE. HL7 v3 is XML based, whereas the prominent HL7 v2.x is ASCII text based. Another standard that is used in HIE is CCR (Continuity of Care Record).

Blue Button is another attempt at simplifying HIE. Providers and Health Plans participating in the Blue Button initiative allow you to download all your health records as a plain text file from their site. The image below from www.healthit.gov gives you a good idea of the advantages of having access to your health record.

 
A new initiative called 'Blue Button+' would allow information to be downloaded as XML (HL7 based) and enable machine-to-machine exchange of health information. 

HL7 FHIR (Fast Healthcare Interoperability Resources) is an interesting initiative that uses REST principles for HIE. 

Friday, October 18, 2013

Ruminating on ACO (Accountable Care Organizations)

The article below on National Journal gives an excellent overview of ACOs and the reasons they were formed.
Some snippets from the article that give a good understanding of ACOs:
"ACOs are groups of providers that have been assigned a projected budget per patient. If the cost of caring for the patient comes in below that level, the group/payer shares the savings. The idea is that doctors will better coordinate care to prevent wasteful or ineffective treatment. 

With accountable care organizations, the theory is that if the provider does a good job taking care of the patient, something the insurer can track with quality metrics, the patient's health will be better, they will use fewer and less expensive services, and, therefore, they will cost less to insure. 

Medicare is running two pilot versions of the program. In one, providers may sustain losses if they're over budget but can be handsomely rewarded if they're under. The other rewards providers for coming in under budget but has no downside risk. The government is monitoring quality to make sure providers aren't skipping necessary treatment to come in under budget. 

Making ACOs work will require many organizational changes on the part of providers. They'll have to orient their systems more around quality than quantity. They'll have to track patients closely, using new analytics, to make sure their status is improving. And they may focus on high-risk, high-cost patients, using analytics and tailored interventions to help them. The payoff for improving the health of that population could be substantial."