Friday, January 14, 2022

Ruminating on Snowflake Architecture

The following video is an excellent tutorial for understanding how Snowflake can serve both as a Data Lake and a Data Warehouse.

https://www.youtube.com/watch?v=jmVnZPeClag

The following articles on Snowflake are also worth a perusal:

https://www.snowflake.com/workloads/data-warehouse-modernization/

https://www.snowflake.com/guides/data-lake

The following key concepts are important for appreciating how Snowflake works:

  • Snowflake separates compute from storage, and each can be scaled out independently.
  • For storage, Snowflake leverages distributed cloud storage services (e.g. AWS S3, Azure Blob Storage, Google Cloud Storage). This is cool since these services are already battle-tested for reliability, scalability and redundancy. Snowflake compresses the data it stores in these cloud storage buckets.
  • For compute, Snowflake has a concept called a "virtual warehouse". A virtual warehouse is simply a bundle of compute (CPU) and memory (RAM) with some temporary storage. All SQL queries are executed in a virtual warehouse.
  • Snowflake can be queried using plain SQL - no specialized skills required.
  • If a query is fired frequently, the data it touches is cached (both as query results and locally in the virtual warehouse). This cache is the magic that enables fast ad-hoc queries to be run against the data.
  • Snowflake enables a unified data architecture for the enterprise since it can be used as a Data Lake as well as a Data Warehouse. The VARIANT data type can store JSON, and that JSON can be queried directly with SQL (a small sketch of such a query follows this list).
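As a small illustration of the last point, here is a minimal sketch of querying JSON held in a VARIANT column using the snowflake-connector-python library. The connection parameters, the RAW_EVENTS table and the PAYLOAD column are made-up placeholders, not anything from the video or articles above:

```python
import snowflake.connector

# Placeholder credentials - replace with your own account details.
conn = snowflake.connector.connect(
    account="YOUR_ACCOUNT",
    user="YOUR_USER",
    password="YOUR_PASSWORD",
    warehouse="ANALYTICS_WH",
    database="MY_DB",
    schema="PUBLIC",
)

# PAYLOAD is assumed to be a VARIANT column holding raw JSON events.
# Snowflake's col:path::type syntax extracts and casts JSON attributes.
sql = """
    SELECT payload:customer.name::string AS customer_name,
           payload:amount::number        AS amount
    FROM   raw_events
    WHERE  payload:event_type::string = 'purchase'
"""

cur = conn.cursor()
try:
    for customer_name, amount in cur.execute(sql):
        print(customer_name, amount)
finally:
    cur.close()
    conn.close()
```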
Virtual warehouses provide a kind of dynamic scalability to the Snowflake data warehouse. Some snippets from the Snowflake documentation:

"The number of queries that a warehouse can concurrently process is determined by the size and complexity of each query. As queries are submitted, the warehouse calculates and reserves the compute resources needed to process each query. If the warehouse does not have enough remaining resources to process a query, the query is queued, pending resources that become available as other running queries complete. If queries are queuing more than desired, another warehouse can be created and queries can be manually redirected to the new warehouse. In addition, resizing a warehouse can enable limited scaling for query concurrency and queuing; however, warehouse resizing is primarily intended for improving query performance. 
With multi-cluster warehouses, Snowflake supports allocating, either statically or dynamically, additional warehouses to make a larger pool of compute resources available". 
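To make these scaling levers concrete, below is a rough sketch (again via the Python connector) of creating a multi-cluster warehouse and later resizing it. The warehouse name and every sizing value are placeholders chosen purely for illustration:

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="YOUR_ACCOUNT", user="YOUR_USER", password="YOUR_PASSWORD"
)
cur = conn.cursor()

# Create a multi-cluster warehouse that can scale out to 3 clusters
# when queries start queuing (all values are illustrative).
cur.execute("""
    CREATE WAREHOUSE IF NOT EXISTS reporting_wh WITH
        WAREHOUSE_SIZE    = 'XSMALL'
        MIN_CLUSTER_COUNT = 1
        MAX_CLUSTER_COUNT = 3
        SCALING_POLICY    = 'STANDARD'
        AUTO_SUSPEND      = 300
        AUTO_RESUME       = TRUE
""")

# Resize (scale up) the warehouse - primarily to speed up individual queries.
cur.execute("ALTER WAREHOUSE reporting_wh SET WAREHOUSE_SIZE = 'MEDIUM'")

cur.close()
conn.close()
```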

Sunday, January 09, 2022

Ruminating on Joint Probability vs Conditional Probability vs Marginal Probability

The concept of conditional probability is very important for understanding Bayesian networks. An excellent introduction to these concepts is available in this video - https://youtu.be/5s7XdGacztw

As we know, probability is calculated as the number of desired outcomes divided by the total number of possible outcomes. Hence, if we roll a die, the probability of getting a 4 is 1/6 (P ≈ 0.167, i.e. ~16.67%).

Similarly, the probability of an event not occurring is called the complement (1 - P). Hence the probability of not rolling a 4 is 1 - 1/6 = 5/6 ≈ 0.833, i.e. ~83.33%.
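A quick sanity check of this arithmetic in Python, using exact fractions:

```python
from fractions import Fraction

# Probability of rolling a 4 with a fair six-sided die:
# desired outcomes / total possible outcomes = 1/6
p_four = Fraction(1, 6)
print(float(p_four))        # ~0.167, i.e. ~16.67%

# Complement: probability of NOT rolling a 4 = 1 - P
p_not_four = 1 - p_four     # 5/6
print(float(p_not_four))    # ~0.833, i.e. ~83.33%
```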

While the above is true for a single variable, we also need to understand how to calculate the probability of two or more variables - e.g. the probability of lightning and thunder occurring together when it rains.

When two or more variables are involved, then we have to consider 3 types of probability:

1) Joint probability calculates the likelihood of two events occurring together, at the same point in time. For example, the joint probability of events A and B is written formally as P(A and B), P(A ∧ B) or P(A, B).

2) Conditional probability measures the probability of one event given that another event has occurred. It is typically denoted as P(A given B) or P(A | B). For complex problems involving many variables, it is difficult to calculate the joint probability of all possible combinations of outcomes, so conditional probability becomes a useful and easy technique for solving such problems. Please check the video link above.

3) Marginal probability is the probability of an event irrespective of the outcome of another variable.
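To make the three definitions concrete, here is a small worked example in Python. The joint distribution over thunder and lightning below is entirely made up for illustration:

```python
# Made-up joint distribution over two events on a rainy day:
joint = {
    ("thunder", "lightning"): 0.20,
    ("thunder", "no lightning"): 0.05,
    ("no thunder", "lightning"): 0.10,
    ("no thunder", "no lightning"): 0.65,
}

# Marginal probability: P(thunder), irrespective of lightning
p_thunder = sum(p for (a, _), p in joint.items() if a == "thunder")        # 0.25
p_lightning = sum(p for (_, b), p in joint.items() if b == "lightning")    # 0.30

# Joint probability: P(thunder AND lightning)
p_joint = joint[("thunder", "lightning")]                                  # 0.20

# Conditional probability: P(thunder | lightning) = P(thunder, lightning) / P(lightning)
p_thunder_given_lightning = p_joint / p_lightning                          # ~0.667

print(p_thunder, p_lightning, p_joint, round(p_thunder_given_lightning, 3))
```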

Another excellent article explaining conditional probability with real life examples is here - https://www.investopedia.com/terms/c/conditional_probability.asp 


Tuesday, January 04, 2022

Confusion Matrix for Classification Models

Classification models are supervised ML models used to classify information into various classes - e.g. binary classification (true/false) or multi-class classification (Facebook/Twitter/WhatsApp).

When it comes to classification models, we need a better metric than accuracy for evaluating the holistic performance of the model. The following article gives an excellent overview of Confusion Matrix and how it can be used to evaluate classification models (and also tune their performance).

https://www.analyticsvidhya.com/blog/2020/04/confusion-matrix-machine-learning/

Some snippets from the above article:

A Confusion matrix is an N x N matrix used for evaluating the performance of a classification model, where N is the number of target classes. The matrix compares the actual target values with those predicted by the machine learning model. This gives us a holistic view of how well our classification model is performing and what kinds of errors it is making.

A binary classification model will have false positives (aka Type 1 errors) and false negatives (aka Type 2 errors). A good example would be an ML model that predicts whether a person has COVID based on symptoms; the confusion matrix for such a model captures the four possible outcomes - true positives, false positives, false negatives and true negatives - as sketched below.
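As an illustration (my own sketch, not from the article), here is how such a 2x2 confusion matrix can be built with scikit-learn from some made-up predictions, where 1 means "has COVID" and 0 means "does not":

```python
from sklearn.metrics import confusion_matrix

# Hypothetical ground truth and model predictions (purely made-up data).
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 0, 1]

# For binary labels [0, 1], scikit-learn lays the matrix out as:
# [[TN, FP],
#  [FN, TP]]
cm = confusion_matrix(y_true, y_pred, labels=[0, 1])
tn, fp, fn, tp = cm.ravel()
print(cm)
print(f"TP={tp}, FP={fp} (Type 1 errors), FN={fn} (Type 2 errors), TN={tn}")
```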


Based on the confusion matrix, we can calculate other metrics such as 'Precision' and 'Recall'. 
Precision is a useful metric in cases where False Positives are a higher concern than False Negatives.

Recall is a useful metric in cases where a False Negative is of greater concern than a False Positive.
Recall is important in medical cases, where it does not matter much if we raise a false alarm, but the actual positive cases should not go undetected!

F1-score is a harmonic mean of Precision and Recall, and so it gives a combined idea about these two metrics. It is maximum when Precision is equal to Recall.
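Continuing the made-up example above: Precision = TP / (TP + FP), Recall = TP / (TP + FN) and F1 = 2 * Precision * Recall / (Precision + Recall). A quick check with scikit-learn:

```python
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]   # same made-up labels as before
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 0, 1]

precision = precision_score(y_true, y_pred)   # TP / (TP + FP) = 4 / 5 = 0.8
recall = recall_score(y_true, y_pred)         # TP / (TP + FN) = 4 / 5 = 0.8
f1 = f1_score(y_true, y_pred)                 # 0.8, since precision == recall here
print(precision, recall, f1)
```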

Another example is a multi-class confusion matrix for a model that predicts the social media channel (Facebook/Twitter/WhatsApp).

Monday, January 03, 2022

Bias and Variance in ML

Before we embark on machine learning, it is important to understand basic concepts around bias, variance, overfitting and underfitting. 

The below video from StatQuest gives an excellent overview of these concepts: https://youtu.be/EuBBz3bI-aA

Another good article explaining the concepts is here - https://towardsdatascience.com/understanding-the-bias-variance-tradeoff-165e6942b229

Bias is the difference (i.e. the error) between the average prediction made by the ML model and the actual values in the training set. Bias shows how well the model matches the training dataset. A low-bias model will closely match the training data set.

Variance refers to the amount by which the predictions would change if we fit the model to a different training data set. A low variance model will produce consistent predictions across different datasets. 

Ideally, we would want a model with both low bias and low variance. But we often need to make a trade-off between bias and variance, and hence we need to find a sweet spot between a simple model and a complex model.
Overfitting means your model has low bias but high variance - it fits the training dataset too closely, including its noise.
Underfitting means your model has high bias but low variance - it is too simple to capture the underlying pattern.
If our model is too simple and has very few parameters, then it may have high bias and low variance (underfitting). On the other hand, if our model has a large number of parameters, then it is going to have high variance and low bias (overfitting). So we need to find the right balance without overfitting or underfitting the data.
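A small, self-contained illustration of this trade-off (my own sketch, not from the video or articles above): fit polynomials of increasing degree to noisy data and compare training error with test error.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_data(n=30):
    # Noisy samples from a sine curve (the "true" relationship)
    x = rng.uniform(0, 2 * np.pi, n)
    y = np.sin(x) + rng.normal(0, 0.3, n)
    return x, y

x_train, y_train = make_data()
x_test, y_test = make_data()

def train_test_mse(degree):
    # Fit a polynomial of the given degree on the training data only
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_mse, test_mse

for degree in (1, 3, 10):
    train_mse, test_mse = train_test_mse(degree)
    print(f"degree={degree:2d}  train MSE={train_mse:.3f}  test MSE={test_mse:.3f}")

# Typical pattern (exact numbers vary with the random data):
# degree 1  - high bias: underfits both the training and test sets
# degree 3  - a reasonable bias/variance balance for a sine curve
# degree 10 - low training error but higher test error: high variance / overfitting
```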

Ruminating on high dimensional data

In simple terms, the dimensions of a dataset refer to the number of attributes (or features) that the dataset has. This concept of data dimensions is not new and is quite common in the data warehousing world, as explained here.

Many datasets can have a large number of features (variables/attributes), e.g. healthcare data, signal processing and bioinformatics data. When the number of dimensions is staggeringly high, ML calculations become extremely difficult. It is also possible for the number of features to exceed the number of observations (records) in a dataset - e.g. microarrays, which measure gene expression, can contain hundreds of samples/records, but each record can contain tens of thousands of genes.

With such high-dimensional data, we experience something called the "curse of dimensionality" - i.e. all records appear sparse and dissimilar in many ways, which prevents common data organization strategies from being efficient. The more dimensions we add to a dataset, the more sparse the data becomes, and this results in a rapid degradation of ML model performance (i.e. its predictive capability).
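A tiny numerical illustration of this sparsity effect (again my own sketch): as the number of dimensions grows, the nearest and the farthest random points from a query point become almost equally far away.

```python
import numpy as np

rng = np.random.default_rng(0)

# Distances between random points "concentrate" as dimensionality grows,
# so the min/max distance ratio creeps towards 1 - one face of the curse.
for d in (2, 10, 100, 1000):
    points = rng.random((500, d))   # 500 random points in d dimensions
    query = rng.random(d)           # one random query point
    dists = np.linalg.norm(points - query, axis=1)
    print(f"d={d:5d}  min/max distance ratio = {dists.min() / dists.max():.3f}")
```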

A typical rule of thumb is that there should be at least 5 training examples for each dimension in the dataset. Another interesting excerpt from Wikipedia is given below:   

In machine learning and insofar as predictive performance is concerned, the curse of dimensionality is used interchangeably with the peaking phenomenon, which is also known as Hughes phenomenon. This phenomenon states that with a fixed number of training samples, the average (expected) predictive power of a classifier or regressor first increases as the number of dimensions or features used is increased but beyond a certain dimensionality it starts deteriorating instead of improving steadily.

To handle high-dimensional datasets, data scientists typically apply various dimensionality reduction techniques to the data - e.g. feature selection, feature projection, etc. More information about dimensionality reduction can be found here.
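As a small, hedged sketch of feature projection, here is PCA from scikit-learn applied to its bundled digits dataset (chosen purely for illustration): 64 pixel features are compressed down to 10 components while retaining most of the variance.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

# Load a small sample dataset: 1797 images, each with 64 pixel features.
X, _ = load_digits(return_X_y=True)

# Project the 64 features onto the 10 directions of highest variance.
pca = PCA(n_components=10)
X_reduced = pca.fit_transform(X)

print(X.shape, "->", X_reduced.shape)
print("variance retained:", round(float(pca.explained_variance_ratio_.sum()), 3))
```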