Thursday, December 30, 2021

Ruminating on ONNX format

Open Neural Network Exchange (ONNX) is an open standard format for representing machine learning models. While there are proprietary or language-specific formats such as pickle (for Python) and MOJO (for H2O.ai), there was a need for a format that drives interoperability across frameworks. 

ONNX defines a common set of operators - the building blocks of machine learning and deep learning models - and a common file format, enabling AI developers to use models across a variety of frameworks, tools, runtimes, and compilers. ONNX also defines an extensible computation graph model: each computation dataflow graph is structured as a list of nodes that form an acyclic graph, where each node is a call to an operator and has one or more inputs and one or more outputs. The entire source code of the standard is available here - https://github.com/onnx/

Thus ONNX enables an open ecosystem for interoperable AI models. The ONNX Model Zoo is a collection of pre-trained, state-of-the-art models in the ONNX format that can be easily reused in a plethora of AI frameworks. All popular AI frameworks such as TensorFlow, CoreML, Caffe2, PyTorch, Keras, etc. support ONNX. 

There are also open-source tools that let us convert existing models into the ONNX format - https://github.com/onnx/onnxmltools
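
To make this concrete, here is a minimal sketch of converting a scikit-learn model to ONNX and running it with ONNX Runtime. It assumes the skl2onnx and onnxruntime Python packages (companions of the converter tooling linked above); the model, input name and dataset are illustrative.

# Minimal sketch: train a scikit-learn model, export it to ONNX, and run inference
# with ONNX Runtime. Assumes skl2onnx and onnxruntime are installed.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType
import onnxruntime as ort

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Convert: declare the input signature (name and shape) expected by the ONNX graph
onnx_model = convert_sklearn(
    model, initial_types=[("float_input", FloatTensorType([None, X.shape[1]]))]
)
with open("iris_logreg.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())

# Inference with ONNX Runtime - the same .onnx file can be loaded by other runtimes too
session = ort.InferenceSession("iris_logreg.onnx")
pred = session.run(None, {"float_input": X[:5].astype(np.float32)})[0]
print(pred)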

ML Data Preparation - Impute missing values

In any AI project, data plumbing/engineering takes up 60-80% of the effort. Before training a model on the input dataset, we have to cleanse and standardize the data.

One common challenge is around missing values (features) in the dataset. We can either ignore these records or try to "impute" the value. The word "impute" means to assign a value to something by inference. 

Since most AI models cannot work with blanks or NaN values (for numerical features), it is important to impute values of missing features in a recordset.  

Given below are some of the techniques used to impute the value of missing features in the dataset (a short code sketch follows the list). 

  • Numerical features: We can use the MEAN value of the feature or the MEDIAN value. 
  • Categorical features: For categorical features, we can use the most frequent value in the training dataset or we can use a constant like 'NA' or 'Not Available'. 
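
A minimal sketch of both techniques using scikit-learn's SimpleImputer (the dataframe and column names below are purely illustrative):

# Minimal sketch: impute missing numerical and categorical values with scikit-learn.
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

df = pd.DataFrame({
    "age": [25, np.nan, 47, 31],               # numerical feature with a missing value
    "city": ["Pune", "Mumbai", np.nan, "Pune"]  # categorical feature with a missing value
})

# Numerical: replace NaN with the median (or strategy="mean")
num_imputer = SimpleImputer(strategy="median")
df[["age"]] = num_imputer.fit_transform(df[["age"]])

# Categorical: replace missing values with the most frequent value,
# or use strategy="constant", fill_value="NA" for a fixed placeholder
cat_imputer = SimpleImputer(strategy="most_frequent")
df[["city"]] = cat_imputer.fit_transform(df[["city"]])

print(df)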

Wednesday, December 29, 2021

Computer Vision Use Cases

viso.ai has published a series of interesting posts that list a number of potential use cases of computer vision (CV) across various industries. 

Posting a few links below: 

https://viso.ai/applications/computer-vision-in-manufacturing/

https://viso.ai/applications/computer-vision-in-retail/

https://viso.ai/applications/computer-vision-in-healthcare/

Jotting down a few use cases where we see traction in the industry: 

  • Quality inspection: CV can be used to inspect the final OEM product for quality defects - e.g. cracks on the surface, missing components on a PCB, dents on a metal surface, etc. Google has launched a specialized service for this called Visual Inspection AI.
  • Process monitoring: CV can ensure that proper guidelines are being followed during the manufacturing process - e.g. in meat plants, are knives sterilized after each cut? Are healthcare workers wearing helmets and hand gloves? Do workers put tools back in the correct location? We can also measure the time spent in specific factory floor areas to implement 'Lean Manufacturing'. 
  • Worker safety: Is a worker immobile for a long time (a potential accident)? Are there too many workers in a dangerous factory area? Are workers wearing proper protective gear? During Covid times, CV can monitor social distancing between workers and ensure compliance. 
  • Inventory management: Using drones (fitted with HD cameras) or even a simple smartphone, CV can be used to count the inventory of items in a warehouse. They can also be used for detecting unused space in the warehouse and automate many tasks in the supply chain.
  • Predictive maintenance: Deep learning has been used to find cracks in industrial components such as spherical tanks and pressure vessels.
  • Patient monitoring during surgeries: Just have a look at the cool stuff done by Gauss Surgical (https://www.gausssurgical.com/). With AI-enabled, real-time blood loss monitoring, Triton identifies four times more hemorrhages.
  • Machine-assisted Diagnosis: CV can help radiologists in more accurate diagnosis of ailments and detect patterns and minor changes far better than humans. Siemens Healthineers is doing a lot of interesting AI work to assist radiologists. More info can be found here -- https://www.siemens-healthineers.com/digital-health-solutions/artificial-intelligence-in-healthcare
  • Home-based patient monitoring: Detect human fall scenarios in old patients - at home or in elderly care centers. 
  • Retail heat maps: Test new merchandising strategies and experiment with various layouts. 
  • Customer behavior analysis: Track how much time a user has spent looking at a display item. What is the conversion rate?
  • Loss prevention: Detect potential theft scenarios - e.g. suspicious behavior. 
  • Cashierless retail stores: Amazon Go!
  • Virtual mirrors: The equivalent of Snapchat filters, where you can virtually try on new fashion. 
  • Crowd management: Detect unusual crowds in public spaces and proactively take action. 

A huge number of interesting CV case studies are also available on Roboflow's site here - https://blog.roboflow.com/tag/case-studies/

Thursday, December 23, 2021

Ruminating on Stochastic vs Deterministic Models

The below article is an excellent read on the difference between stochastic forecasting and deterministic forecasting. 

https://blog.ev.uk/stochastic-vs-deterministic-models-understand-the-pros-and-cons

Snippets from the article:

"A Deterministic Model allows you to calculate a future event exactly, without the involvement of randomness. If something is deterministic, you have all of the data necessary to predict (determine) the outcome with certainty. 

Most financial planners will be accustomed to using some form of cash flow modelling tool powered by a deterministic model to project future investment returns. Typically, this is due to their simplicity. But this is fundamentally flawed when making financial planning decisions because they are unable to consider ongoing variables that will affect the plan over time.

Stochastic models possess some inherent randomness - the same set of parameter values and initial conditions will lead to an ensemble of different outputs. A stochastic model will not produce one determined outcome, but a range of possible outcomes, this is particularly useful when helping a customer plan for their future.

By running thousands of calculations, using many different estimates of future economic conditions, stochastic models predict a range of possible future investment results showing the potential upside and downsides of each."
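
To make the distinction concrete, here is a minimal Python sketch (with made-up return and volatility assumptions) comparing a deterministic projection with a simple stochastic Monte Carlo projection of an investment:

# Minimal sketch: deterministic vs. stochastic projection of an investment.
# The return/volatility numbers are purely illustrative assumptions.
import numpy as np

initial = 100_000   # starting portfolio value
years = 20
mean_return = 0.06  # assumed average annual return
volatility = 0.15   # assumed annual standard deviation of returns

# Deterministic model: a single outcome, computed exactly from fixed inputs
deterministic = initial * (1 + mean_return) ** years

# Stochastic model: thousands of simulated paths with random annual returns,
# producing a range of possible outcomes rather than one number
rng = np.random.default_rng(42)
n_sims = 10_000
annual_returns = rng.normal(mean_return, volatility, size=(n_sims, years))
stochastic_outcomes = initial * np.prod(1 + annual_returns, axis=1)

print(f"Deterministic outcome : {deterministic:,.0f}")
print(f"Stochastic 10th pct   : {np.percentile(stochastic_outcomes, 10):,.0f}")
print(f"Stochastic median     : {np.percentile(stochastic_outcomes, 50):,.0f}")
print(f"Stochastic 90th pct   : {np.percentile(stochastic_outcomes, 90):,.0f}")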

Tuesday, December 14, 2021

Ruminating on Memory Mapped Files

Today, both Java and .NET have full support for memory-mapped files. Memory-mapped files are not only very fast for read/write operations, but they can also be shared between different processes. This enables us to use memory-mapped files as a shared data store for inter-process communication (IPC). 

The following excellent articles by Microsoft and MathWorks explain how to use memory-mapped files. 

https://docs.microsoft.com/en-us/dotnet/standard/io/memory-mapped-files

https://www.mathworks.com/help/matlab/import_export/overview-of-memory-mapping.html

Snippets from the above articles:

A memory-mapped file contains the contents of a file in virtual memory. This mapping between a file and memory space enables an application, including multiple processes, to modify the file by reading and writing directly to the memory. 

Memory-mapped files can be shared across multiple processes. Processes can map to the same memory-mapped file by using a common name that is assigned by the process that created the file.

Accessing files via memory map is faster than using I/O functions such as fread and fwrite. Data is read and written using the virtual memory capabilities that are built in to the operating system rather than having to allocate, copy into, and then deallocate data buffers owned by the process. 

Memory-mapping works best with binary files, and in the following scenarios:

  • For large files that you want to access randomly one or more times
  • For small files that you want to read into memory once and access frequently
  • For data that you want to share between applications
  • When you want to work with data in a file as if it were an array

Due to limits set by the operating system, the maximum amount of data you can map with a single instance of a memory map is 2 gigabytes on 32-bit systems, and 256 terabytes on 64-bit systems.
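
As a quick illustration of the idea (using Python's built-in mmap module rather than the Java/.NET APIs covered in the articles above), here is a minimal sketch that maps a file into memory and modifies it in place; the file name and contents are made up:

# Minimal sketch: map a file into memory and read/write it in place with mmap.
import mmap

# Create a small binary file to map
with open("shared.dat", "wb") as f:
    f.write(b"hello memory mapped world")

with open("shared.dat", "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mm:   # map the whole file into memory
        print(mm[:5])                      # read directly from memory -> b'hello'
        mm[0:5] = b"HELLO"                 # write directly to memory
        mm.flush()                         # force changes to be written back to the file

# Another process mapping the same file would see these changes,
# which is what makes memory-mapped files useful for IPC.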

Ruminating on TTS and ASR (STT)

Advances in deep neural networks have helped us build far more accurate voice bots. While adding a voice channel to a chatbot, two important services are required:

1) TTS (Text to Speech)

2) ASR/STT (Automatic Speech Recognition / Speech to Text)

For TTS, there are a lot of commercial cloud services available - a few examples are given below:

Similarly, for speech recognition, there are many commercial services available:


If you are looking for open-source solutions for TTS & STT, then the following open-source projects look promising:
A few articles that explain the setup:
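
For a quick local experiment, independent of any particular commercial service, here is a minimal sketch assuming the open-source pyttsx3 (TTS) and SpeechRecognition (STT) Python packages:

# Minimal sketch of local TTS and STT, assuming pyttsx3 and SpeechRecognition
# are installed (SpeechRecognition's Microphone also needs PyAudio).
import pyttsx3
import speech_recognition as sr

# --- TTS: synthesize speech from text ---
engine = pyttsx3.init()
engine.say("Hello, this is a text to speech test.")
engine.runAndWait()

# --- STT: transcribe speech from the microphone ---
recognizer = sr.Recognizer()
with sr.Microphone() as source:
    print("Say something...")
    audio = recognizer.listen(source)

try:
    # Uses a free web API under the hood; offline engines can be plugged in instead
    print("You said:", recognizer.recognize_google(audio))
except sr.UnknownValueError:
    print("Could not understand the audio")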

Friday, September 24, 2021

Ruminating on R&D investment capitalization

When organizations invest in R&D activities, there are accounting rules for claiming tax credits. The reason governments give tax credits is to encourage innovation through R&D spending. 

Earlier, the accounting rules mandated that all R&D investments be shown as "expenses" on the Profit and Loss (P&L) statement in the same year the investments were made. But this poses a problem, as the returns from an R&D investment could span multiple future years. Also, R&D investments can vary widely every year, so a company's net profit can be significantly higher or lower simply because of the timing of R&D spending. 

Hence, under the new accounting rules, instead of treating R&D investments as an "expense" in the P&L statement, they are treated as an "asset" on the balance sheet. This is referred to as "capitalizing the R&D expense". 

Once the R&D investment is treated like an "asset", the company can claim amortization (or depreciation) benefits on it over multiple years. For example, a software firm has invested 1 MM USD in R&D to build a product in 2021, and the economic life of the built product is 5 years. The company can then amortize/depreciate the capitalized R&D expense equally over the five-year life. This amortized amount per year is a line item in the yearly P&L statement. 
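In this example, the annual amortization charge would be 1,000,000 / 5 = 200,000 USD, expensed in the P&L for each of the five years. 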

A good explanation of this concept is given here - https://corporatefinanceinstitute.com/resources/knowledge/accounting/capitalizing-rd-expenses/

Wednesday, September 08, 2021

Ruminating on Systems Thinking

Systems Thinking is an approach to problem solving that emphasizes the fact that most systems are complex and do not have linear cause-effect relationships. We need to view the problem as part of a wider dynamic system and understand how different parts interact with or influence one another. 

The crux of systems thinking is the ability to zoom out and look at the big picture (holistic view) - to understand the various factors or perspectives that play a role. 

By steering away from linear thinking towards systems thinking, we will make better decisions in all aspects of business. Systems thinking also helps us to think in terms of ecosystems and not be confined by traditional boundaries. 

The following video is an excellent tutorial on Systems Thinking - https://youtu.be/GPW0j2Bo_eY

Other good links showing examples of system thinking: 

- Use of DDT for mosquitos: https://thesystemsthinker.com/making-the-jump-to-systems-thinking/

- Systems thinking for water services - e.g. understanding the impact of rainfall, educating customers to improve the quality of wastewater, external factors such as new building development, rising river levels, etc.: https://www.waterindustryjournal.co.uk/united-utilities-rolls-out-systems-thinking-approach-to-wastewater-network-management

Wednesday, September 01, 2021

Ruminating on Data Drift and Concept Drift

Quite often, the performance (prediction accuracy) of an AI model degrades with time. One of the reasons this happens is a phenomenon called "Data Drift". 

So what exactly is Data Drift? 

Data Drift can be defined as any change to the structure, semantics or statistical properties of the model's input data.  

  • Changes to the structure of data: New fields being added or old fields deleted. This could happen because of an upgrade to an upstream system, a new sensor, etc. 
  • Changes to the semantics of data: e.g. a new upstream system sending temperature in Fahrenheit instead of Celsius. 
  • Changes to the statistical properties of data: e.g. changes to atmospheric pressure threshold levels due to environmental changes. There could also be data quality issues, such as a bug in an upstream system that delivers junk data. 

To maintain the accuracy of our AI models, it is imperative that we measure and monitor Data Drift. Our machine learning infrastructure needs tools that automatically detect data drift and can pinpoint the features causing the drift. 
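
As a simple illustration of detecting drift in the statistical properties of a numerical feature, here is a minimal sketch using a two-sample Kolmogorov-Smirnov test from SciPy; the feature values are simulated, and real pipelines would typically rely on the cloud services linked below or dedicated drift-monitoring libraries:

# Minimal sketch: flag data drift in a numerical feature by comparing the
# training-time distribution with recent production data using a KS test.
# The data here is simulated purely for illustration.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=20.0, scale=2.0, size=5_000)    # feature at training time
production_feature = rng.normal(loc=23.5, scale=2.0, size=1_000)  # recent production data (shifted)

statistic, p_value = ks_2samp(training_feature, production_feature)

# A very small p-value means the two samples are unlikely to come from
# the same distribution, i.e. the feature has probably drifted.
if p_value < 0.01:
    print(f"Drift detected (KS statistic={statistic:.3f}, p={p_value:.2e})")
else:
    print("No significant drift detected")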

When the underlying statistical patterns that the model has learned change - i.e. the relationship between the inputs and the target itself shifts - this is called "Concept Drift". A classic example is the current pandemic: the "behaviour" or "concept" has changed, so models that predict commute times are no longer valid, and models that forecasted the number of cosmetic surgeries in 2021 are no longer valid. 

Most of the hyperscalers provide services that enable us to monitor data drift and take proactive action. The links below provide some examples of cloud services for drift detection:

https://docs.microsoft.com/en-us/azure/machine-learning/how-to-monitor-datasets?tabs=python

https://aws.amazon.com/sagemaker/model-monitor/

https://cloud.google.com/blog/topics/developers-practitioners/event-triggered-detection-data-drift-ml-workflows

Thursday, February 04, 2021

Process Discovery vs. Process Mining

The first step in automation is to gain a complete understanding of the current-state business processes. Often, enterprises do not have a knowledge base of all their existing processes, and hence identifying automation opportunities becomes a challenge. 

There are two techniques to address this challenge:

Process Discovery

Process Discovery tools typically record an end-user's interactions with business applications. These recordings are then intelligently transformed into process maps, BPMN diagrams and other technical documentation. The output from a Process Discovery tool can also serve as a foundation for building automation bots. Given below are some examples of Process Discovery tools:

Process Mining

Process Mining is all about extracting knowledge from event data. Today, all IT systems and applications emit event logs that can be captured and analysed centrally using tools such as Splunk or the ELK stack. As one can imagine, the biggest drawback of Process Mining is that it does not work if your legacy IT systems do not emit logs/events. 

By analysing event data, Process Mining tools can model business processes and also capture information on how each process performs in terms of latency, cost and other KPIs (derived from the log/event data). 
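
As a toy illustration of the idea (not a full Process Mining tool), here is a minimal pandas sketch that derives process variants and case durations from a flat event log; the event log and column names are hypothetical:

# Toy sketch: derive process variants and case durations from an event log.
# The event log below is entirely made up for illustration.
import pandas as pd

events = pd.DataFrame({
    "case_id":   ["C1", "C1", "C1", "C2", "C2", "C2", "C2"],
    "activity":  ["Receive Order", "Approve", "Ship",
                  "Receive Order", "Approve", "Rework", "Ship"],
    "timestamp": pd.to_datetime([
        "2021-02-01 09:00", "2021-02-01 11:00", "2021-02-02 10:00",
        "2021-02-01 09:30", "2021-02-01 15:00", "2021-02-02 09:00", "2021-02-03 12:00",
    ]),
})

events = events.sort_values(["case_id", "timestamp"])

# Process variant = the ordered sequence of activities for each case
variants = events.groupby("case_id")["activity"].apply(lambda a: " -> ".join(a))
print(variants.value_counts())   # how often each path through the process occurs

# A simple KPI: end-to-end duration (latency) per case
durations = events.groupby("case_id")["timestamp"].agg(lambda t: t.max() - t.min())
print(durations)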

Given below are some examples of Process Mining tools:


Tuesday, February 02, 2021

Ticket Triaging with AI

The link below is a comprehensive article that introduces "ticket triaging" and also explains how AI can be used to automate this important step. 

https://monkeylearn.com/blog/how-to-implement-a-ticket-triaging-system-with-ai/

Some snippets from the article:

"Automated ticket triaging involves evaluating and directing support tickets to the right person, quickly and effectively, and even sorting tickets in order of importance, topic or urgency. Triage powered by Natural Language Processing (NLP) technology is well-equipped to understand support ticket content to ensure the right ticket gets to the right agent.

NLP can process language like a human, by reading between the lines and detecting language variations, to make sense of text data before it categorizes it and allocates it to a customer support agent. The big advantages of using NLP is that it can triage faster, more accurately and more objectively than a human, making it a no-brainer for businesses when deciding to switch from manual triage to auto triage."
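
As a small illustration of the NLP piece, here is a minimal sketch of a ticket classifier built with scikit-learn; the ticket texts and categories are made up, and a production triaging system would need far more data and richer models:

# Toy sketch: route support tickets to a category using TF-IDF + logistic regression.
# The tickets and categories below are made up for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tickets = [
    "I cannot log in to my account",
    "Payment failed but my card was charged",
    "The app crashes when I open settings",
    "How do I reset my password?",
    "Refund not received for cancelled order",
    "Error 500 when uploading a file",
]
categories = ["account", "billing", "bug", "account", "billing", "bug"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(tickets, categories)

new_ticket = "My password reset link does not work"
print(model.predict([new_ticket])[0])   # e.g. routes the ticket to the 'account' queue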