Thoughts and Theory

How graph theory, cellular automata and deep learning are silently coming together to shape the future of spatial modelling.

Photo by Myriam Jessier on Unsplash

The topic under discussion is very broad, as we will need to cover the entire scope of spatial analysis before projecting into the future; after all, the first thing you need to predict the future is knowledge of the past. So I will progressively narrow the scope of discussion to the area of spatial analysis that has intrigued me the most. I have been involved in numerous deep learning projects, and I also have some experience in modelling urban processes as temporal graphs. This combination of exposure gave me a particular perspective, and I frequently try to compare all these paradigms (just inside…


Hurdles to model generalization and potential solutions to the problem.

Image by Author

Machine Learning has been a major success story in recent times, but we are still only beginning to understand how these models work and what the risks of deploying them in public sectors are. Neural networks in particular are low-bias, high-variance machines, so their inability to generalize as expected to unseen data is a notorious shortcoming for their practical usage. Making models transfer well to new domains or changing environments remains one of the hardest challenges in machine learning.

Understanding the generalization performance of deep learning models…


Putting the invariance-by-design approach into practice in an image classification setting.

Image by Author

Robust Deep Learning

This article is part of a series that addresses Robust Deep Learning. We have talked about model generalization, robustness, invariance, and causality in detail in previous articles. In this article, we provide an empirical demonstration of how an invariance-based approach can help ensure model robustness. To best appreciate the content, read these articles:

  1. Invariance, Causality, and Robust Deep Learning
  2. Domain Randomization: future of robust modeling
  3. Rethinking Data Augmentations: A Causal Perspective
  4. Systematic Approach to Robust Deep Learning

This article explores a case study where we apply…


Data augmentation is a mainstream tool in Machine Learning, but do we really understand it?

Image by Author

Let us look at how various sources define data augmentation.

Data augmentation in data analysis are techniques used to increase the amount of data by adding slightly modified copies of already existing data or newly created synthetic data from existing data. It acts as a regularizer and helps reduce overfitting when training a machine learning model. ~ Wikipedia

Modern machine learning models, such as deep neural networks, may have billions of parameters and require massive labeled training datasets — which are often not available. The technique of artificially expanding labeled training datasets — known as data augmentation — has quickly…
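The quoted definitions can be sketched in a few lines of NumPy: expand a batch of images with slightly modified copies of the existing data. (The `augment` helper, its noise level, and the toy batch shape below are illustrative choices, not taken from the article.)

```python
import numpy as np

def augment(images, rng=None):
    """Expand a batch with slightly modified copies of existing data:
    horizontally flipped versions plus lightly noised versions."""
    rng = np.random.default_rng(0) if rng is None else rng
    flipped = images[:, :, ::-1]                                   # horizontal flip
    noised = np.clip(images + rng.normal(0, 0.02, images.shape), 0.0, 1.0)
    return np.concatenate([images, flipped, noised], axis=0)

batch = np.random.default_rng(1).random((8, 32, 32))   # 8 toy grayscale images
augmented = augment(batch)
print(augmented.shape)                                 # (24, 32, 32): 3x the data
```

In a real training pipeline these transforms would typically be applied on the fly per epoch (as frameworks like torchvision do), rather than materializing the expanded dataset.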


Learn how to leverage this tool to unlock the true potential of synthetic data for Machine Learning!

Image by Author

Domain Randomization

Domain randomization is a systematic approach to the data generation process that aims to enhance the generalization of machine learning algorithms to new environments.

Domain randomization is an approach where one tries to find a representation that generalizes across different environments, called domains. ~ Intervention Design for Effective Sim2Real Transfer

The purpose of domain randomization is to provide enough simulated variability at training time that, at test time, the model is able to generalize to real-world data. …
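A minimal sketch of that idea: when generating synthetic training samples, randomize the nuisance properties of the scene (background, position, lighting) while keeping the object of interest fixed. The `render_sample` "renderer" below is a hypothetical stand-in for a real simulator, invented for illustration.

```python
import numpy as np

def render_sample(rng):
    """Hypothetical renderer: draws a fixed 'object' (a bright square)
    on a canvas whose nuisance properties are randomized per sample."""
    canvas = np.full((32, 32), rng.uniform(0.0, 0.6))  # random background shade
    x, y = rng.integers(0, 24, size=2)                 # random object position
    canvas[y:y + 8, x:x + 8] = rng.uniform(0.7, 1.0)   # random object intensity
    return canvas * rng.uniform(0.8, 1.2)              # random global lighting

rng = np.random.default_rng(0)
dataset = np.stack([render_sample(rng) for _ in range(100)])
print(dataset.shape)  # (100, 32, 32)
```

A model trained on such a dataset cannot latch onto any one background or lighting condition, which is exactly the variability the definition above calls for.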


Is all the craze over new and more complex model architectures justified?

Source

Two basic components of all AI systems are the data and the model; both go hand in hand in producing the desired results. In this article we talk about how the AI community has been biased toward putting more effort into the model, and see how that is not always the best approach.

We all know that machine learning is an iterative process, because machine learning is largely an empirical science. You do not jump to the final solution by thinking about the problem, because you cannot easily articulate what the solution should look like. Hence you empirically move towards better solutions…


Upfront causal thinking to design Robust Neural Networks that work well in new environments

As the title implies, we are in pursuit of robust deep learning. Let us define it first: robust deep learning results in models that work well in new and unseen environments that differ from the training distribution. Lack of generalization to different environments is the major problem that neural networks face, despite the huge strides made in the last decade.

Despite recent advancements in machine learning fueled by deep learning, studies like Azulay and Weiss have shown that deep learning methods may not generalize to inputs from outside of their training distribution. In safety-critical fields like…


Why do Neural networks fail to generalize to new environments, and how can this be fixed?

Image by Author, inspired by Source

Many real world data analysis problems exhibit invariant structure, and models that take advantage of this structure have shown impressive empirical performance, particularly in deep learning. ~ On the Benefits of Invariance in Neural Networks

Most machine learning problems have an invariant structure. Image classification tasks, for example, are usually invariant to translation, rotation, scale, viewpoint, illumination, etc. An example from a traffic sign class is shown below.
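Translation invariance, in particular, is easy to demonstrate: a feature computed by global average pooling ignores spatial position entirely, so it is identical for an image and any translated copy of it. (The toy image and the `global_pool_features` helper below are illustrative, not from the article.)

```python
import numpy as np

def global_pool_features(image):
    """Global average pooling: the spatial mean is a feature that is
    invariant to translation, because it ignores position entirely."""
    return image.mean()

img = np.zeros((32, 32))
img[4:12, 4:12] = 1.0                               # a toy 'traffic sign' patch
shifted = np.roll(img, shift=(10, 7), axis=(0, 1))  # translated copy

print(global_pool_features(img) == global_pool_features(shifted))  # True
```

Convolutional networks bake a related property (translation equivariance) into their architecture; invariances like rotation or illumination are usually encouraged through augmentation instead.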


The susceptibility of neural networks to overfit on irrelevant properties, and the importance of randomizing them.

Image by Author

Robust Deep Learning

This article is part of a series that addresses Robust Deep Learning. We have talked about model generalization, robustness, invariance, and causality in detail in previous articles. In this article we look at how Data Augmentations and Domain Randomization help ensure model robustness. To best appreciate the content, read these articles:

  1. Invariance, Causality, and Robust Deep Learning
  2. Domain Randomization: future of robust modeling
  3. Rethinking Data Augmentations: A Causal Perspective
  4. Systematic Approach to Robust Deep Learning

In this tutorial you will learn the following:

  1. Understand that neural networks can overfit to properties of the data which have no association…


Problems caused by spurious correlations, and a data-centric approach to dealing with them and training Robust Models.

Robust Deep Learning

This article is part of a series that addresses Robust Deep Learning. We have talked about model generalization, robustness, invariance, and causality in detail in previous articles. In this article we show practically how Data Augmentations and Domain Randomization help ensure model robustness. To best appreciate the content, read these articles:

  1. Invariance, Causality, and Robust Deep Learning
  2. Domain Randomization: future of robust modeling
  3. Rethinking Data Augmentations: A Causal Perspective
  4. Systematic Approach to Robust Deep Learning

In this tutorial you will learn the…

Urwa Muaz

Computer Vision Practitioner | Data Science Graduate, NYU | Interested in Robust Deep Learning
