
We are committed to sharing our experience with passionate people.

Pick the lecture that matches your needs and skills.

 

If you want to create a machine learning model but don’t have a computer that can handle the workload, Google Colab is the platform for you. Even if you have a GPU or a powerful machine, creating a local environment with Anaconda, installing packages, and resolving installation issues are a hassle. Colaboratory is a free Jupyter notebook environment provided by Google that offers free GPUs and TPUs, which solves all of these issues.

To start working with Colab, first log in to your Google account, then go to this link:

You may have already tried some cloud GPU computing before, so let’s talk about what is new here:
1. Set the runtime:
Click the “Runtime” dropdown menu and select “Change runtime type”. Choose Python 2 or Python 3 from the “Runtime type” dropdown menu; we will use Python 3.
2. Use a GPU or TPU: Click the “Runtime” dropdown menu and select “Change runtime type”. Then pick the accelerator you want (None, GPU, or TPU) from the “Hardware accelerator” dropdown menu.
3. Install additional libraries:
You can also install a new library in your working environment:
!pip install numpy
4. File hierarchy:
You can also see the file hierarchy by clicking “>” at the top left, below the control buttons (CODE, TEXT, CELL).
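Once the runtime is set, it can be useful to confirm that the accelerator is actually available. Here is a minimal check, assuming TensorFlow (which comes preinstalled in Colab); if you selected a TPU or no accelerator, the list will simply be empty.

import tensorflow as tf

# Lists the GPUs visible to TensorFlow; the list is empty when no GPU runtime is active.
gpus = tf.config.list_physical_devices('GPU')
print('GPUs available:', gpus if gpus else 'none - check "Runtime > Change runtime type"')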

Our First Model
Now that we have created our GPU environment, let’s train our first neural network. Implementing logic gates using neural networks helps us understand the mathematical computation by which a neural network processes its inputs to arrive at a certain output. This neural network will deal with the AND logic problem. The AND gate is a digital logic gate that gives a true output only when both of its inputs are true.

Structure
Although the AND problem is linearly separable and could in principle be solved with a single neuron, we will use a hidden layer here to introduce the concept of hidden layers. The neural network will consist of one input layer with two nodes (X1, X2); one hidden layer with two nodes; and one output layer with one node (Y).

Activation Function
To implement the AND gate, I will be using a sigmoid neuron as the node type in the neural network. The characteristics of a sigmoid neuron are:
1. It can accept real values as input.
2. The value of its activation is equal to the weighted sum of its inputs, i.e. ∑ wᵢxᵢ.
3. The output of the sigmoid neuron is obtained by passing this activation through the sigmoid function, also known as the logistic function. The sigmoid function is a continuous function that outputs values between 0 and 1.
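To make this concrete, here is a minimal sigmoid neuron in NumPy. The weights and bias below are hypothetical values chosen by hand so that the neuron behaves like an AND gate; they are not the result of training.

import numpy as np

def sigmoid(z):
    # Logistic function: maps any real value into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_neuron(x, w, b):
    # Output is the sigmoid of the weighted sum of the inputs plus a bias.
    return sigmoid(np.dot(w, x) + b)

# Hand-picked (hypothetical) weights and bias that approximate an AND gate.
w = np.array([10.0, 10.0])
b = -15.0
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, round(float(sigmoid_neuron(np.array(x), w, b)), 3))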


The Learning Process:
The knowledge of a neural network is stored in the interconnections between its neurons, i.e. the weights. A neural network learns by updating its weights according to a learning algorithm that helps it converge to the expected output. The learning algorithm is a principled way of changing the weights and biases based on the loss function.

The Loss Function:
Our goal is to find the weight vector corresponding to the point where the error is at its minimum, i.e. the point where the gradient of the error is zero. The loss function of the sigmoid neuron is the squared error loss, and it guides the learning process.
That was a brief description of this network; see how to implement it without any additional libraries (besides NumPy) here.
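For a feel of what the linked implementation does, here is a minimal NumPy sketch of gradient descent on the squared error loss. It is deliberately simplified to a single sigmoid neuron (rather than the full hidden-layer network described above), and the learning rate and epoch count are illustrative choices only.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# AND truth table: inputs X and expected outputs y.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

rng = np.random.default_rng(0)
w = rng.normal(size=2)   # weights
b = 0.0                  # bias
lr = 1.0                 # learning rate (illustrative)

for epoch in range(5000):
    # Forward pass: weighted sum followed by the sigmoid activation.
    out = sigmoid(X @ w + b)
    # Gradient of the squared error loss w.r.t. the pre-activation.
    grad = (out - y) * out * (1 - out)
    # Gradient descent updates for the weights and the bias.
    w -= lr * (X.T @ grad) / len(X)
    b -= lr * grad.mean()

print(np.round(sigmoid(X @ w + b), 3))  # should approach [0, 0, 0, 1]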

 

 

After training our first machine learning model from basic foundations, it’s time to use a slightly more advanced tool to tackle more challenging problems. The concept is technically the same: we have to feed in the data, normalize it, and pass the knowledge back and forth through the network until the loss function is minimized.
In this lesson we will build our first classifier for a dataset called MNIST, which consists of tens of thousands of handwritten digits. We will go through the training step by step: look through the data, train our model, and store it for future use. Without further intro, let’s get into the colab.
Bring your cup of coffee and let’s run this colab!
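If you would like a preview of what the notebook does, the sketch below shows one common way to train such a classifier, assuming Keras (bundled with TensorFlow in Colab); the architecture and hyperparameters in the actual notebook may differ.

import tensorflow as tf

# Load MNIST and scale the pixel values to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A small fully connected classifier over the flattened 28x28 images.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))
model.save('mnist_classifier.h5')  # store the model for future use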

 

 

Unsupervised learning is a type of self-organized learning that helps find previously unknown patterns in a data set without pre-existing labels. It is also known as self-organization and allows modeling the probability densities of the given inputs. It is one of the three main categories of machine learning, along with supervised and reinforcement learning. The autoencoder is the main component used in unsupervised learning.
Unsupervised learning algorithms allow you to perform more complex processing tasks than supervised learning, although unsupervised learning can be more unpredictable than other learning methods. Unsupervised machine learning finds all kinds of unknown patterns in data, and it can also help you find features that are useful for categorization.
Some applications of unsupervised machine learning techniques are:
Clustering automatically splits the dataset into groups based on their similarities; anomaly detection can discover unusual data points in your dataset, which is useful for finding fraudulent transactions; and association mining identifies sets of items which often occur together in your dataset.
Without further delay, let’s learn how to build and run our first unsupervised network. Let’s get into the colab; click here to start.
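As a preview, here is a minimal autoencoder sketch in Keras (an assumption; the notebook may use a different framework or architecture). The encoder compresses each unlabeled MNIST image into a small code and the decoder learns to reconstruct the original from it, so the network is trained on its own inputs rather than on labels.

import tensorflow as tf

# Use MNIST images as unlabeled data: the network learns to reconstruct its input.
(x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784) / 255.0
x_test = x_test.reshape(-1, 784) / 255.0

# Encoder: 784 pixels -> 32-dimensional code. Decoder: 32 -> 784 reconstruction.
autoencoder = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation='relu', input_shape=(784,)),
    tf.keras.layers.Dense(784, activation='sigmoid'),
])
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
autoencoder.fit(x_train, x_train, epochs=10, validation_data=(x_test, x_test))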

 

 

Convolutional Neural Networks are very similar to the ordinary Neural Networks from the previous lessons: they are made up of neurons that have learnable weights and biases. Each neuron receives some inputs, performs a dot product, and optionally follows it with a non-linearity. The whole network still expresses a single differentiable score function, from the raw image pixels on one end to class scores at the other. And it still has a loss function (e.g. SVM/Softmax) on the last (fully-connected) layer, and all the tips and tricks we developed for learning regular Neural Networks still apply.
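To see what this looks like in code, here is a minimal convolutional classifier sketched in Keras (an assumed framework; the notebook may use a different one). Convolution and pooling layers extract features from the raw pixels, and a final dense layer with softmax produces the class scores.

import tensorflow as tf

# A small convolutional classifier for 28x28 grayscale images.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation='softmax'),  # class scores
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()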

Let’s learn more through our CNN colab here.

 

 

In this series of lectures, we covered the basics of machine learning: methods and designs, problems, and many solutions for overcoming training difficulties. In this session we will go a little further and discuss the master models that scientists have developed and how to use them to solve many of the challenges in the wild. Let’s get started with this new topic using this Colab.
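The usual way to reuse such a model is transfer learning: load a pretrained network, freeze its weights, and train a new classification head on your own data. The sketch below assumes Keras and the MobileNetV2 ImageNet weights, with a hypothetical 5-class problem; the notebook may use a different pretrained model.

import tensorflow as tf

# Load a network pretrained on ImageNet, without its original classification head.
base = tf.keras.applications.MobileNetV2(input_shape=(224, 224, 3),
                                         include_top=False,
                                         weights='imagenet',
                                         pooling='avg')
base.trainable = False  # freeze the pretrained weights

# Add a new head for a hypothetical 5-class problem; only this head is trained.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(5, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])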

 

 

Deep learning algorithms have solved several computer vision tasks with an increasing level of difficulty. We previously tackled image classification and image reconstruction, and today we tackle semantic segmentation. The image semantic segmentation challenge consists of classifying each pixel of an image (or just several of them) into an instance, where each instance (or category) corresponds to an object or a part of the image (road, sky). This task is part of the concept of scene understanding: how a deep learning model can better learn the global context of visual content. Let’s get started with this new topic using this Colab.
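To make the per-pixel idea concrete, here is a minimal fully convolutional sketch in Keras (an illustrative assumption, not the architecture used in the notebook). Instead of one label per image, the network outputs a class probability for every pixel, keeping the spatial size of the input.

import tensorflow as tf

NUM_CLASSES = 3  # hypothetical categories, e.g. background, road, sky

# A tiny fully convolutional network: per-pixel class probabilities
# with the same spatial size as the input image.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, padding='same', activation='relu',
                           input_shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(32, 3, padding='same', activation='relu'),
    tf.keras.layers.Conv2D(NUM_CLASSES, 1, padding='same', activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
model.summary()  # output shape: (None, 128, 128, NUM_CLASSES)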

 
