◢ ◤

On 20 April 2018, I was in the third year of my undergrad, in the middle of exams, when I heard the news of Avicii passing away.

I cried, of course. I remember listening to ‘Wake Me Up’ on loop that day. I know he still lives in our hearts, and his music is something very close to a lot of us.

His songs have this rare beauty that I don’t even want to try to articulate in a few words.

He was 28, rich, famous, professionally successful, one of the most amazing songwriters, and on vacation in Muscat. And he took his own life.

Imagine what he must have been going through, the mental trauma he might have been facing.

I can’t even imagine what might have been going on in his head. But I feel sorry that although he spoke so openly about his problems and begged for help, he DID NOT get it, at least not in time. Hopefully, because of him, millions of others now will.

And that’s not all: every year in Sweden, more than 1,000 boys and men alone take their own lives. About 14 percent of the population has at some point had thoughts of taking their own life. Suicide is the leading cause of death among men up to the age of 44.

Suicide can be an impulsive act, and it is not just the loss of ONE person; imagine the trauma for those left behind. It can break families and leave them shattered forever.

Suicide can be triggered, for example, by a difficult life event such as a separation, death, illness or financial problems. We can pay attention and check in when someone close to us changes behaviour: withdraws, becomes absent or aggressive, or drinks more alcohol.

Being able to talk about mental health, with freedom and a little comfort, can make a lot of difference. It can save lives.

If you think you are suffering yourself and need some help, please don’t keep your thoughts to yourself. There’s help to be had. You’re not alone.

Let’s start by talking about it: discussing mental health, having a quick chat or a fika. Let there be agreements and some disagreements. Maybe that’s the best way to start?

The iconic Ericsson Globe in Stockholm has become the Avicii Arena, part of a bigger project to prevent suicide and mental illness among young adults. ◢ ◤

Please visit https://aviciiarena.se to see the amaaaazing work they are doing!

This is the Avicii Arena. Source: https://aviciiarena.se/en

The work the Tim Bergling Foundation (https://www.timberglingfoundation.org) is doing is also great.

All I can say is: “I can’t tell where the journey will end, but I know where to start” (~‘Wake Me Up’, Avicii)

Tack för att du läste, låt oss prata om mental hälsa. Nå mig när som helst! Hejdå! (Thank you for reading; let’s talk about mental health. Reach out to me anytime! Bye!)

Random Forest : Supervised Learning Algorithm

You must have solved at least one probability problem in high school where you had to find the probability of drawing a specific coloured ball from a bag containing balls of different colours, given the number of balls of each colour. Random forests are simple to learn with this analogy in mind.

Random forests (RF) are basically a bag containing n Decision Trees (DT), each with a different set of hyper-parameters and trained on a different subset of the data. Let’s say I have 100 decision trees in my random forest bag! Because these trees have different hyper-parameters and different subsets of training data, their predictions can vary a lot. Suppose I have somehow trained all 100 trees on their respective subsets of data. I then ask all 100 trees in my bag for their prediction on my test data. Since we need a single decision per test example, we take a simple vote: we go with whatever class the majority of the trees predicted for that example.
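
As a minimal sketch of this “bag of 100 trees, majority vote” idea (assuming scikit-learn; the toy dataset and parameters here are illustrative, not from the text):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy dataset: 200 examples, 8 features, 2 classes (illustrative)
X, y = make_classification(n_samples=200, n_features=8, random_state=42)

# A "bag" of 100 decision trees; each tree sees a bootstrap sample
# of the rows and a random subset of features at every split
forest = RandomForestClassifier(n_estimators=100, random_state=42)
forest.fit(X, y)

# Each tree makes its own prediction on a test example...
votes = [int(tree.predict(X[:1])[0]) for tree in forest.estimators_]

# ...and the forest reports the winning side (scikit-learn actually
# averages tree probabilities, which behaves like a soft majority vote)
print(len(votes))                 # 100 votes, one per tree
print(forest.predict(X[:1])[0])
```

Note the design choice in the last comment: scikit-learn averages per-tree probabilities rather than counting hard votes, but with many trees the effect is essentially the same.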

Pictorially, an example is passed through all n trees, and the final prediction is made by taking a vote across the n individual predictions.

Random forests can be used for both regression and classification tasks, but we will discuss classification because it is more intuitive and easier to understand. Random forest is one of the most used algorithms because of its simplicity and stability.

The word “random” comes into the picture when building the subsets of data for the trees. A subset is made by randomly selecting x features (columns) and y examples (rows) from the original dataset of n features and m examples.
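
A minimal NumPy sketch of that random subset construction (the dataset sizes, x and y below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

m, n = 100, 10            # m examples (rows), n features (columns)
data = rng.normal(size=(m, n))

y_rows, x_cols = 80, 4    # y examples and x features per subset

# Rows are drawn with replacement (a bootstrap sample); columns are
# a random subset of the features for one tree's training data
row_idx = rng.choice(m, size=y_rows, replace=True)
col_idx = rng.choice(n, size=x_cols, replace=False)
subset = data[np.ix_(row_idx, col_idx)]

print(subset.shape)  # (80, 4)
```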

Random forests are more stable and reliable than a single decision tree. It’s like saying it’s better to take a vote from the whole cabinet than to simply accept the decision of the PM alone.

Since random forests are nothing but collections of decision trees, it is essential to understand decision trees first. Sharpen that up if you haven’t already!

Link: https://smritimishra.in/2021/05/13/decision-tree-algorithm-supervised-learning/

In general, the more trees in the forest, the more robust it is. In the same way, a random forest classifier with a higher number of trees tends to give more stable and accurate results (up to a point).

Why the Random Forest Algorithm?

A random forest is a model made up of many decision trees. Rather than simply averaging the predictions of the trees (which we could call a “forest”), this model uses two key concepts that give it the name random:

  1. Random sampling of training data points when building trees
  2. Random subsets of features considered when splitting nodes

The reasons we use the random forest algorithm are:

  • The same random forest algorithm, or random forest classifier, can be used for both classification and regression tasks.
  • A random forest classifier can handle missing values.
  • With more trees in the forest, a random forest classifier is much less prone to overfitting than a single tree.
  • A random forest classifier can also model categorical values.

Random Forest Vs Decision Tree

Let’s explore this with an easy example.

Suppose you have to buy a packet of $5 cupcakes, and you have to pick one among several brands.

If you use a decision tree, it follows a single chain of checks: it looks at the $5 packets, checks which ones are sweet, and probably picks the best-selling one. You decide to go for the $5 chocolate cupcakes. You are happy!

But your friend uses the random forest approach: he makes several independent decisions and then goes with the majority. He compares strawberry, vanilla, blueberry and orange flavoured cupcakes, notices that one particular $5 packet consistently comes out ahead, and that it is the vanilla-chocolate one. He buys the vanilla-chocolate cupcakes. He is the happiest, while you are left regretting your decision.

Decision Tree :

A decision tree is a supervised learning algorithm used in machine learning. It works for both classification and regression problems. As the name suggests, it is like a tree with nodes, where the branches depend on a number of criteria. It splits the data into branches like these until it achieves a threshold unit. A decision tree has a root node, children nodes, and leaf nodes.

Recursion is used for traversing the nodes; no other algorithm is needed. It handles data accurately, works best for a linear pattern, and handles large data easily in less time.

Random Forest :

It is also a supervised learning method, but a very powerful and widely used one. The basic difference is that it does not rely on a single decision: it assembles randomized decisions from several trees and makes the final decision based on the majority.

It does not search for the single best prediction. Instead, it makes multiple randomized predictions. This adds diversity, which makes the final prediction much smoother.

You can think of a random forest as a collection of multiple decision trees!

Bagging is the process used to build random forests, with the individual trees working in parallel.

What is Bagging?

  • Take a bootstrap sample of the training data set
  • Build a decision tree on it
  • Repeat the process a fixed number of times
  • Take the majority vote; the prediction that wins is your decision
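
The steps above can be sketched from scratch (assuming scikit-learn decision trees; the toy dataset and the number of trees are illustrative):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=6, random_state=0)
rng = np.random.default_rng(0)

# Steps 1-3: repeatedly bootstrap the training set and fit a tree
trees = []
for _ in range(25):
    idx = rng.choice(len(X), size=len(X), replace=True)
    trees.append(DecisionTreeClassifier(random_state=0).fit(X[idx], y[idx]))

# Step 4: collect every tree's vote and take the majority per example
all_votes = np.stack([t.predict(X[:5]) for t in trees])   # shape (25, 5)
majority = (all_votes.mean(axis=0) > 0.5).astype(int)
print(majority.shape)  # (5,)
```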

What is Bootstrapping?

Bootstrapping is randomly choosing samples from the training data with replacement: a random procedure.

Random Forest Step by Step (in simple terms) :

  • Randomly choose a subset of the data and conditions
  • Determine the root node
  • Split
  • Repeat
  • You get a forest

Advantages of Random Forest:

  1. Powerful and highly accurate
  2. No need for normalization
  3. Can handle many features at once
  4. Trees can be trained in parallel

Disadvantages of Random Forest:

  1. Sometimes biased toward certain features
  2. Slow
  3. Less suitable when a simple linear method fits the data
  4. Not good for very high-dimensional data

P.S. – A decision tree is much simpler than a random forest. A decision tree combines a few decisions, whereas a random forest combines several decision trees; thus, it is a longer and slower process.

A decision tree, on the other hand, is fast and operates easily on large data sets, especially linear ones. The random forest model needs rigorous training, and when you are putting a project together you might need more than one model; the more random forest models, the more time.

It depends on your requirements. If you have less time to work on a model, you are bound to choose a decision tree. However, stability and reliable predictions are in the basket of random forests. 

A really good article on the implementation of random forests is: https://towardsdatascience.com/an-implementation-and-explanation-of-the-random-forest-in-python-77bf308a9b76

Naive Bayes Classifier : Supervised Learning Algorithm

Naive Bayes is a classification algorithm (based on Bayes’ theorem) that assumes that the presence of a particular feature in a class is unrelated to the presence of any other feature.

A Naive Bayes classifier and collaborative filtering together create a recommendation system that can filter very useful information and provide good recommendations to the user. Naive Bayes is widely used in spam filters and in text classification generally, due to its high success rate in multinomial classification under the independence assumption. It is very fast and can be used to solve problems in real time.

Let’s imagine a fruit may be considered an orange if it is orange-coloured, round, and about 3 inches in diameter. Even if these features depend on each other or on the existence of the other features, all of these properties independently contribute to the probability that this fruit is an orange, and that is why the method is known as ‘Naive’.

Let’s dive into the formulae and mathematics behind it, shall we?

Naive Bayes is basically us putting a naive assumption on top of Bayes’ rule in probability to make life simple. Bayes’ rule is the same concept most of us have seen at one place or another, very similar to high-school mathematics! (I hope we paid attention then, haha)

Bayes Rule:

P(Y|X) = P(Y) * P(X|Y) / P(X)

Now, let’s discuss how we use Naive Bayes in classification problems and what this “naive assumption” is!

So let’s assume we have a dataset with ‘n’ features; and we want to predict the value for Y. 

X can be represented as <X1, X2, …, Xn> and Y is a boolean variable that can take only 2 values. 

The naive assumption we make is: all Xi are conditionally independent given Y. This means that, given the value of Y, Xi doesn’t care what some other Xj is (with i != j). Just as in the example above, the features are colour, shape and diameter, but we assume them to be independent of each other once we know the fruit is an orange.

So, the term P(X|Y) on the right-hand side of the formula simply becomes a product of n terms, P(Xi | Y), where i runs from 1 to n.

To understand why this helps in classification problems, one must understand what is required when predicting probabilities for a target variable.

We want to predict the value of Y given a bunch of features, X. 

We want P(Y | X), now if we were to find this we would need the joint probability distribution of X and Y. This is the main problem as estimating the joint distribution is a difficult task with limited data. 

For n boolean features, we would need to estimate on the order of 2^n probabilities/parameters. By making the assumption of conditional independence, we bring the number of parameters down to linear in n:

roughly 2n conditional probabilities P(Xi = 1 | Y), plus the prior P(Y).

How to estimate them? We can either use Maximum Likelihood estimation or Maximum a posteriori estimation.
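
As a tiny worked sketch of the maximum likelihood route (the boolean dataset below is invented for illustration), the estimates are just frequency counts:

```python
import numpy as np

# Invented boolean dataset: columns are features X1..X3, y is the target
X = np.array([[1, 0, 1],
              [1, 1, 1],
              [0, 0, 1],
              [0, 1, 0],
              [1, 0, 0],
              [0, 0, 0]])
y = np.array([1, 1, 1, 0, 0, 0])

# Maximum likelihood estimates are simple frequency counts
p_y1 = y.mean()                          # P(Y=1)
p_x_y1 = X[y == 1].mean(axis=0)          # P(Xi=1 | Y=1) for each i
p_x_y0 = X[y == 0].mean(axis=0)          # P(Xi=1 | Y=0) for each i

# Classify a new example with the naive product of the n conditionals
x_new = np.array([1, 0, 1])
score1 = p_y1 * np.prod(np.where(x_new, p_x_y1, 1 - p_x_y1))
score0 = (1 - p_y1) * np.prod(np.where(x_new, p_x_y0, 1 - p_x_y0))
print(int(score1 > score0))  # 1 -> predicted class Y=1
```

(Notice that P(X3=1 | Y=0) comes out as exactly zero here; that is the “zero frequency” issue mentioned under the disadvantages, which smoothing fixes.)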

Real life example of Naive Bayes Algorithm

A real-life example of Naive Bayes is filtering spam emails. Naive Bayes classifiers are often used in text classification because they perform well on multi-class problems under the independence assumption.

A detailed article on how Naive Bayes Algorithm filters spam messages is : https://towardsdatascience.com/naïve-bayes-spam-filter-from-scratch-12970ad3dae7

Advantages of Naive Bayes

  • Easy and quick to implement for predicting the class of a test data set.
  • Performs well in multi-class prediction too.
  • If the assumption of independence holds, a Naive Bayes classifier performs better than other models like logistic regression, and you need less training data.
  • Performs great with categorical input variables.

Disadvantages of Naive Bayes

  • The ‘zero frequency’ problem, which can be handled using a smoothing technique such as Laplace smoothing.
  • Naive Bayes can be a bad probability estimator.
  • One of the major limitations of Naive Bayes is the assumption of independent predictors. In real life, it is almost impossible to get a set of predictors that are completely independent.

P.S. – I co-authored this blog with Sarthak Kathuria.

Ha en bra lördag! (Have a great Saturday!)

Decision Tree Algorithm: Supervised Learning

Decision trees are supervised learning algorithms that can be used for both classification and regression; however, they are more often used for solving classification problems. A decision tree is a tree-structured classifier, where internal nodes represent the features of a dataset, branches represent the decision rules, and each leaf node represents the outcome.

In a decision tree, there are two types of nodes: decision nodes and leaf nodes. Decision nodes are used to make a decision and have multiple branches, whereas leaf nodes are the outputs of those decisions and do not contain any further branches.

The decision tree is a graphical representation for getting all the possible solutions to a problem/decision based on given conditions. Hence, it is more interpretable and easy to understand.

Aim of a Decision Tree Algorithm

A decision tree aims to create a training model that can predict the class or value of the target variable by learning simple decision rules inferred from the training data.

In a decision tree, when we want to predict a class label, we begin at the tree’s root node. We compare the value of the root attribute with the record’s attribute and, based on the comparison, follow the branch corresponding to that value and jump to the next node.

Important Terminologies related to Decision Trees

  1. Root Node: Represents the entire population or sample, which further gets divided into two or more homogeneous sets.
  2. Splitting: The process of dividing a node into two or more sub-nodes.
  3. Decision Node: When a sub-node splits into further sub-nodes, it is called a decision node.
  4. Leaf / Terminal Node: Nodes that do not split are called leaf or terminal nodes.
  5. Pruning: Removing sub-nodes of a decision node; you could say it is the opposite of splitting.
  6. Branch / Sub-Tree: A subsection of the entire tree is called a branch or sub-tree.
  7. Parent and Child Node: A node that is divided into sub-nodes is called the parent node of those sub-nodes, whereas the sub-nodes are the children of the parent node.

Decision trees classify the examples by sorting them down the tree from the root to some leaf/terminal node, with the leaf/terminal node providing the classification of the example.

Each node in the tree acts as a test case for some attribute, and each edge descending from the node corresponds to the possible answers to the test case. This process is recursive in nature and is repeated for every subtree rooted at the new node.
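
As a minimal scikit-learn sketch of this root-to-leaf process (the iris dataset and the depth limit are illustrative choices, not from the text):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()

# Fit a small tree; max_depth=2 keeps the learned rules readable
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(iris.data, iris.target)

# The learned decision rules, printed from the root node down to the leaves
print(export_text(tree, feature_names=list(iris.feature_names)))

# Classifying a record means following one root-to-leaf path
print(tree.predict(iris.data[:1])[0])  # class of the first example
```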

Assumptions while creating Decision Tree

Below are some of the assumptions we make while using Decision tree:

  • In the beginning, the whole training set is considered as the root.
  • Feature values are preferred to be categorical. If the values are continuous then they are discretized prior to building the model.
  • Records are distributed recursively on the basis of attribute values.
  • Order to place attributes as root or internal node of the tree is done by using some statistical approach.

Decision Trees follow Sum of Product (SOP) representation. The Sum of product (SOP) is also known as Disjunctive Normal Form. For a class, every branch from the root of the tree to a leaf node having the same class is a conjunction (product) of values, different branches ending in that class form a disjunction (sum).

The primary challenge in decision tree implementation is identifying which attribute to use as the root node and at each level. Handling this is known as attribute selection. We have different attribute selection measures to identify the attribute that should be used as the root node at each level.

Types of Decision Trees

Types of decision trees are based on the type of target variable we have;

  1. Categorical Variable Decision Tree: Decision Tree with a categorical target variable.
  2. Continuous Variable Decision Tree: Decision Tree with continuous target variable.

A Real Life Example

In colleges and universities, the shortlisting of a student can be decided based upon merit scores, attendance, overall score, etc. A decision tree can also decide the overall promotional strategy for faculty members at a university.

Advantages of Decision Tree

  1. A decision tree model is very interpretable and can be easily represented to senior management and stakeholders.
  2. Preprocessing of data such as normalization and scaling is not required which reduces the effort in building a model.
  3. A decision tree algorithm can handle both categorical and numeric data and is quite efficient compared to other algorithms.
  4. A decision tree can be robust to missing values in the data, which is why it is considered a flexible algorithm.

Disadvantages of Decision Tree

  1. A decision tree works badly for regression, as it fails to perform well when the data has too much variation.
  2. A decision tree can be unstable and unreliable: a small alteration in the data can push the tree into a poor structure, which may affect the accuracy of the model.
  3. If the data is not properly discretized, a decision tree algorithm can give inaccurate results and will perform badly compared to other algorithms.
  4. Calculations become complex when outcomes are linked, and training a model can be time-consuming.

Empathy in AI

Artificial empathy (AE) or computational empathy is the development of AI systems which can detect and respond to human emotions in an empathic way.

Empathy is the ability to understand or feel what the other person is experiencing by putting oneself in another’s position; and can be of different types.

Now one might ask if this is encouraging or terrifying?

I would say that, even though such technology can be initially perceived as threatening by many people, it has some very interesting use cases in the health care sector. 

From the caregiver perspective, for instance: caring for mental health patients is emotionally difficult for nurses and doctors. They report feeling burnt out, performing emotional labour above and beyond the requirements of paid labour, which compromises the quality of care. AI robots could use artificial empathy to care for dementia patients without feeling “burned out”. They could act as a go-between between doctors or nurses and their patients, work closely with doctors to gather information and refine treatment plans, and work with nurses to monitor patients and engage in day-to-day care. At the same time, dementia patients who receive consistent empathetic care report better outcomes.

Also, according to the emotional intelligence pyramid, empathy is in all the layers of the upper pyramid above emotion recognition.

This addresses the missing apex of Maslow’s hierarchy: beyond the top level of self-actualization, there needs to be self-transcendence and emotional unity.

P.S. : Maslow’s hierarchy of self-actualization represents the highest level or stage in his model of human motivation: the ‘Hierarchy of Needs’. According to the hierarchy of needs, self-actualization represents the highest-order motivations, which drive us to realize our true potential and achieve our ‘ideal self’.

There are three components that describe empathy: cognitive empathy, affective empathy and somatic empathy.

When it comes to AI moving towards a more human-like intelligence, empathy will be essential. Just as human intelligence differs from AI, artificial empathy differs from human empathy. Empathy can be learned, and AI can surely be equipped with artificial empathy in the years to come.

However, it could be scary if not used cautiously. Therefore, ethics in AI are also extremely important.

Ethics in AI refers to a set of values, principles, and techniques that employ widely accepted standards of right and wrong to guide moral conduct in the development and deployment of Artificial Intelligence technologies.

An interesting article around AI and empathy: https://www.forbes.com/sites/cognitiveworld/2019/12/17/empathy-in-artificial-intelligence/?sh=be1d08063270

Link to this preprint paper by Dan Zahavi from Københavns Universitet – University of Copenhagen: https://www.academia.edu/48847511/Empathy_Alterity_Morality

(all) At Sea

I saw a man, walking towards me, very clearly, vividly. I stood there – in silence, shocked. I knew him. But I hadn’t seen him for 2 years now, exactly two years. How could he walk and come right at me? 

We’re not in the same city, or country or continent, I think. 

But he’s right there, how? I look at the face closely, his eyes staring at me. I can’t take it anymore. I have been scared of him before. He used to stalk me, send me emails, call me from several different numbers, haunt me in every way possible. But I saw him 2 years back, when he was moving. I did receive some anonymous emails during that time. My mind always thought it was him, at a point my mind dreaded opening emails, fearing it might be him. 

His presence in any way was, and still is, scary and unpleasant. 

But how could I see him again? Why is this happening? I sweat, I’m uncomfortable. 

I know I can’t take him. I want to be away from him. 

I run, and run and run. Surprisingly there’s no sound, it’s just images. His presence, me running, the birds I see around. Where did the sound go? How can I not hear the bird chirp or the wind blow or his voice that I dread. 

This makes me uncomfortable, I sweat. I think, I sweat. I’m confused, definitely. 

I see his face again, coming closer. 

This time I think I scream, I feel a touch. I am scared, I scream and ask him to go away. 

I feel the touch again on my left shoulder, firmer this time. Now I start hearing some sounds, some words, I hear my name. 

But wait; it’s a different voice, a familiar voice, a voice I love. 

I open my eyes slowly and realise it’s my brother, asking if I’m fine. I say yes.

I see tears down my eyes. 

I ask him to go to sleep and I lay awake in bed. I lay awake until I’m tired of it. 

It’s a pretty regular night for me.

Most of the time it’s difficult to tell dreams from reality.

But if the dream is affecting my reality so much isn’t it reality too? 

What’s real, what’s not? Why is it so confusing? It wasn’t like this before, before I had certain issues with my mental health. 

I ask myself sometimes, the bags under my eyes, the confusion in my head, the difficulty trusting people, the fear of making friends; is that something I deserved? Most of the time my mind says NO. 

But sometimes when I’m in a reality that might not be real, the answer is different. 

That answer makes me question myself again. Do I deserve it? 

Does anyone deserve it? And yet again, in the more ‘normally-accepted’ reality, that most people live in; my mind screams NO. 

I think it’s a loop. It’s not easy to come out of it. 

I guess that’s what mental health disorders can do to you sometimes. 

It robs you of your creativity, objectivity and identity. It restricts you from seeing the hope at the other side of the tunnel. It makes you feel worthless at times, and guilty about things happening around you.

When you dream, and you wake up and you have no idea about what’s real and what’s not. Those hallucinations, they are scary, when you have to double check each and every thing (because you can’t tell between reality and perceptions). 

Sometimes, even when you know you are sleeping, it’s so vivid that your brain gets confused about whether it’s real or not. And you have to wake up every time something bad happens in the dream.

It leads to insomnia, it leads to tiredness, confusion between dream and reality. 

It always makes me wonder how complex is our brain? And are we doing enough for mental health?  

To anyone reading this and struggling with any mental health issues, I have a message for you: 

You are more than a disorder. You are an individual filled with ideas and energy. Don’t let that go to waste. 

We don’t try to cure cancer on our own, nor should we try to battle mental health disorders on our own. To quote one of Robin Williams’ movie personas, “You’ll have bad times, but it’ll always wake you up to the good stuff you weren’t paying attention to.” 

Everyone’s life has value, and mental illness does not diminish this. Please seek support and speak up if you are not feeling OK, and let’s build a resilient network where everyone feels safe.

SVM: Supervised Learning Algorithm

Now since we have already discussed linear regression and logistic regression algorithms in detail, it’s time to move on to Support Vector Machine (SVM). 

SVM is another simple yet crucial algorithm that every machine learning practitioner should have in their arsenal.

SVM is highly preferred by many as it produces significant accuracy with less computation power. 

SVM can be used for both regression and classification tasks, but it is most widely used for classification.

Quick heads up. I’d suggest you go through linear regression and logistic regression before this.

Done? Awesome! Let’s move on! 

How does SVM work? 

Let’s understand and visualize the basics of SVM using a simple example.

Let’s imagine we have two tags, red and blue, and our data has two features, x and y. We want a classifier that, given a pair of (x, y) coordinates, tells us whether the point is red or blue. We plot our already-labeled training data on a plane.

SVM takes these data points and outputs the hyperplane (in two dimensions, simply a line) that best separates the tags. This line is the ‘decision boundary’: anything that falls on one side of it we classify as blue, and anything on the other side as red.

However, what exactly is the best hyperplane? For SVM, it’s the one that maximizes the margin to both tags; in other words, the hyperplane (remember, a line in this case) whose distance to the nearest element of each tag is largest.
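
A minimal scikit-learn sketch of this red/blue setup (the two clusters of points below are invented for illustration):

```python
import numpy as np
from sklearn import svm

# Two illustrative clusters of (x, y) points, tagged red and blue
X = np.array([[1.0, 1.0], [1.5, 2.0], [2.0, 1.0],    # red cluster
              [5.0, 5.0], [5.5, 6.0], [6.0, 5.0]])   # blue cluster
y = ["red", "red", "red", "blue", "blue", "blue"]

# A linear SVM finds the maximum-margin line between the two tags
clf = svm.SVC(kernel="linear")
clf.fit(X, y)

print(clf.predict([[1.2, 1.5]])[0])  # falls on the red side
print(clf.predict([[5.8, 5.2]])[0])  # falls on the blue side
```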

Suggested video to learn exactly how this optimal hyperplane is found : https://www.youtube.com/watch?v=1NxnPkZM9bc

A simple real life example of SVM : Face Detection

SVM can be used to classify parts of an image as face or non-face. We use training data of (n x n)-pixel patches with two classes: face (+1) and non-face (-1).

Once the classes are defined, the algorithm extracts features from the pixels to decide face or non-face, and then draws a square boundary around faces based on pixel brightness. SVM classifies each image using the same process.

Jag hoppas att du gillade det! (I hope you liked it!)

The Relationship between our Consciousness and AI

I have always been extremely curious about how consciousness works in the human brain: how do people lose memory and overall functionality? Considering how little we know about the wiring of the human brain, with its roughly 86 to 100 billion neurons, understanding its patterns can sound like a herculean task. However, this is exactly where Artificial Intelligence steps in.

But we need to dive deeper into neurons before getting AI involved. Our nervous system detects what is going on inside our bodies as well as in our surroundings. On the basis of these detections, it decides how an individual needs to act. The network also memorizes the entire process as it happens, so it can be recalled later. For example, when we are about to fall, we might experience a slight increase in heartbeat and then, in the blink of an eye, take an action involuntarily.

The entire process relies on a sophisticated network of cells called neurons.

Our brains are far too complex for us to comprehend the objectivity behind each action. I read a book called “The Psychopath Inside” by James Fallon, in which he explains the brain in terms of a 3-by-3 Rubik’s cube (still difficult to understand and visualise without prior knowledge). This, in my view, is where AI jumps in, and it can be employed in a number of ways. Using AI we can produce new tools and applications that connect observations to theoretical principles, which would help us understand the complex patterns of the sophisticated machine that our brain is. Imagine an interface!

At a rational level, AI can help us visualise different patterns and find correlations with their underlying causes. For example, certain types of events overlapping in the brain cause people to lose memory for a short duration; this can be visualised and studied using Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs) to recognize cell extensions, cell components and synapses, and to distinguish them from each other.

CNNs and RNNs are different types of neural networks. RNNs are usually used with sequential data because an RNN has memory: it considers not just the current input but also previous inputs, and uses both to predict what comes next. That’s why they are often used in temporal analysis. CNNs are specialized neural networks for processing data with a grid-like input shape, such as the 2D matrix of an image. They often find use in image detection and classification, stock market prediction, etc.

During my studies at KTH Royal Institute of Technology, Stockholm, we implemented a Bayesian Confidence Propagation Neural Network (BCPNN) and sequence learning in a non-spiking attractor neural network.

To give you a better intuition about BCPNNs: they are inspired by Hebbian learning and Bayesian inference. The activations in the network represent confidence, i.e. the probability of the presence of input features or categories. The synaptic weights are based on estimated correlations, and the spread of activation corresponds to computing posterior probabilities. The model was originally proposed by Anders Lansner and Örjan Ekeberg at KTH Royal Institute of Technology.

This was used to analyse the behaviour of non-orthogonal sequences and to implement sequential overlap in the BCPNN model. Although the project was done under non-spiking, relatively abstract conditions, the research is important for understanding such networks’ abilities to learn and recall. During our research we also showed that the proposed network model enables encoding and may reproduce temporal aspects of the input, and that it offers internal control of the recall dynamics via gain modulation.

There’s so much about the brain we are yet to learn, and I am excited to learn and share more with you guys!

Ha en bra dag! (Have a nice day! – in Swedish)

Simple Logistic Regression: A Supervised Learning Algorithm

Machine learning and statistics are like Chandler and Joey: the best combination, you see?

So today let’s talk about another supervised learning algorithm which is absolutely based on statistics.

Enter Logistic Regression.

It is the go-to method for binary classification problems (problems with two class values). Hence it is used when the target is categorical.

For example: To predict whether a student passed (1) or not (0)

Therefore this is a classification algorithm.

The name logistic regression comes from the function used at the core of the method, the logistic function.

The logistic function (a.k.a. the sigmoid function) was developed by statisticians to describe the properties of population growth in ecology: rising quickly and maxing out at the carrying capacity of the environment. It is an S-shaped curve (you can see the graph below) that can take any real-valued number and map it into a value between 0 and 1, but never exactly at those limits.

1 / (1 + e^-value)


e : base of the natural logarithms (Euler’s number); value : the actual numerical value you want to transform. Below is a plot of the numbers between -5 and 5 transformed into the range 0 to 1 using the logistic function.
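The same transformation is easy to compute yourself; this short snippet evaluates the logistic function on the values between -5 and 5:

```python
import numpy as np

def sigmoid(value):
    """Logistic (sigmoid) function: 1 / (1 + e^-value),
    maps any real number into the open interval (0, 1)."""
    return 1.0 / (1.0 + np.exp(-value))

values = np.linspace(-5, 5, 11)
print(np.round(sigmoid(values), 3))
# sigmoid(0) is exactly 0.5; large negative inputs approach 0,
# large positive inputs approach 1, but neither limit is ever reached
```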

Let’s discuss the mathematical expression used to describe Logistic Regression.

Input values (x) are combined linearly using weights or coefficient values to predict an output value (y). A key difference from linear regression is that the output value being modeled is a binary value (0 or 1) rather than a numeric value.

An example logistic regression equation:

y = e^(b0 + b1*x) / (1 + e^(b0 + b1*x))


y : predicted output,

b0 : bias or intercept term

b1 : coefficient for the single input value (x).

Each column in your input data has an associated b coefficient (a constant real value) that must be learned from your training data.
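Putting the equation into code, with hypothetical coefficients b0 and b1 (made up for illustration, not fitted to any real data):

```python
import math

def predict(x, b0, b1):
    """Logistic regression prediction for a single input:
    y = e^(b0 + b1*x) / (1 + e^(b0 + b1*x))."""
    z = b0 + b1 * x
    return math.exp(z) / (1.0 + math.exp(z))

# hypothetical coefficients, e.g. for an hours-studied -> pass model
b0, b1 = -4.0, 1.0
print(round(predict(2, b0, b1), 3))   # low probability of passing
print(round(predict(6, b0, b1), 3))   # high probability of passing
```

The output is a probability between 0 and 1, which can then be thresholded (commonly at 0.5) to get the class label.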

How does Logistic Regression work?

(let’s discuss with a real world example, also when to use linear regression and when logistic regression)

Let’s go with the good ol’ student example: predicting whether a student passed (1) or not (0). If we use simple linear regression for this, we need to specify a threshold at which classification is done.

Let’s say the actual class is “passed”, the predicted continuous value is 0.85, and the threshold we have chosen is 0.9. This data point will then be classified as “failed”, which leads to a wrong prediction.

So we conclude that we cannot use linear regression for this type of classification problem. Linear regression is unbounded, whereas logistic regression outputs values that strictly range from 0 to 1.
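The threshold pitfall can be shown with toy numbers (the 0.85 prediction and 0.9 threshold are hypothetical):

```python
# A fixed cut-off on a linear regression output is fragile:
# the continuous prediction is unbounded and has no probability
# interpretation, so a hard threshold can flip the class.
actual_class = 1          # the student really passed
linear_output = 0.85      # hypothetical continuous prediction
threshold = 0.9
predicted_class = 1 if linear_output >= threshold else 0

print(predicted_class)                   # 0 -- classified as "failed"
print(predicted_class == actual_class)   # False -- wrong prediction
```

Linear regression can also produce outputs like 1.7 or -0.3, which make no sense as probabilities; squashing the output through the logistic function avoids both problems.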

Other examples could be:

  • To predict whether a person will buy a car (1) or not (0)
  • To know whether a tumor is malignant (1) or benign (0)

How to implement logistic regression from scratch?

This is an implementation using numpy in Python.

Link to the source code: https://github.com/smriti-mishra/Logistic_regression
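The linked repository has the full code; as a sketch of what such a from-scratch numpy implementation typically looks like (batch gradient descent on the log-loss — not necessarily the repo’s exact approach), on a toy hours-studied dataset:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=1000):
    """Train logistic regression with batch gradient descent.
    The log-loss gradient is X^T (sigmoid(Xw + b) - y) / n."""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)         # predicted probabilities
        grad_w = X.T @ (p - y) / n     # gradient w.r.t. weights
        grad_b = np.mean(p - y)        # gradient w.r.t. bias
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def predict(X, w, b, threshold=0.5):
    return (sigmoid(X @ w + b) >= threshold).astype(int)

# toy data: hours studied -> pass (1) / fail (0)
X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]])
y = np.array([0, 0, 0, 1, 1, 1])
w, b = fit_logistic(X, y, lr=0.5, epochs=5000)
print(predict(X, w, b))  # recovers the training labels
```

The learning rate and epoch count here are illustrative; on real data you would also hold out a test set rather than evaluate on the training points.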

Tusen tack! (thousand thanks – in Swedish)

Burnout and the Neuroscience Behind It

Anxious. Obnoxious. Crippled. Helpless. Disillusioned. That’s how you feel when you are on your journey to burnout. 

Burnout is a mental, emotional and physical state that leaves you drained, demotivated and exhausted; it occurs due to prolonged anxiety or stress. It can make you lose interest in what once motivated you and leave you feeling absolutely crippled. 

It’s even worse now with the pandemic going on, when most people are either working from home, spending all their time looking for jobs, or having to go out to work and interact with people, which often leaves them in fear of being infected by the virus. 

The line between work, personal and private space has vanished somewhere, and we need to chase it down and draw it back into our lives. 

If we don’t do that soon enough, we may be on the verge of burnout. It steals all your energy, robs you of your productivity, and leaves you in a cynical and resentful state. 

And the adverse effects of burnout don’t just touch your work life; they spill over into every area of life, including your home and social life. Burnout can also leave long-term effects on your body, making it more vulnerable to common illnesses.

It doesn’t happen overnight, but you can feel it creeping up on you slyly. If you feel helpless, overloaded or unappreciated, and you literally have to drag yourself out of bed every morning, you may be burned out. However, it’s not so simple to decipher. 

How can you tell if it’s stress or burnout, right? Stress is something that is challenging mentally and physically, but at the end of the day, you are able to get the situation under control. 

Burnout is when you just can’t do enough, you feel mentally exhausted and can feel some sort of hollowness inside you, things don’t feel good enough and you lack motivation.

Let’s see the effects burnout has on your brain. Neuroscientists discovered that burnout has the following effects on your brain:

It enlarges your amygdala: the amygdala is the part of the brain that controls emotional reactions. Enlargement results in you being moodier than usual and having a stronger response to stress when triggered. 

The effect on the prefrontal cortex: the prefrontal cortex is the part of the brain responsible for cognitive functioning, the part of the brain responsible for thinking. This region has been implicated in planning complex cognitive behaviour, personality expression, decision making, and moderating social behaviour. Burnout degrades the functionality of the prefrontal cortex; such degradation usually occurs with age, but in people who are stressed for prolonged periods of time it occurs much more rapidly.

Burnout also leads to shorter attention spans, and weakening of the parts of the brain that control memory. This makes it more difficult to learn new things or to recollect thoughts.

The brains of people who are chronically burnt-out show similar damage as people who have experienced trauma.

How to fight burnout?

  • Validate your feelings, understand and accept what’s happening to your body and allow yourself the break your mind and body requires. 
  • Sleep well. Get some good rest. 
  • Eat healthy and keep yourself hydrated. 
  • Exercise. It boosts your physical and mental energy. 
  • Ask for help any time you need it, communicate about it with someone.

Please refer to this blog to understand more about the phases of burnout, the symptoms and how it can be dealt with: https://www.thisiscalmer.com/blog/5-stages-of-burnout

Let’s deal with it while there’s time!