The Logic of Logistic Regression

At the outset, let me take this opportunity to wish each one of you a very happy and prosperous New Year. In this post I will start the discussion around one of the most frequently encountered types of problems in a machine learning context – the classification problem. I will also introduce one of the basic algorithms used in the classification context, called logistic regression.


In one of my earlier posts on machine learning I mentioned that the essence of machine learning is prediction. When we talk about prediction, there are basically two types of predictions we encounter in a machine learning context. In the first type, given some data, your aim is to estimate a real scalar value. For example, predicting the amount of rainfall from meteorological data, predicting stock prices based on the current economic environment, or predicting sales based on past market data are all valid use cases of the first type of prediction context. This genre of prediction problems is called the regression problem. The second type of problem deals with predicting the category or class the observed examples fall into. For example, classifying whether a given mail is spam or not, predicting whether a prospective lead will buy an insurance policy or not, or processing images of handwritten digits and classifying the images under the correct digit all fall under this gamut of problems. The second type of problem is called the classification problem. As mentioned earlier, classification problems are the most widely encountered ones in the machine learning domain, and therefore I will devote considerable space to giving an intuitive sense of the classification problem. In this post I will define the basic settings for classification problems.

Classification Problems Unplugged – Setting the context

In a machine learning setting we work with two major components. One is the data we have at hand, and the second is the set of parameters of the data. The dynamics between the data and the parameters provides us the results we want, i.e. the correct prediction. Of these two components, the one which is readily available to us is the data. The parameters are something which we have to learn or derive from the available data. Our ability to learn the correct set of parameters determines the efficacy of our prediction. Let me elaborate with a toy example.

Suppose you are part of an insurance organisation with a large set of customer data, and you would like to predict which of these customers are likely to buy a health insurance policy in the future.

For simplicity let us assume that each customer's data consists of three variables:

  • Age of the customer
  • Income of the customer and
  • A propensity factor based on the interest the customer shows for health insurance products.

Let the data for three of our leads look like this:

Customer    Age    Income    Propensity
Cust-1       22      1000             1
Cust-2       36      5000             6
Cust-3       62      4500             8

Suppose we also have a set of parameters which were derived from our historical data on past leads and the conversion rate (i.e. how many of the leads actually bought the insurance product).

Let the parameters be denoted by ‘W’ suffixed by the name of the variable, i.e.

W(age) = 8 ; W(income) = 3 ; W(propensity) = 10

Once we have the data and the parameters, our next task is to combine the two and arrive at some relative scoring for the leads so that we can make predictions. For this, let us multiply the parameters with the corresponding variables and find a weighted score for each customer.

Customer    Age           Income           Propensity    Total Score
Cust-1      22 x 8    +   1000 x 3    +    1 x 10              3,186
Cust-2      36 x 8    +   5000 x 3    +    6 x 10             15,348
Cust-3      62 x 8    +   4500 x 3    +    8 x 10             14,076

Now that we have the weighted score for each customer, it's time to arrive at some decisions. From our past experience we have also observed that any lead obtaining a score of more than 14,000 tends to buy an insurance policy. Based on this knowledge we can comfortably predict that customer 1 will not buy the insurance policy and that there is a very high chance that customer 2 will buy the policy. Customer 3 is on the borderline, and with a little effort one can convert this customer too. Equipped with this predictive knowledge, the sales force can then focus their attention on customers 2 and 3 so that they get more “bang for their buck”.
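To make the arithmetic concrete, here is a minimal sketch in Python of the scoring rule described above. The weights, the customer values and the 14,000 cut-off are just the toy numbers from this example, not a learned model.

    # Toy weighted-scoring sketch using the numbers from the example above.
    weights = {"age": 8, "income": 3, "propensity": 10}

    customers = {
        "Cust-1": {"age": 22, "income": 1000, "propensity": 1},
        "Cust-2": {"age": 36, "income": 5000, "propensity": 6},
        "Cust-3": {"age": 62, "income": 4500, "propensity": 8},
    }

    THRESHOLD = 14_000  # decision boundary observed from past conversions

    for name, data in customers.items():
        # Weighted score: multiply each variable by its weight and add them up.
        score = sum(weights[k] * data[k] for k in weights)
        decision = "likely to buy" if score >= THRESHOLD else "unlikely to buy"
        print(f"{name}: score = {score:,}, {decision}")

Running this reproduces the scores in the table above: 3,186; 15,348 and 14,076.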

In the above toy example, we can observe some interesting dynamics at play:

  1. The derivation of the parameters for each variable – In machine learning, the quality of the results we obtain depends to a large extent on the parameters or weights we learn.
  2. The derivation of the total score – In this example we multiplied the weights with the data and summed the results to get a score. In effect we applied a function (multiplication and addition) to get a score. In machine learning parlance such functions are called activation functions. The activation function converts the parameters and data into a composite measure that aids the final decision.
  3. The decision boundary – The score (14,000) used to demarcate whether a lead can be converted or not.

The efficacy of our prediction is dependent on how well we are able to represent the interplay between all these dynamic forces. This, in effect, is the big picture of what we try to achieve through machine learning.

Now that we have set our context, I will delve deeper into these dynamics in the next part of this post. In the next part I will primarily be dealing with the dynamics of parameter learning. Watch this space for more on that.


Bayesian Inference – A naive perspective

Many people have been asking me about the unusual name I have given this blog – “Bayesian Quest”. Well, the name is inspired by one of the important theorems in statistics, ‘the Bayes Theorem’. There is also a branch of statistics called Bayesian Inference whose foundation is the Bayes Theorem. Bayesian inference has shot into prominence in this age of ‘Big Data’ and is therefore widely used in machine learning. This week, I will give a perspective on the Bayes Theorem.

The essence of statistics is to draw inference about an unknown population from samples. Let me elaborate with an example. Suppose you are part of an agency specializing in predicting poll outcomes of general elections. To publish the most accurate predictions, the ideal method would be to ask all the eligible voters within your country which party they are going to vote for. Obviously we all know that this is not possible, as the cost and time required to conduct such a survey would be prohibitively expensive. So what do you, as a psephologist, do? That's where statistics and statistical inference methods come in handy. What you would do in such a scenario is select representative samples of people from across the country and ask them questions on their voting preferences. In statistical parlance this is called sampling. The idea behind sampling is that the samples so selected (if selected carefully) will reflect the mood and voting preferences of the general population. This act of inferring the unknown parameters of the population from the known parameters of the sample is the essence of statistics. There are predominantly two philosophical approaches to statistical inference. The first, which is the more classical of the two, is called the Frequentist approach, and the second the Bayesian approach.

Let us first see how a frequentist will approach the problem of predictions. For the sake of simplicity let us assume that there are only two political parties, party A and party B. Any party which gets more than 50% of the popular vote wins the election. A frequentist will start the inference by first defining a set of hypotheses. The first hypothesis, called the null hypothesis, will assert that party A will get more than 50% of the vote. The other hypothesis, called the alternate hypothesis, will state the contrary, i.e. party A will not get more than 50% of the vote. Given these hypotheses, the next task is to test their validity from the sample data. Please note here that the two hypotheses are defined with respect to the population (all the eligible voters in the country) and not the sample.

Let our sample consist of 100 people who were interviewed. Out of this sample, 46 people said they will vote for party A, 38 people said they will vote for party B, and the remaining 16 people were undecided. The task at hand is to predict whether party A will get more than 50% in the general election given the numbers we have observed in the sample. To do the inference, the frequentist will calculate a probability statistic called the ‘P’ statistic (the p-value). The ‘P’ statistic in this case can be defined as follows – it is the probability of observing 46 or fewer people out of a sample of 100 who would vote for party A, assuming 50% or more of the population will vote for party A. Confused? Let me simplify this a bit. Suppose there is a definite mood among the public in favour of party A; then there is a high chance of seeing a sample where 40, 50 or even 60 people out of the 100 say that they will vote for party A. However, there is a very low chance of seeing a sample with only 10 people out of 100 saying that they will vote for party A. Please remember that these chances are with respect to our hypothesis that party A is very popular. On the contrary, if party A were very unpopular, then seeing 10 people out of 100 saying they will vote for party A is very plausible. The chance or probability of seeing the numbers we saw in our sample under the condition that our hypothesis is true is the ‘P’ statistic. Once the ‘P’ statistic is calculated, it is compared to a threshold value, usually 5%. If the ‘P’ value is less than the threshold value we will junk our null hypothesis that 50% or more people will vote for party A and go with the alternate hypothesis. On the contrary, if the P value is more than 5% we will stick with our null hypothesis. This, in short, is how a frequentist will approach the problem.
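As a rough illustration of the frequentist recipe described above, the sketch below computes a one-sided p-value with a binomial model: the probability of seeing 46 or fewer supporters in a sample of 100 if the true support were exactly 50%. Ignoring the undecided voters and using a 5% cut-off are simplifying assumptions for this example.

    from scipy.stats import binom

    n, observed = 100, 46   # sample size and observed supporters of party A
    p_null = 0.5            # null hypothesis: 50% (or more) of the population backs party A

    # One-sided p-value: probability of observing 46 or fewer supporters under the null.
    p_value = binom.cdf(observed, n, p_null)
    print(f"p-value = {p_value:.3f}")

    if p_value < 0.05:
        print("Junk the null hypothesis that party A has 50% or more support.")
    else:
        print("Stick with the null hypothesis.")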

A Bayesian will approach this problem in a different way. A Bayesian will take into account historical data from past elections and then assume a probability of party A getting more than 50% of the popular vote. This assumption is called the prior probability. Looking at the historical data of the past 10 elections, we find that only in 4 of them did party A get more than 50% of the votes. In that scenario we will assume the prior probability of party A getting more than 50% of the votes to be 0.4 (4 out of 10). Once we have assumed a prior probability, we then look at our observed sample data (46 out of 100 saying they will vote for party A) and determine the possibility of seeing such data under the assumed prior. This possibility is called the likelihood. The likelihood and the prior are multiplied together (and normalized) to get the final probability, called the posterior probability. The posterior probability is our updated belief based on the data we observed and the historical prior we assumed. So if party A has a higher posterior probability than party B, we will assume that party A has a higher chance of getting more than 50% of the votes than party B. This is a rather naive explanation of the Bayesian approach.
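Here is a naive, two-hypothesis version of the Bayesian update described above. The prior of 0.4 comes from the example; the assumed vote shares of 55% and 45% under the two hypotheses are invented purely so that the likelihoods have something to work with.

    from scipy.stats import binom

    n, observed = 100, 46    # our sample: 46 out of 100 say they will vote for party A

    # Hypothesis 1: party A gets more than 50% of the vote; hypothesis 2: it does not.
    prior_wins, prior_loses = 0.4, 0.6          # from 4 wins in the last 10 elections
    share_if_wins, share_if_loses = 0.55, 0.45  # assumed vote shares (illustrative only)

    # Likelihood of the observed sample under each hypothesis.
    lik_wins = binom.pmf(observed, n, share_if_wins)
    lik_loses = binom.pmf(observed, n, share_if_loses)

    # Posterior = prior x likelihood, normalised across both hypotheses.
    evidence = prior_wins * lik_wins + prior_loses * lik_loses
    posterior_wins = prior_wins * lik_wins / evidence
    print(f"Posterior probability that party A gets more than 50%: {posterior_wins:.3f}")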

Now that you have seen both the Bayesian and Frequentist approaches, you might be tempted to ask which of the two is better. Well, this debate has been going on for many years and there is no right answer. It all depends on the context and the problem at hand. However, in the recent past Bayesian inference has gained a definite edge over Frequentist methods due to its ability to update prior beliefs through observation of more data. In addition, computing power is getting cheaper and faster, making Bayesian inference much more practical than it used to be. I will get into more examples of Bayesian inference in a future post.

 

Machine Learning in Action – Word Prediction

In my previous blog on machine learning, I explained the science behind how a machine learns from its parameters. This week, I will delve into a very common application which we use in our day-to-day life – next word prediction.

When we text with our smartphones, all of us would have appreciated how our phones make typing so easy by predicting or suggesting the word we have in mind. Many would also have noticed that our phones predict words we tend to use regularly in our personal lexicon. Our phones have learned from our pattern of usage and are giving us a personalized offering. This genre of machine learning falls under a very potent field called Natural Language Processing (NLP).

Natural Language Processing deals with ways in which machines derive learning from human languages. The basic input within the NLP world is something called a corpus (plural: corpora), which essentially is a collection of words or groups of words within the language. Some of the most prominent corpora for English are the Brown Corpus, the American National Corpus, etc. Even Google has its own linguistic corpora with which it achieves many of the amazing features in its products. Deriving learning out of the corpora is the essence of NLP. In the context we are discussing, i.e. word prediction, it is about learning from the corpora to do prediction. Let us now see how we do it.

The way we learn from the corpora is through the use of some simple rules of probability. It all starts with calculating the frequencies of words or groups of words within the corpora. For finding the frequencies, what we use is something called an n-gram model, where the “n” stands for the number of words which are grouped together. The most common n-gram models are the trigram and the bigram models. For example, the sentence “the quick red fox jumps over the lazy brown dog” has the following word-level trigrams (source: Wikipedia):

the quick red
quick red fox
red fox jumps
fox jumps over
jumps over the
over the lazy
the lazy brown
lazy brown dog
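Here is a minimal sketch of how such n-grams can be generated from a sentence. The sentence is the one from the example, and splitting on whitespace is a deliberate simplification of real tokenization.

    def ngrams(text, n):
        """Split text on whitespace and return the list of n-word tuples."""
        words = text.split()
        return [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]

    sentence = "the quick red fox jumps over the lazy brown dog"
    for trigram in ngrams(sentence, 3):
        print(" ".join(trigram))   # prints the eight trigrams listed above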

Similarly, a bigram model will split a given sentence into combinations of two-word groups. These trigrams or bigrams form the basic building blocks for calculating the frequencies of word combinations. The idea behind the calculation of frequencies of word groups goes like this. Suppose we want to calculate the frequency of the trigram “the quick red”. What we look for in this calculation is how often we find the combination of the words “the” and “quick” followed by “red” within the whole corpus. Suppose in our corpus there were 5 instances where the words “the” and “quick” were followed by the word “red”; then the frequency of this trigram is 5.

Once the frequencies of the word groups are found, the next step is to calculate the probabilities of the trigrams. The simplest estimate is the frequency of the trigram divided by the total number of trigrams within the corpus. Suppose there are around 500,000 trigrams in our corpus; then the probability of our trigram “the quick red” will be 5/500,000. What prediction really needs, however, is a conditional probability: in our trigram context, the probability of seeing the word “red” given that it was preceded by the words “the” and “quick”. This is estimated by dividing the trigram count by the count of the preceding bigram “the quick”. Models built this way belong to the family of Markov models, where the next word is assumed to depend only on the few words immediately preceding it. Extending the same concept to bigrams, it would mean the probability of seeing the second word given that we have seen the first word. So if “My God” is a bigram, then the conditional probability would be the probability of seeing the word “God” given the preceding word “My”.
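Continuing the sketch, the snippet below counts trigrams and bigrams over a tiny, made-up corpus and estimates the conditional probability of a word given the two preceding words. A real corpus would of course be far larger.

    from collections import Counter

    corpus = [
        "the quick red fox jumps over the lazy brown dog",
        "the quick red car sped past the quick red fox",
    ]

    trigram_counts = Counter()
    bigram_counts = Counter()
    for sentence in corpus:
        words = sentence.split()
        trigram_counts.update(zip(words, words[1:], words[2:]))
        bigram_counts.update(zip(words, words[1:]))

    def cond_prob(w1, w2, w3):
        """Estimate P(w3 | w1, w2) = count(w1 w2 w3) / count(w1 w2)."""
        if bigram_counts[(w1, w2)] == 0:
            return 0.0
        return trigram_counts[(w1, w2, w3)] / bigram_counts[(w1, w2)]

    print(cond_prob("the", "quick", "red"))   # 1.0 in this tiny corpus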

The trigrams and bigrams, along with the calculated probabilities, arranged in a huge table, form the basis of the word prediction algorithm. The mechanism of prediction works like this. Suppose you were planning to type “Oh my God” and you typed the first word “Oh”. The algorithm will quickly go through the n-gram table and identify the n-grams starting with the word “Oh”, in decreasing order of their probabilities. So if the top entries in the n-gram table starting with “Oh” are “Oh come on”, “Oh my God” and “Oh Dear Lord” in decreasing order of probability, the algorithm will predict the words “come”, “my” and “Dear” as your three choices as soon as you type the first word “Oh”. After you type “Oh”, if you then type “my”, the algorithm reworks the prediction and looks at the highest-probability n-gram combinations preceded by the words “Oh” and “my”. In this case the word “God” might be the most probable choice which is predicted. The algorithm will keep on giving predictions as you keep on typing more and more words. At every instance of your texting process the algorithm looks at the last two words you have already typed to predict the next word, and the process continues.
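And here is a toy version of the prediction step: given the last two words typed, look up the trigrams that start with them and suggest the most probable continuations. The small hand-written probability table below stands in for the huge table built from a real corpus.

    # Hypothetical trigram table: (previous two words) -> {next word: probability}.
    ngram_table = {
        ("oh", "my"): {"god": 0.6, "goodness": 0.3, "word": 0.1},
        ("my", "god"): {"what": 0.5, "that": 0.3, "please": 0.2},
    }

    def suggest(last_two_words, top_k=3):
        """Return up to top_k next-word suggestions, most probable first."""
        candidates = ngram_table.get(tuple(last_two_words), {})
        ranked = sorted(candidates.items(), key=lambda kv: kv[1], reverse=True)
        return [word for word, _ in ranked[:top_k]]

    print(suggest(["oh", "my"]))   # ['god', 'goodness', 'word']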

The algorithm I have explained here is a very simple one involving n-grams and Markov models. Needless to say, there are more sophisticated approaches, such as those based on neural networks. I will explain neural networks and their applications in a future post.

Machine Learning: Teaching a machine to learn

In my previous post on recommendation engines, I fleetingly mentioned machine learning. Talking about machine learning, what comes to my mind is a recent conversation I had with my uncle. He was asking me what I was working on, and I started mentioning machine learning and data science. He listened very attentively and later told my mother that he had absolutely no clue what I was talking about. So I thought it would be a good idea to try and unravel the science behind machine learning.

Let me start with an analogy. When we were all toddlers, whenever we saw something new, say a dog, our parents would point and tell us “Look, a dog”. This is how we start to learn about things around us, from inputs such as these that we receive from our parents and elders. The science behind machine learning works pretty similarly. In this context, the toddler is the machine and the elder which teaches the machine is a bunch of data.

In very simple terms, the setup for a machine learning context works like this. The machine is fed with a set of data. This data consists of two parts: one part is called the features and the other the labels. Let me elaborate a little bit more. Suppose we are training the machine to identify the image of a dog. As a first step we feed multiple images of dogs to the machine. Each image which is fed, say a jpeg or png image, consists of millions of pixels. Each pixel in turn is composed of some value of the three primary colors Red, Green and Blue. The values of these primary colors range between 0 and 255. This is called the pixel intensity. For example, the pixel intensity for the color orange would be (255, 102, 0), where 255 is the intensity of its red component, 102 its green component and 0 its blue component. Likewise, every pixel in an image will have various combinations of these primary colors.


These pixel intensities are the features of the image which are provided as inputs to the machine. Against each of these features, we also provide a class or category describing the features we provided. This is the label. This data set is our basic input. To visualize the data set, think of it as a huge table of pixel values and their labels. If we have, say, 10 pixels per image and 10 images, our table will have 10 rows, one corresponding to each image, and each row will have 11 columns. The first 10 columns correspond to the pixel values and the 11th column is the label.
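As a concrete picture of the table described above, here is a tiny made-up dataset with 10 pixel values per image and a label column. The numbers are random and only meant to show the shape of the data.

    import numpy as np

    rng = np.random.default_rng(0)

    # 10 images, 10 pixel-intensity features each (values 0-255), plus a label column.
    pixels = rng.integers(0, 256, size=(10, 10))
    labels = rng.integers(0, 2, size=(10, 1))   # 1 = "dog", 0 = "not a dog"

    dataset = np.hstack([pixels, labels])
    print(dataset.shape)   # (10, 11): 10 rows, 10 feature columns + 1 label column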

Now that we have provided the machine its data, let us look at how it learns. For this let me take you back to your school days. In basic geometry, you would have learnt the equation of a line as Y = C + (theta * X). In this equation, the variable C is called the intercept and theta the slope of the line. These two variables govern the properties of the line Y. The relevance of these variables is that, if we are given any other value of X, then with our knowledge of C and theta we will be able to predict the corresponding point on the line. So by learning two parameters we are in effect predicting an outcome. This is the essence of machine learning. In a machine learning setup, the machine is made to learn the parameters from the features which are provided. Equipped with the knowledge of these parameters, the machine will be able to predict the most probable values of Y (outcomes) when new values of X (features) are provided.

In our dog identification example, the X values are the pixel intensities of the images we provided and Y denotes the labels of the dogs. The parameters are learned from the provided data. If we give the machine new values of X which contain, say, features of both dogs and cats, the machine will correctly identify which is a dog and which is a cat, with its knowledge of the parameters. The first set of data which we provide to the machine for it to learn parameters is called the training set, and the new data which we provide for prediction is called the test set. The above-mentioned genre of machine learning is called supervised learning. Needless to say, the earlier equation of the line is just one among many algorithms used in machine learning. This type of algorithm for the line is called linear regression. There are multiple algorithms like these which enable machines to learn parameters and carry out predictions.
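To tie the pieces together, here is a minimal linear regression sketch of the fit-then-predict flow: learn C and theta from a training set and use them to predict Y for a small test set. The data is synthetic, generated from a known line with a little noise, purely for illustration.

    import numpy as np

    # Synthetic training data generated from Y = 2 + 3*X plus some noise.
    rng = np.random.default_rng(42)
    X_train = np.linspace(0, 10, 50)
    Y_train = 2 + 3 * X_train + rng.normal(0, 1, size=X_train.shape)

    # Learn the slope (theta) and intercept (C) by least squares.
    theta, C = np.polyfit(X_train, Y_train, deg=1)
    print(f"learned C = {C:.2f}, theta = {theta:.2f}")

    # Predict outcomes for new, unseen X values (the test set).
    X_test = np.array([12.0, 15.0])
    Y_pred = C + theta * X_test
    print(Y_pred)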

What I have described here is a very simple version of machine learning. Advances are being made in this field, and scientists are trying to mimic the learning mechanism of the human brain in machines. An important and growing field aligned to this idea is called Deep Learning. I will delve into deep learning in a future post.

The power of machine learning is quite prevalent in the world around us, and quite often the learning is inconspicuous. As a matter of fact, we are all party to the training process without realizing it. A very popular example is the photo tagging process on Facebook. When we tag pictures which we post on Facebook, we are in fact providing labels enabling a machine to learn. Facebook's powerful machines will extract features from the photos we tag. Next time we upload a new photo, Facebook will automatically suggest the correct tag through the parameters which it has learned. So next time you tag a picture on Facebook, realize that you are also playing your part in teaching a machine to learn.

 

The Recommendation Engine

I was recently browsing through Amazon and guess what? All that was displayed to me were a bunch of books, books which I would probably never buy at all. I wasn't quite surprised about the choices Amazon laid out to me. One reason for this is that I am a very dormant online buyer. So the choices Amazon laid out to me are a reflection of the fact that it doesn't know me well at all. But wait a minute, did I just say that Amazon doesn't know me? How can a website know me? Knowing, understanding and taking care are all traits supposed to be associated with living entities, not with static webpages. If you are also thinking the same way, then you are in for a huge surprise. Static webpages are part of the old dispensation; the new mantra is making everything, from webpages to billboards and every facet which touches customers, teem with life. All of this is made possible through advances in the field of machine learning. Yes, machines are equipped with sufficient intelligence to learn based on their interaction with customers, so that they too start taking care of you and me. This is the new dispensation. In this post, I would like to unravel one such application in the field of machine learning, which lies at the heart of online stores like Amazon, eBay, etc.: the recommendation engine.

You as an avid online buyer would have noticed that before logging in to any of these online stores, if you just browse these sites, you will be shown a bunch of items scrolling before you. Now these could be items which are totally unrelated to your tastes. However, Amazon or any online store decides to recommend them to you because these are their top-selling or trending items. Bereft of any intelligence about you as a buyer, this is the best the website can lay out to you. These are called non-personalized recommendations. Such recommendations are made based on the top items which are being bought or searched for on the site.

Now once you log in, it is a totally different world. Based on your level of activity on the site, you will realize that many of the products recommended to you are more aligned to your tastes. The higher your level of activity, the more aligned to your tastes the recommended products are. This is the part I referred to in the beginning about the site understanding you. The more it understands you, the better it will take care of you. Interesting, isn't it? These types of recommendations fall under the genre called personalized recommendations.

Personalized recommendations predominantly work on an algorithm called collaborative filtering. A very simple analogy for the collaborative filtering algorithm is a huge table, where the rows of the table are users like you and me and the columns of the table are the items which you or I have bought or have shown interest in. So this is one huge table with millions and millions of items and as many customers in it. Each time you buy something or even browse something, some value is updated in your row against the corresponding item column. However, one interesting point to note is that you as an individual customer would have bought at most a few hundred types of items. This is quite minuscule compared to the millions of items which adorn the columns of the huge table. This is the case for most other users too. The number of items in which any one user would have shown interest is quite minuscule in comparison to the total number of items in the table. This kind of representation is called a sparse representation.
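Here is a small sketch of what such a sparse user-item table looks like in code, using SciPy's sparse matrix format; the users, items and interactions are invented for illustration.

    import numpy as np
    from scipy.sparse import csr_matrix

    users = ["you", "me", "someone_else"]
    items = ["polo_shirt", "ice_bucket", "ice_scoop", "novel", "headphones"]

    # Only a handful of (user, item) interactions out of all possible combinations.
    row = np.array([0, 0, 1, 2])   # user indices
    col = np.array([0, 1, 0, 3])   # item indices
    val = np.array([1, 1, 1, 1])   # 1 = bought or browsed the item

    interactions = csr_matrix((val, (row, col)), shape=(len(users), len(items)))
    print(interactions.toarray())  # mostly zeros: the sparse representation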

So naturally you would think: if you as a customer buy or show interest in only a small percentage of items, how does Amazon recommend new things to you? That's where the intricacies of the collaborative filtering algorithm kick in. As I said earlier, the table is a large table with millions of users. Considering the millions of users and the varied tastes each user has, there will be some transactions against virtually all the items in the table. The essence of the collaborative filtering algorithm is to find similarities from this huge table: similarities between users who have bought similar kinds of items, similarities between items which are usually bought together, and so on. It is these similarities, extracted from that huge table, which form the basis of the recommendations. So the idea is like this: if you and I like casual dressing, we will be more inclined to browse for such brands. Based on our transactions, the algorithm will group both of us as people having similar tastes. Now the next time you go ahead and buy a new polo shirt, the algorithm will assume that I might also like such a shirt and will recommend the same kind of shirt to me too. This is how the collaborative filtering algorithm works. In addition to the similarities between users, the algorithm also finds similarities between items, to further enhance its ability to recommend products.
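To make the ‘find similar users’ idea concrete, here is a naive user-based collaborative filtering sketch on a tiny invented interaction matrix: compute the cosine similarity between users and recommend items that the most similar user has interacted with but you have not.

    import numpy as np

    items = ["polo_shirt", "chinos", "sneakers", "formal_suit", "tie"]

    # Rows = users, 1 = bought/browsed. "you" and "me" share casual tastes.
    you      = np.array([1, 1, 1, 0, 0])
    me       = np.array([0, 1, 1, 0, 0])
    somebody = np.array([0, 0, 0, 1, 1])

    def cosine(a, b):
        """Cosine similarity between two interaction vectors."""
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    # Find the user most similar to "me".
    similarities = {"you": cosine(me, you), "somebody": cosine(me, somebody)}
    most_similar = max(similarities, key=similarities.get)
    neighbour = you if most_similar == "you" else somebody

    # Recommend items the similar user has interacted with but "me" has not.
    recommendations = [items[i] for i in range(len(items))
                       if neighbour[i] == 1 and me[i] == 0]
    print(most_similar, recommendations)   # you ['polo_shirt']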

In addition to the above, there is another type of recommendation. Say you want to buy an ice bucket and you start browsing for various models of ice buckets. Once you zero in on the model you like and decide to add it to the cart, you might get a recommendation for an ice scoop saying – “Items usually bought together”. This is an example of similarities between items and is called market basket analysis. The idea behind this algorithm is also similar to the one mentioned above. In this type of algorithm, the huge table is again analysed, transactions where two or more items are bought together are identified, and the companion item is recommended when one of them is being bought.
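And here is a toy sketch of the ‘items usually bought together’ idea: count how often pairs of items appear in the same basket and recommend the most frequent partner of the item being bought. The baskets here are invented.

    from collections import Counter
    from itertools import combinations

    baskets = [
        {"ice_bucket", "ice_scoop", "lemonade"},
        {"ice_bucket", "ice_scoop"},
        {"ice_bucket", "tongs"},
        {"novel", "bookmark"},
    ]

    # Count co-occurrences of every unordered item pair across baskets.
    pair_counts = Counter()
    for basket in baskets:
        for pair in combinations(sorted(basket), 2):
            pair_counts[pair] += 1

    def bought_together(item):
        """Return the item most frequently bought together with `item`."""
        partners = Counter()
        for (a, b), count in pair_counts.items():
            if item == a:
                partners[b] += count
            elif item == b:
                partners[a] += count
        return partners.most_common(1)[0][0] if partners else None

    print(bought_together("ice_bucket"))   # ice_scoop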

Now the basis of all these data products is the transactions you carry out in the virtual world. All the websites you browse, the things you rate, the items you buy, the things you comment on – all of these generate data which is channelled into making you buy more. And all this happens without you realizing what's going on. So next time you browse the net and suddenly find an ad for a new polo shirt, do not be surprised. “Somebody is Watching”.
