Different types of classifiers | Machine Learning

A classifier is an algorithm that maps input data to a specific category. Now, let us take a look at the different types of classifiers:
  1. Perceptron
  2. Naive Bayes
  3. Decision Tree
  4. Logistic Regression
  5. K-Nearest Neighbor
  6. Artificial Neural Networks/Deep Learning
  7. Support Vector Machine
Then there are the ensemble methods: Random Forest, Bagging, AdaBoost, and so on (a short sketch of these in code follows below).
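
As a quick orientation, here is a minimal sketch of how the classifiers listed above can be instantiated. The choice of scikit-learn and of default parameters is mine, not the article's; any mainstream ML library would do:

```python
from sklearn.linear_model import Perceptron, LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier, BaggingClassifier, AdaBoostClassifier

# One instance of each classifier type listed above (default parameters)
classifiers = {
    "Perceptron": Perceptron(),
    "Naive Bayes": GaussianNB(),
    "Decision Tree": DecisionTreeClassifier(),
    "Logistic Regression": LogisticRegression(),
    "K-Nearest Neighbor": KNeighborsClassifier(),
    "Neural Network": MLPClassifier(),
    "Support Vector Machine": SVC(),
    # Ensemble methods
    "Random Forest": RandomForestClassifier(),
    "Bagging": BaggingClassifier(),
    "AdaBoost": AdaBoostClassifier(),
}
# All of these share the same API: clf.fit(X_train, y_train) then clf.predict(X_test)
```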
 
As we have seen before, a closed-form linear model produces the same output for given data over and over again. Many machine learning models, whether for classification or regression, can instead give slightly different results from run to run, because their training involves random elements such as random sampling or random initialization; Artificial Neural Networks, for instance, start from random weights. Whatever method you use, the model has to reach an acceptable level of predictive accuracy on the given input data. These models are also known as Artificial Intelligence models, and we can divide them into two families: discriminative algorithms and generative algorithms.
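
To make the two families concrete, here is a hedged sketch contrasting a discriminative model (logistic regression, which learns the class boundary directly) with a generative one (Gaussian Naive Bayes, which models how each class generates data and then applies Bayes' rule). The dataset and all parameter values are illustrative, not from the article:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

# Toy two-class dataset (illustrative values only)
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Discriminative: models P(y | x) directly
disc = LogisticRegression().fit(X_train, y_train)

# Generative: models P(x | y) per class, then applies Bayes' rule
gen = GaussianNB().fit(X_train, y_train)

print("logistic regression accuracy:", disc.score(X_test, y_test))
print("naive Bayes accuracy:", gen.score(X_test, y_test))
```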
 
Now, let us talk about the Perceptron classifier. It is a concept taken from artificial neural networks. The problem is to classify an input into one of two classes, X1 or X2. The perceptron receives two inputs, xi1 and xi2, each with an associated weight, w1 and w2, and sums them together with a bias weight w0. The desired output ŷi is supplied from outside during training, and our goal is for the system to classify the data into the two classes accurately: the input is assigned to one class if the weighted sum w0 + w1·x1 + w2·x2 is positive, and to the other class otherwise. You can easily relate this expression to linear regression: ŷ plays the role of the dependent variable Y, w0 is the intercept, w1 and w2 are the slopes, and x1 and x2 are the independent variables.
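
A minimal sketch of this decision function in Python, following the notation above; the weight values are made up for illustration, not learned from data:

```python
import numpy as np

def perceptron_predict(x, w, w0):
    """Classify x into class X1 (+1) or X2 (-1) via a thresholded weighted sum."""
    # Weighted sum: w0 + w1*x1 + w2*x2 (w0 acts as the intercept)
    s = w0 + np.dot(w, x)
    return 1 if s >= 0 else -1

# Illustrative weights (hypothetical values)
w = np.array([0.4, -0.7])  # w1, w2
w0 = 0.1                   # bias / intercept

print(perceptron_predict(np.array([1.0, 0.5]), w, w0))  # -> 1
```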
 
When we say random weights are generated, it means the weights start from a random initialization and are then adjusted in every iteration. When we show the model a desired output, the machine has to come up with an output close to our expectation. Initially, it may not be very accurate, but as the “training” continues the machine becomes more accurate and the error gradually minimizes. Depending on the complexity of the data and the number of classes, it may take longer to reach a level of accuracy that is acceptable to the trainer: when the data is complex, the machine needs more iterations before it reaches the accuracy we expect from it. In artificial neural networks and deep learning, these full passes over the training data are called epochs.
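
To make the epoch idea concrete, here is a hedged sketch of the classic perceptron learning rule on a tiny linearly separable dataset; the data points and learning rate are invented for illustration. Each full pass over the training set is one epoch, and the number of misclassifications shrinks as training proceeds:

```python
import numpy as np

# Tiny linearly separable toy set (illustrative, not from the article)
X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -0.5]])
y = np.array([1, 1, -1, -1])

w = np.zeros(2)   # w1, w2 (a random init also works)
w0 = 0.0          # bias
lr = 0.1          # learning rate

for epoch in range(10):          # each full pass over the data is one epoch
    errors = 0
    for xi, yi in zip(X, y):
        pred = 1 if w0 + np.dot(w, xi) >= 0 else -1
        if pred != yi:           # update weights only on a mistake
            w += lr * yi * xi
            w0 += lr * yi
            errors += 1
    print(f"epoch {epoch}: {errors} misclassifications")
    if errors == 0:              # converged: all points classified correctly
        break
```

On this toy data the loop converges after a couple of epochs; on complex, high-dimensional data the same kind of loop needs many more epochs, which is exactly the point made above.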