Other forms of Classic ML

We have covered the main types of ML algorithms so far, but there is a lot more to learn in this vast field; we have only discussed the most common and popular ones. Now we will shed some light on the other forms of ML. I do not have deep knowledge of these algorithms, but I will try to make them as intuitive as possible, so I will not be showing a practical demo in Python. Still, I feel it is good to have some background on these topics so that you can grasp them more easily in the future. We are mainly going to shed some light on:

  • Ensemble learners
  • Naive Bayes
  • Random Forests

These explanations will not go into great detail; they simply serve as building blocks for the future.

Ensemble Learners

Ensemble learners are based on a very simple concept: given a problem and a dataset, we split the dataset into training and testing samples. Once that is done, we fit the training data to several different types of algorithms, such as k-Nearest Neighbors, Linear Regression, a Decision Tree, etc. The individual outputs are then combined into a single prediction, typically by averaging them for regression or by taking a majority vote for classification.
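The combining step can be sketched in plain Python. None of this code is from the original post; the three toy models below are hypothetical stand-ins for real learners like k-NN, linear regression, or a decision tree, and we simply average their outputs:

```python
# A toy averaging ensemble. model_a/b/c are hypothetical stand-ins
# for real trained learners (e.g. k-NN, linear regression, a tree).

def model_a(x):
    return 2 * x          # predicts the true relationship exactly

def model_b(x):
    return 2 * x + 1      # overshoots a little

def model_c(x):
    return 2 * x - 1      # undershoots a little

def ensemble_predict(models, x):
    """Average the individual predictions into one output."""
    predictions = [m(x) for m in models]
    return sum(predictions) / len(predictions)

# The individual errors cancel out in the average.
print(ensemble_predict([model_a, model_b, model_c], 5))  # → 10.0
```

Notice how the overshooting and undershooting models cancel each other out in the average; this error-cancelling effect is one intuition for why ensembles reduce error.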

The advantages of Ensemble learners are:

  • Lower error rates
  • Higher accuracy
  • Less overfitting
  • More robust predictions

So this was basically a short description of ensemble learners.

Naive Bayes

The Naive Bayes algorithm is a classification algorithm based on Bayes' theorem. It is primarily used for text classification, which involves high-dimensional training data. A few example applications of this algorithm are spam filtering, sentiment analysis, and classifying news articles. The algorithm is called 'Naive' because it assumes that the occurrence of each feature is independent of the other features. The model is built mainly on probability, supported by some mathematics and statistics.
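To make the independence assumption concrete, here is a minimal spam-filter sketch built only from word counts and Bayes' theorem. The four training messages are made up for this example, and real libraries handle this far more robustly:

```python
from collections import Counter
import math

# Tiny made-up dataset: (message, label) pairs.
train = [
    ("win money now", "spam"),
    ("free money offer", "spam"),
    ("meeting at noon", "ham"),
    ("lunch at noon tomorrow", "ham"),
]

# Count how often each word appears in each class.
word_counts = {"spam": Counter(), "ham": Counter()}
class_counts = Counter()
vocab = set()
for text, label in train:
    class_counts[label] += 1
    for word in text.split():
        word_counts[label][word] += 1
        vocab.add(word)

def predict(text):
    """Pick the class with the highest log-probability, treating each
    word as independent given the class (the 'naive' assumption)."""
    best_label, best_score = None, float("-inf")
    for label in class_counts:
        score = math.log(class_counts[label] / len(train))  # log prior
        total = sum(word_counts[label].values())
        for word in text.split():
            # Laplace smoothing so unseen words don't zero the product.
            p = (word_counts[label][word] + 1) / (total + len(vocab))
            score += math.log(p)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

print(predict("free money"))  # → spam
```

Multiplying per-word probabilities is only valid because of the independence assumption; working in log space just avoids multiplying many tiny numbers together.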

This was another short description, this time of the Naive Bayes algorithm.

Random Forests

In its simplest form, a Random Forest is just what it sounds like: a group of trees, in this case Decision Trees. Rather than chaining the trees together, each tree is trained independently on a random sample of the data, and the forest combines their outputs by majority vote. As the number of trees increases, the accuracy generally increases too. This is the simplest explanation of a random forest that I can present to you.
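A minimal sketch of that idea, assuming a made-up one-feature dataset: each "tree" here is just a one-split decision stump trained on a bootstrap sample, and the forest takes a majority vote. This is a bare-bones illustration of bagging, not a full random forest:

```python
import random

# A toy "forest": decision stumps (one-split trees) trained on
# bootstrap samples of a made-up dataset, combined by majority vote.
random.seed(0)

X = [1, 2, 3, 4, 10, 11, 12, 13]   # hypothetical feature values
y = [0, 0, 0, 0, 1, 1, 1, 1]       # labels: small values are class 0

def train_stump(xs, ys):
    """Pick the threshold t minimising errors of the rule: x >= t -> 1.
    The -inf/+inf candidates let a stump predict a constant class."""
    candidates = [float("-inf")] + list(xs) + [float("inf")]
    best_t, best_err = None, len(ys) + 1
    for t in candidates:
        err = sum((x >= t) != (label == 1) for x, label in zip(xs, ys))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def train_forest(X, y, n_trees=5):
    stumps = []
    for _ in range(n_trees):
        # Bootstrap sample: draw with replacement, as in bagging.
        idx = [random.randrange(len(X)) for _ in range(len(X))]
        stumps.append(train_stump([X[i] for i in idx], [y[i] for i in idx]))
    return stumps

def forest_predict(stumps, x):
    votes = sum(x >= t for t in stumps)   # each stump votes 0 or 1
    return 1 if votes > len(stumps) / 2 else 0

stumps = train_forest(X, y)
print(forest_predict(stumps, 13), forest_predict(stumps, 1))
```

Because each stump sees a slightly different sample of the data, their mistakes differ, and the majority vote smooths those mistakes out; real random forests add one more twist by also randomising which features each split considers.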

So now we have put in a little effort to get the absolute basics of some other forms of ML. Now we can be proud to say that we have completed most of classical Machine Learning. We can advance into deep learning relatively soon! Until then, have a nice day and enjoy Deep Learning! :)


