Naïve Bayes classification

Naïve Bayes is a family of classification methods based on Bayes' theorem. Bayes' theorem describes the probability of an event A occurring given prior knowledge of a related event B:

P(A|B) = (P(B|A) × P(A)) / P(B)

Reminder on probability notation:

P(A|B): the (conditional) probability that A occurs given that B has already occurred. In machine learning, this corresponds to the probability of the target class given the observed features.

P(B|A): the probability of observing B given that A has occurred (the likelihood).

P(A): the probability of event A on its own (the prior).

P(B): the probability of event B on its own (the evidence).
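
As a worked example of the formula above, the following Python sketch plugs invented numbers into Bayes' theorem for a spam-filtering scenario (all probabilities here are illustrative assumptions, not measured values):

    # Worked example of Bayes' theorem with invented numbers.
    # A = "email is spam", B = "email contains the word 'offer'".
    p_a = 0.2               # P(A): assumed prior probability of spam
    p_b_given_a = 0.6       # P(B|A): assumed chance 'offer' appears in spam
    p_b_given_not_a = 0.05  # P(B|not A): assumed chance 'offer' appears in ham

    # Total probability of B, summing over both cases of A.
    p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

    # Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
    p_a_given_b = p_b_given_a * p_a / p_b
    print(f"P(spam | 'offer') = {p_a_given_b:.2f}")  # 0.75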

Naïve Bayes classification is a supervised machine learning method that works on the principles of conditional probability [1] [2]. It is suitable for both binary and multi-class classification. The variants of naïve Bayes differ in how they model the likelihood of the features.
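
For instance, scikit-learn ships several naïve Bayes variants that share the same decision rule but assume different likelihood models; a minimal sketch with invented toy data:

    from sklearn.naive_bayes import GaussianNB, MultinomialNB, BernoulliNB

    # Each variant assumes a different form for P(feature | class):
    #   GaussianNB    - continuous features, per-class Gaussian likelihood
    #   MultinomialNB - count features (e.g. word counts), multinomial likelihood
    #   BernoulliNB   - binary features, Bernoulli likelihood

    X = [[2, 0, 1], [0, 3, 0], [1, 1, 4]]  # toy word-count vectors (invented)
    y = [0, 1, 1]                          # class labels: 0 = ham, 1 = spam

    model = MultinomialNB()
    model.fit(X, y)
    print(model.predict([[1, 0, 2]]))      # predicted class for a new vector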


The “naïve” part of this method is its assumption that all features are independent of one another given the class. This assumption rarely holds in practice, since it is almost impossible to obtain a set of predictors that are completely independent of each other. And because the model's predictions are driven entirely by the probabilities estimated from its training data, inaccurate or unrepresentative training data can lead to misleading results.
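
Concretely, for features x1, x2, …, xn and a class C, the independence assumption lets the joint likelihood factor into a product of per-feature terms:

P(x1, x2, …, xn | C) = P(x1|C) × P(x2|C) × … × P(xn|C)

The classifier then predicts the class C that maximizes P(C) × P(x1|C) × … × P(xn|C).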

It is, however, very simple to implement, extremely fast, and easily interpretable, and its predictions generally become more accurate as more training data is supplied. Naïve Bayes classifiers can be very reliable depending on the application; they are commonly used as email spam filters.
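
To show how simple the method is to implement, here is a from-scratch sketch of a toy spam filter; the training messages are invented, and add-one (Laplace) smoothing is assumed so that unseen words do not zero out the product:

    from collections import Counter

    # Toy training set: (message, label) pairs, invented for illustration.
    train = [
        ("win money now", "spam"),
        ("free offer win", "spam"),
        ("meeting at noon", "ham"),
        ("lunch at noon tomorrow", "ham"),
    ]

    # Count how often each word appears in each class, and how many
    # messages each class has.
    word_counts = {"spam": Counter(), "ham": Counter()}
    class_counts = Counter()
    for text, label in train:
        class_counts[label] += 1
        word_counts[label].update(text.split())

    vocab = {w for counts in word_counts.values() for w in counts}

    def posterior_scores(text):
        """Score P(C) × product of P(word|C) for each class C (unnormalised)."""
        scores = {}
        total_msgs = sum(class_counts.values())
        for label in class_counts:
            score = class_counts[label] / total_msgs       # prior P(C)
            total_words = sum(word_counts[label].values())
            for word in text.split():
                # Likelihood P(word|C) with add-one smoothing.
                score *= (word_counts[label][word] + 1) / (total_words + len(vocab))
            scores[label] = score
        return scores

    print(posterior_scores("free money now"))  # spam scores higher than ham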

They are excellent classifiers but poor estimators: they often predict the correct class even though the probability estimates they produce are poorly calibrated and should not be taken at face value.

References