Decision Trees

Decision trees are among the simplest machine learning algorithms to understand and interpret. As the name suggests, the model branches out much like a tree, with the root node at the top. Decision trees in which the target takes discrete values are called classification trees, and those in which the target takes continuous values (typically real numbers) are called regression trees[1].
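
The following is a minimal sketch of this distinction, assuming scikit-learn (not named in this article) and a tiny made-up dataset: a classification tree fits a discrete label, while a regression tree fits a real-valued target.

<pre>
# Sketch only: scikit-learn and the toy data below are illustrative assumptions.
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor

# Toy data: two numeric features per sample.
X = [[0, 0], [1, 1], [2, 2], [3, 3]]

# Classification tree: the target is a discrete label.
y_class = ["no", "no", "yes", "yes"]
clf = DecisionTreeClassifier().fit(X, y_class)
print(clf.predict([[2.5, 2.5]]))   # -> ['yes']

# Regression tree: the target is a continuous value.
y_reg = [0.0, 0.5, 1.0, 1.5]
reg = DecisionTreeRegressor().fit(X, y_reg)
print(reg.predict([[2.5, 2.5]]))   # -> a real-valued estimate
</pre>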

A decision tree predicts the class or value of the target variable by applying decision rules learned from the training data.
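
One way to see these learned rules is to print a fitted tree as nested threshold tests. The sketch below again assumes scikit-learn and uses its bundled Iris dataset purely for illustration.

<pre>
# Sketch only: the dataset and depth limit are illustrative choices.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2).fit(iris.data, iris.target)

# Print the learned decision rules as nested if/else threshold tests.
print(export_text(tree, feature_names=list(iris.feature_names)))
</pre>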

When building a decision tree, the key choice at each node is which attribute of the data to split on. A common approach is to measure the entropy of the class labels in the dataset and then compute the information gain of each candidate split, that is, the reduction in entropy achieved by branching on that attribute. The attribute with the highest information gain is chosen for the split, as sketched in the example below.
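
Here is a small worked sketch of that calculation, using hand-rolled helper functions (entropy and information_gain are my own names, not from any particular library) on a made-up set of 14 labelled samples split by one binary attribute.

<pre>
# Sketch only: the helpers and the 9-yes/5-no example data are assumptions.
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    total = len(labels)
    counts = Counter(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_gain(parent_labels, child_groups):
    """Parent entropy minus the size-weighted entropy of the child groups."""
    total = len(parent_labels)
    weighted = sum(len(g) / total * entropy(g) for g in child_groups)
    return entropy(parent_labels) - weighted

# 14 samples split by a binary attribute into two branches.
parent = ["yes"] * 9 + ["no"] * 5
left   = ["yes"] * 6 + ["no"] * 2   # samples where the attribute is true
right  = ["yes"] * 3 + ["no"] * 3   # samples where the attribute is false

print(round(entropy(parent), 3))                          # ~0.940 bits
print(round(information_gain(parent, [left, right]), 3))  # ~0.048 bits gained
</pre>

An attribute whose split produced purer branches (for example, all "yes" on one side) would yield a larger information gain and would be preferred at this node.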

References