Information Theory

Information theory is a branch of mathematics that defines efficient and practical methods by which data can be exchanged and interpreted.

Entropy

In information theory, entropy is a measure of the uncertainty associated with a random variable[1]. Information entropy is the amount of information needed to describe a system. For example, consider a password: the more characters added, the greater the entropy.

For example:

“hello” = 5 characters = 40 bits of entropy (assuming a character size of 8 bits). This also means that with 40 bits there are 2⁴⁰ possible combinations.
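
As a sanity check, this arithmetic is easy to reproduce in a few lines of Python (the password value is just the example above):

password = "hello"
bits = len(password) * 8      # 5 characters x 8 bits = 40 bits
combinations = 2 ** bits      # 2^40 possible bit patterns
print(bits, combinations)     # 40 1099511627776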

In practice, calculating entropy looks as follows:

entropy = −Σᵢ pᵢ log₂ pᵢ

where pᵢ is the probability of class i being selected from the dataset. The higher the entropy, the greater the information content. The base of the logarithm can be any number and determines the units used; when the base is 2, as above, the units are bits.
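
As a rough illustration, the formula translates directly into Python (the entropy function below is a sketch written for this article, not a library routine):

import math

def entropy(probabilities):
    # Shannon entropy in bits: the negated sum of p * log2(p) over the
    # class probabilities. Zero-probability classes contribute nothing
    # and are skipped to avoid log2(0).
    return -sum(p * math.log2(p) for p in probabilities if p > 0)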

For training a model, a dataset with maximum impurity (the data is split evenly between classes; for two classes, entropy = 1) is ideal, since the model will learn from a set with a high variation of classes. If the dataset is composed entirely of one class (entropy = 0), the model will likely overfit and struggle to produce meaningful results when presented with test or real data.
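
Using the entropy sketch above, the two extremes look like this:

print(entropy([0.5, 0.5]))   # even two-class split -> 1.0 (maximum impurity)
print(entropy([0.9, 0.1]))   # skewed split         -> about 0.47
print(entropy([1.0]))        # single class         -> -0.0, i.e. zero uncertainty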

Note: In computer science, entropy is the randomness collected by an operating system or application for uses that require random data. It is usually gathered from hardware sources such as mouse movements or dedicated hardware random-number generators. A deterministic algorithm cannot produce true randomness on its own, so the built-in random functions of most languages are pseudorandom approximations rather than truly random.
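
To make the distinction concrete, the following Python sketch contrasts the seeded, pseudorandom random module with the operating system's entropy pool, exposed through os.urandom and the secrets module:

import os
import random
import secrets

# Pseudorandom: deterministic once seeded; fine for simulations, not for security.
random.seed(42)
print(random.random())        # same value on every run with this seed

# Drawn from the OS entropy pool: suitable for keys, tokens and the like.
print(os.urandom(16).hex())   # 16 random bytes as hex
print(secrets.token_hex(16))  # 32 hex characters of cryptographic randomness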


Information Gain

Information gain tells us how important a given attribute of the dataset is. It is used to order the attributes in the nodes of a decision tree[2].

information gain = entropy(parent) − [weighted average entropy(children)]

The attribute with the highest information gain on the training dataset should be set as the root node of the tree. The dataset is then split on each value of this attribute, and the resulting subsets become the child nodes. This process is repeated recursively until each branch ends in a leaf, as in the worked example below.
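
Putting the two formulas together, here is a small worked example in Python (the dataset and helper names are illustrative, not from any library):

import math
from collections import Counter

def entropy_of(labels):
    # Shannon entropy in bits of a list of class labels.
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def information_gain(parent, children):
    # entropy(parent) minus the size-weighted average entropy of the children.
    total = len(parent)
    weighted = sum(len(child) / total * entropy_of(child) for child in children)
    return entropy_of(parent) - weighted

# Hypothetical split: a 6/4 parent set divided by some attribute into two subsets.
parent = ["yes"] * 6 + ["no"] * 4
children = [["yes"] * 5 + ["no"], ["yes"] + ["no"] * 3]
print(information_gain(parent, children))   # about 0.256 bits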


References