Markov network
A Markov network, or Markov random field, is a model of the (full) joint probability distribution of a set of random variables. A Markov network is similar to a Bayesian network in its representation of dependencies, but a Markov network can represent dependencies that a Bayesian network cannot, such as cyclic dependencies.
Formally, a Markov network consists of:
- an undirected graph G = (V, E), where each vertex v ∈ V represents a random variable and each edge {u, v} ∈ E represents a dependency between the random variables u and v,
- a set of potential functions φk (also called factors or clique potentials), one for each maximal clique k in G. Each φk is a mapping from possible joint assignments (to the elements of k) to non-negative real numbers.
The joint distribution represented by a Markov network is given by

P(X = x) = \frac{1}{Z} \prod_{k} \phi_k(x_{\{k\}}),

where x_{\{k\}} is the state of the random variables in the kth clique, and Z is a normalizing constant (also called the partition function) given by

Z = \sum_{x \in \mathcal{X}} \prod_{k} \phi_k(x_{\{k\}}).
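To make the definition concrete, the following minimal sketch (in Python, with made-up binary variables, a chain-structured graph, and illustrative potential tables) enumerates all joint assignments to compute the partition function Z and the joint distribution by brute force:

```python
import itertools

# A minimal sketch: three binary variables in a chain a - b - c, with
# maximal cliques {a, b} and {b, c}. The potential tables are illustrative.
variables = ["a", "b", "c"]
cliques = [("a", "b"), ("b", "c")]
potentials = {
    ("a", "b"): {(0, 0): 3.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 3.0},
    ("b", "c"): {(0, 0): 2.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 2.0},
}

def unnormalized(assignment):
    """Product of clique potentials: prod_k phi_k(x_{k})."""
    p = 1.0
    for clique in cliques:
        p *= potentials[clique][tuple(assignment[v] for v in clique)]
    return p

# Partition function Z: sum the unnormalized measure over all joint states.
Z = sum(
    unnormalized(dict(zip(variables, values)))
    for values in itertools.product([0, 1], repeat=len(variables))
)

def joint(assignment):
    """P(X = x) = (1/Z) * prod_k phi_k(x_{k})."""
    return unnormalized(assignment) / Z

print(Z)                                # 24.0 for these tables
print(joint({"a": 0, "b": 0, "c": 0}))  # 6/24 = 0.25
```

Brute-force enumeration is exponential in the number of variables, which is exactly why the inference techniques discussed below matter.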
In practice, a Markov network is often conveniently expressed as a log-linear model, given by

P(X = x) = \frac{1}{Z} \exp\left( \sum_{k} w_k^{\top} f_k(x_{\{k\}}) \right),

with normalizing constant

Z = \sum_{x \in \mathcal{X}} \exp\left( \sum_{k} w_k^{\top} f_k(x_{\{k\}}) \right),

where f_k is a vector of feature functions defined on the kth clique and w_k is the corresponding weight vector.
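As an illustration of the log-linear form, the chain model above can be rewritten so that each potential is the exponential of a weighted feature; the single "agreement" feature and the weights below are assumptions chosen to reproduce the earlier tables, not a standard parameterization:

```python
import math

# Log-linear form: phi_k(x_{k}) = exp(w_k * f_k(x_{k})). Here each clique
# has one "agreement" feature; the weights are chosen so that the
# potentials match the tables in the previous sketch.

def f_agree(x_u, x_v):
    """Feature: 1.0 if the two endpoint variables agree, else 0.0."""
    return 1.0 if x_u == x_v else 0.0

weights = {("a", "b"): math.log(3.0), ("b", "c"): math.log(2.0)}

def phi(clique, assignment):
    u, v = clique
    return math.exp(weights[clique] * f_agree(assignment[u], assignment[v]))

# exp(log 3) = 3 when a = b and exp(0) = 1 otherwise, matching the
# ("a", "b") table above; likewise exp(log 2) = 2 for ("b", "c").
print(phi(("a", "b"), {"a": 1, "b": 1}))  # 3.0 (up to float rounding)
```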
The Markov blanket of a node v_i in a Markov network is defined to be every node with an edge to v_i, i.e. all v_j such that {v_i, v_j} ∈ E. Every node v in a Markov network is conditionally independent of every other node given the Markov blanket of v.
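Computed directly from the graph, the Markov blanket is just the neighbour set; a minimal sketch over a made-up edge list:

```python
# Minimal sketch: the Markov blanket of v_i is the set of all v_j with
# {v_i, v_j} in E, i.e. the graph neighbours. The edge list is made up.

def markov_blanket(node, edges):
    blanket = set()
    for u, v in edges:
        if u == node:
            blanket.add(v)
        elif v == node:
            blanket.add(u)
    return blanket

edges = [("a", "b"), ("b", "c"), ("b", "d"), ("c", "d")]
print(markov_blanket("b", edges))  # {'a', 'c', 'd'}
```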
As in a Bayesian network, one may calculate the conditional distribution of a set of nodes V' = {v1,...,vi} given values for another set of nodes W' = {w1,...,wj} by summing over all possible assignments to the remaining nodes u ∉ V' ∪ W'; this is called exact inference. However, exact inference is in general a #P-complete problem, and thus computationally intractable. Approximation techniques such as Markov chain Monte Carlo and loopy belief propagation are more feasible in practice. (Some particular subclasses of MRFs do admit polynomial-time inference; discovering such subclasses is an active research topic.)
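As one concrete example of the Markov chain Monte Carlo approach, the sketch below runs Gibbs sampling on the small chain model from earlier, resampling each hidden variable from its conditional distribution (which depends only on its Markov blanket); the evidence, sweep count, burn-in, and seed are arbitrary illustrative choices:

```python
import random

# Gibbs sampling sketch: approximate P(a = 1 | c = 1) in the 3-node chain
# a - b - c with the same made-up pairwise potentials as the earlier sketch.
variables = ["a", "b", "c"]
cliques = [("a", "b"), ("b", "c")]
potentials = {
    ("a", "b"): {(0, 0): 3.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 3.0},
    ("b", "c"): {(0, 0): 2.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 2.0},
}

def unnormalized(state):
    p = 1.0
    for clique in cliques:
        p *= potentials[clique][tuple(state[v] for v in clique)]
    return p

def gibbs_conditional(query, evidence, n_sweeps=20000, burn_in=1000, seed=0):
    rng = random.Random(seed)
    state = {v: evidence.get(v, rng.choice([0, 1])) for v in variables}
    hidden = [v for v in variables if v not in evidence]
    hits = 0
    for sweep in range(n_sweeps):
        for v in hidden:
            # The full conditional of v depends only on its Markov blanket.
            w = []
            for val in (0, 1):
                state[v] = val
                w.append(unnormalized(state))
            state[v] = 1 if rng.random() * (w[0] + w[1]) < w[1] else 0
        if sweep >= burn_in:
            hits += state[query]
    return hits / (n_sweeps - burn_in)

print(gibbs_conditional("a", {"c": 1}))  # approximates P(a = 1 | c = 1) = 7/12
```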
One notable variant of a Markov network is a conditional random field, in which each random variable may also be conditioned on a set of global observations o. In this model, each function φk is a mapping from joint assignments of both the clique k and the observations o to the non-negative real numbers. This form of Markov network may be more appropriate for producing discriminative classifiers, which do not model the distribution over the observations.
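To make the conditioning concrete, here is a minimal sketch of a clique potential that also sees a global observation, in the spirit of a chain-structured conditional random field; the feature, weight, and observation values are illustrative assumptions:

```python
import math

# Sketch of a conditional potential: phi_k now maps (clique assignment,
# observations) to a non-negative number. The feature and weight are
# illustrative, not a reference CRF implementation.

def phi(x_u, x_v, obs, weight=1.5):
    # Potential grows when the pair agrees and the observation supports it.
    agree = 1.0 if x_u == x_v else 0.0
    return math.exp(weight * agree * obs)

def score(labels, obs):
    """Unnormalized conditional score of a labelling given observations."""
    s = 1.0
    for (x_u, x_v), o in zip(zip(labels, labels[1:]), obs):
        s *= phi(x_u, x_v, o)
    return s

print(score([1, 1, 0], [0.8, 0.2]))  # exp(1.5 * 0.8) = exp(1.2)
```

Normalizing the score over labellings only, never over o, is what makes such a model discriminative: the distribution over the observations is never represented.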