Inferring network structure
===========================

``graph-tool`` includes algorithms to identify the large-scale structure
of networks in the :mod:`~graph_tool.inference` submodule. Here we
explain the basic functionality with self-contained examples.

Background: Nonparametric statistical inference
-----------------------------------------------

A common task when analyzing networks is to characterize their
structures in simple terms, often by dividing the nodes into modules or
"communities".

A principled approach to perform this task is to formulate `generative
models <https://en.wikipedia.org/wiki/Generative_model>`_ that include
the idea of "modules" in their descriptions, which then can be detected
by `inferring <https://en.wikipedia.org/wiki/Statistical_inference>`_
the model parameters from data. More precisely, given the partition
:math:`\boldsymbol b = \{b_i\}` of the network into :math:`B` groups,
where :math:`b_i\in[0,B-1]` is the group membership of node :math:`i`,
we define a model that generates a network :math:`G` with a probability

.. math::
   :label: model-likelihood

   P(G|\theta, \boldsymbol b)

where :math:`\theta` are additional model parameters. Therefore, if we
observe a network :math:`G`, the likelihood that it was generated by a
given partition :math:`\boldsymbol b` is obtained via the `Bayesian
<https://en.wikipedia.org/wiki/Bayesian_inference>`_ posterior

.. math::
   :label: model-posterior-sum

   P(\boldsymbol b | G) = \frac{\sum_{\theta}P(G|\theta, \boldsymbol b)P(\theta, \boldsymbol b)}{P(G)}

where :math:`P(\theta, \boldsymbol b)` is the `prior probability` of the
model parameters, and

.. math::
   :label: model-evidence

   P(G) = \sum_{\theta,\boldsymbol b}P(G|\theta, \boldsymbol b)P(\theta, \boldsymbol b)

is called the `model evidence`. The particular types of model considered
here have "hard constraints", such that there is only one choice of the
remaining parameters :math:`\theta` that is compatible with the generated
network, in which case Eq. :eq:`model-posterior-sum` simplifies to

.. math::
   :label: model-posterior

   P(\boldsymbol b | G) = \frac{P(G|\theta, \boldsymbol b)P(\theta, \boldsymbol b)}{P(G)}

with :math:`\theta` above being the only choice compatible with
:math:`G` and :math:`\boldsymbol b`. The inference procedures considered
here will consist of either finding a network partition that maximizes
Eq. :eq:`model-posterior`, or sampling different partitions according to
their posterior probability.

As we will show below, this approach will also enable the comparison of
`different` models according to statistical evidence (a.k.a. `model
selection`).

Minimum description length (MDL)
++++++++++++++++++++++++++++++++

We note that Eq. :eq:`model-posterior` can be written as

.. math::

   P(\boldsymbol b | G) = \frac{e^{-\Sigma}}{P(G)}

where

.. math::
   :label: model-dl

   \Sigma = -\ln P(G|\theta, \boldsymbol b) - \ln P(\theta, \boldsymbol b)

is called the **description length** of the network :math:`G`. It
measures the amount of `information
<https://en.wikipedia.org/wiki/Information_theory>`_ required to
describe the data, if we `encode
<https://en.wikipedia.org/wiki/Entropy_encoding>`_ it using the
particular parametrization of the generative model given by
:math:`\theta` and :math:`\boldsymbol b`, as well as the parameters
themselves. Therefore, maximizing the posterior probability of
Eq. :eq:`model-posterior` is fully equivalent to the
so-called `minimum description length
<https://en.wikipedia.org/wiki/Minimum_description_length>`_
method. This approach corresponds to an implementation of `Occam's razor
<https://en.wikipedia.org/wiki/Occam%27s_razor>`_, where the `simplest`
model is selected, among all possibilities with the same explanatory
power. The selection is based on the statistical evidence available, and
therefore will not `overfit
<https://en.wikipedia.org/wiki/Overfitting>`_, i.e. mistake stochastic
fluctuations for actual structure.
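
As a minimal illustration of this equivalence, the following sketch
compares the description lengths of two candidate partitions of the same
network, using functionality described in detail in the sections below;
the partition with the smaller :math:`\Sigma` has the larger posterior
probability:

.. testcode:: mdl-sketch

   import numpy.random

   g = gt.collection.data["football"]

   # candidate 1: a single group containing every node
   state1 = gt.BlockState(g, b=g.new_vp("int", val=0))

   # candidate 2: a random partition into 10 groups
   state2 = gt.BlockState(g, b=numpy.random.randint(0, 10, g.num_vertices()))

   # entropy() returns the description length Σ of Eq. (model-dl); the
   # candidate with the smaller value has the larger posterior probability
   S1, S2 = state1.entropy(), state2.entropy()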

The stochastic block model (SBM)
--------------------------------

The `stochastic block model
<https://en.wikipedia.org/wiki/Stochastic_block_model>`_ is arguably
the simplest generative process based on the notion of groups of
nodes [holland-stochastic-1983]_. The `microcanonical
<https://en.wikipedia.org/wiki/Microcanonical_ensemble>`_ formulation
[peixoto-entropy-2012]_ of the basic or "traditional" version takes
as parameters the partition of the nodes into groups
:math:`\boldsymbol b` and a :math:`B\times B` matrix of edge counts
:math:`\boldsymbol e`, where :math:`e_{rs}` is the number of edges
between groups :math:`r` and :math:`s`. Given these constraints, the
edges are then placed randomly. Hence, nodes that belong to the same
group possess the same probability of being connected with other
nodes of the network.

An example of a possible parametrization is given in the following
figure.

.. testcode:: sbm-example
   :hide:

   import os
   try:
       os.chdir("demos/inference")
   except FileNotFoundError:
       pass

   g = gt.load_graph("blockmodel-example.gt.gz")

   gt.graph_draw(g, pos=g.vp.pos, vertex_size=10, vertex_fill_color=g.vp.bo,
                 vertex_color="#333333",
                 edge_gradient=g.new_ep("vector<double>", val=[0]),
                 output="sbm-example.svg")

   ers = g.gp.w

   from pylab import *
   figure()
   matshow(log(ers))
   xlabel("Group $r$")
   ylabel("Group $s$")
   gca().xaxis.set_label_position("top")
   savefig("sbm-example-ers.svg")

.. table::
   :class: figure

   +----------------------------------+------------------------------+
   |.. figure:: sbm-example-ers.svg   |.. figure:: sbm-example.svg   |
   |   :width: 300px                  |   :width: 300px              |
   |   :align: center                 |   :align: center             |
   |                                  |                              |
   |   Matrix of edge counts          |   Generated network.         |
   |   :math:`\boldsymbol e` between  |                              |
   |   groups.                        |                              |
   +----------------------------------+------------------------------+

.. note::

   We emphasize that no constraints are imposed on what `kind` of
   modular structure is allowed. Hence, we can detect the putatively
   typical pattern of `"community structure"
   <https://en.wikipedia.org/wiki/Community_structure>`_, i.e. when
   nodes are connected mostly to other nodes of the same group, if it
   happens to be the most likely network description, but we can also
   detect a large multiplicity of other patterns, such as `bipartiteness
   <https://en.wikipedia.org/wiki/Bipartite_graph>`_, core-periphery,
   and many others, all under the same inference framework.

Although quite general, the traditional model assumes that the edges are
placed randomly inside each group, and as such the nodes that belong to
the same group have very similar degrees. As it turns out, this is often
a poor model for many networks, which possess highly heterogeneous
degree distributions. A better model for such networks is called the
`degree-corrected` stochastic block model [karrer-stochastic-2011]_, and
it is defined just like the traditional model, with the degree sequence
:math:`\boldsymbol k = \{k_i\}` of the graph included as an additional
set of parameters (assuming again a microcanonical formulation
[peixoto-entropy-2012]_).
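
To make the generative direction of the model concrete, the following is
a short sketch of sampling a network from an SBM parametrization. It
assumes a ``graph-tool`` version that provides the
:func:`~graph_tool.generation.generate_sbm` function, which takes the
group membership of each node and a matrix of expected edge counts
between groups:

.. code-block:: python

   import numpy as np

   # group membership: three groups of 20 nodes each
   b = np.repeat([0, 1, 2], 20)

   # expected edge counts between groups: an assortative ("community")
   # structure, with far more edges within groups than between them
   ers = np.array([[40,  2,  2],
                   [ 2, 40,  2],
                   [ 2,  2, 40]])

   # assumes gt.generate_sbm() is available (present in recent versions)
   u = gt.generate_sbm(b, ers)

Making the off-diagonal entries dominate instead would produce a
bipartite-like structure, illustrating that the parametrization is not
tied to community structure.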

The nested stochastic block model
+++++++++++++++++++++++++++++++++

The regular SBM has a drawback when applied to very large
networks. Namely, it cannot be used to find relatively small groups in
very large networks: The maximum number of groups that can be found
scales as :math:`B_{\text{max}}\sim\sqrt{N}`, where :math:`N` is the
number of nodes in the network, if Bayesian inference is performed
[peixoto-parsimonious-2013]_. In order to circumvent this, we need to
replace the noninformative priors with a hierarchy of priors and
hyperpriors, which amounts to a `nested SBM`, where the groups
themselves are clustered into groups, and the matrix
:math:`\boldsymbol e` of edge counts is itself generated from another
SBM, and so on recursively [peixoto-hierarchical-2014]_.

.. figure:: nested-diagram.*
   :width: 400px
   :align: center

   Example of a nested SBM with three levels.

In addition to being able to find small groups in large networks, this
model also provides a hierarchical description of the network, which
captures its structure at multiple scales.

Inferring the best partition
----------------------------

The simplest and most efficient approach is to find the best
partition of the network by maximizing Eq. :eq:`model-posterior`
according to some version of the model. This is obtained via the
functions :func:`~graph_tool.inference.minimize_blockmodel_dl` or
:func:`~graph_tool.inference.minimize_nested_blockmodel_dl`, which
employ an agglomerative multilevel `Markov chain Monte Carlo (MCMC)
<https://en.wikipedia.org/wiki/Markov_chain_Monte_Carlo>`_ algorithm
[peixoto-efficient-2014]_.

We focus first on the non-nested model, and we illustrate its use with a
network of American football teams, which we load from the
:mod:`~graph_tool.collection` module:

.. testsetup:: football

   import os
   try:
       os.chdir("demos/inference")
   except FileNotFoundError:
       pass

.. testcode:: football

   g = gt.collection.data["football"]
   print(g)

which yields

.. testoutput:: football

   <Graph object, undirected, with 115 vertices and 613 edges at 0x...>

We then fit the `traditional` model by calling:

.. testcode:: football

   state = gt.minimize_blockmodel_dl(g, deg_corr=False)

This returns a :class:`~graph_tool.inference.BlockState` object that
includes the inference results.

.. note::

   The inference algorithm used is stochastic by nature, and may return
   a slightly different answer each time it is run. This may be due to
   the fact that there are alternative partitions with similar
   likelihoods, or that the optimum is difficult to find. Note that the
   inference problem here is, in general, `NP-hard
   <https://en.wikipedia.org/wiki/NP-hardness>`_, hence there is no
   efficient algorithm that is guaranteed to always find the best
   answer.

   Because of this, typically one would call the algorithm many times,
   and select the partition with the largest posterior likelihood of
   Eq. :eq:`model-posterior`, or equivalently, the minimum description
   length of Eq. :eq:`model-dl`. The description length of a fit can be
   obtained with the :meth:`~graph_tool.inference.BlockState.entropy`
   method. See also :ref:`sec_model_selection` below.
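
A minimal sketch of this strategy is the following, where we keep the
fit with the smallest description length among several attempts (the
number of attempts used here is arbitrary):

.. testcode:: football

   # run the algorithm several times, and keep the fit with the smallest
   # description length, i.e. the largest posterior probability
   states = [gt.minimize_blockmodel_dl(g, deg_corr=False) for i in range(10)]
   state = min(states, key=lambda s: s.entropy())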

We can draw the obtained partition via the
:meth:`~graph_tool.inference.BlockState.draw` method, which functions as
a convenience wrapper to the :func:`~graph_tool.draw.graph_draw`
function:

.. testcode:: football

   state.draw(pos=g.vp.pos, output="football-sbm-fit.svg")

which yields the following image.

.. figure:: football-sbm-fit.*
   :align: center
   :width: 400px

   Stochastic block model inference of a network of American college
   football teams. The colors correspond to inferred group membership of
   the nodes.

We can obtain the group memberships as a
:class:`~graph_tool.PropertyMap` on the vertices via the
:meth:`~graph_tool.inference.BlockState.get_blocks` method:

.. testcode:: football

   b = state.get_blocks()
   r = b[10]   # group membership of vertex 10
   print(r)

which yields:

.. testoutput:: football

   3

We may also access the matrix of edge counts between groups via the
:meth:`~graph_tool.inference.BlockState.get_matrix` method:

.. testcode:: football

   from pylab import *   # provides matshow() and savefig()

   e = state.get_matrix()
   matshow(e.todense())
   savefig("football-edge-counts.svg")

.. figure:: football-edge-counts.*
   :align: center

   Matrix of edge counts between groups.

We may obtain the same matrix of edge counts as a graph, which has
internal edge and vertex property maps with the edge and vertex counts,
respectively:

.. testcode:: football

   bg = state.get_bg()
   ers = bg.ep.count    # edge counts
   nr = bg.vp.count     # node counts
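
From these counts we can also sketch rough empirical connection
probabilities between groups. The snippet below assumes contiguous group
labels, and glosses over the convention used for the diagonal entries
(which count edges internal to each group):

.. testcode:: football

   import numpy as np

   # rough sketch: per-pair connection probabilities p_rs ≈ e_rs / (n_r n_s),
   # assuming contiguous group labels; empty groups are guarded against
   # with np.maximum()
   e = np.asarray(state.get_matrix().todense())
   n = np.bincount(state.get_blocks().a, minlength=e.shape[0])   # group sizes
   p = e / np.maximum(np.outer(n, n), 1)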

Hierarchical partitions
+++++++++++++++++++++++

The inference of the nested family of SBMs is done in a similar manner,
but we must use instead the
:func:`~graph_tool.inference.minimize_nested_blockmodel_dl` function. We
illustrate its use with the neural network of the `C. elegans
<https://en.wikipedia.org/wiki/Caenorhabditis_elegans>`_ worm:

.. testcode:: celegans

   g = gt.collection.data["celegansneural"]
   print(g)

which has 297 vertices and 2359 edges:

.. testoutput:: celegans

   <Graph object, directed, with 297 vertices and 2359 edges at 0x...>

A hierarchical fit of the degree-corrected model is performed as follows.

.. testcode:: celegans

   state = gt.minimize_nested_blockmodel_dl(g, deg_corr=True)

The object returned is an instance of a
:class:`~graph_tool.inference.NestedBlockState` class, which
encapsulates the results. We can again draw the resulting hierarchical
clustering using the
:meth:`~graph_tool.inference.NestedBlockState.draw` method:

.. testcode:: celegans

   state.draw(output="celegans-hsbm-fit.svg")

.. figure:: celegans-hsbm-fit.*
   :align: center

   Most likely hierarchical partition of the neural network of
   the C. elegans worm according to the nested degree-corrected SBM.

.. note::

   If the ``output`` parameter to
   :meth:`~graph_tool.inference.NestedBlockState.draw` is omitted, an
   interactive visualization is performed, where the user can re-order
   the hierarchy nodes using the mouse and pressing the ``r`` key.

A summary of the inferred hierarchy can be obtained with the
:meth:`~graph_tool.inference.NestedBlockState.print_summary` method,
which shows the number of nodes and groups in all levels:

.. testcode:: celegans

   state.print_summary()

.. testoutput:: celegans

   l: 0, N: 297, B: 23
   l: 1, N: 23, B: 6
   l: 2, N: 6, B: 2
   l: 3, N: 2, B: 1

The hierarchical levels themselves are represented by individual
:class:`~graph_tool.inference.BlockState` instances, obtained via the
:meth:`~graph_tool.inference.NestedBlockState.get_levels` method:

.. testcode:: celegans

   levels = state.get_levels()
   for s in levels:
       print(s)

.. testoutput:: celegans

   <BlockState object with 23 blocks (23 nonempty), degree-corrected, for graph <Graph object, directed, with 297 vertices and 2359 edges at 0x...>, at 0x...>
   <BlockState object with 6 blocks (6 nonempty), for graph <Graph object, directed, with 23 vertices and 249 edges at 0x...>, at 0x...>
   <BlockState object with 2 blocks (2 nonempty), for graph <Graph object, directed, with 6 vertices and 31 edges at 0x...>, at 0x...>
   <BlockState object with 1 blocks (1 nonempty), for graph <Graph object, directed, with 2 vertices and 4 edges at 0x...>, at 0x...>

This means that we can inspect the hierarchical partition just as before:

.. testcode:: celegans

   r = levels[0].get_blocks()[42]   # group membership of node 42 in level 0
   print(r)
   r = levels[1].get_blocks()[r]    # group membership of node 42 in level 1
   print(r)
   r = levels[2].get_blocks()[r]    # group membership of node 42 in level 2
   print(r)

.. testoutput:: celegans

   10
   6
   4
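
Alternatively, the membership of `all` nodes at a given level can be
obtained at once. The sketch below assumes the
:meth:`~graph_tool.inference.NestedBlockState.project_level` method is
available, which should return a :class:`~graph_tool.inference.BlockState`
with the hierarchical partition projected onto a single level:

.. code-block:: python

   # assumes NestedBlockState.project_level() is available; it should
   # return a BlockState with the partition projected onto level 1, i.e.
   # the level-1 group membership of each node of the original network
   state_l1 = state.project_level(1)
   b = state_l1.get_blocks()
   print(b[42])   # should match the level-1 membership obtained above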

.. _sec_model_selection:

Model selection
+++++++++++++++

As mentioned above, one can select the best model according to the
choice that yields the smallest description length. For instance, in
the case of the `C. elegans` network we have

.. testcode:: model-selection

   g = gt.collection.data["celegansneural"]

   state_ndc = gt.minimize_nested_blockmodel_dl(g, deg_corr=False)
   state_dc  = gt.minimize_nested_blockmodel_dl(g, deg_corr=True)

   print("Non-degree-corrected DL:\t", state_ndc.entropy())
   print("Degree-corrected DL:\t", state_dc.entropy())

.. testoutput:: model-selection
   :options: +NORMALIZE_WHITESPACE

   Non-degree-corrected DL:  8498.72893945
   Degree-corrected DL:      8302.44951314

Since it yields the smallest description length, the degree-corrected
fit should be preferred. The statistical significance of the choice can
be assessed by inspecting the posterior odds ratio (or more precisely,
the `Bayes factor <https://en.wikipedia.org/wiki/Bayes_factor>`_)
[peixoto-model-2016]_

.. math::

   \Lambda &= \frac{P(\boldsymbol b | G, \mathcal{H}_\text{NDC})}{P(\boldsymbol b | G, \mathcal{H}_\text{DC})} \\
           &= \exp(-\Delta\Sigma)

where :math:`\mathcal{H}_\text{NDC}` and :math:`\mathcal{H}_\text{DC}`
correspond to the non-degree-corrected and degree-corrected model
hypotheses, respectively, and :math:`\Delta\Sigma =
\Sigma_\text{NDC} - \Sigma_\text{DC}` is the difference between the
description lengths of both fits. In our particular case, we have

.. testcode:: model-selection

   print("ln Λ: ", state_dc.entropy() - state_ndc.entropy())

.. testoutput:: model-selection
   :options: +NORMALIZE_WHITESPACE

   ln Λ:  -196.279426317

The precise threshold that should be used to decide when to `reject a
hypothesis <https://en.wikipedia.org/wiki/Hypothesis_testing>`_ is
subjective and context-dependent, but the value above implies that the
particular degree-corrected fit is around :math:`e^{196} \sim 10^{85}`
times more likely than the non-degree-corrected one, and hence it can be
safely concluded that it provides a substantially better fit.
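
The order of magnitude quoted above follows from converting the natural
logarithm to base 10, e.g.

.. testcode:: model-selection

   from math import log

   # convert ln Λ into a base-10 order of magnitude: ln Λ ≈ -196 gives
   # log10 Λ ≈ -196 / ln 10 ≈ -85, i.e. the degree-corrected fit is
   # around 10^85 times more likely than the alternative
   print((state_dc.entropy() - state_ndc.entropy()) / log(10))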

Although it is often true that the degree-corrected model provides a
better fit for many empirical networks, there are also exceptions. For
example, for the American football network above, we have:

.. testcode:: model-selection

   g = gt.collection.data["football"]

   state_ndc = gt.minimize_nested_blockmodel_dl(g, deg_corr=False)
   state_dc  = gt.minimize_nested_blockmodel_dl(g, deg_corr=True)

   print("Non-degree-corrected DL:\t", state_ndc.entropy())
   print("Degree-corrected DL:\t", state_dc.entropy())
   print("ln Λ:\t\t\t", state_ndc.entropy() - state_dc.entropy())

.. testoutput:: model-selection
   :options: +NORMALIZE_WHITESPACE

   Non-degree-corrected DL:  1725.78502074
   Degree-corrected DL:      1772.83605254
   ln Λ:                     -47.0510317979

Hence, with a posterior odds ratio of :math:`\Lambda \sim e^{-47} \sim
10^{-20}` against the degree-corrected model (note that the ratio is now
computed with the degree-corrected hypothesis in the numerator, i.e.
inverted with respect to the definition above), it seems that the
degree-corrected variant is an unnecessarily complex description for
this network.

Averaging over models
---------------------

When analyzing empirical networks, one should be open to the possibility
that there will be more than one fit of the SBM with similar posterior
likelihoods. In such situations, one should `sample` partitions from the
posterior likelihood, instead of simply finding its
maximum. One can then compute quantities that are averaged over the
different model fits, weighted according to their posterior likelihoods.

Full support for model averaging is implemented in ``graph-tool`` via an
efficient `Markov chain Monte Carlo (MCMC)
<https://en.wikipedia.org/wiki/Markov_chain_Monte_Carlo>`_ algorithm
[peixoto-efficient-2014]_. It works by attempting to move nodes into
different groups with specific probabilities, and `accepting or
rejecting
<https://en.wikipedia.org/wiki/Metropolis%E2%80%93Hastings_algorithm>`_
such moves so that, after a sufficiently long time, the partitions will
be observed with the desired posterior probability. The algorithm is
designed so that its run time is independent of the number of groups
being used in the model, and hence it is suitable for use on very large
networks.

In order to perform such moves, one needs again to operate with
:class:`~graph_tool.inference.BlockState` or
:class:`~graph_tool.inference.NestedBlockState` instances, and call
their :meth:`~graph_tool.inference.BlockState.mcmc_sweep` methods. For
example, the following will perform 1000 sweeps of the algorithm with
the network of characters in the novel Les Misérables, starting from a
random partition into 20 groups:

.. testsetup:: model-averaging

   import os
   try:
       os.chdir("demos/inference")
   except FileNotFoundError:
       pass

.. testcode:: model-averaging

   g = gt.collection.data["lesmis"]

   state = gt.BlockState(g, B=20)   # This automatically initializes the state
                                    # with a random partition into B=20
                                    # nonempty groups; the user could
                                    # also pass an arbitrary initial
                                    # partition using the 'b' parameter.

   # If we work with the above state object, we will be restricted to
   # partitions into at most B=20 groups. But since we want to consider
   # an arbitrary number of groups in the range [1, N], we transform it
   # into a state with B=N groups (where N-20 will be empty).

   state = state.copy(B=g.num_vertices())

   # Now we run 1,000 sweeps of the MCMC

   dS, nmoves = state.mcmc_sweep(niter=1000)

   print("Change in description length:", dS)
   print("Number of accepted vertex moves:", nmoves)

.. testoutput:: model-averaging

   Change in description length: -374.3292765930462
   Number of accepted vertex moves: 4394

.. note::

   Starting from a random partition is rarely the best option, since it
   may take a long time for it to equilibrate; it was done above simply
   as an illustration of how to initialize
   :class:`~graph_tool.inference.BlockState` by hand. Instead, a much
   better option in practice is to start from the "ground state"
   obtained with :func:`~graph_tool.inference.minimize_blockmodel_dl`,
   e.g.

   .. testcode:: model-averaging

      state = gt.minimize_blockmodel_dl(g)
      state = state.copy(B=g.num_vertices())

      dS, nmoves = state.mcmc_sweep(niter=1000)

      print("Change in description length:", dS)
      print("Number of accepted vertex moves:", nmoves)

   .. testoutput:: model-averaging

      Change in description length: 22.056557648826185
      Number of accepted vertex moves: 4490

Although the above is sufficient to implement model averaging, there is
a convenience function called
:func:`~graph_tool.inference.mcmc_equilibrate` that is intended to
simplify the detection of equilibration, by keeping track of the maximum
and minimum values of the description length encountered, and how many
sweeps have been made without a "record breaking" event. For example,

.. testcode:: model-averaging

   # We will accept equilibration if 10 sweeps are completed without a
   # record breaking event, 2 consecutive times.

   gt.mcmc_equilibrate(state, wait=10, nbreaks=2, mcmc_args=dict(niter=10),
                       verbose=True)
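
Once equilibration is detected, quantities of interest can be collected
from the subsequent samples. A minimal sketch, assuming
:func:`~graph_tool.inference.mcmc_equilibrate` accepts a ``callback``
argument (a function invoked with the state after each iteration) and a
``force_niter`` argument (forcing a fixed number of iterations), as in
recent versions of ``graph-tool``:

.. code-block:: python

   # collect the description length of each sampled partition; assumes
   # the 'callback' and 'force_niter' arguments described above
   dls = []

   def collect_dl(s):
       dls.append(s.entropy())

   gt.mcmc_equilibrate(state, force_niter=100, mcmc_args=dict(niter=10),
                       callback=collect_dl)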