Commit d2cde4cb authored by Tiago Peixoto

Add network reconstruction to inference HOWTO

parent 57d7c7ec
@@ -83,6 +83,8 @@ release = gt_version.split()[0]
# for source files.
exclude_trees = ['.build']
exclude_patterns = ['**/_*.rst']
# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None
Background: Nonparametric statistical inference
-----------------------------------------------
A common task when analyzing networks is to characterize their
structures in simple terms, often by dividing the nodes into modules or
`"communities" <https://en.wikipedia.org/wiki/Community_structure>`__.
A principled approach to perform this task is to formulate `generative
models <https://en.wikipedia.org/wiki/Generative_model>`_ that include
the idea of "modules" in their descriptions, which then can be detected
by `inferring <https://en.wikipedia.org/wiki/Statistical_inference>`_
the model parameters from data. More precisely, given the partition
:math:`\boldsymbol b = \{b_i\}` of the network into :math:`B` groups,
where :math:`b_i\in[0,B-1]` is the group membership of node :math:`i`,
we define a model that generates a network :math:`\boldsymbol G` with a
probability
.. math::
:label: model-likelihood
P(\boldsymbol G|\boldsymbol\theta, \boldsymbol b)
where :math:`\boldsymbol\theta` are additional model parameters that
control how the node partition affects the structure of the
network. Therefore, if we observe a network :math:`\boldsymbol G`, the
likelihood that it was generated by a given partition :math:`\boldsymbol
b` is obtained via the `Bayesian
<https://en.wikipedia.org/wiki/Bayesian_inference>`_ posterior probability
.. math::
:label: model-posterior-sum
P(\boldsymbol b | \boldsymbol G) = \frac{\sum_{\boldsymbol\theta}P(\boldsymbol G|\boldsymbol\theta, \boldsymbol b)P(\boldsymbol\theta, \boldsymbol b)}{P(\boldsymbol G)}
where :math:`P(\boldsymbol\theta, \boldsymbol b)` is the `prior
probability <https://en.wikipedia.org/wiki/Prior_probability>`_ of the
model parameters, and
.. math::
:label: model-evidence
P(\boldsymbol G) = \sum_{\boldsymbol\theta,\boldsymbol b}P(\boldsymbol G|\boldsymbol\theta, \boldsymbol b)P(\boldsymbol\theta, \boldsymbol b)
is called the `evidence`, and corresponds to the total probability of
the data summed over all model parameters. The particular types of model
that will be considered here have "hard constraints", such that there is
only one choice for the remaining parameters :math:`\boldsymbol\theta`
that is compatible with the generated network, so that
Eq. :eq:`model-posterior-sum` simplifies to
.. math::
:label: model-posterior
P(\boldsymbol b | \boldsymbol G) = \frac{P(\boldsymbol G|\boldsymbol\theta, \boldsymbol b)P(\boldsymbol\theta, \boldsymbol b)}{P(\boldsymbol G)}
with :math:`\boldsymbol\theta` above being the only choice compatible with
:math:`\boldsymbol G` and :math:`\boldsymbol b`. The inference procedures considered
here will consist of either finding a network partition that maximizes
Eq. :eq:`model-posterior`, or sampling different partitions according to
their posterior probability.
As we will show below, this approach also enables the comparison of
`different` models according to statistical evidence (a.k.a. `model
selection`).
Minimum description length (MDL)
++++++++++++++++++++++++++++++++
We note that Eq. :eq:`model-posterior` can be written as
.. math::
P(\boldsymbol b | \boldsymbol G) = \frac{\exp(-\Sigma)}{P(\boldsymbol G)}
where
.. math::
:label: model-dl
\Sigma = -\ln P(\boldsymbol G|\boldsymbol\theta, \boldsymbol b) - \ln P(\boldsymbol\theta, \boldsymbol b)
is called the **description length** of the network :math:`\boldsymbol
G`. It measures the amount of `information
<https://en.wikipedia.org/wiki/Information_theory>`_ required to
describe the data, if we `encode
<https://en.wikipedia.org/wiki/Entropy_encoding>`_ it using the
particular parametrization of the generative model given by
:math:`\boldsymbol\theta` and :math:`\boldsymbol b`, as well as the
parameters themselves. Therefore, maximizing the posterior distribution
of Eq. :eq:`model-posterior` is fully equivalent to
the so-called `minimum description length
<https://en.wikipedia.org/wiki/Minimum_description_length>`_
method. This approach corresponds to an implementation of `Occam's razor
<https://en.wikipedia.org/wiki/Occam%27s_razor>`_, where the `simplest`
model is selected, among all possibilities with the same explanatory
power. The selection is based on the statistical evidence available, and
therefore will not `overfit
<https://en.wikipedia.org/wiki/Overfitting>`_, i.e. mistake stochastic
fluctuations for actual structure. In particular this means that we will
not find modules in networks if they could have arisen simply because of
stochastic fluctuations, as they do in fully random graphs
[guimera-modularity-2004]_.
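In graph-tool, the description length of Eq. :eq:`model-dl` is returned by
the :meth:`~graph_tool.inference.blockmodel.BlockState.entropy` method, so
the equivalence above can be checked directly. A minimal sketch, using the
``football`` network introduced further below:

.. code-block:: python

    g = gt.collection.data["football"]
    state = gt.minimize_blockmodel_dl(g)  # maximizes Eq. (model-posterior)
    print(state.entropy())                # description length Σ, in nats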
The stochastic block model (SBM)
--------------------------------
The `stochastic block model
<https://en.wikipedia.org/wiki/Stochastic_block_model>`_ is arguably
the simplest generative process based on the notion of groups of
nodes [holland-stochastic-1983]_. The `microcanonical
<https://en.wikipedia.org/wiki/Microcanonical_ensemble>`_ formulation
[peixoto-nonparametric-2017]_ of the basic or "traditional" version takes
as parameters the partition of the nodes into groups
:math:`\boldsymbol b` and a :math:`B\times B` matrix of edge counts
:math:`\boldsymbol e`, where :math:`e_{rs}` is the number of edges
between groups :math:`r` and :math:`s`. Given these constraints, the
edges are then placed randomly. Hence, nodes that belong to the same
group possess the same probability of being connected with other
nodes of the network.
An example of a possible parametrization is given in the following
figure.
.. testcode:: sbm-example
:hide:
import os
try:
os.chdir("demos/inference")
except FileNotFoundError:
pass
g = gt.load_graph("blockmodel-example.gt.gz")
gt.graph_draw(g, pos=g.vp.pos, vertex_size=10, vertex_fill_color=g.vp.bo,
vertex_color="#333333",
edge_gradient=g.new_ep("vector<double>", val=[0]),
output="sbm-example.svg")
ers = g.gp.w
from pylab import *
figure()
matshow(log(ers))
xlabel("Group $r$")
ylabel("Group $s$")
gca().xaxis.set_label_position("top")
savefig("sbm-example-ers.svg")
.. table::
:class: figure
+----------------------------------+------------------------------+
|.. figure:: sbm-example-ers.svg |.. figure:: sbm-example.svg |
| :width: 300px | :width: 300px |
| :align: center | :align: center |
| | |
| Matrix of edge counts | Generated network. |
| :math:`\boldsymbol e` between | |
| groups. | |
+----------------------------------+------------------------------+
.. note::
We emphasize that no constraints are imposed on what `kind` of
modular structure is allowed, as the matrix of edge counts :math:`\boldsymbol e`
is unconstrained. Hence, we can detect the putatively typical pattern
of `"community structure"
<https://en.wikipedia.org/wiki/Community_structure>`_, i.e. when
nodes are connected mostly to other nodes of the same group, if it
happens to be the most likely network description, but we can also
detect a large multiplicity of other patterns, such as `bipartiteness
<https://en.wikipedia.org/wiki/Bipartite_graph>`_, core-periphery,
and many others, all under the same inference framework.
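To make the generative process concrete, the following is a minimal
sketch of sampling from an SBM parametrization, assuming a graph-tool
version that provides :func:`~graph_tool.generation.generate_sbm` (the
partition and edge-count matrix below are arbitrary illustrations, not
the ones used in the figure above):

.. code-block:: python

    import numpy as np

    b = np.repeat([0, 1, 2], 100)      # partition: 3 groups of 100 nodes each
    ers = np.array([[200,  20,  20],   # matrix of edge counts e_rs between
                    [ 20, 200,  20],   # groups (an assortative example)
                    [ 20,  20, 200]])
    u = gt.generate_sbm(b, ers, micro_ers=True)  # sample a graph with exactly
                                                 # e_rs edges between groups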
Although quite general, the traditional model assumes that the edges are
placed randomly inside each group, and because of this the nodes that
belong to the same group tend to have very similar degrees. As it turns
out, this is often a poor model for many networks, which possess highly
heterogeneous degree distributions. A better model for such networks is
called the `degree-corrected` stochastic block model
[karrer-stochastic-2011]_, and it is defined just like the traditional
model, with the addition of the degree sequence :math:`\boldsymbol k =
\{k_i\}` of the graph as an additional set of parameters (assuming again
a microcanonical formulation [peixoto-nonparametric-2017]_).
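In graph-tool, the degree-corrected variant is selected via the
``deg_corr`` parameter of the inference functions described below. A
minimal sketch of comparing both variants on some graph ``g`` via their
description lengths (a more careful comparison is done in
Sec. :ref:`sec_model_selection`):

.. code-block:: python

    for deg_corr in [True, False]:
        state = gt.minimize_blockmodel_dl(g, deg_corr=deg_corr)
        print(deg_corr, state.entropy())  # smaller Σ indicates a better fit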
The nested stochastic block model
+++++++++++++++++++++++++++++++++
The regular SBM has a drawback when applied to large networks. Namely,
it cannot be used to find relatively small groups, as the maximum number
of groups that can be found scales as
:math:`B_{\text{max}}=O(\sqrt{N})`, where :math:`N` is the number of
nodes in the network, if Bayesian inference is performed
[peixoto-parsimonious-2013]_. In order to circumvent this, we need to
replace the noninformative priors with a hierarchy of priors and
hyperpriors, which amounts to a `nested SBM`, where the groups
themselves are clustered into groups, and the matrix :math:`\boldsymbol e`
of edge counts is itself generated by another SBM, and so on recursively
[peixoto-hierarchical-2014]_, as illustrated below.
.. figure:: nested-diagram.*
:width: 400px
:align: center
Example of a nested SBM with three levels.
With this model, the maximum number of groups that can be inferred
scales as :math:`B_{\text{max}}=O(N/\log(N))`. In addition to being able
to find small groups in large networks, this model also provides a
multilevel hierarchical description of the network. With such a
description, we can uncover structural patterns at multiple scales,
representing different levels of coarse-graining.
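To give a sense of the difference between the two scalings, a quick
back-of-the-envelope computation:

.. code-block:: python

    import numpy as np

    N = 10**6                  # a network with one million nodes
    print(np.sqrt(N))          # flat SBM:   B_max ≈ 1,000
    print(N / np.log(N))       # nested SBM: B_max ≈ 72,000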
Layered networks
----------------
The edges of the network may be distributed in discrete "layers",
representing distinct types of interactions
[peixoto-inferring-2015]_. Extensions to the SBM may be defined for such
data, and they can be inferred using the exact same interface shown
above, except one should use the
:class:`~graph_tool.inference.layered_blockmodel.LayeredBlockState`
class, instead of
:class:`~graph_tool.inference.blockmodel.BlockState`. This class takes
two additional parameters: the ``ec`` parameter, which must correspond to
an edge :class:`~graph_tool.PropertyMap` with the layer/covariate values
on the edges, and the Boolean ``layers`` parameter, which if ``True``
specifies a layered model, otherwise one with categorical edge
covariates (not to be confused with the weighted models in
Sec. :ref:`weights`).
If we use :func:`~graph_tool.inference.minimize.minimize_blockmodel_dl` or
:func:`~graph_tool.inference.minimize.minimize_nested_blockmodel_dl`, this can
be achieved simply by passing the option ``layers=True`` as well as the
appropriate value of ``state_args``, which will be propagated to
:class:`~graph_tool.inference.layered_blockmodel.LayeredBlockState`'s constructor.
As an example, let us consider a social network of tribes, where two
types of interactions were recorded, amounting to either friendship or
enmity [read-cultures-1954]_. We may apply the layered model by
separating these two types of interactions in two layers:
.. testsetup:: layered-model
import os
try:
os.chdir("demos/inference")
except FileNotFoundError:
pass
gt.seed_rng(42)
.. testcode:: layered-model
g = gt.collection.konect_data["ucidata-gama"]
# The edge types are stored in the edge property map "weights".
# Note the different meanings of the two 'layers' parameters below: The
# first enables the use of LayeredBlockState, and the second selects
# the 'edge layers' version (instead of 'edge covariates').
state = gt.minimize_nested_blockmodel_dl(g, layers=True,
state_args=dict(ec=g.ep.weight, layers=True))
state.draw(edge_color=g.ep.weight, edge_gradient=[],
ecmap=(matplotlib.cm.coolwarm_r, .6), edge_pen_width=5,
output="tribes-sbm-edge-layers.svg")
.. figure:: tribes-sbm-edge-layers.*
:align: center
:width: 350px
Best fit of the degree-corrected SBM with edge layers for a network
of tribes, with edge layers shown as colors. The groups show two
enemy tribes.
It is possible to perform model averaging for all layered variants
exactly as was shown above for the regular SBMs.
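For instance, a minimal sketch of sampling description lengths from the
posterior of the layered model, re-using the ``state`` object obtained
above and the same ``gt.mcmc_equilibrate`` pattern employed elsewhere in
this HOWTO:

.. code-block:: python

    state = state.copy(sampling=True)  # enable sampling moves
    dls = []                           # description length history

    def collect(s):
        dls.append(s.entropy())

    gt.mcmc_equilibrate(state, force_niter=1000, mcmc_args=dict(niter=10),
                        callback=collect)
    print(min(dls), sum(dls) / len(dls))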
Inferring the best partition
----------------------------
The simplest and most efficient approach is to find the best
partition of the network by maximizing Eq. :eq:`model-posterior`
according to some version of the model. This is obtained via the
functions :func:`~graph_tool.inference.minimize.minimize_blockmodel_dl` or
:func:`~graph_tool.inference.minimize.minimize_nested_blockmodel_dl`, which
employ an agglomerative multilevel `Markov chain Monte Carlo (MCMC)
<https://en.wikipedia.org/wiki/Markov_chain_Monte_Carlo>`_ algorithm
[peixoto-efficient-2014]_.
We focus first on the non-nested model, and we illustrate its use with a
network of American football teams, which we load from the
:mod:`~graph_tool.collection` module:
.. testsetup:: football
import os
try:
os.chdir("demos/inference")
except FileNotFoundError:
pass
gt.seed_rng(7)
.. testcode:: football
g = gt.collection.data["football"]
print(g)
which yields
.. testoutput:: football
<Graph object, undirected, with 115 vertices and 613 edges at 0x...>
We then fit the degree-corrected model by calling
.. testcode:: football
state = gt.minimize_blockmodel_dl(g)
This returns a :class:`~graph_tool.inference.blockmodel.BlockState` object that
includes the inference results.
.. note::
The inference algorithm used is stochastic by nature, and may return
a different answer each time it is run. This may be due to the fact
that there are alternative partitions with similar probabilities, or
that the optimum is difficult to find. Note that the inference
problem here is, in general, `NP-Hard
<https://en.wikipedia.org/wiki/NP-hardness>`_, hence there is no
efficient algorithm that is guaranteed to always find the best
answer.
Because of this, typically one would call the algorithm many times,
and select the partition with the largest posterior probability of
Eq. :eq:`model-posterior`, or equivalently, the minimum description
length of Eq. :eq:`model-dl`. The description length of a fit can be
obtained with the :meth:`~graph_tool.inference.blockmodel.BlockState.entropy`
method; a minimal sketch of this strategy is given right after this
note. See also Sec. :ref:`sec_model_selection` below.
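For instance, the following sketch keeps the fit with the smallest
description length out of ten independent runs (the number of
repetitions is arbitrary):

.. code-block:: python

    states = [gt.minimize_blockmodel_dl(g) for i in range(10)]
    state = min(states, key=lambda s: s.entropy())  # best (smallest Σ) fit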
We may draw the obtained partition via the
:meth:`~graph_tool.inference.blockmodel.BlockState.draw` method, which
functions as a convenience wrapper to the
:func:`~graph_tool.draw.graph_draw` function
.. testcode:: football
state.draw(pos=g.vp.pos, output="football-sbm-fit.svg")
which yields the following image.
.. figure:: football-sbm-fit.*
:align: center
:width: 400px
Stochastic block model inference of a network of American college
football teams. The colors correspond to inferred group membership of
the nodes.
We can obtain the group memberships as a
:class:`~graph_tool.PropertyMap` on the vertices via the
:meth:`~graph_tool.inference.blockmodel.BlockState.get_blocks` method:
.. testcode:: football
b = state.get_blocks()
r = b[10] # group membership of vertex 10
print(r)
which yields:
.. testoutput:: football
3
We may also access the matrix of edge counts between groups via
:meth:`~graph_tool.inference.blockmodel.BlockState.get_matrix`
.. testcode:: football
e = state.get_matrix()
matshow(e.todense())
savefig("football-edge-counts.svg")
.. figure:: football-edge-counts.*
:align: center
Matrix of edge counts between groups.
We may also obtain the matrix of edge counts in the form of a block
graph, which has internal edge and vertex property maps with the edge
and vertex counts, respectively:
.. testcode:: football
bg = state.get_bg()
ers = state.mrs # edge counts
nr = state.wr # node counts
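These are ordinary property maps, so we can, for example, list the edge
counts between pairs of groups directly:

.. code-block:: python

    for e in bg.edges():                 # bg is the graph of groups
        r, s = e.source(), e.target()
        print(r, s, ers[e])              # number of edges between r and s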
Hierarchical partitions
+++++++++++++++++++++++
The inference of the nested family of SBMs is done in a similar manner,
but we must use instead the
:func:`~graph_tool.inference.minimize.minimize_nested_blockmodel_dl` function. We
illustrate its use with the neural network of the `C. elegans
<https://en.wikipedia.org/wiki/Caenorhabditis_elegans>`_ worm:
.. testsetup:: celegans
gt.seed_rng(47)
.. testcode:: celegans
g = gt.collection.data["celegansneural"]
print(g)
which has 297 vertices and 2359 edges.
.. testoutput:: celegans
<Graph object, directed, with 297 vertices and 2359 edges at 0x...>
A hierarchical fit of the degree-corrected model is performed as follows.
.. testcode:: celegans
state = gt.minimize_nested_blockmodel_dl(g)
The object returned is an instance of a
:class:`~graph_tool.inference.nested_blockmodel.NestedBlockState` class, which
encapsulates the results. We can again draw the resulting hierarchical
clustering using the
:meth:`~graph_tool.inference.nested_blockmodel.NestedBlockState.draw` method:
.. testcode:: celegans
state.draw(output="celegans-hsbm-fit.svg")
.. figure:: celegans-hsbm-fit.*
:align: center
Most likely hierarchical partition of the neural network of
the *C. elegans* worm according to the nested degree-corrected SBM.
.. note::
If the ``output`` parameter to
:meth:`~graph_tool.inference.nested_blockmodel.NestedBlockState.draw` is omitted, an
interactive visualization is performed, where the user can re-order
the hierarchy nodes using the mouse and pressing the ``r`` key.
A summary of the inferred hierarchy can be obtained with the
:meth:`~graph_tool.inference.nested_blockmodel.NestedBlockState.print_summary` method,
which shows the number of nodes and groups in all levels:
.. testcode:: celegans
state.print_summary()
.. testoutput:: celegans
l: 0, N: 297, B: 17
l: 1, N: 17, B: 9
l: 2, N: 9, B: 3
l: 3, N: 3, B: 1
The hierarchical levels themselves are represented by individual
:class:`~graph_tool.inference.blockmodel.BlockState` instances obtained via the
:meth:`~graph_tool.inference.nested_blockmodel.NestedBlockState.get_levels` method:
.. testcode:: celegans
levels = state.get_levels()
for s in levels:
print(s)
.. testoutput:: celegans
<BlockState object with 17 blocks (17 nonempty), degree-corrected, for graph <Graph object, directed, with 297 vertices and 2359 edges at 0x...>, at 0x...>
<BlockState object with 9 blocks (9 nonempty), for graph <Graph object, directed, with 17 vertices and 156 edges at 0x...>, at 0x...>
<BlockState object with 3 blocks (3 nonempty), for graph <Graph object, directed, with 9 vertices and 57 edges at 0x...>, at 0x...>
<BlockState object with 1 blocks (1 nonempty), for graph <Graph object, directed, with 3 vertices and 9 edges at 0x...>, at 0x...>
This means that we can inspect the hierarchical partition just as before:
.. testcode:: celegans
r = levels[0].get_blocks()[46]    # group membership of node 46 in level 0
print(r)
r = levels[1].get_blocks()[r]     # group membership of node 46 in level 1
print(r)
r = levels[2].get_blocks()[r]     # group membership of node 46 in level 2
print(r)
.. testoutput:: celegans
7
0
0
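Alternatively, assuming the installed version provides the
:meth:`~graph_tool.inference.nested_blockmodel.NestedBlockState.project_level`
method, the partition at any level can be projected back onto the
original nodes:

.. code-block:: python

    # Partition at level 2, projected onto the nodes of the original network.
    b2 = state.project_level(2).get_blocks()
    print(b2[46])   # group membership of node 46 at level 2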
.. _sec_model_selection:

Model class selection
+++++++++++++++++++++
When averaging over partitions, we may be interested in evaluating which
**model class** provides a better fit of the data, considering all
possible parameter choices. This is done by evaluating the model
evidence summed over all possible partitions [peixoto-nonparametric-2017]_:
.. math::
P(\boldsymbol G) = \sum_{\boldsymbol\theta,\boldsymbol b}P(\boldsymbol G,\boldsymbol\theta, \boldsymbol b) = \sum_{\boldsymbol b}P(\boldsymbol G,\boldsymbol b).
This quantity is analogous to a `partition function
<https://en.wikipedia.org/wiki/Partition_function_(statistical_mechanics)>`_
in statistical physics, which we can write more conveniently as a
negative `free energy
<https://en.wikipedia.org/wiki/Thermodynamic_free_energy>`_ by taking
its logarithm
.. math::
:label: free-energy
\ln P(\boldsymbol G) = \underbrace{\sum_{\boldsymbol b}q(\boldsymbol b)\ln P(\boldsymbol G,\boldsymbol b)}_{-\left<\Sigma\right>}\;
\underbrace{- \sum_{\boldsymbol b}q(\boldsymbol b)\ln q(\boldsymbol b)}_{\mathcal{S}}
where
.. math::
q(\boldsymbol b) = \frac{P(\boldsymbol G,\boldsymbol b)}{\sum_{\boldsymbol b'}P(\boldsymbol G,\boldsymbol b')}
is the posterior probability of partition :math:`\boldsymbol b`. The
first term of Eq. :eq:`free-energy` (the "negative energy") is minus the
average description length :math:`\left<\Sigma\right>`, weighted
according to the posterior distribution. The second term
:math:`\mathcal{S}` is the `entropy
<https://en.wikipedia.org/wiki/Entropy_(information_theory)>`_ of the
posterior distribution, and measures, in a sense, the "quality of fit"
of the model: If the posterior is very "peaked", i.e. dominated by a
single partition with a very large probability, the entropy will tend to
zero. However, if there are many partitions with similar probabilities
--- meaning that there is no single partition that describes the network
uniquely well --- it will take a large value instead.
Since the MCMC algorithm samples partitions from the distribution
:math:`q(\boldsymbol b)`, it can be used to compute
:math:`\left<\Sigma\right>` easily, simply by averaging the description
length values encountered by sampling from the posterior distribution
many times.
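In practice, this amounts to a callback that records the
:meth:`~graph_tool.inference.blockmodel.BlockState.entropy` values during
sampling; a minimal sketch:

.. code-block:: python

    dls = []  # description length samples from the posterior

    gt.mcmc_equilibrate(state, force_niter=1000, mcmc_args=dict(niter=10),
                        callback=lambda s: dls.append(s.entropy()))

    print(sum(dls) / len(dls))  # estimate of <Σ>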
The computation of the posterior entropy :math:`\mathcal{S}`, however,
is significantly more difficult, since it involves measuring the precise
value of :math:`q(\boldsymbol b)`. A direct "brute force" computation of
:math:`\mathcal{S}` is implemented via
:meth:`~graph_tool.inference.blockmodel.BlockState.collect_partition_histogram` and
:func:`~graph_tool.inference.blockmodel.microstate_entropy`, however this is only
feasible for very small networks. For larger networks, we are forced to
perform approximations. The simplest is a "mean field" one, where we
assume the posterior factorizes as
.. math::
q(\boldsymbol b) \approx \prod_i{q_i(b_i)}
where
.. math::
q_i(r) = P(b_i = r | \boldsymbol G)
is the marginal group membership distribution of node :math:`i`. This
yields an entropy value given by
.. math::
S \approx -\sum_i\sum_rq_i(r)\ln q_i(r).
This approximation should be seen as an upper bound, since any existing
correlations between the nodes (which are ignored here) would yield
smaller entropy values.
A more accurate assumption is called the `Bethe approximation`
[mezard-information-2009]_, and takes into account the correlation
between adjacent nodes in the network,
.. math::
q(\boldsymbol b) \approx \prod_{i<j}q_{ij}(b_i,b_j)^{A_{ij}}\prod_iq_i(b_i)^{1-k_i}
where :math:`A_{ij}` is the `adjacency matrix
<https://en.wikipedia.org/wiki/Adjacency_matrix>`_, :math:`k_i` is the
degree of node :math:`i`, and
.. math::
q_{ij}(r, s) = P(b_i = r, b_j = s|\boldsymbol G)
is the joint group membership distribution of nodes :math:`i` and
:math:`j` (a.k.a. the `edge marginals`). This yields an entropy value
given by
.. math::
S \approx -\sum_{i<j}A_{ij}\sum_{rs}q_{ij}(r,s)\ln q_{ij}(r,s) - \sum_i(1-k_i)\sum_rq_i(r)\ln q_i(r).
Typically, this approximation yields smaller values than the mean field
one, and is generally considered to be superior. However, formally, it
depends on the graph being sufficiently locally "tree-like", and the
posterior being indeed strongly correlated with the adjacency matrix
itself --- two characteristics which do not hold in general. Although
the approximation often gives reasonable results even when these
conditions do not strictly hold, in some situations when they are
strongly violated this approach can yield meaningless values, such as a
negative entropy. Therefore, it is useful to compare both approaches
whenever possible.
With these approximations, it is possible to estimate the full model
evidence efficiently, as we show below, using
:meth:`~graph_tool.inference.blockmodel.BlockState.collect_vertex_marginals`,
:meth:`~graph_tool.inference.blockmodel.BlockState.collect_edge_marginals`,
:meth:`~graph_tool.inference.blockmodel.mf_entropy` and
:meth:`~graph_tool.inference.blockmodel.bethe_entropy`.
.. testcode:: model-evidence
g = gt.collection.data["lesmis"]
for deg_corr in [True, False]:
state = gt.minimize_blockmodel_dl(g, deg_corr=deg_corr) # Initialize the Markov
# chain from the "ground
# state"
state = state.copy(B=g.num_vertices())
dls = [] # description length history
vm = None # vertex marginals
em = None # edge marginals
def collect_marginals(s):
global vm, em
vm = s.collect_vertex_marginals(vm)
em = s.collect_edge_marginals(em)
dls.append(s.entropy())
# Now we collect the marginal distributions for exactly 200,000 sweeps
gt.mcmc_equilibrate(state, force_niter=20000, mcmc_args=dict(niter=10),
callback=collect_marginals)
S_mf = gt.mf_entropy(g, vm)
S_bethe = gt.bethe_entropy(g, em)[0]
L = -mean(dls)
print("Model evidence for deg_corr = %s:" % deg_corr,
L + S_mf, "(mean field),", L + S_bethe, "(Bethe)")
.. testoutput:: model-evidence
Model evidence for deg_corr = True: -569.590426... (mean field), -817.788531... (Bethe)
Model evidence for deg_corr = False: -587.028530... (mean field), -736.990655... (Bethe)
If we consider the more accurate approximation, the outcome shows a
preference for the non-degree-corrected model.
When using the nested model, the approach is entirely analogous. The
only difference now is that we have a hierarchical partition
:math:`\{\boldsymbol b_l\}` in the equations above, instead of simply
:math:`\boldsymbol b`. In order to make the approach tractable, we
assume the factorization
.. math::
q(\{\boldsymbol b_l\}) \approx \prod_lq_l(\boldsymbol b_l)
where :math:`q_l(\boldsymbol b_l)` is the marginal posterior for the
partition at level :math:`l`. For :math:`q_0(\boldsymbol b_0)` we may
use again either the mean-field or Bethe approximations, however for
:math:`l>0` only the mean-field approximation is applicable, since the
adjacency matrix of the higher levels is not constant. We show below the
approach for the same network, using the nested model.
.. testcode:: model-evidence
g = gt.collection.data["lesmis"]
nL = 10
for deg_corr in [True, False]:
state = gt.minimize_nested_blockmodel_dl(g, deg_corr=deg_corr) # Initialize the Markov
# chain from the "ground
# state"
bs = state.get_bs() # Get hierarchical partition.
bs += [np.zeros(1)] * (nL - len(bs)) # Augment it to L = 10 with
# single-group levels.
state = state.copy(bs=bs, sampling=True)
dls = [] # description length history
vm = [None] * len(state.get_levels()) # vertex marginals
em = None # edge marginals
def collect_marginals(s):
global vm, em
levels = s.get_levels()
vm = [sl.collect_vertex_marginals(vm[l]) for l, sl in enumerate(levels)]
em = levels[0].collect_edge_marginals(em)
dls.append(s.entropy())
# Now we collect the marginal distributions for exactly 200,000 sweeps
gt.mcmc_equilibrate(state, force_niter=20000, mcmc_args=dict(niter=10),
callback=collect_marginals)
S_mf = [gt.mf_entropy(sl.g, vm[l]) for l, sl in enumerate(state.get_levels())]
S_bethe = gt.bethe_entropy(g, em)[0]
L = -mean(dls)
print("Model evidence for deg_corr = %s:" % deg_corr,
L + sum(S_mf), "(mean field),", L + S_bethe + sum(S_mf[1:]), "(Bethe)")