.. _inference-howto:

Inferring network structure
===========================

``graph-tool`` includes algorithms to identify the large-scale structure
of networks in the :mod:`~graph_tool.inference` submodule. Here we
explain the basic functionality with self-contained examples.

Background: Nonparametric statistical inference
-----------------------------------------------

A common task when analyzing networks is to characterize their
structures in simple terms, often by dividing the nodes into modules or
"communities".

A principled approach to perform this task is to formulate `generative
models <https://en.wikipedia.org/wiki/Generative_model>`_ that include
the idea of "modules" in their descriptions, which then can be detected
by `inferring <https://en.wikipedia.org/wiki/Statistical_inference>`_
the model parameters from data. More precisely, given the partition
:math:`\boldsymbol b = \{b_i\}` of the network into :math:`B` groups,
where :math:`b_i\in[0,B-1]` is the group membership of node :math:`i`,
we define a model that generates a network :math:`\boldsymbol G` with a
probability

.. math::
   :label: model-likelihood

   P(\boldsymbol G|\boldsymbol\theta, \boldsymbol b)

where :math:`\boldsymbol\theta` are additional model parameters. Therefore, if we
observe a network :math:`\boldsymbol G`, the likelihood that it was generated by a
given partition :math:`\boldsymbol b` is obtained via the `Bayesian
<https://en.wikipedia.org/wiki/Bayesian_inference>`_ posterior

.. math::
   :label: model-posterior-sum

   P(\boldsymbol b | \boldsymbol G) = \frac{\sum_{\boldsymbol\theta}P(\boldsymbol G|\boldsymbol\theta, \boldsymbol b)P(\boldsymbol\theta, \boldsymbol b)}{P(\boldsymbol G)}

where :math:`P(\boldsymbol\theta, \boldsymbol b)` is the `prior probability` of the
model parameters, and

.. math::
   :label: model-evidence

   P(\boldsymbol G) = \sum_{\boldsymbol\theta,\boldsymbol b}P(\boldsymbol G|\boldsymbol\theta, \boldsymbol b)P(\boldsymbol\theta, \boldsymbol b)

is called the `model evidence`. The particular types of model that will
be considered here have "hard constraints", such that there is only one
choice for the remaining parameters :math:`\boldsymbol\theta` that is
compatible with the generated network, in which case
Eq. :eq:`model-posterior-sum` simplifies to

.. math::
   :label: model-posterior

   P(\boldsymbol b | \boldsymbol G) = \frac{P(\boldsymbol G|\boldsymbol\theta, \boldsymbol b)P(\boldsymbol\theta, \boldsymbol b)}{P(\boldsymbol G)}

with :math:`\boldsymbol\theta` above being the only choice compatible with
:math:`\boldsymbol G` and :math:`\boldsymbol b`. The inference procedures considered
here will consist in either finding a network partition that maximizes
Eq. :eq:`model-posterior`, or sampling different partitions according to
their posterior probability.

As we will show below, this approach will also enable the comparison of
`different` models according to statistical evidence (a.k.a. `model
selection`).

Minimum description length (MDL)
++++++++++++++++++++++++++++++++

We note that Eq. :eq:`model-posterior` can be written as

.. math::

   P(\boldsymbol b | \boldsymbol G) = \frac{\exp(-\Sigma)}{P(\boldsymbol G)}

where

.. math::
   :label: model-dl

   \Sigma = -\ln P(\boldsymbol G|\boldsymbol\theta, \boldsymbol b) - \ln P(\boldsymbol\theta, \boldsymbol b)

is called the **description length** of the network :math:`\boldsymbol G`. It
measures the amount of `information
<https://en.wikipedia.org/wiki/Information_theory>`_ required to
describe the data, if we `encode
<https://en.wikipedia.org/wiki/Entropy_encoding>`_ it using the
particular parametrization of the generative model given by
:math:`\boldsymbol\theta` and :math:`\boldsymbol b`, as well as the parameters
themselves. Therefore, maximizing the posterior likelihood of
Eq. :eq:`model-posterior` is fully equivalent to the so-called `minimum
description length
<https://en.wikipedia.org/wiki/Minimum_description_length>`_
method. This approach corresponds to an implementation of `Occam's razor
<https://en.wikipedia.org/wiki/Occam%27s_razor>`_, where the `simplest`
model is selected among all possibilities with the same explanatory
power. The selection is based on the statistical evidence available, and
therefore will not `overfit
<https://en.wikipedia.org/wiki/Overfitting>`_, i.e. mistake stochastic
fluctuations for actual structure.
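
In ``graph-tool``, the description length of Eq. :eq:`model-dl` is what
the :meth:`~graph_tool.inference.BlockState.entropy` method (used
extensively below) returns, in nats, since natural logarithms are used
throughout. As a minimal sketch of the equivalence, using functions that
are introduced in detail below:

.. code-block:: python

   g = gt.collection.data["football"]
   state = gt.minimize_blockmodel_dl(g)   # finds a partition minimizing Σ
   S = state.entropy()                    # description length Σ of the fit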

The stochastic block model (SBM)
--------------------------------

The `stochastic block model
<https://en.wikipedia.org/wiki/Stochastic_block_model>`_ is arguably
the simplest generative process based on the notion of groups of
nodes [holland-stochastic-1983]_. The `microcanonical
<https://en.wikipedia.org/wiki/Microcanonical_ensemble>`_ formulation
[peixoto-nonparametric-2016]_ of the basic or "traditional" version takes
as parameters the partition of the nodes into groups
:math:`\boldsymbol b` and a :math:`B\times B` matrix of edge counts
:math:`\boldsymbol e`, where :math:`e_{rs}` is the number of edges
between groups :math:`r` and :math:`s`. Given these constraints, the
edges are then placed randomly. Hence, nodes that belong to the same
group possess the same probability of being connected with other
nodes of the network.

An example of a possible parametrization is given in the following
figure.

.. testcode:: sbm-example
   :hide:

   import os
   try:
      os.chdir("demos/inference")
   except FileNotFoundError:
       pass

   g = gt.load_graph("blockmodel-example.gt.gz")
   gt.graph_draw(g, pos=g.vp.pos, vertex_size=10, vertex_fill_color=g.vp.bo,
                 vertex_color="#333333",
                 edge_gradient=g.new_ep("vector<double>", val=[0]),
                 output="sbm-example.svg")

   ers = g.gp.w

   from pylab import *
   figure()
   matshow(log(ers))
   xlabel("Group $r$")
   ylabel("Group $s$")
   gca().xaxis.set_label_position("top") 
   savefig("sbm-example-ers.svg")

.. table::
    :class: figure

    +----------------------------------+------------------------------+
    |.. figure:: sbm-example-ers.svg   |.. figure:: sbm-example.svg   |
    |   :width: 300px                  |   :width: 300px              |
    |   :align: center                 |   :align: center             |
    |                                  |                              |
    |   Matrix of edge counts          |   Generated network.         |
    |   :math:`\boldsymbol e` between  |                              |
    |   groups.                        |                              |
    +----------------------------------+------------------------------+

.. note::

   We emphasize that no constraints are imposed on what `kind` of
   modular structure is allowed. Hence, we can detect the putatively
   typical pattern of `"community structure"
   <https://en.wikipedia.org/wiki/Community_structure>`_, i.e. when
   nodes are connected mostly to other nodes of the same group, if it
   happens to be the most likely network description, but we can also
   detect a large multiplicity of other patterns, such as `bipartiteness
   <https://en.wikipedia.org/wiki/Bipartite_graph>`_, core-periphery,
   and many others, all under the same inference framework.
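
To make the generative direction concrete, one can also `sample` a
network from a given SBM parametrization. A minimal sketch, assuming a
graph-tool version recent enough to provide the
:func:`~graph_tool.generation.generate_sbm` function (exposed as
``gt.generate_sbm``), with an arbitrary illustrative partition ``b`` and
matrix of edge counts ``probs``:

.. code-block:: python

   import numpy as np

   N = 300
   b = np.random.randint(0, 3, N)      # random partition into B=3 groups
   probs = np.array([[60,  2,  2],
                     [ 2, 60,  2],
                     [ 2,  2, 60]])    # assortative matrix of edge counts
   u = gt.generate_sbm(b, probs)       # network sampled with this structure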


Although quite general, the traditional model assumes that the edges are
placed randomly inside each group, and as such the nodes that belong to
the same group have very similar degrees. As it turns out, this is often
a poor model for many networks, which possess highly heterogeneous
degree distributions. A better model for such networks is called the
`degree-corrected` stochastic block model [karrer-stochastic-2011]_, and
it is defined just like the traditional model, with the addition of the
degree sequence :math:`\boldsymbol k = \{k_i\}` of the graph as an
additional set of parameters (assuming again a microcanonical
formulation [peixoto-nonparametric-2016]_).


The nested stochastic block model
+++++++++++++++++++++++++++++++++

The regular SBM has a drawback when applied to very large
networks: it cannot be used to find relatively small groups, as the
maximum number of groups that can be found scales as
:math:`B_{\text{max}}\sim\sqrt{N}`, where :math:`N` is the number of
nodes in the network, if Bayesian inference is performed
[peixoto-parsimonious-2013]_. In order to circumvent this, we need to
replace the noninformative priors with a hierarchy of priors and
hyperpriors, which amounts to a `nested SBM`, where the groups
themselves are clustered into groups, and the matrix :math:`e` of edge
counts is generated from another SBM, and so on recursively
[peixoto-hierarchical-2014]_.

.. figure:: nested-diagram.*
   :width: 400px
   :align: center

   Example of a nested SBM with three levels.

In addition to being able to find small groups in large networks, this
model also provides a multilevel hierarchical description of the
network, that describes its structure at multiple scales.


Inferring the best partition
----------------------------

The simplest and most efficient approach is to find the best
partition of the network by maximizing Eq. :eq:`model-posterior`
according to some version of the model. This is obtained via the
functions :func:`~graph_tool.inference.minimize_blockmodel_dl` or
:func:`~graph_tool.inference.minimize_nested_blockmodel_dl`, which
employs an agglomerative multilevel `Markov chain Monte Carlo (MCMC)
<https://en.wikipedia.org/wiki/Markov_chain_Monte_Carlo>`_ algorithm
[peixoto-efficient-2014]_.

We focus first on the non-nested model, and we illustrate its use with a
network of American football teams, which we load from the
:mod:`~graph_tool.collection` module:

.. testsetup:: football

   import os
   try:
      os.chdir("demos/inference")
   except FileNotFoundError:
       pass
   gt.seed_rng(3)

.. testcode:: football

   g = gt.collection.data["football"]
   print(g)

which yields

.. testoutput:: football

   <Graph object, undirected, with 115 vertices and 613 edges at 0x...>

we then fit the `traditional` model by calling

.. testcode:: football

   state = gt.minimize_blockmodel_dl(g, deg_corr=False)

This returns a :class:`~graph_tool.inference.BlockState` object that
includes the inference results.

.. note::

   The inference algorithm used is stochastic by nature, and may return
   a slightly different answer each time it is run. This may be due to
   the fact that there are alternative partitions with similar
   likelihoods, or that the optimum is difficult to find. Note that the
   inference problem here is, in general, `NP-Hard
   <https://en.wikipedia.org/wiki/NP-hardness>`_, hence there is no
   efficient algorithm that is guaranteed to always find the best
   answer.

   Because of this, typically one would call the algorithm many times,
   and select the partition with the largest posterior likelihood of
   Eq. :eq:`model-posterior`, or equivalently, the minimum description
   length of Eq. :eq:`model-dl`. The description length of a fit can be
   obtained with the :meth:`~graph_tool.inference.BlockState.entropy`
   method. See also :ref:`sec_model_selection` below.
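
   For example, a minimal sketch of this strategy, keeping the fit with
   the smallest description length among several independent runs:

   .. code-block:: python

      states = [gt.minimize_blockmodel_dl(g, deg_corr=False) for i in range(10)]
      state = min(states, key=lambda s: s.entropy())  # fit with smallest Σ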


We can draw the partition obtained via the
:meth:`~graph_tool.inference.BlockState.draw` method, which functions as a
convenience wrapper to the :func:`~graph_tool.draw.graph_draw` function

.. testcode:: football

   state.draw(pos=g.vp.pos, output="football-sbm-fit.svg")

which yields the following image.

.. figure:: football-sbm-fit.*
   :align: center
   :width: 400px

   Stochastic block model inference of a network of American college
   football teams. The colors correspond to inferred group membership of
   the nodes.

We can obtain the group memberships as a
:class:`~graph_tool.PropertyMap` on the vertices via the
:meth:`~graph_tool.inference.BlockState.get_blocks` method:

.. testcode:: football

   b = state.get_blocks()
   r = b[10]   # group membership of vertex 10
   print(r)

which yields:

.. testoutput:: football

   3

We may also access the matrix of edge counts between groups via
:meth:`~graph_tool.inference.BlockState.get_matrix`

.. testcode:: football

   e = state.get_matrix()

   matshow(e.todense())
   savefig("football-edge-counts.svg")

.. figure:: football-edge-counts.*
   :align: center

   Matrix of edge counts between groups.

We may obtain the same matrix of edge counts as a graph, which has
internal edge and vertex property maps with the edge and vertex counts,
respectively:

.. testcode:: football

   bg = state.get_bg()
   ers = bg.ep.count    # edge counts
   nr = bg.vp.count     # node counts
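
Since the group membership is a regular :class:`~graph_tool.PropertyMap`,
standard array operations apply to it as well; e.g. a quick sketch to
tabulate the group sizes directly (the same information as ``nr`` above):

.. code-block:: python

   import numpy as np

   b = state.get_blocks()
   sizes = np.bincount(b.a)   # b.a exposes the values as a numpy array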


Hierarchical partitions
+++++++++++++++++++++++

The inference of the nested family of SBMs is done in a similar manner,
but we must use instead the
:func:`~graph_tool.inference.minimize_nested_blockmodel_dl` function. We
illustrate its use with the neural network of the `C. elegans
<https://en.wikipedia.org/wiki/Caenorhabditis_elegans>`_ worm:

.. testcode:: celegans

   g = gt.collection.data["celegansneural"]
   print(g)

which has 297 vertices and 2359 edges.

.. testoutput:: celegans

   <Graph object, directed, with 297 vertices and 2359 edges at 0x...>

A hierarchical fit of the degree-corrected model is performed as follows.

.. testcode:: celegans

   state = gt.minimize_nested_blockmodel_dl(g, deg_corr=True)

The object returned is an instance of a
:class:`~graph_tool.inference.NestedBlockState` class, which
encapsulates the results. We can again draw the resulting hierarchical
clustering using the
:meth:`~graph_tool.inference.NestedBlockState.draw` method:

.. testcode:: celegans

   state.draw(output="celegans-hsbm-fit.svg")

.. figure:: celegans-hsbm-fit.*
   :align: center

   Most likely hierarchical partition of the neural network of
   the C. elegans worm according to the nested degree-corrected SBM.

.. note::

   If the ``output`` parameter to
   :meth:`~graph_tool.inference.NestedBlockState.draw` is omitted, an
   interactive visualization is performed, where the user can re-order
   the hierarchy nodes using the mouse and pressing the ``r`` key.

A summary of the inferred hierarchy can be obtained with the
:meth:`~graph_tool.inference.NestedBlockState.print_summary` method,
which shows the number of nodes and groups in all levels:

.. testcode:: celegans

   state.print_summary()

.. testoutput:: celegans

   l: 0, N: 297, B: 13
   l: 1, N: 13, B: 5
   l: 2, N: 5, B: 2
   l: 3, N: 2, B: 1

The hierarchical levels themselves are represented by individual
:class:`~graph_tool.inference.BlockState` instances obtained via the
:meth:`~graph_tool.inference.NestedBlockState.get_levels()` method:

.. testcode:: celegans

   levels = state.get_levels()
   for s in levels:
       print(s)

.. testoutput:: celegans

   <BlockState object with 13 blocks (13 nonempty), degree-corrected, for graph <Graph object, directed, with 297 vertices and 2359 edges at 0x...>, at 0x...>
   <BlockState object with 5 blocks (5 nonempty), for graph <Graph object, directed, with 13 vertices and 105 edges at 0x...>, at 0x...>
   <BlockState object with 2 blocks (2 nonempty), for graph <Graph object, directed, with 5 vertices and 21 edges at 0x...>, at 0x...>
   <BlockState object with 1 blocks (1 nonempty), for graph <Graph object, directed, with 2 vertices and 4 edges at 0x...>, at 0x...>

This means that we can inspect the hierarchical partition just as before:

.. testcode:: celegans

   r = levels[0].get_blocks()[46]    # group membership of node 46 in level 0
   print(r)
   r = levels[1].get_blocks()[r]     # group membership of node 46 in level 1
   print(r)
   r = levels[2].get_blocks()[r]     # group membership of node 46 in level 2
   print(r)

.. testoutput:: celegans

   2
   1
   0
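
The chain of lookups above generalizes to any level, and can be wrapped
in a small helper; ``get_membership`` below is only an illustrative
sketch, not part of the library:

.. code-block:: python

   def get_membership(state, v, l):
       """Group membership of vertex v at hierarchy level l, obtained by
       propagating the label through levels 0 to l."""
       levels = state.get_levels()
       r = levels[0].get_blocks()[v]
       for i in range(1, l + 1):
           r = levels[i].get_blocks()[r]
       return r

   r = get_membership(state, 46, 2)   # same result as the last lookup above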


.. _sec_model_selection:

Model selection
+++++++++++++++

As mentioned above, one can select the best model according to the
choice that yields the smallest description length. For instance, in
case of the `C. elegans` network we have

.. testcode:: model-selection

   g = gt.collection.data["celegansneural"]

   state_ndc = gt.minimize_nested_blockmodel_dl(g, deg_corr=False)
   state_dc  = gt.minimize_nested_blockmodel_dl(g, deg_corr=True)

   print("Non-degree-corrected DL:\t", state_ndc.entropy())
   print("Degree-corrected DL:\t", state_dc.entropy())

.. testoutput:: model-selection
   :options: +NORMALIZE_WHITESPACE

   Non-degree-corrected DL:	 8507.97432099
   Degree-corrected DL:	 8228.11609772

Since it yields the smallest description length, the degree-corrected
fit should be preferred. The statistical significance of the choice can
be assessed by inspecting the posterior odds ratio
[peixoto-nonparametric-2016]_

.. math::

   \Lambda &= \frac{P(\boldsymbol b, \mathcal{H}_\text{NDC} | \boldsymbol G)}{P(\boldsymbol b, \mathcal{H}_\text{DC} | \boldsymbol G)} \\
           &= \frac{P(\boldsymbol G, \boldsymbol b | \mathcal{H}_\text{NDC})}{P(\boldsymbol G, \boldsymbol b | \mathcal{H}_\text{DC})}\times\frac{P(\mathcal{H}_\text{NDC})}{P(\mathcal{H}_\text{DC})} \\
           &= \exp(-\Delta\Sigma)

where :math:`\mathcal{H}_\text{NDC}` and :math:`\mathcal{H}_\text{DC}`
correspond to the non-degree-corrected and degree-corrected model
hypotheses (assumed to be equally likely `a priori`), respectively, and
:math:`\Delta\Sigma` is the difference of the description length of both
fits. In our particular case, we have

.. testcode:: model-selection

   print("ln Λ: ", state_dc.entropy() - state_ndc.entropy())

.. testoutput:: model-selection
   :options: +NORMALIZE_WHITESPACE

   ln Λ:  -279.858223272

The precise threshold that should be used to decide when to `reject a
hypothesis <https://en.wikipedia.org/wiki/Hypothesis_testing>`_ is
subjective and context-dependent, but the value above implies that the
particular degree-corrected fit is around :math:`e^{280} \sim 10^{121}`
times more likely than the non-degree-corrected one, and hence it can be
safely concluded that it provides a substantially better fit.

Although it is often true that the degree-corrected model provides a
better fit for many empirical networks, there are also exceptions. For
example, for the American football network above, we have:

.. testcode:: model-selection

   g = gt.collection.data["football"]

   state_ndc = gt.minimize_nested_blockmodel_dl(g, deg_corr=False)
   state_dc  = gt.minimize_nested_blockmodel_dl(g, deg_corr=True)

   print("Non-degree-corrected DL:\t", state_ndc.entropy())
   print("Degree-corrected DL:\t", state_dc.entropy())
   print("ln Λ:\t\t\t", state_ndc.entropy() - state_dc.entropy())

.. testoutput:: model-selection
   :options: +NORMALIZE_WHITESPACE

   Non-degree-corrected DL:	 1751.86962605
   Degree-corrected DL:	 1787.64676873
   ln Λ:			 -35.7771426724

Hence, with a posterior odds ratio of :math:`\Lambda \sim e^{-36} \sim
10^{-16}` in favor of the non-degree-corrected model, it seems like the
degree-corrected variant is an unnecessarily complex description for
this network.

Averaging over models
---------------------

When analyzing empirical networks, one should be open to the possibility
that there will be more than one fit of the SBM with similar posterior
likelihoods. In such situations, one should `sample` partitions from the
posterior likelihood, instead of simply finding its maximum. One can
then compute quantities that are averaged over the different model fits,
weighted according to their posterior likelihoods.

Full support for model averaging is implemented in ``graph-tool`` via an
efficient `Markov chain Monte Carlo (MCMC)
<https://en.wikipedia.org/wiki/Markov_chain_Monte_Carlo>`_ algorithm
[peixoto-efficient-2014]_. It works by attempting to move nodes into
different groups with specific probabilities, and `accepting or
rejecting
<https://en.wikipedia.org/wiki/Metropolis%E2%80%93Hastings_algorithm>`_
such moves so that, after a sufficiently long time, the partitions will
be observed with the desired posterior probability. The algorithm is
designed so that its run-time is independent of the number of groups
being used in the model, and hence is suitable for use on very large
networks.
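
The acceptance step follows the standard `Metropolis-Hastings
<https://en.wikipedia.org/wiki/Metropolis%E2%80%93Hastings_algorithm>`_
criterion. The following is only a conceptual sketch of this step, valid
for symmetric move proposals; it is not graph-tool's actual
implementation, which uses more sophisticated proposals:

.. code-block:: python

   import numpy as np

   def accept_move(dS, beta=1, rng=np.random.default_rng()):
       """Accept a move that changes the description length by dS:
       improvements are always accepted, and otherwise the move is
       accepted with probability exp(-beta * dS)."""
       return dS < 0 or rng.random() < np.exp(-beta * dS)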

In order to perform such moves, one needs again to operate with
:class:`~graph_tool.inference.BlockState` or
:class:`~graph_tool.inference.NestedBlockState` instances, and call
their :meth:`~graph_tool.inference.BlockState.mcmc_sweep` methods. For
example, the following will perform 1000 sweeps of the algorithm with
the network of characters in the novel Les Misérables, starting from a
random partition into 20 groups

.. testcode:: model-averaging

   g = gt.collection.data["lesmis"]

   state = gt.BlockState(g, B=20)   # This automatically initializes the state
                                    # with a random partition into B=20
                                    # nonempty groups; The user could
                                    # also pass an arbitrary initial
                                    # partition using the 'b' parameter.

   # If we work with the above state object, we will be restricted to
   # partitions into at most B=20 groups. But since we want to consider
   # an arbitrary number of groups in the range [1, N], we transform it
   # into a state with B=N groups (where N-20 will be empty).

   state = state.copy(B=g.num_vertices())

   # Now we run 1,000 sweeps of the MCMC

   dS, nmoves = state.mcmc_sweep(niter=1000)

   print("Change in description length:", dS)
   print("Number of accepted vertex moves:", nmoves)

.. testoutput:: model-averaging

   Change in description length: -355.3963421220926
   Number of accepted vertex moves: 4561

.. note::

   Starting from a random partition is rarely the best option, since it
   may take a long time for it to equilibrate; it was done above simply
   as an illustration of how to initialize
   :class:`~graph_tool.inference.BlockState` by hand. Instead, a much
   better option in practice is to start from the "ground state"
   obtained with :func:`~graph_tool.inference.minimize_blockmodel_dl`,
   e.g.

    .. testcode:: model-averaging

       state = gt.minimize_blockmodel_dl(g)
       state = state.copy(B=g.num_vertices())
       dS, nmoves = state.mcmc_sweep(niter=1000)

       print("Change in description length:", dS)
       print("Number of accepted vertex moves:", nmoves)

    .. testoutput:: model-averaging

       Change in description length: 7.3423409719804855
       Number of accepted vertex moves: 3939

Although the above is sufficient to implement model averaging, there is
a convenience function called
:func:`~graph_tool.inference.mcmc_equilibrate` that is intended to
simplify the detection of equilibration, by keeping track of the maximum
and minimum values of the description length encountered and how many
sweeps have been made without a "record breaking" event. For example,

.. testcode:: model-averaging

   # We will accept equilibration if 10 sweeps are completed without a
   # record breaking event, 2 consecutive times.

   gt.mcmc_equilibrate(state, wait=10, nbreaks=2, mcmc_args=dict(niter=10), verbose=True)

will output:

.. testoutput:: model-averaging
    :options: +NORMALIZE_WHITESPACE

    niter:     1  count:    0  breaks:  0  min_S: 709.95524  max_S: 726.36140  S: 726.36140  ΔS:      16.4062  moves:    57 
    niter:     2  count:    1  breaks:  0  min_S: 709.95524  max_S: 726.36140  S: 721.68682  ΔS:     -4.67459  moves:    67 
    niter:     3  count:    0  breaks:  0  min_S: 709.37313  max_S: 726.36140  S: 709.37313  ΔS:     -12.3137  moves:    47 
    niter:     4  count:    1  breaks:  0  min_S: 709.37313  max_S: 726.36140  S: 711.61100  ΔS:      2.23787  moves:    57 
    niter:     5  count:    2  breaks:  0  min_S: 709.37313  max_S: 726.36140  S: 716.08147  ΔS:      4.47047  moves:    28 
    niter:     6  count:    3  breaks:  0  min_S: 709.37313  max_S: 726.36140  S: 712.93940  ΔS:     -3.14207  moves:    47 
    niter:     7  count:    4  breaks:  0  min_S: 709.37313  max_S: 726.36140  S: 712.38780  ΔS:    -0.551596  moves:    46 
    niter:     8  count:    5  breaks:  0  min_S: 709.37313  max_S: 726.36140  S: 718.00449  ΔS:      5.61668  moves:    40 
    niter:     9  count:    0  breaks:  0  min_S: 709.37313  max_S: 731.89940  S: 731.89940  ΔS:      13.8949  moves:    50 
    niter:    10  count:    0  breaks:  0  min_S: 707.07048  max_S: 731.89940  S: 707.07048  ΔS:     -24.8289  moves:    45 
    niter:    11  count:    1  breaks:  0  min_S: 707.07048  max_S: 731.89940  S: 711.91030  ΔS:      4.83982  moves:    31 
    niter:    12  count:    2  breaks:  0  min_S: 707.07048  max_S: 731.89940  S: 726.56358  ΔS:      14.6533  moves:    56 
    niter:    13  count:    3  breaks:  0  min_S: 707.07048  max_S: 731.89940  S: 731.77165  ΔS:      5.20807  moves:    72 
    niter:    14  count:    4  breaks:  0  min_S: 707.07048  max_S: 731.89940  S: 707.08606  ΔS:     -24.6856  moves:    57 
    niter:    15  count:    0  breaks:  0  min_S: 707.07048  max_S: 735.85102  S: 735.85102  ΔS:      28.7650  moves:    65 
    niter:    16  count:    1  breaks:  0  min_S: 707.07048  max_S: 735.85102  S: 707.29116  ΔS:     -28.5599  moves:    43 
    niter:    17  count:    0  breaks:  0  min_S: 702.18860  max_S: 735.85102  S: 702.18860  ΔS:     -5.10256  moves:    39 
    niter:    18  count:    1  breaks:  0  min_S: 702.18860  max_S: 735.85102  S: 716.40444  ΔS:      14.2158  moves:    55 
    niter:    19  count:    2  breaks:  0  min_S: 702.18860  max_S: 735.85102  S: 703.51896  ΔS:     -12.8855  moves:    32 
    niter:    20  count:    3  breaks:  0  min_S: 702.18860  max_S: 735.85102  S: 714.30455  ΔS:      10.7856  moves:    34 
    niter:    21  count:    4  breaks:  0  min_S: 702.18860  max_S: 735.85102  S: 707.26722  ΔS:     -7.03733  moves:    25 
    niter:    22  count:    5  breaks:  0  min_S: 702.18860  max_S: 735.85102  S: 730.23976  ΔS:      22.9725  moves:    21 
    niter:    23  count:    6  breaks:  0  min_S: 702.18860  max_S: 735.85102  S: 730.56562  ΔS:     0.325858  moves:    59 
    niter:    24  count:    0  breaks:  0  min_S: 702.18860  max_S: 738.45136  S: 738.45136  ΔS:      7.88574  moves:    60 
    niter:    25  count:    0  breaks:  0  min_S: 702.18860  max_S: 740.29015  S: 740.29015  ΔS:      1.83879  moves:    88 
    niter:    26  count:    1  breaks:  0  min_S: 702.18860  max_S: 740.29015  S: 720.86367  ΔS:     -19.4265  moves:    68 
    niter:    27  count:    2  breaks:  0  min_S: 702.18860  max_S: 740.29015  S: 723.60308  ΔS:      2.73941  moves:    48 
    niter:    28  count:    3  breaks:  0  min_S: 702.18860  max_S: 740.29015  S: 732.81310  ΔS:      9.21002  moves:    44 
    niter:    29  count:    4  breaks:  0  min_S: 702.18860  max_S: 740.29015  S: 729.62283  ΔS:     -3.19028  moves:    62 
    niter:    30  count:    5  breaks:  0  min_S: 702.18860  max_S: 740.29015  S: 730.15676  ΔS:     0.533935  moves:    59 
    niter:    31  count:    6  breaks:  0  min_S: 702.18860  max_S: 740.29015  S: 728.27350  ΔS:     -1.88326  moves:    65 
    niter:    32  count:    7  breaks:  0  min_S: 702.18860  max_S: 740.29015  S: 732.19406  ΔS:      3.92056  moves:    57 
    niter:    33  count:    8  breaks:  0  min_S: 702.18860  max_S: 740.29015  S: 730.53906  ΔS:     -1.65500  moves:    72 
    niter:    34  count:    9  breaks:  0  min_S: 702.18860  max_S: 740.29015  S: 725.59638  ΔS:     -4.94268  moves:    72 
    niter:    35  count:    0  breaks:  1  min_S: 733.07687  max_S: 733.07687  S: 733.07687  ΔS:      7.48049  moves:    54 
    niter:    36  count:    0  breaks:  1  min_S: 728.56326  max_S: 733.07687  S: 728.56326  ΔS:     -4.51361  moves:    57 
    niter:    37  count:    0  breaks:  1  min_S: 728.56326  max_S: 755.55140  S: 755.55140  ΔS:      26.9881  moves:    83 
    niter:    38  count:    0  breaks:  1  min_S: 728.56326  max_S: 761.09434  S: 761.09434  ΔS:      5.54294  moves:    96 
    niter:    39  count:    0  breaks:  1  min_S: 713.60740  max_S: 761.09434  S: 713.60740  ΔS:     -47.4869  moves:    71 
    niter:    40  count:    1  breaks:  1  min_S: 713.60740  max_S: 761.09434  S: 713.98904  ΔS:     0.381637  moves:    67 
    niter:    41  count:    2  breaks:  1  min_S: 713.60740  max_S: 761.09434  S: 729.22460  ΔS:      15.2356  moves:    68 
    niter:    42  count:    3  breaks:  1  min_S: 713.60740  max_S: 761.09434  S: 724.70143  ΔS:     -4.52317  moves:    69 
    niter:    43  count:    0  breaks:  1  min_S: 703.51896  max_S: 761.09434  S: 703.51896  ΔS:     -21.1825  moves:    40 
    niter:    44  count:    0  breaks:  1  min_S: 702.85027  max_S: 761.09434  S: 702.85027  ΔS:    -0.668696  moves:    33 
    niter:    45  count:    1  breaks:  1  min_S: 702.85027  max_S: 761.09434  S: 722.46508  ΔS:      19.6148  moves:    49 
    niter:    46  count:    2  breaks:  1  min_S: 702.85027  max_S: 761.09434  S: 714.77930  ΔS:     -7.68578  moves:    62 
    niter:    47  count:    3  breaks:  1  min_S: 702.85027  max_S: 761.09434  S: 722.04551  ΔS:      7.26621  moves:    55 
    niter:    48  count:    4  breaks:  1  min_S: 702.85027  max_S: 761.09434  S: 708.96879  ΔS:     -13.0767  moves:    37 
    niter:    49  count:    5  breaks:  1  min_S: 702.85027  max_S: 761.09434  S: 714.84009  ΔS:      5.87130  moves:    37 
    niter:    50  count:    6  breaks:  1  min_S: 702.85027  max_S: 761.09434  S: 718.28558  ΔS:      3.44549  moves:    55 
    niter:    51  count:    7  breaks:  1  min_S: 702.85027  max_S: 761.09434  S: 720.86398  ΔS:      2.57840  moves:    44 
    niter:    52  count:    8  breaks:  1  min_S: 702.85027  max_S: 761.09434  S: 710.93672  ΔS:     -9.92726  moves:    45 
    niter:    53  count:    9  breaks:  1  min_S: 702.85027  max_S: 761.09434  S: 735.06773  ΔS:      24.1310  moves:    28 
    niter:    54  count:   10  breaks:  2  min_S: 702.85027  max_S: 761.09434  S: 738.16756  ΔS:      3.09983  moves:   115

Note that the value of ``wait`` above was made purposefully low so that
the output would not be overly long. The most appropriate value requires
experimentation, but a typically good value is ``wait=1000``.

The function :func:`~graph_tool.inference.mcmc_equilibrate` accepts a
``callback`` argument that takes an optional function to be invoked
after each call to
:meth:`~graph_tool.inference.BlockState.mcmc_sweep`. This function
should accept a single parameter which will contain the actual
:class:`~graph_tool.inference.BlockState` instance. We will use this in
the example below to collect the posterior vertex marginals, i.e. the
posterior probability that a node belongs to a given group:

.. testcode:: model-averaging

   # We will first equilibrate the Markov chain
   gt.mcmc_equilibrate(state, wait=1000, mcmc_args=dict(niter=10))

   pv = None 

   def collect_marginals(s):
      global pv
      pv = s.collect_vertex_marginals(pv)

   # Now we collect the marginals for exactly 100,000 sweeps
   gt.mcmc_equilibrate(state, force_niter=10000, mcmc_args=dict(niter=10),
                       callback=collect_marginals)

   # Now the node marginals are stored in property map pv. We can
   # visualize them as pie charts on the nodes:
   state.draw(pos=g.vp.pos, vertex_shape="pie", vertex_pie_fractions=pv,
              edge_gradient=None, output="lesmis-sbm-marginals.svg")

.. figure:: lesmis-sbm-marginals.*
   :align: center
   :width: 450px

   Marginal probabilities of group memberships of the network of
   characters in the novel Les Misérables, according to the
   degree-corrected SBM. The `pie fractions
   <https://en.wikipedia.org/wiki/Pie_chart>`_ on the nodes correspond
   to the probability of being in the group associated with the respective
   color.

We can also obtain a marginal probability on the number of groups
itself, as follows.

.. testcode:: model-averaging

   h = np.zeros(g.num_vertices() + 1)

   def collect_num_groups(s):
       B = s.get_nonempty_B()
       h[B] += 1

   # Now we collect the marginal distribution for exactly 100,000 sweeps
   gt.mcmc_equilibrate(state, force_niter=10000, mcmc_args=dict(niter=10),
                       callback=collect_num_groups)

.. testcode:: model-averaging
   :hide:

   figure()
   Bs = np.arange(len(h))
   idx = h > 0
   bar(Bs[idx] - .5, h[idx] / h.sum(), width=1, color="#ccb974")
   gca().set_xticks([6,7,8,9])
   xlabel("$B$")
   ylabel(r"$P(B|\boldsymbol G)$")
   savefig("lesmis-B-posterior.svg")

.. figure:: lesmis-B-posterior.*
   :align: center

   Marginal posterior likelihood of the number of nonempty groups for the
   network of characters in the novel Les Misérables, according to the
   degree-corrected SBM.


Hierarchical partitions
+++++++++++++++++++++++

We can also perform model averaging using the nested SBM, which will
give us a distribution over hierarchies. The whole procedure is fairly
analogous, but now we make use of
:class:`~graph_tool.inference.NestedBlockState` instances.

.. note::

    When using :class:`~graph_tool.inference.NestedBlockState` instances
    to perform model averaging, they need to be constructed with the
    option ``sampling=True``.

Here we perform the sampling of hierarchical partitions using the same
network as above.

.. testcode:: nested-model-averaging

   g = gt.collection.data["lesmis"]

   state = gt.minimize_nested_blockmodel_dl(g) # Initialize the Markov
                                                # chain from the "ground
                                                # state"

   # Before doing model averaging, we need to create a NestedBlockState
   # by passing sampling = True.

   # We also want to increase the maximum hierarchy depth to L = 10

   # We can do both of the above by copying.

   bs = state.get_bs()                     # Get hierarchical partition.
   bs += [np.zeros(1)] * (10 - len(bs))    # Augment it to L = 10 with
                                           # single-group levels.

   state = state.copy(bs=bs, sampling=True)

   # Now we run 1000 sweeps of the MCMC

   dS, nmoves = state.mcmc_sweep(niter=1000)

   print("Change in description length:", dS)
   print("Number of accepted vertex moves:", nmoves)

.. testoutput:: nested-model-averaging

   Change in description length: 6.222068...
   Number of accepted vertex moves: 7615

Similarly to the non-nested case, we can use
:func:`~graph_tool.inference.mcmc_equilibrate` to do most of the boring
work, and we can now obtain vertex marginals on all hierarchical levels:


.. testcode:: nested-model-averaging

   # We will first equilibrate the Markov chain
   gt.mcmc_equilibrate(state, wait=1000, mcmc_args=dict(niter=10))

   pv = [None] * len(state.get_levels())

   def collect_marginals(s):
      global pv
      pv = [sl.collect_vertex_marginals(pv[l]) for l, sl in enumerate(s.get_levels())]

   # Now we collect the marginals for exactly 100,000 sweeps
   gt.mcmc_equilibrate(state, force_niter=10000, mcmc_args=dict(niter=10),
                       callback=collect_marginals)

   # Now the node marginals for all levels are stored in property map
   # list pv. We can visualize the first level as pie charts on the nodes:
   state_0 = state.get_levels()[0]
   state_0.draw(pos=g.vp.pos, vertex_shape="pie", vertex_pie_fractions=pv[0],
                edge_gradient=None, output="lesmis-nested-sbm-marginals.svg")

.. figure:: lesmis-nested-sbm-marginals.*
   :align: center
   :width: 450px

   Marginal probabilities of group memberships of the network of
   characters in the novel Les Misérables, according to the nested
   degree-corrected SBM. The pie fractions on the nodes correspond to
   the probability of being in the group associated with the respective
   color.

We can also obtain a marginal probability of the number of groups
itself, as follows.

.. testcode:: nested-model-averaging

   h = [np.zeros(g.num_vertices() + 1) for s in state.get_levels()]

   def collect_num_groups(s):
       for l, sl in enumerate(s.get_levels()):
          B = sl.get_nonempty_B()
          h[l][B] += 1

   # Now we collect the marginal distribution for exactly 100,000 sweeps
   gt.mcmc_equilibrate(state, force_niter=10000, mcmc_args=dict(niter=10),
                       callback=collect_num_groups)

.. testcode:: nested-model-averaging
   :hide:

   figure()
   f, ax = plt.subplots(1, 5, figsize=(10, 3))
   for i, h_ in enumerate(h[:5]):
       Bs = np.arange(len(h_))
       idx = h_ > 0
       ax[i].bar(Bs[idx] - .5, h_[idx] / h_.sum(), width=1, color="#ccb974")
       ax[i].set_xticks(Bs[idx])
       ax[i].set_xlabel("$B_{%d}$" % i)
       ax[i].set_ylabel(r"$P(B_{%d}|\boldsymbol G)$" % i)
       locator = MaxNLocator(prune='both', nbins=5)
       ax[i].yaxis.set_major_locator(locator)
   tight_layout()
   savefig("lesmis-nested-B-posterior.svg")

.. figure:: lesmis-nested-B-posterior.*
   :align: center

   Marginal posterior likelihood of the number of nonempty groups
   :math:`B_l` at each hierarchy level :math:`l` for the network of
   characters in the novel Les Misérables, according to the nested
   degree-corrected SBM.

Below we obtain some hierarchical partitions sampled from the posterior
distribution.

.. testcode:: nested-model-averaging

   for i in range(10):
       state.mcmc_sweep(niter=1000)
       state.draw(output="lesmis-partition-sample-%i.svg" % i, empty_branches=False)

.. image:: lesmis-partition-sample-0.svg
   :width: 200px
.. image:: lesmis-partition-sample-1.svg
   :width: 200px
.. image:: lesmis-partition-sample-2.svg
   :width: 200px
.. image:: lesmis-partition-sample-3.svg
   :width: 200px
.. image:: lesmis-partition-sample-4.svg
   :width: 200px
.. image:: lesmis-partition-sample-5.svg
   :width: 200px
.. image:: lesmis-partition-sample-6.svg
   :width: 200px
.. image:: lesmis-partition-sample-7.svg
   :width: 200px
.. image:: lesmis-partition-sample-8.svg
   :width: 200px
.. image:: lesmis-partition-sample-9.svg
   :width: 200px

Model class selection
+++++++++++++++++++++

When averaging over partitions, we may be interested in evaluating which
**model class** provides a better fit of the data, considering all
possible parameter choices. This is done by evaluating the model
evidence [peixoto-nonparametric-2016]_

.. math::

   P(\boldsymbol G) = \sum_{\boldsymbol\theta,\boldsymbol b}P(\boldsymbol G,\boldsymbol\theta, \boldsymbol b) =  \sum_{\boldsymbol b}P(\boldsymbol G,\boldsymbol b).

This quantity is analogous to a `partition function
<https://en.wikipedia.org/wiki/Partition_function_(statistical_mechanics)>`_
in statistical physics, which we can write more conveniently as a
negative `free energy
<https://en.wikipedia.org/wiki/Thermodynamic_free_energy>`_ by taking
its logarithm

.. math::
   :label: free-energy

   \ln P(\boldsymbol G) = \underbrace{\sum_{\boldsymbol b}q(\boldsymbol b)\ln P(\boldsymbol G,\boldsymbol b)}_{-\left<\Sigma\right>}\;
              \underbrace{- \sum_{\boldsymbol b}q(\boldsymbol b)\ln q(\boldsymbol b)}_{\mathcal{S}}

where

.. math::

   q(\boldsymbol b) = \frac{P(\boldsymbol G,\boldsymbol b)}{\sum_{\boldsymbol b'}P(\boldsymbol G,\boldsymbol b')}

is the posterior likelihood of partition :math:`\boldsymbol b`. The
first term of Eq. :eq:`free-energy` (the "negative energy") is minus the
average description length :math:`\left<\Sigma\right>`, weighted
according to the posterior distribution. The second term
:math:`\mathcal{S}` is the `entropy
<https://en.wikipedia.org/wiki/Entropy_(information_theory)>`_ of the
posterior distribution, and measures, in a sense, the "quality of fit"
of the model: If the posterior is very "peaked", i.e. dominated by a
single partition with a very large likelihood, the entropy will tend to
zero. However, if there are many partitions with similar likelihoods ---
meaning that there is no single partition that describes the
network uniquely well --- it will take a large value instead.

Since the MCMC algorithm samples partitions from the distribution
:math:`q(\boldsymbol b)`, it can be used to compute
:math:`\left<\Sigma\right>` easily, simply by averaging the description
length values encountered by sampling from the posterior distribution
many times.
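
A minimal sketch of this averaging, assuming ``state`` has already been
equilibrated as in the previous examples:

.. code-block:: python

   dls = []   # description length history

   gt.mcmc_equilibrate(state, force_niter=1000, mcmc_args=dict(niter=10),
                       callback=lambda s: dls.append(s.entropy()))

   S_avg = np.mean(dls)   # estimate of <Σ> over the posterior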

The computation of the posterior entropy :math:`\mathcal{S}`, however,
is significantly more difficult, since it involves measuring the precise
value of :math:`q(\boldsymbol b)`. A direct "brute force" computation of
:math:`\mathcal{S}` is implemented via
:meth:`~graph_tool.inference.BlockState.collect_partition_histogram` and
:func:`~graph_tool.inference.microstate_entropy`, however this is only
feasible for very small networks. For larger networks, we are forced to
perform approximations. The simplest is a "mean field" one, where we
assume the posterior factorizes as

.. math::

   q(\boldsymbol b) \approx \prod_i{q_i(b_i)}

where

.. math::

   q_i(r) = P(b_i = r | \boldsymbol G)

is the marginal group membership distribution of node :math:`i`. This
yields an entropy value given by

.. math::

   S \approx -\sum_i\sum_rq_i(r)\ln q_i(r).

This approximation should be seen as an upper bound, since any existing
correlation between the nodes (which is ignored here) will yield smaller
entropy values.
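
This mean-field sum is what :func:`~graph_tool.inference.mf_entropy`
computes; the sketch below spells out the same computation directly from
collected vertex marginals, assuming ``pv`` holds the visit counts
returned by
:meth:`~graph_tool.inference.BlockState.collect_vertex_marginals`:

.. code-block:: python

   import numpy as np

   def mf_entropy_sketch(g, pv):
       S = 0
       for v in g.vertices():
           q = np.array(pv[v])
           q = q[q > 0] / q.sum()       # normalize counts into q_i(r)
           S -= (q * np.log(q)).sum()
       return S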

A more accurate assumption is called the `Bethe approximation`
[mezard-information-2009]_, and takes into account the correlation
between adjacent nodes in the network,

.. math::

   q(\boldsymbol b) \approx \prod_{i<j}q_{ij}(b_i,b_j)^{A_{ij}}\prod_iq_i(b_i)^{1-k_i}

where :math:`A_{ij}` is the `adjacency matrix
<https://en.wikipedia.org/wiki/Adjacency_matrix>`_, :math:`k_i` is the
degree of node :math:`i`, and

.. math::

   q_{ij}(r, s) = P(b_i = r, b_j = s|\boldsymbol G)

is the joint group membership distribution of nodes :math:`i` and
:math:`j` (a.k.a. the `edge marginals`). This yields an entropy value
given by

.. math::

   S \approx -\sum_{i<j}A_{ij}\sum_{rs}q_{ij}(r,s)\ln q_{ij}(r,s) - \sum_i(1-k_i)\sum_rq_i(r)\ln q_i(r).

Typically, this approximation yields smaller values than the mean field
one, and is generally considered to be superior. However, formally, it
depends on the graph being sufficiently locally "tree-like", and the
posterior being indeed strongly correlated with the adjacency matrix
itself --- two characteristics which do not hold in general. Although
the approximation often gives reasonable results even when these
conditions do not strictly hold, in some situations when they are
strongly violated this approach can yield meaningless values, such as a
negative entropy. Therefore, it is useful to compare both approaches
whenever possible.

With these approximations, it is possible to estimate the full model
evidence efficiently, as we show below, using
:meth:`~graph_tool.inference.BlockState.collect_vertex_marginals`,
:meth:`~graph_tool.inference.BlockState.collect_edge_marginals`,
:func:`~graph_tool.inference.mf_entropy` and
:func:`~graph_tool.inference.bethe_entropy`.

.. testcode:: model-evidence

   g = gt.collection.data["lesmis"]

   for deg_corr in [True, False]:
       state = gt.minimize_blockmodel_dl(g, deg_corr=deg_corr)     # Initialize the Markov
                                                                   # chain from the "ground
                                                                   # state"
       state = state.copy(B=g.num_vertices())

       dls = []         # description length history
       vm = None        # vertex marginals
       em = None        # edge marginals

       def collect_marginals(s):
           global vm, em
           vm = s.collect_vertex_marginals(vm)
           em = s.collect_edge_marginals(em)
           dls.append(s.entropy())

       # Now we collect the marginal distributions for exactly 200,000 sweeps
       gt.mcmc_equilibrate(state, force_niter=20000, mcmc_args=dict(niter=10),
                           callback=collect_marginals)

       S_mf = gt.mf_entropy(g, vm)
       S_bethe = gt.bethe_entropy(g, em)[0]
       L = -mean(dls)

       print("Model evidence for deg_corr = %s:" % deg_corr,
             L + S_mf, "(mean field),", L + S_bethe, "(Bethe)")

.. testoutput:: model-evidence

   Model evidence for deg_corr = True: -575.864972067 (mean field), -802.39062289 (Bethe)
   Model evidence for deg_corr = False: -580.293548926 (mean field), -747.567093161 (Bethe)

If we consider the more accurate approximation, the outcome shows a
preference for the non-degree-corrected model.

When using the nested model, the approach is entirely analogous. The
only difference now is that we have a hierarchical partition
:math:`\{\boldsymbol b_l\}` in the equations above, instead of simply
:math:`\boldsymbol b`. In order to make the approach tractable, we
assume the factorization

.. math::

  q(\{\boldsymbol b_l\}) \approx \prod_lq_l(\boldsymbol b_l)

where :math:`q_l(\boldsymbol b_l)` is the marginal posterior for the
partition at level :math:`l`. For :math:`q_0(\boldsymbol b_0)` we may
use again either the mean-field or Bethe approximations, however for
:math:`l>0` only the mean-field approximation is applicable, since the
adjacency matrix of the higher layers is not constant. We show below the
approach for the same network, using the nested model.


.. testcode:: model-evidence

   g = gt.collection.data["lesmis"]

   nL = 10

   for deg_corr in [True, False]:
       state = gt.minimize_nested_blockmodel_dl(g, deg_corr=deg_corr)     # Initialize the Markov
                                                                          # chain from the "ground
                                                                          # state"
       bs = state.get_bs()                     # Get hierarchical partition.
       bs += [np.zeros(1)] * (nL - len(bs))    # Augment it to L = 10 with
                                               # single-group levels.

       state = state.copy(bs=bs, sampling=True)

       dls = []                               # description length history
       vm = [None] * len(state.get_levels())  # vertex marginals
       em = None                              # edge marginals

       def collect_marginals(s):
           global vm, em
           levels = s.get_levels()
           vm = [sl.collect_vertex_marginals(vm[l]) for l, sl in enumerate(levels)]
           em = levels[0].collect_edge_marginals(em)
           dls.append(s.entropy())

       # Now we collect the marginal distributions for exactly 200,000 sweeps
       gt.mcmc_equilibrate(state, force_niter=20000, mcmc_args=dict(niter=10),
                           callback=collect_marginals)

       S_mf = [gt.mf_entropy(sl.g, vm[l]) for l, sl in enumerate(state.get_levels())]
       S_bethe = gt.bethe_entropy(g, em)[0]
       L = -mean(dls)

       print("Model evidence for deg_corr = %s:" % deg_corr,
             L + sum(S_mf), "(mean field),", L + S_bethe + sum(S_mf[1:]), "(Bethe)")


.. testoutput:: model-evidence

   Model evidence for deg_corr = True: -358.493559653 (mean field), -649.40897099 (Bethe)
   Model evidence for deg_corr = False: -372.104532802 (mean field), -561.973406506 (Bethe)

The results are similar: if we consider the most accurate approximation,
the non-degree-corrected model possesses the largest evidence. Note also
that we observe a larger evidence for the nested models themselves, when
comparing to the evidences for the non-nested model --- which is not
surprising, since the non-nested model is a special case of the nested
one.

Edge layers and covariates
--------------------------

In many situations, the edges of the network may possess discrete
covariates on them, or they may be distributed in discrete
"layers". Extensions to the SBM may be defined for such data, and they
can be inferred using the exact same interface shown above, except one
should use the :class:`~graph_tool.inference.LayeredBlockState` class,
instead of :class:`~graph_tool.inference.BlockState`. This class takes
two additional parameters: the ``ec`` parameter, which must correspond to
an edge :class:`~graph_tool.PropertyMap` with the layer/covariate
values on the edges, and the Boolean ``layers`` parameter, which if
``True`` specifies a layered model, otherwise one with edge covariates.

If we use :func:`~graph_tool.inference.minimize_blockmodel_dl`, this can
be achieved simply by passing the option ``layers=True`` as well as the
appropriate value of ``state_args``, which will be propagated to
:class:`~graph_tool.inference.LayeredBlockState`'s constructor.

For example, consider again the Les Misérables network, where we
consider the number of co-appearances between characters as edge
covariates.

.. testsetup:: layered-model

   import os
   try:
       os.chdir("demos/inference")
   except FileNotFoundError:
       pass

.. testcode:: layered-model

   g = gt.collection.data["lesmis"]

   # Note the different meaning of the two 'layers' parameters below: The
   # first enables the use of LayeredBlockState, and the second selects
   # the 'edge covariates' version.

   state = gt.minimize_blockmodel_dl(g, deg_corr=False, layers=True,
                                     state_args=dict(ec=g.ep.value, layers=False))

   state.draw(pos=g.vp.pos, edge_color=g.ep.value, edge_gradient=None,
              output="lesmis-sbm-edge-cov.svg")

.. figure:: lesmis-sbm-edge-cov.*
   :align: center
   :width: 350px

   Best fit of the non-degree-corrected SBM with edge covariates for the
   network of characters in the novel Les Misérables, using the number
   of co-appearances as edge covariates. The edge colors correspond to
   the edge covariates.


In the case of the nested model, we still should use the
:class:`~graph_tool.inference.NestedBlockState` class, but it must be
initialized with the parameter ``base_type = LayeredBlockState``. But if
we use :func:`~graph_tool.inference.minimize_nested_blockmodel_dl`, it
works identically to the above:

.. testcode:: layered-model

   state = gt.minimize_nested_blockmodel_dl(g, deg_corr=False, layers=True,
                                            state_args=dict(ec=g.ep.value, layers=False))

   state.draw(eprops=dict(color=g.ep.value, gradient=None),
              output="lesmis-nested-sbm-edge-cov.svg")

.. figure:: lesmis-nested-sbm-edge-cov.*
   :align: center
   :width: 350px

   Best fit of the nested non-degree-corrected SBM with edge covariates
   for the network of characters in the novel Les Misérables, using the
   number of co-appearances as edge covariates. The edge colors
   correspond to the edge covariates.

It is possible to perform model averaging of all layered variants
exactly like for the regular SBMs as was shown above.

Predicting spurious and missing edges
-------------------------------------

An important application of generative models is to be able to
generalize from observations and make predictions that go beyond what
is seen in the data. This is particularly useful when the network we
observe is incomplete, or contains errors, i.e. some of the edges are
either missing or are outcomes of mistakes in measurement. In this
situation, the fit we make of the observed network can help us
predict missing or spurious edges in the network
[clauset-hierarchical-2008]_ [guimera-missing-2009]_.

We do so by dividing the edges into two sets :math:`\boldsymbol G` and
:math:`\delta \boldsymbol G`, where the former corresponds to the
observed network and the latter either to the missing or spurious
edges. In the case of missing edges, we may compute the posterior of
:math:`\delta \boldsymbol G` as

.. math::
   :label: posterior-missing

   P(\delta \boldsymbol G | \boldsymbol G) = \frac{\sum_{\boldsymbol b}P(\boldsymbol G+\delta \boldsymbol G | \boldsymbol b)P(\boldsymbol b | \boldsymbol G)}{P_{\delta}(\boldsymbol G)}

where

.. math::

   P_{\delta}(\boldsymbol G) = \sum_{\delta \boldsymbol G}\sum_{\boldsymbol b}P(\boldsymbol G+\delta \boldsymbol G | \boldsymbol b)P(\boldsymbol b | \boldsymbol G)

is a normalization constant. Although the value of
:math:`P_{\delta}(\boldsymbol G)` is difficult to obtain in general
(since we need to perform a sum over all possible spurious/missing
edges), the numerator of Eq. :eq:`posterior-missing` can be computed by
sampling partitions from the posterior, and then inserting or deleting
edges from the graph and computing the new likelihood. This means that
we can easily compare alternative predictive hypotheses
:math:`\{\delta \boldsymbol G_i\}` via their likelihood ratios

.. math::

   \lambda_i = \frac{P(\delta \boldsymbol G_i | \boldsymbol G)}{\sum_j P(\delta \boldsymbol G_j | \boldsymbol G)}
             = \frac{\sum_{\boldsymbol b}P(\boldsymbol G+\delta \boldsymbol G_i | \boldsymbol b)P(\boldsymbol b | \boldsymbol G)}
                    {\sum_j \sum_{\boldsymbol b}P(\boldsymbol G+\delta \boldsymbol G_j | \boldsymbol b)P(\boldsymbol b | \boldsymbol G)}

which do not depend on the value of :math:`P_{\delta}(\boldsymbol G)`.

The values :math:`P(\boldsymbol G+\delta \boldsymbol G | \boldsymbol b)`
can be computed with
:meth:`~graph_tool.inference.BlockState.get_edges_prob`. Hence, we can
compute spurious/missing edge probabilities just as if we were
collecting marginal distributions when doing model averaging.

Below is an example of predicting the following two edges in the
football network, using the nested model (for which we need to replace
:math:`\boldsymbol b` by :math:`\{\boldsymbol b_l\}` in the equations
above).

.. testcode:: missing-edges
   :hide:

   g = gt.collection.data["football"].copy()
   color = g.new_vp("string", val="#cccccc")
   ecolor = g.new_ep("string", val="#cccccc")
   e = g.add_edge(101, 102)
   ecolor[e] = "#a40000"
   e = g.add_edge(17, 56)
   ecolor[e] = "#a40000"
   eorder = g.edge_index.copy("int")

   gt.graph_draw(g, pos=g.vp.pos, vertex_color=color,
                 vertex_fill_color=color, edge_color=ecolor,
                 eorder=eorder,
                 output="football_missing.svg")

.. figure:: football_missing.*
   :align: center
   :width: 350px

   Two non-existing edges in the football network (in red):
   :math:`(101,102)` in the middle, and :math:`(17,56)` in the upper
   right region of the figure.

.. testcode:: missing-edges

   g = gt.collection.data["football"]

   missing_edges = [(101, 102), (17, 56)]
   
   L = 10

   state = gt.minimize_nested_blockmodel_dl(g, deg_corr=True)

   bs = state.get_bs()                     # Get hierarchical partition.
   bs += [np.zeros(1)] * (L - len(bs))     # Augment it to L = 10 with
                                           # single-group levels.

   state = state.copy(bs=bs, sampling=True)

   probs = ([], [])

   def collect_edge_probs(s):
       # Log-probability of each candidate edge, given the current
       # partition (ignoring the partition description length)
       p1 = s.get_edges_prob([missing_edges[0]], entropy_args=dict(partition_dl=False))
       p2 = s.get_edges_prob([missing_edges[1]], entropy_args=dict(partition_dl=False))
       probs[0].append(p1)
       probs[1].append(p2)

   # Now we collect the probabilities for exactly 10,000 sweeps
   gt.mcmc_equilibrate(state, force_niter=1000, mcmc_args=dict(niter=10),
                       callback=collect_edge_probs)


   def get_avg(p):
      # Average of probabilities computed in log-space, in a
      # numerically stable fashion ("log-mean-exp"), since the values
      # in p are log-probabilities.
      p = np.array(p)
      pmax = p.max()
      p -= pmax
      return pmax + log(exp(p).mean())

   p1 = get_avg(probs[0])
   p2 = get_avg(probs[1])

   p_sum = get_avg([p1, p2]) + log(2)   # i.e. log(exp(p1) + exp(p2))
   
   l1 = p1 - p_sum
   l2 = p2 - p_sum

   print("likelihood-ratio for %s: %g" % (missing_edges[0], exp(l1)))
   print("likelihood-ratio for %s: %g" % (missing_edges[1], exp(l2)))


.. testoutput:: missing-edges

   likelihood-ratio for (101, 102): 0.372308
   likelihood-ratio for (17, 56): 0.627692

From this we can conclude that edge :math:`(17, 56)` is around twice as
likely as :math:`(101, 102)` to be a missing edge.

The prediction using the non-nested model can be performed in an
entirely analogous fashion.
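
For instance, a minimal sketch using a single non-nested fit (a full
average over partitions would simply wrap the same calls in
:func:`~graph_tool.inference.mcmc_equilibrate` with a callback, as done
above):

.. testcode:: missing-edges

   state = gt.minimize_blockmodel_dl(g, deg_corr=True)

   # Log-probability of each candidate edge, conditioned on the single
   # partition found above
   p1 = state.get_edges_prob([missing_edges[0]], entropy_args=dict(partition_dl=False))
   p2 = state.get_edges_prob([missing_edges[1]], entropy_args=dict(partition_dl=False))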

References
----------

.. [holland-stochastic-1983] Paul W. Holland, Kathryn Blackmond Laskey,
   Samuel Leinhardt, "Stochastic blockmodels: First steps", Social
   Networks 5, 109-137 (1983), :doi:`10.1016/0378-8733(83)90021-7`

.. [karrer-stochastic-2011] Brian Karrer, M. E. J. Newman, "Stochastic
   blockmodels and community structure in networks", Phys. Rev. E 83,
   016107 (2011), :doi:`10.1103/PhysRevE.83.016107`, :arxiv:`1008.3926`

.. [peixoto-nonparametric-2016] Tiago P. Peixoto, "Nonparametric
   Bayesian inference of the microcanonical stochastic block model",
   :arxiv:`1610.02703`

.. [peixoto-parsimonious-2013] Tiago P. Peixoto, "Parsimonious module
   inference in large networks", Phys. Rev. Lett. 110, 148701 (2013),
   :doi:`10.1103/PhysRevLett.110.148701`, :arxiv:`1212.4794`.

.. [peixoto-hierarchical-2014] Tiago P. Peixoto, "Hierarchical block
   structures and high-resolution model selection in large networks",
   Phys. Rev. X 4, 011047 (2014), :doi:`10.1103/PhysRevX.4.011047`,
   :arxiv:`1310.4377`.

.. [peixoto-model-2016] Tiago P. Peixoto, "Model selection and hypothesis
   testing for large-scale network models with overlapping groups",
   Phys. Rev. X 5, 011033 (2015), :doi:`10.1103/PhysRevX.5.011033`,
   :arxiv:`1409.3059`.

.. [peixoto-efficient-2014] Tiago P. Peixoto, "Efficient Monte Carlo and
   greedy heuristic for the inference of stochastic block models", Phys.
   Rev. E 89, 012804 (2014), :doi:`10.1103/PhysRevE.89.012804`,
   :arxiv:`1310.4378`

.. [clauset-hierarchical-2008] Aaron Clauset, Cristopher
   Moore, M. E. J. Newman, "Hierarchical structure and the prediction of
   missing links in networks", Nature 453, 98-101 (2008),
   :doi:`10.1038/nature06830`

.. [guimera-missing-2009] Roger Guimerà, Marta Sales-Pardo, "Missing and
   spurious interactions and the reconstruction of complex networks", PNAS
   vol. 106 no. 52 (2009), :doi:`10.1073/pnas.0908366106`
          
.. [mezard-information-2009] Marc Mézard, Andrea Montanari, "Information,
   Physics, and Computation", Oxford Univ Press, 2009.
   :doi:`10.1093/acprof:oso/9780198570837.001.0001`