.. _inference-howto:

Inferring network structure
===========================

``graph-tool`` includes algorithms to identify the large-scale structure
of networks in the :mod:`~graph_tool.inference` submodule. Here we
explain the basic functionality with self-contained examples.

Background: Nonparametric statistical inference
-----------------------------------------------

A common task when analyzing networks is to characterize their
structures in simple terms, often by dividing the nodes into modules or
"communities".

A principled approach to perform this task is to formulate `generative
models <https://en.wikipedia.org/wiki/Generative_model>`_ that include
the idea of "modules" in their descriptions, which then can be detected
by `inferring <https://en.wikipedia.org/wiki/Statistical_inference>`_
the model parameters from data. More precisely, given the partition
:math:`\boldsymbol b = \{b_i\}` of the network into :math:`B` groups,
where :math:`b_i\in[0,B-1]` is the group membership of node :math:`i`,
we define a model that generates a network :math:`\boldsymbol G` with a
probability

.. math::
   :label: model-likelihood

   P(\boldsymbol G|\boldsymbol\theta, \boldsymbol b)

where :math:`\boldsymbol\theta` are additional model parameters. Therefore, if we
observe a network :math:`\boldsymbol G`, the likelihood that it was generated by a
given partition :math:`\boldsymbol b` is obtained via the `Bayesian
<https://en.wikipedia.org/wiki/Bayesian_inference>`_ posterior

.. math::
   :label: model-posterior-sum

   P(\boldsymbol b | \boldsymbol G) = \frac{\sum_{\boldsymbol\theta}P(\boldsymbol G|\boldsymbol\theta, \boldsymbol b)P(\boldsymbol\theta, \boldsymbol b)}{P(\boldsymbol G)}

where :math:`P(\boldsymbol\theta, \boldsymbol b)` is the `prior probability` of the
model parameters, and

.. math::
   :label: model-evidence

   P(\boldsymbol G) = \sum_{\boldsymbol\theta,\boldsymbol b}P(\boldsymbol G|\boldsymbol\theta, \boldsymbol b)P(\boldsymbol\theta, \boldsymbol b)

is called the `model evidence`. The particular types of model that will
be considered here have "hard constraints", such that there is only one
choice for the remaining parameters :math:`\boldsymbol\theta` that is compatible
with the generated network, so that Eq. :eq:`model-posterior-sum` simplifies to

.. math::
   :label: model-posterior

   P(\boldsymbol b | \boldsymbol G) = \frac{P(\boldsymbol G|\boldsymbol\theta, \boldsymbol b)P(\boldsymbol\theta, \boldsymbol b)}{P(\boldsymbol G)}

with :math:`\boldsymbol\theta` above being the only choice compatible with
:math:`\boldsymbol G` and :math:`\boldsymbol b`. The inference procedures considered
here will consist in either finding a network partition that maximizes
Eq. :eq:`model-posterior`, or sampling different partitions according to
its posterior probability.

As we will show below, this approach will also enable the comparison of
`different` models according to statistical evidence (a.k.a. `model
selection`).

Minimum description length (MDL)
++++++++++++++++++++++++++++++++

We note that Eq. :eq:`model-posterior` can be written as

.. math::

   P(\boldsymbol b | \boldsymbol G) = \frac{\exp(-\Sigma)}{P(\boldsymbol G)}

where

.. math::
   :label: model-dl

   \Sigma = -\ln P(\boldsymbol G|\boldsymbol\theta, \boldsymbol b) - \ln P(\boldsymbol\theta, \boldsymbol b)

is called the **description length** of the network :math:`\boldsymbol G`. It
measures the amount of `information
<https://en.wikipedia.org/wiki/Information_theory>`_ required to
describe the data, if we `encode
<https://en.wikipedia.org/wiki/Entropy_encoding>`_ it using the
particular parametrization of the generative model given by
:math:`\boldsymbol\theta` and :math:`\boldsymbol b`, as well as the parameters
themselves. Therefore, maximizing the posterior likelihood of
Eq. :eq:`model-posterior` is fully equivalent to the
so-called `minimum description length
<https://en.wikipedia.org/wiki/Minimum_description_length>`_
method. This approach corresponds to an implementation of `Occam's razor
<https://en.wikipedia.org/wiki/Occam%27s_razor>`_, where the `simplest`
model is selected, among all possibilities with the same explanatory
power. The selection is based on the statistical evidence available, and
therefore will not `overfit
<https://en.wikipedia.org/wiki/Overfitting>`_, i.e. mistake stochastic
fluctuations for actual structure.
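
To make this correspondence concrete, the following sketch (plain Python
with made-up description length values, not taken from any actual fit)
shows how a difference in description length translates into a posterior
odds ratio, since :math:`P(\boldsymbol b | \boldsymbol G) \propto \exp(-\Sigma)`:

.. code-block:: python

   import numpy as np

   # Hypothetical description lengths (in nats) of two candidate partitions:
   S1, S2 = 8500., 8520.

   # Since P(b|G) is proportional to exp(-S), the posterior odds between the
   # two partitions depend only on their difference in description length:
   odds = np.exp(-(S1 - S2))
   print(odds)   # exp(20), i.e. ~4.85e8 in favor of the first partition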

The stochastic block model (SBM)
--------------------------------

The `stochastic block model
<https://en.wikipedia.org/wiki/Stochastic_block_model>`_ is arguably
the simplest generative process based on the notion of groups of
nodes [holland-stochastic-1983]_. The `microcanonical
<https://en.wikipedia.org/wiki/Microcanonical_ensemble>`_ formulation
[peixoto-nonparametric-2016]_ of the basic or "traditional" version takes
as parameters the partition of the nodes into groups
:math:`\boldsymbol b` and a :math:`B\times B` matrix of edge counts
:math:`\boldsymbol e`, where :math:`e_{rs}` is the number of edges
between groups :math:`r` and :math:`s`. Given these constraints, the
edges are then placed randomly. Hence, nodes that belong to the same
group possess the same probability of being connected with other
nodes of the network.

An example of a possible parametrization is given in the following
figure.

.. testcode:: sbm-example
   :hide:

   import os
   try:
      os.chdir("demos/inference")
   except FileNotFoundError:
       pass

   g = gt.load_graph("blockmodel-example.gt.gz")
   gt.graph_draw(g, pos=g.vp.pos, vertex_size=10, vertex_fill_color=g.vp.bo,
                 vertex_color="#333333",
                 edge_gradient=g.new_ep("vector<double>", val=[0]),
                 output="sbm-example.svg")

   ers = g.gp.w

   from pylab import *
   figure()
   matshow(log(ers))
   xlabel("Group $r$")
   ylabel("Group $s$")
   gca().xaxis.set_label_position("top") 
   savefig("sbm-example-ers.svg")

.. table::
    :class: figure

    +----------------------------------+------------------------------+
    |.. figure:: sbm-example-ers.svg   |.. figure:: sbm-example.svg   |
    |   :width: 300px                  |   :width: 300px              |
    |   :align: center                 |   :align: center             |
    |                                  |                              |
    |   Matrix of edge counts          |   Generated network.         |
    |   :math:`\boldsymbol e` between  |                              |
    |   groups.                        |                              |
    +----------------------------------+------------------------------+

.. note::

   We emphasize that no constraints are imposed on what `kind` of
   modular structure is allowed. Hence, we can detect the putatively
   typical pattern of `"community structure"
   <https://en.wikipedia.org/wiki/Community_structure>`_, i.e. when
   nodes are connected mostly to other nodes of the same group, if it
   happens to be the most likely network description, but we can also
   detect a large multiplicity of other patterns, such as `bipartiteness
   <https://en.wikipedia.org/wiki/Bipartite_graph>`_, core-periphery,
   and many others, all under the same inference framework.
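
As a purely illustrative sketch (with made-up numbers, not derived from
any dataset), edge-count matrices favoring three of these patterns could
look as follows for :math:`B=2` groups:

.. code-block:: python

   import numpy as np

   # Dense diagonal: assortative "community structure"
   community = np.array([[20,  2],
                         [ 2, 20]])

   # Dense off-diagonal: bipartite-like structure
   bipartite = np.array([[ 0, 22],
                         [22,  0]])

   # A densely connected core and a sparsely connected periphery
   core_periphery = np.array([[20, 10],
                              [10,  1]])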


Although quite general, the traditional model assumes that the edges are
placed randomly inside each group, and as such the nodes that belong to
the same group have very similar degrees. As it turns out, this is often
a poor model for many networks, which possess highly heterogeneous
degree distributions. A better model for such networks is called the
`degree-corrected` stochastic block model [karrer-stochastic-2011]_, and
it is defined just like the traditional model, with the addition of the
degree sequence :math:`\boldsymbol k = \{k_i\}` of the graph as an
additional set of parameters (assuming again a microcanonical
formulation [peixoto-nonparametric-2016]_).
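
As a hedged sketch of the difference between the two variants, the
snippet below samples from both models, assuming a graph-tool version
that provides the :func:`~graph_tool.generation.generate_sbm` function
(its exact name and parametrization may differ between versions):

.. code-block:: python

   import graph_tool.all as gt
   import numpy as np

   N, B = 300, 3
   b = np.random.randint(0, B, N)     # random group memberships

   # Planted assortative structure: expected edge counts between groups
   ers = np.full((B, B), 2.)
   np.fill_diagonal(ers, 40.)

   g_trad = gt.generate_sbm(b, ers)   # traditional SBM

   # Degree-corrected variant: additionally impose a heterogeneous
   # (power-law) degree sequence via per-node degree propensities
   degs = np.random.zipf(2.5, N).astype("float")
   g_dc = gt.generate_sbm(b, ers, out_degs=degs)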


The nested stochastic block model
+++++++++++++++++++++++++++++++++

The regular SBM has a drawback when applied to very large networks:
it cannot be used to find relatively small groups, since the maximum
number of groups that can be found scales as
:math:`B_{\text{max}}\sim\sqrt{N}`, where :math:`N` is the
number of nodes in the network, if Bayesian inference is performed
[peixoto-parsimonious-2013]_. In order to circumvent this, we need to
replace the noninformative priors used above with a hierarchy of priors
and hyperpriors, which amounts to a `nested SBM`, where the groups
themselves are clustered into groups, and the matrix :math:`\boldsymbol e` of edge
counts is generated from another SBM, and so on recursively
[peixoto-hierarchical-2014]_.
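
As a quick back-of-the-envelope illustration of this resolution limit
(plain Python, purely illustrative):

.. code-block:: python

   import numpy as np

   # For the flat SBM, the largest detectable number of groups scales as
   # sqrt(N), regardless of the true structure:
   for N in [10**3, 10**6]:
       print(N, int(np.sqrt(N)))   # at most ~1000 groups for a million nodes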

.. figure:: nested-diagram.*
   :width: 400px
   :align: center

   Example of a nested SBM with three levels.

In addition to being able to find small groups in large networks, this
model also provides a multilevel hierarchical description of the
network, which describes its structure at multiple scales.


Inferring the best partition
----------------------------

The simplest and most efficient approach is to find the best
partition of the network by maximizing Eq. :eq:`model-posterior`
according to some version of the model. This is obtained via the
functions :func:`~graph_tool.inference.minimize_blockmodel_dl` or
:func:`~graph_tool.inference.minimize_nested_blockmodel_dl`, which
employ an agglomerative multilevel `Markov chain Monte Carlo (MCMC)
<https://en.wikipedia.org/wiki/Markov_chain_Monte_Carlo>`_ algorithm
[peixoto-efficient-2014]_.

We focus first on the non-nested model, and we illustrate its use with a
network of American football teams, which we load from the
:mod:`~graph_tool.collection` module:

.. testsetup:: football

   import os
   try:
      os.chdir("demos/inference")
   except FileNotFoundError:
       pass

.. testcode:: football

   g = gt.collection.data["football"]
   print(g)

which yields

.. testoutput:: football

   <Graph object, undirected, with 115 vertices and 613 edges at 0x...>

We then fit the `traditional` model by calling

.. testcode:: football

   state = gt.minimize_blockmodel_dl(g, deg_corr=False)

This returns a :class:`~graph_tool.inference.BlockState` object that
includes the inference results.

.. note::

   The inference algorithm used is stochastic by nature, and may return
   a slightly different answer each time it is run. This may be due to
   the fact that there are alternative partitions with similar
   likelihoods, or that the optimum is difficult to find. Note that the
   inference problem here is, in general, `NP-Hard
   <https://en.wikipedia.org/wiki/NP-hardness>`_, hence there is no
   efficient algorithm that is guaranteed to always find the best
   answer.

   Because of this, typically one would call the algorithm many times,
   and select the partition with the largest posterior likelihood of
   Eq. :eq:`model-posterior`, or equivalently, the minimum description
   length of Eq. :eq:`model-dl`. The description length of a fit can be
   obtained with the :meth:`~graph_tool.inference.BlockState.entropy`
   method. See also :ref:`sec_model_selection` below.
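
A minimal sketch of the strategy described in the note above (the number
of repetitions below is an arbitrary choice):

.. code-block:: python

   # Run the minimization several times, and keep the fit with the smallest
   # description length, i.e. the largest posterior likelihood.
   states = [gt.minimize_blockmodel_dl(g, deg_corr=False) for i in range(10)]
   state = min(states, key=lambda s: s.entropy())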


We may draw the partition obtained via the
:meth:`~graph_tool.inference.BlockState.draw` method, which functions as a
convenience wrapper to the :func:`~graph_tool.draw.graph_draw` function

.. testcode:: football

   state.draw(pos=g.vp.pos, output="football-sbm-fit.svg")

which yields the following image.

.. figure:: football-sbm-fit.*
   :align: center
   :width: 400px

   Stochastic block model inference of a network of American college
   football teams. The colors correspond to inferred group membership of
   the nodes.

We can obtain the group memberships as a
:class:`~graph_tool.PropertyMap` on the vertices via the
:meth:`~graph_tool.inference.BlockState.get_blocks` method:

.. testcode:: football

   b = state.get_blocks()
   r = b[10]   # group membership of vertex 10
   print(r)

which yields:

.. testoutput:: football

   10

We may also access the matrix of edge counts between groups via
:meth:`~graph_tool.inference.BlockState.get_matrix`

.. testcode:: football

   e = state.get_matrix()

   matshow(e.todense())
   savefig("football-edge-counts.svg")

.. figure:: football-edge-counts.*
   :align: center

   Matrix of edge counts between groups.

We may obtain the same matrix of edge counts as a graph, which has
internal edge and vertex property maps with the edge and vertex counts,
respectively:

.. testcode:: football

   bg = state.get_bg()
   ers = bg.ep.count    # edge counts
   nr = bg.vp.count     # node counts
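
For instance, a small sketch using only the objects defined above, which
lists how many nodes each group contains:

.. code-block:: python

   # Print the size of each group of the fitted model:
   for v in bg.vertices():
       print("group", int(v), "contains", nr[v], "nodes")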


Hierarchical partitions
+++++++++++++++++++++++

The inference of the nested family of SBMs is done in a similar manner,
but we must instead use the
:func:`~graph_tool.inference.minimize_nested_blockmodel_dl` function. We
illustrate its use with the neural network of the `C. elegans
<https://en.wikipedia.org/wiki/Caenorhabditis_elegans>`_ worm:

.. testcode:: celegans

   g = gt.collection.data["celegansneural"]
   print(g)

which has 297 vertices and 2359 edges.

.. testoutput:: celegans

   <Graph object, directed, with 297 vertices and 2359 edges at 0x...>

A hierarchical fit of the degree-corrected model is performed as follows.

.. testcode:: celegans

   state = gt.minimize_nested_blockmodel_dl(g, deg_corr=True)

The object returned is an instance of a
:class:`~graph_tool.inference.NestedBlockState` class, which
encapsulates the results. We can again draw the resulting hierarchical
clustering using the
:meth:`~graph_tool.inference.NestedBlockState.draw` method:

.. testcode:: celegans

   state.draw(output="celegans-hsbm-fit.svg")

.. figure:: celegans-hsbm-fit.*
   :align: center

   Most likely hierarchical partition of the neural network of
   the C. elegans worm according to the nested degree-corrected SBM.

.. note::

   If the ``output`` parameter to
   :meth:`~graph_tool.inference.NestedBlockState.draw` is omitted, an
   interactive visualization is performed, where the user can re-order
   the hierarchy nodes using the mouse and pressing the ``r`` key.

A summary of the inferred hierarchy can be obtained with the
:meth:`~graph_tool.inference.NestedBlockState.print_summary` method,
which shows the number of nodes and groups in all levels:

.. testcode:: celegans

   state.print_summary()

.. testoutput:: celegans

   l: 0, N: 297, B: 13
   l: 1, N: 13, B: 5
   l: 2, N: 5, B: 2
   l: 3, N: 2, B: 1

The hierarchical levels themselves are represented by individual
:class:`~graph_tool.inference.BlockState` instances obtained via the
:meth:`~graph_tool.inference.NestedBlockState.get_levels()` method:

.. testcode:: celegans

   levels = state.get_levels()
   for s in levels:
       print(s)

.. testoutput:: celegans

   <BlockState object with 13 blocks (13 nonempty), degree-corrected, for graph <Graph object, directed, with 297 vertices and 2359 edges at 0x...>, at 0x...>
   <BlockState object with 5 blocks (5 nonempty), for graph <Graph object, directed, with 13 vertices and 109 edges at 0x...>, at 0x...>
   <BlockState object with 2 blocks (2 nonempty), for graph <Graph object, directed, with 5 vertices and 24 edges at 0x...>, at 0x...>
   <BlockState object with 1 blocks (1 nonempty), for graph <Graph object, directed, with 2 vertices and 4 edges at 0x...>, at 0x...>

This means that we can inspect the hierarchical partition just as before:

.. testcode:: celegans

   r = levels[0].get_blocks()[46]    # group membership of node 46 in level 0
   print(r)
   r = levels[1].get_blocks()[r]     # group membership of node 46 in level 1
   print(r)
   r = levels[2].get_blocks()[r]     # group membership of node 46 in level 2
   print(r)

.. testoutput:: celegans

   2
   1
   0
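
The same bookkeeping can be written as a loop over all hierarchy levels
(a small sketch reusing the objects defined above):

.. code-block:: python

   r = 46                        # node index at the bottom of the hierarchy
   for s in state.get_levels():
       r = s.get_blocks()[r]     # group membership at the current level
       print(r)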


.. _sec_model_selection:

Model selection
+++++++++++++++

As mentioned above, one can select the best model according to the
choice that yields the smallest description length. For instance, in the
case of the `C. elegans` network we have

.. testcode:: model-selection

   g = gt.collection.data["celegansneural"]

   state_ndc = gt.minimize_nested_blockmodel_dl(g, deg_corr=False)
   state_dc  = gt.minimize_nested_blockmodel_dl(g, deg_corr=True)

   print("Non-degree-corrected DL:\t", state_ndc.entropy())
   print("Degree-corrected DL:\t", state_dc.entropy())

.. testoutput:: model-selection
   :options: +NORMALIZE_WHITESPACE

   Non-degree-corrected DL:      8568.61212614
   Degree-corrected DL:          8246.48662192

Since it yields the smallest description length, the degree-corrected
fit should be preferred. The statistical significance of the choice can
be assessed by inspecting the posterior odds ratio
[peixoto-nonparametric-2016]_

.. math::

   \Lambda &= \frac{P(\boldsymbol b, \mathcal{H}_\text{NDC} | \boldsymbol G)}{P(\boldsymbol b, \mathcal{H}_\text{DC} | \boldsymbol G)} \\
           &= \frac{P(\boldsymbol G, \boldsymbol b | \mathcal{H}_\text{NDC})}{P(\boldsymbol G, \boldsymbol b | \mathcal{H}_\text{DC})}\times\frac{P(\mathcal{H}_\text{NDC})}{P(\mathcal{H}_\text{DC})} \\
           &= \exp(-\Delta\Sigma)

where :math:`\mathcal{H}_\text{NDC}` and :math:`\mathcal{H}_\text{DC}`
correspond to the non-degree-corrected and degree-corrected model
hypotheses (assumed to be equally likely `a priori`), respectively, and
:math:`\Delta\Sigma` is the difference in description length between both
fits. In our particular case, we have

.. testcode:: model-selection

   print("ln Λ: ", state_dc.entropy() - state_ndc.entropy())

.. testoutput:: model-selection
   :options: +NORMALIZE_WHITESPACE

   ln Λ:  -322.125504215

The precise threshold that should be used to decide when to `reject a
hypothesis <https://en.wikipedia.org/wiki/Hypothesis_testing>`_ is
subjective and context-dependent, but the value above implies that the
particular degree-corrected fit is around :math:`e^{322} \sim 10^{140}`
times more likely than the non-degree-corrected one, and hence it can be
safely concluded that it provides a substantially better fit.
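
As a quick sanity check of this order of magnitude (plain Python):

.. code-block:: python

   import numpy as np

   # Converting the log posterior odds ratio to a base-10 exponent:
   print(322 / np.log(10))   # ~139.8, i.e. around 10^140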

Although it is often true that the degree-corrected model provides a
better fit for many empirical networks, there are also exceptions. For
example, for the American football network above, we have:

.. testcode:: model-selection

   g = gt.collection.data["football"]

   state_ndc = gt.minimize_nested_blockmodel_dl(g, deg_corr=False)
   state_dc  = gt.minimize_nested_blockmodel_dl(g, deg_corr=True)

   print("Non-degree-corrected DL:\t", state_ndc.entropy())
   print("Degree-corrected DL:\t", state_dc.entropy())
   print("ln Λ:\t\t\t", state_ndc.entropy() - state_dc.entropy())

.. testoutput:: model-selection
   :options: +NORMALIZE_WHITESPACE

   Non-degree-corrected DL:      1757.84382615
   Degree-corrected DL:          1787.60777164
   ln Λ:                         -29.7639454931

Hence, with a posterior odds ratio of :math:`\Lambda \sim e^{-29} \sim
10^{-13}` in favor of the non-degree-corrected model, it seems like the
degree-corrected variant is an unnecessarily complex description for
this network.

Averaging over models
---------------------

When analyzing empirical networks, one should be open to the possibility
that there will be more than one fit of the SBM with similar posterior
likelihoods. In such situations, one should `sample` partitions
from the posterior distribution, instead of simply finding its
maximum. One can then compute quantities that are averaged over the
different model fits, weighted according to their posterior likelihoods.

Full support for model averaging is implemented in ``graph-tool`` via an
efficient `Markov chain Monte Carlo (MCMC)
<https://en.wikipedia.org/wiki/Markov_chain_Monte_Carlo>`_ algorithm
[peixoto-efficient-2014]_. It works by attempting to move nodes into
different groups with specific probabilities, and `accepting or
rejecting
<https://en.wikipedia.org/wiki/Metropolis%E2%80%93Hastings_algorithm>`_
such moves so that, after a sufficiently long time, the partitions
will be observed with the desired posterior probability. The algorithm
is designed such that its run-time is independent of the number of groups
being used in the model, and hence it is suitable for use on very large
networks.
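
Schematically, each elementary step behaves like the sketch below (plain
Python; it illustrates only the Metropolis-Hastings acceptance criterion,
and omits the proposal probabilities and corrections used internally by
``graph-tool``):

.. code-block:: python

   import numpy as np

   rng = np.random.default_rng()

   def accept_move(dS):
       # Moves that decrease the description length are always accepted;
       # the rest are accepted with probability exp(-dS).
       return dS <= 0 or rng.random() < np.exp(-dS)

The actual implementation also accounts for the asymmetry of the move
proposals, which is required for the chain to target the correct
posterior distribution.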

In order to perform such moves, one needs again to operate with
:class:`~graph_tool.inference.BlockState` or
:class:`~graph_tool.inference.NestedBlockState` instances, and call
their :meth:`~graph_tool.inference.BlockState.mcmc_sweep` methods. For
example, the following will perform 1000 sweeps of the algorithm with
the network of characters in the novel Les Misérables, starting from a
random partition into 20 groups

.. testcode:: model-averaging

   g = gt.collection.data["lesmis"]

   state = gt.BlockState(g, B=20)   # This automatically initializes the state
                                    # with a random partition into B=20
                                    # nonempty groups; The user could
                                    # also pass an arbitrary initial
                                    # partition using the 'b' parameter.

   # If we work with the above state object, we will be restricted to
   # partitions into at most B=20 groups. But since we want to consider
   # an arbitrary number of groups in the range [1, N], we transform it
   # into a state with B=N groups (where N-20 will be empty).

   state = state.copy(B=g.num_vertices())

   # Now we run 1,000 sweeps of the MCMC

   dS, nmoves = state.mcmc_sweep(niter=1000)

   print("Change in description length:", dS)
   print("Number of accepted vertex moves:", nmoves)

.. testoutput:: model-averaging

   Change in description length: -360.18357903823386
   Number of accepted vertex moves: 4743

.. note::

   Starting from a random partition is rarely the best option, since it
   may take a long time to equilibrate; it was done above simply
   as an illustration of how to initialize
   :class:`~graph_tool.inference.BlockState` by hand. Instead, a much
   better option in practice is to start from the "ground state"
   obtained with :func:`~graph_tool.inference.minimize_blockmodel_dl`,
   e.g.

    .. testcode:: model-averaging

       state = gt.minimize_blockmodel_dl(g)
       state = state.copy(B=g.num_vertices())
       dS, nmoves = state.mcmc_sweep(niter=1000)

       print("Change in description length:", dS)
       print("Number of accepted vertex moves:", nmoves)

    .. testoutput:: model-averaging

       Change in description length: 0.23920882820149814
       Number of accepted vertex moves: 4016

Although the above is sufficient to implement model averaging, there is a
convenience function called
:func:`~graph_tool.inference.mcmc_equilibrate` that is intended to
simplify the detection of equilibration, by keeping track of the maximum
and minimum values of description length encountered and how many sweeps
have been made without a "record breaking" event. For example,

.. testcode:: model-averaging

   # We will accept equilibration if 10 sweeps are completed without a
   # record breaking event, 2 consecutive times.

   gt.mcmc_equilibrate(state, wait=10, nbreaks=2, mcmc_args=dict(niter=10), verbose=True)

will output:

.. testoutput:: model-averaging
    :options: +NORMALIZE_WHITESPACE

    niter:     1  count:    0  breaks:  0  min_S: 699.58882  max_S: 713.12054  S: 713.12054  ΔS:      13.5317  moves:    31 
    niter:     2  count:    1  breaks:  0  min_S: 699.58882  max_S: 713.12054  S: 711.03345  ΔS:     -2.08709  moves:    25 
    niter:     3  count:    0  breaks:  0  min_S: 699.58882  max_S: 715.72860  S: 715.72860  ΔS:      4.69514  moves:    37 
    niter:     4  count:    1  breaks:  0  min_S: 699.58882  max_S: 715.72860  S: 704.76394  ΔS:     -10.9647  moves:    36 
    niter:     5  count:    2  breaks:  0  min_S: 699.58882  max_S: 715.72860  S: 706.55192  ΔS:      1.78798  moves:    27 
    niter:     6  count:    3  breaks:  0  min_S: 699.58882  max_S: 715.72860  S: 706.97865  ΔS:     0.426724  moves:    30 
    niter:     7  count:    4  breaks:  0  min_S: 699.58882  max_S: 715.72860  S: 706.41383  ΔS:    -0.564821  moves:    33 
    niter:     8  count:    0  breaks:  0  min_S: 699.58882  max_S: 718.47291  S: 718.47291  ΔS:      12.0591  moves:    43 
    niter:     9  count:    1  breaks:  0  min_S: 699.58882  max_S: 718.47291  S: 709.59053  ΔS:     -8.88238  moves:    29 
    niter:    10  count:    2  breaks:  0  min_S: 699.58882  max_S: 718.47291  S: 700.39505  ΔS:     -9.19548  moves:    27 
    niter:    11  count:    3  breaks:  0  min_S: 699.58882  max_S: 718.47291  S: 710.45317  ΔS:      10.0581  moves:    44 
    niter:    12  count:    0  breaks:  0  min_S: 699.58882  max_S: 719.51868  S: 719.51868  ΔS:      9.06551  moves:    38 
    niter:    13  count:    1  breaks:  0  min_S: 699.58882  max_S: 719.51868  S: 711.66691  ΔS:     -7.85177  moves:    41 
    niter:    14  count:    2  breaks:  0  min_S: 699.58882  max_S: 719.51868  S: 701.51609  ΔS:     -10.1508  moves:    34 
    niter:    15  count:    3  breaks:  0  min_S: 699.58882  max_S: 719.51868  S: 705.78796  ΔS:      4.27188  moves:    41 
    niter:    16  count:    4  breaks:  0  min_S: 699.58882  max_S: 719.51868  S: 715.67960  ΔS:      9.89164  moves:    33 
    niter:    17  count:    5  breaks:  0  min_S: 699.58882  max_S: 719.51868  S: 703.65838  ΔS:     -12.0212  moves:    33 
    niter:    18  count:    6  breaks:  0  min_S: 699.58882  max_S: 719.51868  S: 709.55912  ΔS:      5.90074  moves:    28 
    niter:    19  count:    7  breaks:  0  min_S: 699.58882  max_S: 719.51868  S: 718.42521  ΔS:      8.86609  moves:    28 
    niter:    20  count:    8  breaks:  0  min_S: 699.58882  max_S: 719.51868  S: 711.13784  ΔS:     -7.28737  moves:    48 
    niter:    21  count:    9  breaks:  0  min_S: 699.58882  max_S: 719.51868  S: 706.63047  ΔS:     -4.50737  moves:    28 
    niter:    22  count:    0  breaks:  1  min_S: 707.03211  max_S: 707.03211  S: 707.03211  ΔS:     0.401639  moves:    57 
    niter:    23  count:    0  breaks:  1  min_S: 707.03211  max_S: 717.50359  S: 717.50359  ΔS:      10.4715  moves:    31 
    niter:    24  count:    0  breaks:  1  min_S: 707.03211  max_S: 726.72811  S: 726.72811  ΔS:      9.22451  moves:    63 
    niter:    25  count:    0  breaks:  1  min_S: 701.53898  max_S: 726.72811  S: 701.53898  ΔS:     -25.1891  moves:    26 
    niter:    26  count:    1  breaks:  1  min_S: 701.53898  max_S: 726.72811  S: 710.04615  ΔS:      8.50718  moves:    25 
    niter:    27  count:    2  breaks:  1  min_S: 701.53898  max_S: 726.72811  S: 710.59565  ΔS:     0.549493  moves:    31 
    niter:    28  count:    3  breaks:  1  min_S: 701.53898  max_S: 726.72811  S: 713.53473  ΔS:      2.93908  moves:    19 
    niter:    29  count:    4  breaks:  1  min_S: 701.53898  max_S: 726.72811  S: 707.57709  ΔS:     -5.95763  moves:    34 
    niter:    30  count:    5  breaks:  1  min_S: 701.53898  max_S: 726.72811  S: 712.49104  ΔS:      4.91395  moves:    24 
    niter:    31  count:    6  breaks:  1  min_S: 701.53898  max_S: 726.72811  S: 716.40137  ΔS:      3.91032  moves:    33 
    niter:    32  count:    7  breaks:  1  min_S: 701.53898  max_S: 726.72811  S: 717.30606  ΔS:     0.904690  moves:    54 
    niter:    33  count:    8  breaks:  1  min_S: 701.53898  max_S: 726.72811  S: 712.74418  ΔS:     -4.56187  moves:    43 
    niter:    34  count:    9  breaks:  1  min_S: 701.53898  max_S: 726.72811  S: 711.01120  ΔS:     -1.73298  moves:    57 
    niter:    35  count:   10  breaks:  2  min_S: 701.53898  max_S: 726.72811  S: 717.58446  ΔS:      6.57326  moves:    47 

Note that the value of ``wait`` above was made purposefully low so that
the output would not be overly long. The most appropriate value requires
experimentation, but a typically good value is ``wait=1000``.

The function :func:`~graph_tool.inference.mcmc_equilibrate` accepts an
optional ``callback`` argument, a function to be invoked
after each call to
:meth:`~graph_tool.inference.BlockState.mcmc_sweep`. This function
should accept a single parameter which will contain the actual
:class:`~graph_tool.inference.BlockState` instance. We will use this in
the example below to collect the posterior vertex marginals, i.e. the
posterior probability that a node belongs to a given group:

.. testcode:: model-averaging

   # We will first equilibrate the Markov chain
   gt.mcmc_equilibrate(state, wait=1000, mcmc_args=dict(niter=10))

   pv = None 

   def collect_marginals(s):
      global pv
      pv = s.collect_vertex_marginals(pv)

   # Now we collect the marginals for exactly 100,000 sweeps
   gt.mcmc_equilibrate(state, force_niter=10000, mcmc_args=dict(niter=10),
                       callback=collect_marginals)

   # Now the node marginals are stored in property map pv. We can
   # visualize them as pie charts on the nodes:
   state.draw(pos=g.vp.pos, vertex_shape="pie", vertex_pie_fractions=pv,
              edge_gradient=None, output="lesmis-sbm-marginals.svg")

.. figure:: lesmis-sbm-marginals.*
   :align: center
   :width: 450px

   Marginal probabilities of group memberships of the network of
   characters in the novel Les Misérables, according to the
   degree-corrected SBM. The `pie fractions
   <https://en.wikipedia.org/wiki/Pie_chart>`_ on the nodes correspond
   to the probability of being in the group associated with the respective
   color.

We can also obtain a marginal probability on the number of groups
itself, as follows.

.. testcode:: model-averaging

   h = np.zeros(g.num_vertices() + 1)

   def collect_num_groups(s):
       B = s.get_nonempty_B()
       h[B] += 1

   # Now we collect the marginal distribution for exactly 100,000 sweeps
   gt.mcmc_equilibrate(state, force_niter=10000, mcmc_args=dict(niter=10),
                       callback=collect_num_groups)

.. testcode:: model-averaging
   :hide:

   figure()
   Bs = np.arange(len(h))
   idx = h > 0
   bar(Bs[idx] - .5, h[idx] / h.sum(), width=1, color="#ccb974")
   gca().set_xticks([6,7,8,9])
   xlabel("$B$")
   ylabel(r"$P(B|\boldsymbol G)$")
   savefig("lesmis-B-posterior.svg")

.. figure:: lesmis-B-posterior.*
   :align: center

   Marginal posterior likelihood of the number of nonempty groups for the
   network of characters in the novel Les Misérables, according to the
   degree-corrected SBM.


Hierarchical partitions
+++++++++++++++++++++++

We can also perform model averaging using the nested SBM, which will
give us a distribution over hierarchies. The whole procedure is fairly
analogous, but now we make use of
:class:`~graph_tool.inference.NestedBlockState` instances.

.. note::

    When using :class:`~graph_tool.inference.NestedBlockState` instances
    to perform model averaging, they need to be constructed with the
    option ``sampling=True``.

Here we perform the sampling of hierarchical partitions using the same
network as above.

.. testcode:: nested-model-averaging

   g = gt.collection.data["lesmis"]

   state = gt.minimize_nested_blockmodel_dl(g) # Initialize the Markov
                                               # chain from the "ground
                                               # state"

   # Before doing model averaging, we need to create a NestedBlockState
   # by passing sampling = True.

   # We also want to increase the maximum hierarchy depth to L = 10

   # We can do both of the above by copying.

   bs = state.get_bs()                     # Get hierarchical partition.
   bs += [np.zeros(1)] * (10 - len(bs))    # Augment it to L = 10 with
                                           # single-group levels.

   state = state.copy(bs=bs, sampling=True)

   # Now we run 1000 sweeps of the MCMC

   dS, nmoves = state.mcmc_sweep(niter=1000)

   print("Change in description length:", dS)
   print("Number of accepted vertex moves:", nmoves)

.. testoutput:: nested-model-averaging

   Change in description length: 6.1062707...
   Number of accepted vertex moves: 8488

Similarly to the non-nested case, we can use
:func:`~graph_tool.inference.mcmc_equilibrate` to do most of the boring
work, and we can now obtain vertex marginals on all hierarchical levels:


.. testcode:: nested-model-averaging

   # We will first equilibrate the Markov chain
   gt.mcmc_equilibrate(state, wait=1000, mcmc_args=dict(niter=10))

   pv = [None] * len(state.get_levels())

   def collect_marginals(s):
      global pv
      pv = [sl.collect_vertex_marginals(pv[l]) for l, sl in enumerate(s.get_levels())]

   # Now we collect the marginals for exactly 100,000 sweeps
   gt.mcmc_equilibrate(state, force_niter=10000, mcmc_args=dict(niter=10),
                       callback=collect_marginals)

   # Now the node marginals for all levels are stored in property map
   # list pv. We can visualize the first level as pie charts on the nodes:
   state_0 = state.get_levels()[0]
   state_0.draw(pos=g.vp.pos, vertex_shape="pie", vertex_pie_fractions=pv[0],
                edge_gradient=None, output="lesmis-nested-sbm-marginals.svg")

.. figure:: lesmis-nested-sbm-marginals.*
   :align: center
   :width: 450px

   Marginal probabilities of group memberships of the network of
   characters in the novel Les Misérables, according to the nested
   degree-corrected SBM. The pie fractions on the nodes correspond to
   the probability of being in the group associated with the respective
   color.

We can also obtain a marginal probability of the number of groups
itself, as follows.

.. testcode:: nested-model-averaging

   h = [np.zeros(g.num_vertices() + 1) for s in state.get_levels()]

   def collect_num_groups(s):
       for l, sl in enumerate(s.get_levels()):
          B = sl.get_nonempty_B()
          h[l][B] += 1

   # Now we collect the marginal distribution for exactly 100,000 sweeps
   gt.mcmc_equilibrate(state, force_niter=10000, mcmc_args=dict(niter=10),
                       callback=collect_num_groups)

.. testcode:: nested-model-averaging
   :hide:

   figure()
   f, ax = plt.subplots(1, 5, figsize=(10, 3))
   for i, h_ in enumerate(h[:5]):
       Bs = np.arange(len(h_))
       idx = h_ > 0
       ax[i].bar(Bs[idx] - .5, h_[idx] / h_.sum(), width=1, color="#ccb974")
       ax[i].set_xticks(Bs[idx])
       ax[i].set_xlabel("$B_{%d}$" % i)
       ax[i].set_ylabel(r"$P(B_{%d}|\boldsymbol G)$" % i)
       locator = MaxNLocator(prune='both', nbins=5)
       ax[i].yaxis.set_major_locator(locator)
   tight_layout()
   savefig("lesmis-nested-B-posterior.svg")

.. figure:: lesmis-nested-B-posterior.*
   :align: center

   Marginal posterior likelihood of the number of nonempty groups
   :math:`B_l` at each hierarchy level :math:`l` for the network of
   characters in the novel Les Misérables, according to the nested
   degree-corrected SBM.

Below we obtain some hierarchical partitions sampled from the posterior
distribution.

.. testcode:: nested-model-averaging

   for i in range(10):
       state.mcmc_sweep(niter=1000)
       state.draw(output="lesmis-partition-sample-%i.svg" % i, empty_branches=False)

.. image:: lesmis-partition-sample-0.svg
   :width: 200px
.. image:: lesmis-partition-sample-1.svg
   :width: 200px
.. image:: lesmis-partition-sample-2.svg
   :width: 200px
.. image:: lesmis-partition-sample-3.svg
   :width: 200px
.. image:: lesmis-partition-sample-4.svg
   :width: 200px
.. image:: lesmis-partition-sample-5.svg
   :width: 200px
.. image:: lesmis-partition-sample-6.svg
   :width: 200px
.. image:: lesmis-partition-sample-7.svg
   :width: 200px
.. image:: lesmis-partition-sample-8.svg
   :width: 200px
.. image:: lesmis-partition-sample-9.svg
   :width: 200px

Model class selection
+++++++++++++++++++++

When averaging over partitions, we may be interested in evaluating which
**model class** provides a better fit to the data, considering all
possible parameter choices. This is done by evaluating the model
evidence [peixoto-nonparametric-2016]_

.. math::

   P(\boldsymbol G) = \sum_{\boldsymbol\theta,\boldsymbol b}P(\boldsymbol G,\boldsymbol\theta, \boldsymbol b) =  \sum_{\boldsymbol b}P(\boldsymbol G,\boldsymbol b).

This quantity is analogous to a `partition function
<https://en.wikipedia.org/wiki/Partition_function_(statistical_mechanics)>`_
in statistical physics, which we can write more conveniently as a
negative `free energy
<https://en.wikipedia.org/wiki/Thermodynamic_free_energy>`_ by taking
its logarithm

.. math::
   :label: free-energy

   \ln P(\boldsymbol G) = \underbrace{\sum_{\boldsymbol b}q(\boldsymbol b)\ln P(\boldsymbol G,\boldsymbol b)}_{-\left<\Sigma\right>}\;
              \underbrace{- \sum_{\boldsymbol b}q(\boldsymbol b)\ln q(\boldsymbol b)}_{\mathcal{S}}

where

.. math::

   q(\boldsymbol b) = \frac{P(\boldsymbol G,\boldsymbol b)}{\sum_{\boldsymbol b'}P(\boldsymbol G,\boldsymbol b')}

is the posterior likelihood of partition :math:`\boldsymbol b`. The
first term of Eq. :eq:`free-energy` (the "negative energy") is minus the
average description length :math:`\left<\Sigma\right>`, weighted
according to the posterior distribution. The second term
:math:`\mathcal{S}` is the `entropy
<https://en.wikipedia.org/wiki/Entropy_(information_theory)>`_ of the
posterior distribution, and measures, in a sense, the "quality of fit"
of the model: If the posterior is very "peaked", i.e. dominated by a
single partition with a very large likelihood, the entropy will tend to
zero. However, if there are many partitions with similar likelihoods ---
meaning that there is no single partition that describes the
network uniquely well --- it will take a large value instead.

Since the MCMC algorithm samples partitions from the distribution
:math:`q(\boldsymbol b)`, it can be used to compute
:math:`\left<\Sigma\right>` easily, simply by averaging the description
length values encountered by sampling from the posterior distribution
many times.
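
For example, reusing the machinery of the model-averaging examples above,
a sketch of this estimate could look as follows (assuming ``state`` has
already been equilibrated):

.. code-block:: python

   import numpy as np

   Ss = []

   def collect_dl(s):
       Ss.append(s.entropy())

   # Sample description length values from the posterior distribution
   gt.mcmc_equilibrate(state, force_niter=10000, mcmc_args=dict(niter=10),
                       callback=collect_dl)

   S_avg = np.mean(Ss)   # estimate of the average description length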

The computation of the posterior entropy :math:`\mathcal{S}`, however,
is significantly more difficult, since it involves measuring the precise
value of :math:`q(\boldsymbol b)`. A direct "brute force" computation of
:math:`\mathcal{S}` is implemented via
:meth:`~graph_tool.inference.BlockState.collect_partition_histogram` and
:func:`~graph_tool.inference.microstate_entropy`; however, this is only
feasible for very small networks. For larger networks, we are forced to
perform approximations. The simplest is a "mean field" one, where we
assume the posterior factorizes as

.. math::

   q(\boldsymbol b) \approx \prod_i{q_i(b_i)}

where

.. math::

   q_i(r) = P(b_i = r | \boldsymbol G)

is the marginal group membership distribution of node :math:`i`. This
yields an entropy value given by

.. math::

   S \approx -\sum_i\sum_rq_i(r)\ln q_i(r).

This approximation should be seen as an upper bound, since any existing
correlations between the nodes (which are ignored here) will yield
smaller entropy values.
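
A hedged sketch of this computation from the vertex marginals collected
as in the model-averaging examples above (assuming ``pv[v]`` holds the
un-normalized group visit counts of node ``v``):

.. code-block:: python

   import numpy as np

   S_mf = 0
   for v in g.vertices():
       q = np.array(pv[v], dtype="float")
       q = q[q > 0] / q.sum()           # normalize counts into probabilities
       S_mf -= (q * np.log(q)).sum()    # mean-field entropy contribution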

A more accurate approximation is the `Bethe approximation`
[mezard-information-2009]_, which takes into account the correlations
between adjacent nodes in the network,

.. math::

   q(\boldsymbol b) \approx \prod_{i<j}q_{ij}(b_i,b_j)^{A_{ij}}\prod_iq_i(b_i)^{1-k_i}

where :math:`A_{ij}` is the `adjacency matrix
<https://en.wikipedia.org/wiki/Adjacency_matrix>`_, :math:`k_i` is the
degree of node :math:`i`, and

.. math::

   q_{ij}(r, s) = P(b_i = r, b_j = s|\boldsymbol G)

is the joint group membership distribution of nodes :math:`i` and
:math:`j` (a.k.a. the `edge marginals`). This yields an entropy value
given by

.. math::

   S \approx -\sum_{i<j}A_{ij}\sum_{rs}q_{ij}(r,s)\ln q_{ij}(r,s) - \sum_i(1-k_i)\sum_rq_i(r)\ln q_i(r).

Typically, this approximation yields smaller values than the mean field