.. _inference-howto:

Inferring network structure
===========================

``graph-tool`` includes algorithms to identify the large-scale structure
of networks in the :mod:`~graph_tool.inference` submodule. Here we
explain the basic functionality with self-contained examples.

Background: Nonparametric statistical inference
-----------------------------------------------

A common task when analyzing networks is to characterize their
structures in simple terms, often by dividing the nodes into modules or
"communities".

A principled approach to perform this task is to formulate `generative
models <https://en.wikipedia.org/wiki/Generative_model>`_ that include
the idea of "modules" in their descriptions, which then can be detected
by `inferring <https://en.wikipedia.org/wiki/Statistical_inference>`_
the model parameters from data. More precisely, given the partition
:math:`\boldsymbol b = \{b_i\}` of the network into :math:`B` groups,
where :math:`b_i\in[0,B-1]` is the group membership of node :math:`i`,
we define a model that generates a network :math:`G` with a probability

.. math::
   :label: model-likelihood

   P(G|\theta, \boldsymbol b)

where :math:`\theta` are additional model parameters. Therefore, if we
observe a network :math:`G`, the likelihood that it was generated by a
given partition :math:`\boldsymbol b` is obtained via the `Bayesian
<https://en.wikipedia.org/wiki/Bayesian_inference>`_ posterior

.. math::
   :label: model-posterior-sum

   P(\boldsymbol b | G) = \frac{\sum_{\theta}P(G|\theta, \boldsymbol b)P(\theta, \boldsymbol b)}{P(G)}

where :math:`P(\theta, \boldsymbol b)` is the `prior likelihood` of the
model parameters, and

.. math::
   :label: model-evidence

   P(G) = \sum_{\theta,\boldsymbol b}P(G|\theta, \boldsymbol b)P(\theta, \boldsymbol b)

is called the `model evidence`. The particular types of model that will
be considered here have "hard constraints", such that there is only one
choice for the remaining parameters :math:`\theta` that is compatible
with the generated network, so that Eq. :eq:`model-posterior-sum` simplifies to

.. math::
   :label: model-posterior

   P(\boldsymbol b | G) = \frac{P(G|\theta, \boldsymbol b)P(\theta, \boldsymbol b)}{P(G)}

with :math:`\theta` above being the only choice compatible with
:math:`G` and :math:`\boldsymbol b`. The inference procedures considered
here will consist in either finding a network partition that maximizes
Eq. :eq:`model-posterior`, or sampling different partitions according
to its posterior probability.

As we will show below, this approach will also enable the comparison of
`different` models according to statistical evidence (a.k.a. `model
selection`).

Minimum description length (MDL)
++++++++++++++++++++++++++++++++

We note that Eq. :eq:`model-posterior` can be written as

.. math::

   P(\boldsymbol b | G) = \frac{e^{-\Sigma}}{P(G)}

where

.. math::
   :label: model-dl

   \Sigma = -\ln P(G|\theta, \boldsymbol b) - \ln P(\theta, \boldsymbol b)

is called the **description length** of the network :math:`G`. It
measures the amount of `information
<https://en.wikipedia.org/wiki/Information_theory>`_ required to
describe the data, if we `encode
<https://en.wikipedia.org/wiki/Entropy_encoding>`_ it using the
particular parametrization of the generative model given by
:math:`\theta` and :math:`\boldsymbol b`, as well as the parameters
themselves. Therefore, maximizing the posterior likelihood of
Eq. :eq:`model-posterior` is fully equivalent to the so-called `minimum
description length <https://en.wikipedia.org/wiki/Minimum_description_length>`_
method. This approach corresponds to an implementation of `Occam's razor
<https://en.wikipedia.org/wiki/Occam%27s_razor>`_, where the `simplest`
model is selected, among all possibilities with the same explanatory
power. The selection is based on the statistical evidence available, and
therefore will not `overfit
<https://en.wikipedia.org/wiki/Overfitting>`_, i.e. mistake stochastic
fluctuations for actual structure.
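
As a concrete illustration, the description length of any given
partition can be evaluated directly, since the
:meth:`~graph_tool.inference.BlockState.entropy` method discussed below
returns the value of Eq. :eq:`model-dl`. A minimal sketch (using the
``football`` network employed later in this document, and an arbitrary
number of groups):

.. code-block:: python

   g = gt.collection.data["football"]

   # Description length of the trivial partition where every node is
   # placed in the same group, versus a random partition into B=10 groups.
   state_1 = gt.BlockState(g, b=g.new_vp("int", val=0))
   state_10 = gt.BlockState(g, B=10)

   print(state_1.entropy(), state_10.entropy())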

The stochastic block model (SBM)
--------------------------------

The `stochastic block model
<https://en.wikipedia.org/wiki/Stochastic_block_model>`_ is arguably
the simplest generative process based on the notion of groups of
nodes [holland-stochastic-1983]_. The `microcanonical
<https://en.wikipedia.org/wiki/Microcanonical_ensemble>`_ formulation
[peixoto-entropy-2012]_ of the basic or "traditional" version takes
as parameters the partition of the nodes into groups
:math:`\boldsymbol b` and a :math:`B\times B` matrix of edge counts
:math:`\boldsymbol e`, where :math:`e_{rs}` is the number of edges
between groups :math:`r` and :math:`s`. Given these constraints, the
edges are then placed randomly. Hence, nodes that belong to the same
group possess the same probability of being connected with other
nodes of the network.
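
To make this generative process concrete, the following is a minimal
sketch (using only :mod:`numpy`, with made-up group sizes and edge
counts, and glossing over the bookkeeping of self-loops and parallel
edges) of how edges are placed once :math:`\boldsymbol b` and
:math:`\boldsymbol e` are fixed:

.. code-block:: python

   import numpy as np

   rng = np.random.default_rng(42)

   b = np.repeat([0, 1, 2], [40, 30, 30])   # partition of N=100 nodes
   ers = np.array([[60, 10,  2],            # symmetric matrix of edge
                   [10, 40,  5],            # counts between groups
                   [ 2,  5, 30]])

   edges = []
   for r in range(3):
       for s in range(r, 3):
           nodes_r, = np.where(b == r)
           nodes_s, = np.where(b == s)
           # place exactly e_rs edges, with endpoints chosen uniformly
           # at random inside each group
           for _ in range(ers[r, s]):
               edges.append((rng.choice(nodes_r), rng.choice(nodes_s)))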

An example of a possible parametrization is given in the following
figure.

.. testcode:: sbm-example
   :hide:

   import os
   try:
      os.chdir("demos/inference")
   except FileNotFoundError:
       pass

   g = gt.load_graph("blockmodel-example.gt.gz")
   gt.graph_draw(g, pos=g.vp.pos, vertex_size=10, vertex_fill_color=g.vp.bo,
                 vertex_color="#333333",
                 edge_gradient=g.new_ep("vector<double>", val=[0]),
                 output="sbm-example.svg")

   ers = g.gp.w

   from pylab import *
   figure()
   matshow(log(ers))
   xlabel("Group $r$")
   ylabel("Group $s$")
   gca().xaxis.set_label_position("top") 
   savefig("sbm-example-ers.svg")

.. table::
    :class: figure

    +----------------------------------+------------------------------+
    |.. figure:: sbm-example-ers.svg   |.. figure:: sbm-example.svg   |
    |   :width: 300px                  |   :width: 300px              |
    |   :align: center                 |   :align: center             |
    |                                  |                              |
    |   Matrix of edge counts          |   Generated network.         |
    |   :math:`\boldsymbol e` between  |                              |
    |   groups.                        |                              |
    +----------------------------------+------------------------------+

.. note::

   We emphasize that no constraints are imposed on what `kind` of
   modular structure is allowed. Hence, we can detect the putatively
   typical pattern of `"community structure"
   <https://en.wikipedia.org/wiki/Community_structure>`_, i.e. when
   nodes are connected mostly to other nodes of the same group, if it
   happens to be the most likely network description, but we can also
   detect a large multiplicity of other patterns, such as `bipartiteness
   <https://en.wikipedia.org/wiki/Bipartite_graph>`_, core-periphery,
   and many others, all under the same inference framework.


Although quite general, the traditional model assumes that the edges are
placed randomly inside each group, and as such the nodes that belong to
the same group have very similar degrees. As it turns out, this is often
a poor model for many networks, which possess highly heterogeneous
degree distributions. A better model for such networks is called the
`degree-corrected` stochastic block model [karrer-stochastic-2011]_, and
it is defined just like the traditional model, with the addition of the
degree sequence :math:`\boldsymbol k = \{k_i\}` of the graph as an
additional set of parameters (assuming again a microcanonical
formulation [peixoto-entropy-2012]_).


The nested stochastic block model
+++++++++++++++++++++++++++++++++

The regular SBM has a drawback when applied to very large networks:
namely, it cannot be used to find relatively small groups, as the
maximum number of groups that can be found scales as
:math:`B_{\text{max}}\sim\sqrt{N}`, where :math:`N` is the number of
nodes in the network, if Bayesian inference is performed
[peixoto-parsimonious-2013]_ (e.g. for a network with :math:`N=10^6`
nodes, no more than about :math:`10^3` groups can be resolved). In
order to circumvent this, we need to replace the noninformative priors
with a hierarchy of priors and hyperpriors, which amounts to a `nested
SBM`, where the groups themselves are clustered into groups, and the
matrix :math:`\boldsymbol e` of edge counts is generated from another
SBM, and so on recursively

.. figure:: nested-diagram.*
   :width: 400px
   :align: center

   Example of a nested SBM with three levels.

In addition to being able to find small groups in large networks, this
model also provides a multilevel hierarchical description of the
network, which captures its structure at multiple scales.


Inferring the best partition
----------------------------

The simplest and most efficient approach is to find the best
partition of the network by maximizing Eq. :eq:`model-posterior`
according to some version of the model. This is obtained via the
functions :func:`~graph_tool.inference.minimize_blockmodel_dl` or
:func:`~graph_tool.inference.minimize_nested_blockmodel_dl`, which
employ an agglomerative multilevel `Markov chain Monte Carlo (MCMC)
<https://en.wikipedia.org/wiki/Markov_chain_Monte_Carlo>`_ algorithm
[peixoto-efficient-2014]_.

We focus first on the non-nested model, and we illustrate its use with a
network of American football teams, which we load from the
:mod:`~graph_tool.collection` module:

.. testsetup:: football

   import os
   try:
      os.chdir("demos/inference")
   except FileNotFoundError:
       pass

.. testcode:: football

   g = gt.collection.data["football"]
   print(g)

which yields

.. testoutput:: football

   <Graph object, undirected, with 115 vertices and 613 edges at 0x...>

We then fit the `traditional` model by calling

.. testcode:: football

   state = gt.minimize_blockmodel_dl(g, deg_corr=False)

This returns a :class:`~graph_tool.inference.BlockState` object that
includes the inference results.

.. note::

   The inference algorithm used is stochastic by nature, and may return
   a slightly different answer each time it is run. This may be due to
   the fact that there are alternative partitions with similar
   likelihoods, or that the optimum is difficult to find. Note that the
   inference problem here is, in general, `NP-Hard
   <https://en.wikipedia.org/wiki/NP-hardness>`_, hence there is no
   efficient algorithm that is guaranteed to always find the best
   answer.

   Because of this, typically one would call the algorithm many times,
   and select the partition with the largest posterior likelihood of
   Eq. :eq:`model-posterior`, or equivalently, the minimum description
   length of Eq. :eq:`model-dl`. The description length of a fit can be
   obtained with the :meth:`~graph_tool.inference.BlockState.entropy`
   method. See also :ref:`sec_model_selection` below.
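
   For instance, a minimal sketch of this strategy (with an arbitrary
   number of attempts) would be:

   .. code-block:: python

      # Fit the model several times and keep the result with the
      # smallest description length.
      states = [gt.minimize_blockmodel_dl(g, deg_corr=False) for i in range(10)]
      state = min(states, key=lambda s: s.entropy())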


We may draw the obtained partition via the
:meth:`~graph_tool.inference.BlockState.draw` method, which functions as a
convenience wrapper to the :func:`~graph_tool.draw.graph_draw` function

.. testcode:: football

   state.draw(pos=g.vp.pos, output="football-sbm-fit.svg")

which yields the following image.

.. figure:: football-sbm-fit.*
   :align: center
   :width: 400px

   Stochastic block model inference of a network of American college
   football teams. The colors correspond to inferred group membership of
   the nodes.

We can obtain the group memberships as a
:class:`~graph_tool.PropertyMap` on the vertices via the
:meth:`~graph_tool.inference.BlockState.get_blocks` method:

.. testcode:: football

   b = state.get_blocks()
   r = b[10]   # group membership of vertex 10
   print(r)

which yields:

.. testoutput:: football

   3

We may also access the matrix of edge counts between groups via
:meth:`~graph_tool.inference.BlockState.get_matrix`

.. testcode:: football

   e = state.get_matrix()

   from pylab import *
   matshow(e.todense())
   savefig("football-edge-counts.svg")

.. figure:: football-edge-counts.*
   :align: center

   Matrix of edge counts between groups.

We may obtain the same matrix of edge counts as a graph, which has
internal edge and vertex property maps with the edge and vertex counts,
respectively:

.. testcode:: football

   bg = state.get_bg()
   ers = bg.ep.count    # edge counts
   nr = bg.vp.count     # node counts
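
For instance, the counts can be inspected by iterating over the vertices
and edges of this block graph (a minimal sketch; output omitted):

.. code-block:: python

   for v in bg.vertices():
       print("group", int(v), "contains", nr[v], "nodes")
   for e in bg.edges():
       print(ers[e], "edges between groups", int(e.source()), "and",
             int(e.target()))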

Hierarchical partitions
+++++++++++++++++++++++

The inference of the nested family of SBMs is done in a similar manner,
but we must use instead the
:func:`~graph_tool.inference.minimize_nested_blockmodel_dl` function. We
illustrate its use with the neural network of the `C. elegans
<https://en.wikipedia.org/wiki/Caenorhabditis_elegans>`_ worm:

.. testcode:: celegans

   g = gt.collection.data["celegansneural"]
   print(g)

which has 297 vertices and 2359 edges.

.. testoutput:: celegans

   <Graph object, directed, with 297 vertices and 2359 edges at 0x...>

A hierarchical fit of the degree-corrected model is performed as follows.

.. testcode:: celegans

   state = gt.minimize_nested_blockmodel_dl(g, deg_corr=True)

The object returned is an instance of the
:class:`~graph_tool.inference.NestedBlockState` class, which
encapsulates the results. We can again draw the resulting hierarchical
clustering using the
:meth:`~graph_tool.inference.NestedBlockState.draw` method:

.. testcode:: celegans

   state.draw(output="celegans-hsbm-fit.svg")

.. figure:: celegans-hsbm-fit.*
   :align: center

   Most likely hierarchical partition of the neural network of
   the C. elegans worm according to the nested degree-corrected SBM.

.. note::

   If the ``output`` parameter to
   :meth:`~graph_tool.inference.NestedBlockState.draw` is omitted, an
   interactive visualization is performed, where the user can re-order
   the hierarchy nodes using the mouse and pressing the ``r`` key.

A summary of the inferred hierarchy can be obtained with the
:meth:`~graph_tool.inference.NestedBlockState.print_summary` method,
which shows the number of nodes and groups in all levels:

.. testcode:: celegans

   state.print_summary()

.. testoutput:: celegans

   l: 0, N: 297, B: 14
   l: 1, N: 14, B: 6
   l: 2, N: 6, B: 3
   l: 3, N: 3, B: 1

The hierarchical levels themselves are represented by individual
:class:`~graph_tool.inference.BlockState` instances, obtained via the
:meth:`~graph_tool.inference.NestedBlockState.get_levels` method:

.. testcode:: celegans

   levels = state.get_levels()
   for s in levels:
       print(s)

.. testoutput:: celegans

   <BlockState object with 14 blocks (14 nonempty), degree-corrected, for graph <Graph object, directed, with 297 vertices and 2359 edges at 0x...>, at 0x...>
   <BlockState object with 6 blocks (6 nonempty), for graph <Graph object, directed, with 14 vertices and 116 edges at 0x...>, at 0x...>
   <BlockState object with 3 blocks (3 nonempty), for graph <Graph object, directed, with 6 vertices and 28 edges at 0x...>, at 0x...>
   <BlockState object with 1 blocks (1 nonempty), for graph <Graph object, directed, with 3 vertices and 9 edges at 0x...>, at 0x...>

This means that we can inspect the hierarchical partition just as before:

.. testcode:: celegans

   r = levels[0].get_blocks()[42]    # group membership of node 42 in level 0
   print(r)
   r = levels[1].get_blocks()[r]     # group membership of node 42 in level 1
   print(r)
   r = levels[2].get_blocks()[r]     # group membership of node 42 in level 2
   print(r)

.. testoutput:: celegans

   6
   2
   1
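
The same traversal can be wrapped in a small helper function (a sketch,
not part of graph-tool's API) that collects the memberships of a node
across all levels:

.. code-block:: python

   def membership_path(state, v):
       """Group membership of vertex v at every level of the hierarchy."""
       path = []
       r = v
       for s in state.get_levels():
           r = s.get_blocks()[r]
           path.append(int(r))
       return path

   print(membership_path(state, 42))   # e.g. [6, 2, 1, 0]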


.. _sec_model_selection:

Model selection
+++++++++++++++

As mentioned above, one can select the best model according to the
choice that yields the smallest description length. For instance, in
the case of the `C. elegans` network we have

.. testcode:: model-selection

   g = gt.collection.data["celegansneural"]

   state_ndc = gt.minimize_nested_blockmodel_dl(g, deg_corr=False)
   state_dc  = gt.minimize_nested_blockmodel_dl(g, deg_corr=True)

   print("Non-degree-corrected DL:\t", state_ndc.entropy())
   print("Degree-corrected DL:\t", state_dc.entropy())

.. testoutput:: model-selection
   :options: +NORMALIZE_WHITESPACE

   Non-degree-corrected DL:	 8500.79633202
   Degree-corrected DL:	 8288.14138981

Since it yields the smallest description length, the degree-corrected
fit should be preferred. The statistical significance of the choice can
be assessed by inspecting the posterior odds ratio (or more precisely,
the `Bayes factor <https://en.wikipedia.org/wiki/Bayes_factor>`_)
[peixoto-model-2016]_

.. math::

   \Lambda &= \frac{P(\boldsymbol b | G, \mathcal{H}_\text{NDC})}{P(\boldsymbol b | G, \mathcal{H}_\text{DC})} \\
           &= \exp(-\Delta\Sigma)

where :math:`\mathcal{H}_\text{NDC}` and :math:`\mathcal{H}_\text{DC}`
correspond to the non-degree-corrected and degree-corrected model
hypotheses, respectively, and :math:`\Delta\Sigma` is the difference of the
description length of both fits. In our particular case, we have

.. testcode:: model-selection

   print("ln Λ: ", state_dc.entropy() - state_ndc.entropy())

.. testoutput:: model-selection
   :options: +NORMALIZE_WHITESPACE

   ln Λ:  -212.654942209

The precise threshold that should be used to decide when to `reject a
hypothesis <https://en.wikipedia.org/wiki/Hypothesis_testing>`_ is
subjective and context-dependent, but the value above implies that the
particular degree-corrected fit is around :math:`e^{212} \sim 10^{92}`
times more likely than the non-degree-corrected one, and hence it can be
safely concluded that it provides a substantially better fit.

Although it is often true that the degree-corrected model provides a
better fit for many empirical networks, there are also exceptions. For
example, for the American football network above, we have:

.. testcode:: model-selection

   g = gt.collection.data["football"]

   state_ndc = gt.minimize_nested_blockmodel_dl(g, deg_corr=False)
   state_dc  = gt.minimize_nested_blockmodel_dl(g, deg_corr=True)

   print("Non-degree-corrected DL:\t", state_ndc.entropy())
   print("Degree-corrected DL:\t", state_dc.entropy())
   print("ln Λ:\t\t\t", state_ndc.entropy() - state_dc.entropy())

.. testoutput:: model-selection
   :options: +NORMALIZE_WHITESPACE

   Non-degree-corrected DL:     1738.00660528
   Degree-corrected DL:         1780.01146484
   ln Λ:                        -42.0048595573

Hence, with a posterior odds ratio of :math:`\Lambda \sim e^{-42} \sim
10^{-18}` in favor of the non-degree-corrected model, it seems that the
degree-corrected variant is an unnecessarily complex description for
this network.

Averaging over models
---------------------

When analyzing empirical networks, one should be open to the possibility
that there will be more than one fit of the SBM with similar posterior
likelihoods. In such situations, one should `sample` partitions
from the posterior distribution, instead of simply finding its
maximum. One can then compute quantities that are averaged over the
different model fits, weighted according to their posterior likelihoods.

Full support for model averaging is implemented in ``graph-tool`` via an
efficient `Markov chain Monte Carlo (MCMC)
<https://en.wikipedia.org/wiki/Markov_chain_Monte_Carlo>`_ algorithm
[peixoto-efficient-2014]_. It works by attempting to move nodes into
different groups with specific probabilities, and `accepting or
rejecting
<https://en.wikipedia.org/wiki/Metropolis%E2%80%93Hastings_algorithm>`_
such moves so that, after a sufficiently long time, the partitions will
be observed with the desired posterior probability. The algorithm is
designed so that its run-time is independent of the number of groups
being used in the model, and hence it is suitable for use on very large
networks.

In order to perform such moves, one needs again to operate with
:class:`~graph_tool.inference.BlockState` or
:class:`~graph_tool.inference.NestedBlockState` instances, and call
their :meth:`~graph_tool.inference.BlockState.mcmc_sweep` methods. For
example, the following will perform 1000 sweeps of the algorithm with
the network of characters in the novel Les Misérables, starting from a
random partition into 20 groups:

.. testsetup:: model-averaging

   import os
   try:
       os.chdir("demos/inference")
   except FileNotFoundError:
       pass


.. testcode:: model-averaging

   g = gt.collection.data["lesmis"]

   state = gt.BlockState(g, B=20)   # This automatically initializes the state
                                    # with a random partition into B=20
                                    # nonempty groups; The user could
                                    # also pass an arbitrary initial
                                    # partition using the 'b' parameter.

   # If we work with the above state object, we will be restricted to
   # partitions into at most B=20 groups. But since we want to consider
   # an arbitrary number of groups in the range [1, N], we transform it
   # into a state with B=N groups (where N-20 will be empty).

   state = state.copy(B=g.num_vertices())

   # Now we run 1,000 sweeps of the MCMC

   dS, nmoves = state.mcmc_sweep(niter=1000)

   print("Change in description length:", dS)
   print("Number of accepted vertex moves:", nmoves)

.. testoutput:: model-averaging

   Change in description length: -374.329276...
   Number of accepted vertex moves: 4394

.. note::

   Starting from a random partition is rarely the best option, since it
   may take a long time to equilibrate; it was done above simply
   as an illustration of how to initialize
   :class:`~graph_tool.inference.BlockState` by hand. Instead, a much
   better option in practice is to start from the "ground state"
   obtained with :func:`~graph_tool.inference.minimize_blockmodel_dl`,
   e.g.

    .. testcode:: model-averaging

       state = gt.minimize_blockmodel_dl(g)
       state = state.copy(B=g.num_vertices())
       dS, nmoves = state.mcmc_sweep(niter=1000)

       print("Change in description length:", dS)
       print("Number of accepted vertex moves:", nmoves)

    .. testoutput:: model-averaging

       Change in description length: -4.2327044...
       Number of accepted vertex moves: 3887

Although the above is sufficient to implement model averaging, there is
a convenience function called
:func:`~graph_tool.inference.mcmc_equilibrate` that is intended to
simplify the detection of equilibration, by keeping track of the maximum
and minimum values of the description length encountered, and of how
many sweeps have been made without a "record breaking" event. For example,

.. testcode:: model-averaging

   # We will accept equilibration if 10 sweeps are completed without a
   # record breaking event, 2 consecutive times.

   gt.mcmc_equilibrate(state, wait=10, nbreaks=2, mcmc_args=dict(niter=10), verbose=True)

will output:

.. testoutput:: model-averaging
    :options: +NORMALIZE_WHITESPACE

    niter:     1  count:    0  breaks:  0  min_S: 695.34285  max_S: 710.82373  S: 710.82373  ΔS:      15.4809  moves:    26 
    niter:     2  count:    1  breaks:  0  min_S: 695.34285  max_S: 710.82373  S: 700.88756  ΔS:     -9.93617  moves:    28 
    niter:     3  count:    0  breaks:  0  min_S: 695.34285  max_S: 715.67616  S: 715.67616  ΔS:      14.7886  moves:    36 
    niter:     4  count:    1  breaks:  0  min_S: 695.34285  max_S: 715.67616  S: 700.17714  ΔS:     -15.4990  moves:    47 
    niter:     5  count:    2  breaks:  0  min_S: 695.34285  max_S: 715.67616  S: 699.60917  ΔS:    -0.567973  moves:    26 
    niter:     6  count:    3  breaks:  0  min_S: 695.34285  max_S: 715.67616  S: 695.98465  ΔS:     -3.62452  moves:    26 
    niter:     7  count:    4  breaks:  0  min_S: 695.34285  max_S: 715.67616  S: 696.61192  ΔS:     0.627269  moves:    14 
    niter:     8  count:    5  breaks:  0  min_S: 695.34285  max_S: 715.67616  S: 708.40482  ΔS:      11.7929  moves:    23 
    niter:     9  count:    6  breaks:  0  min_S: 695.34285  max_S: 715.67616  S: 706.45612  ΔS:     -1.94870  moves:    27 
    niter:    10  count:    7  breaks:  0  min_S: 695.34285  max_S: 715.67616  S: 706.23034  ΔS:    -0.225778  moves:    23 
    niter:    11  count:    8  breaks:  0  min_S: 695.34285  max_S: 715.67616  S: 704.95338  ΔS:     -1.27696  moves:    41 
    niter:    12  count:    9  breaks:  0  min_S: 695.34285  max_S: 715.67616  S: 713.74824  ΔS:      8.79486  moves:    41 
    niter:    13  count:    0  breaks:  1  min_S: 704.05707  max_S: 704.05707  S: 704.05707  ΔS:     -9.69117  moves:    35 
    niter:    14  count:    0  breaks:  1  min_S: 704.05707  max_S: 708.98963  S: 708.98963  ΔS:      4.93256  moves:    42 
    niter:    15  count:    0  breaks:  1  min_S: 703.01886  max_S: 708.98963  S: 703.01886  ΔS:     -5.97077  moves:    24 
    niter:    16  count:    0  breaks:  1  min_S: 703.01886  max_S: 712.90264  S: 712.90264  ΔS:      9.88378  moves:    33 
    niter:    17  count:    0  breaks:  1  min_S: 703.01886  max_S: 722.28564  S: 722.28564  ΔS:      9.38300  moves:    48 
    niter:    18  count:    0  breaks:  1  min_S: 698.11815  max_S: 722.28564  S: 698.11815  ΔS:     -24.1675  moves:    34 
    niter:    19  count:    1  breaks:  1  min_S: 698.11815  max_S: 722.28564  S: 702.54589  ΔS:      4.42774  moves:    44 
    niter:    20  count:    2  breaks:  1  min_S: 698.11815  max_S: 722.28564  S: 698.87992  ΔS:     -3.66597  moves:    32 
    niter:    21  count:    3  breaks:  1  min_S: 698.11815  max_S: 722.28564  S: 699.07353  ΔS:     0.193605  moves:    17 
    niter:    22  count:    4  breaks:  1  min_S: 698.11815  max_S: 722.28564  S: 710.06346  ΔS:      10.9899  moves:    32 
    niter:    23  count:    5  breaks:  1  min_S: 698.11815  max_S: 722.28564  S: 713.65309  ΔS:      3.58963  moves:    43 
    niter:    24  count:    6  breaks:  1  min_S: 698.11815  max_S: 722.28564  S: 709.27203  ΔS:     -4.38106  moves:    29 
    niter:    25  count:    7  breaks:  1  min_S: 698.11815  max_S: 722.28564  S: 700.13040  ΔS:     -9.14163  moves:    21 
    niter:    26  count:    8  breaks:  1  min_S: 698.11815  max_S: 722.28564  S: 703.50075  ΔS:      3.37036  moves:    15 
    niter:    27  count:    9  breaks:  1  min_S: 698.11815  max_S: 722.28564  S: 716.09070  ΔS:      12.5899  moves:    37 
    niter:    28  count:   10  breaks:  2  min_S: 698.11815  max_S: 722.28564  S: 707.79215  ΔS:     -8.29855  moves:    25 


Note that the value of ``wait`` above was made purposefully low so that
the output would not be overly long. The most appropriate value requires
experimentation, but a typically good value is ``wait=1000``.

The function :func:`~graph_tool.inference.mcmc_equilibrate` accepts a
``callback`` argument specifying an optional function to be invoked
after each call to
:meth:`~graph_tool.inference.BlockState.mcmc_sweep`. This function
should accept a single parameter which will contain the actual
:class:`~graph_tool.inference.BlockState` instance. We will use this in
the example below to collect the posterior vertex marginals, i.e. the
posterior probability that a node belongs to a given group:

.. testcode:: model-averaging

   # We will first equilibrate the Markov chain
   gt.mcmc_equilibrate(state, wait=1000, mcmc_args=dict(niter=10))

   pv = None 

   def collect_marginals(s):
      global pv
      pv = s.collect_vertex_marginals(pv)

   # Now we collect the marginals for exactly 100,000 sweeps
   gt.mcmc_equilibrate(state, force_niter=10000, mcmc_args=dict(niter=10),
                       callback=collect_marginals)

   # Now the node marginals are stored in property map pv. We can
   # visualize them as pie charts on the nodes:
   state.draw(pos=g.vp.pos, vertex_shape="pie", vertex_pie_fractions=pv,
              edge_gradient=None, output="lesmis-sbm-marginals.svg")

.. figure:: lesmis-sbm-marginals.*
   :align: center
   :width: 450px

   Marginal probabilities of group memberships of the network of
   characters in the novel Les Misérables, according to the
   degree-corrected SBM. The `pie fractions
   <https://en.wikipedia.org/wiki/Pie_chart>`_ on the nodes correspond
   to the probability of being in the group associated with the
   respective color.

We can also obtain a marginal probability of the number of groups
itself, as follows.

.. testcode:: model-averaging

   h = np.zeros(g.num_vertices() + 1)

   def collect_num_groups(s):
       B = s.get_nonempty_B()
       h[B] += 1

   # Now we collect the marginal distribution for exactly 100,000 sweeps
   gt.mcmc_equilibrate(state, force_niter=10000, mcmc_args=dict(niter=10),
                       callback=collect_num_groups)

.. testcode:: model-averaging
   :hide:

   figure()
   Bs = np.arange(len(h))
   idx = h > 0
   bar(Bs[idx] - .5, h[idx] / h.sum(), width=1, color="#ccb974")
   gca().set_xticks([6,7,8,9])
   xlabel("$B$")
   ylabel("$P(B|G)$")
   savefig("lesmis-B-posterior.svg")

.. figure:: lesmis-B-posterior.*
   :align: center

   Marginal posterior likelihood of the number of nonempty groups for the
   network of characters in the novel Les Misérables, according to the
   degree-corrected SBM.


Hierarchical partitions
+++++++++++++++++++++++

We can also perform model averaging using the nested SBM, which will
give us a distribution over hierarchies. The whole procedure is fairly
analogous, but now we make use of
:class:`~graph_tool.inference.NestedBlockState` instances.

.. note::

    When using :class:`~graph_tool.inference.NestedBlockState` instances
    to perform model averaging, they need to be constructed with the
    option ``sampling=True``.

Here we perform the sampling of hierarchical partitions using the same
network as above.

.. testsetup:: nested-model-averaging

   import os
   try:
       os.chdir("demos/inference")
   except FileNotFoundError:
       pass

.. testcode:: nested-model-averaging

   g = gt.collection.data["lesmis"]

   state = gt.minimize_nested_blockmodel_dl(g) # Initialize the Markov
                                               # chain from the "ground
                                               # state"

   # Before doing model averaging, we need to create a NestedBlockState
   # by passing sampling = True.

   # We also want to increase the maximum hierarchy depth to L = 10

   # We can do both of the above by copying.

   bs = state.get_bs()                     # Get hierarchical partition.
   bs += [np.zeros(1)] * (10 - len(bs))    # Augment it to L = 10 with
                                           # single-group levels.

   state = state.copy(bs=bs, sampling=True)

   # Now we run 1000 sweeps of the MCMC

   dS, nmoves = state.mcmc_sweep(niter=1000)

   print("Change in description length:", dS)
   print("Number of accepted vertex moves:", nmoves)

.. testoutput:: nested-model-averaging

   Change in description length: 6.368298...
   Number of accepted vertex moves: 3765

Similarly to the non-nested case, we can use
:func:`~graph_tool.inference.mcmc_equilibrate` to do most of the boring
work, and we can now obtain vertex marginals on all hierarchical levels:


.. testcode:: nested-model-averaging

   # We will first equilibrate the Markov chain
   gt.mcmc_equilibrate(state, wait=1000, mcmc_args=dict(niter=10))

   pv = [None] * len(state.get_levels())

   def collect_marginals(s):
      global pv
      pv = [sl.collect_vertex_marginals(pv[l]) for l, sl in enumerate(s.get_levels())]

   # Now we collect the marginals for exactly 100,000 sweeps
   gt.mcmc_equilibrate(state, force_niter=10000, mcmc_args=dict(niter=10),
                       callback=collect_marginals)

   # Now the node marginals for all levels are stored in property map
   # list pv. We can visualize the first level as pie charts on the nodes:
   state_0 = state.get_levels()[0]
   state_0.draw(pos=g.vp.pos, vertex_shape="pie", vertex_pie_fractions=pv[0],
                edge_gradient=None, output="lesmis-nested-sbm-marginals.svg")

.. figure:: lesmis-nested-sbm-marginals.*
   :align: center
   :width: 450px

   Marginal probabilities of group memberships of the network of
   characters in the novel Les Misérables, according to the nested
   degree-corrected SBM. The `pie fractions
   <https://en.wikipedia.org/wiki/Pie_chart>`_ on the nodes correspond
   to the probability of being in the group associated with the
   respective color.

We can also obtain a marginal probability of the number of groups
itself, as follows.

.. testcode:: nested-model-averaging

   h = [np.zeros(g.num_vertices() + 1) for s in state.get_levels()]

   def collect_num_groups(s):
       for l, sl in enumerate(s.get_levels()):
          B = sl.get_nonempty_B()
          h[l][B] += 1

   # Now we collect the marginal distribution for exactly 100,000 sweeps
   gt.mcmc_equilibrate(state, force_niter=10000, mcmc_args=dict(niter=10),
                       callback=collect_num_groups)

.. testcode:: nested-model-averaging
   :hide:

   figure()
   f, ax = plt.subplots(2, 5, figsize=(10, 3))
   for l, h_ in enumerate(h):
       Bs = np.arange(len(h_))
       idx = h_ > 0
       i = l // 5
       j = l % 5
       ax[i,j].bar(Bs[idx] - .5, h_[idx] / h_.sum(), width=1, color="#ccb974")
       ax[i,j].set_xticks(Bs[idx])
       ax[i,j].set_xlabel("$B_{%d}$" % l)
       ax[i,j].set_ylabel("$P(B_{%d}|G)$" % l)
       locator = MaxNLocator(prune='both', nbins=5)
       ax[i,j].yaxis.set_major_locator(locator)
   tight_layout()
   savefig("lesmis-nested-B-posterior.svg")

.. figure:: lesmis-nested-B-posterior.*
   :align: center

   Marginal posterior likelihood of the number of nonempty groups
   :math:`B_l` at each hierarchy level :math:`l` for the network of
   characters in the novel Les Misérables, according to the nested
   degree-corrected SBM.

Below we obtain some hierarchical partitions sampled from the posterior
distribution.

.. testcode:: nested-model-averaging

   for i in range(10):
       state.mcmc_sweep(niter=1000)
       state.draw(output="lesmis-partition-sample-%i.svg" % i, empty_branches=False)

.. image:: lesmis-partition-sample-0.svg
   :width: 200px
.. image:: lesmis-partition-sample-1.svg
   :width: 200px
.. image:: lesmis-partition-sample-2.svg
   :width: 200px
.. image:: lesmis-partition-sample-3.svg
   :width: 200px
.. image:: lesmis-partition-sample-4.svg
   :width: 200px
.. image:: lesmis-partition-sample-5.svg
   :width: 200px
.. image:: lesmis-partition-sample-6.svg
   :width: 200px
.. image:: lesmis-partition-sample-7.svg
   :width: 200px
.. image:: lesmis-partition-sample-8.svg
   :width: 200px
.. image:: lesmis-partition-sample-9.svg
   :width: 200px

Model class selection
+++++++++++++++++++++

When averaging over partitions, we may be interested in evaluating which
**model class** provides a better fit of the data, considering all possible
parameter choices. This is done by evaluating the model evidence

.. math::

   P(G) = \sum_{\theta,\boldsymbol b}P(G,\theta, \boldsymbol b) =  \sum_{\boldsymbol b}P(G,\boldsymbol b).

This quantity is analogous to a `partition function
<https://en.wikipedia.org/wiki/Partition_function_(statistical_mechanics)>`_
in statistical physics, which we can write more conveniently as a
negative `free energy
<https://en.wikipedia.org/wiki/Thermodynamic_free_energy>`_ by taking
its logarithm

.. math::
   :label: free-energy

   \ln P(G) = \underbrace{\sum_{\boldsymbol b}q(\boldsymbol b)\ln P(G,\boldsymbol b)}_{-\left<\Sigma\right>}\;
              \underbrace{- \sum_{\boldsymbol b}q(\boldsymbol b)\ln q(\boldsymbol b)}_{\mathcal{S}}

where

.. math::

   q(\boldsymbol b) = \frac{P(G,\boldsymbol b)}{\sum_{\boldsymbol b'}P(G,\boldsymbol b')}

is the posterior likelihood of partition :math:`\boldsymbol b`. The
first term of Eq. :eq:`free-energy` (the "negative energy") is minus the
average description length :math:`\left<\Sigma\right>`, weighted
according to the posterior distribution. The second term
:math:`\mathcal{S}` is the `entropy
<https://en.wikipedia.org/wiki/Entropy_(information_theory)>`_ of the
posterior distribution, and measures, in a sense, the "quality of fit"
of the model: If the posterior is very "peaked", i.e. dominated by a
single partition with a very large likelihood, the entropy will tend to
zero. However, if there are many partitions with similar likelihoods ---
meaning that there is no single partition that describes the
network uniquely well --- it will take a large value instead.

Since the MCMC algorithm samples partitions from the distribution
:math:`q(\boldsymbol b)`, it can be used to compute
:math:`\left<\Sigma\right>` easily, simply by averaging the description
length values encountered by sampling from the posterior distribution
many times.
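
Since this is how :math:`\left<\Sigma\right>` would be obtained in
practice, here is a minimal sketch following the same ``callback``
pattern used in the previous sections (assuming an equilibrated
``state`` object as before; the number of sweeps is arbitrary):

.. code-block:: python

   Ss = []

   def collect_dl(s):
       Ss.append(s.entropy())

   gt.mcmc_equilibrate(state, force_niter=10000, mcmc_args=dict(niter=10),
                       callback=collect_dl)

   S_mean = np.mean(Ss)   # estimate of <Σ> under the posterior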

The computation of the posterior entropy :math:`\mathcal{S}`, however,
is significantly more difficult, since it involves measuring the precise
value of :math:`q(\boldsymbol b)`. A direct "brute force" computation of
:math:`\mathcal{S}` is implemented via
:meth:`~graph_tool.inference.BlockState.collect_partition_histogram` and
:func:`~graph_tool.inference.microstate_entropy`; however, this is only
feasible for very small networks. For larger networks, we are forced to
perform approximations. The simplest is a "mean field" one, where we
assume the posterior factorizes as

.. math::

   q(\boldsymbol b) \approx \prod_i{q_i(b_i)}

where

.. math::

   q_i(r) = P(b_i = r | G)

is the marginal group membership distribution of node :math:`i`. This
yields an entropy value given by

.. math::

   \mathcal{S} \approx -\sum_i\sum_rq_i(r)\ln q_i(r).

This approximation should be seen as an upper bound, since any existing
correlations between the nodes (which are ignored here) will yield
smaller entropy values.
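
A minimal sketch of this mean-field computation, assuming the vertex
marginals ``pv`` have been collected as in the model-averaging sections
above (the helper function below is illustrative, not part of
graph-tool's API):

.. code-block:: python

   def mean_field_entropy(g, pv):
       """Mean-field approximation of the posterior entropy, computed
       from the collected (unnormalized) vertex marginals pv."""
       S = 0
       for v in g.vertices():
           q = np.array(pv[v], dtype="float")
           q = q[q > 0] / q.sum()       # normalized marginal of node v
           S -= (q * np.log(q)).sum()
       return S

An evidence estimate would then follow from Eq. :eq:`free-energy` as
``mean_field_entropy(g, pv) - S_mean``, bearing in mind that the
mean-field entropy is only an upper bound.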

A more elaborate assumption is called the `Bethe approximation`
[mezard-information-2009]_, and takes into account the correlation
between adjacent nodes in the network,

.. math::

   q(\boldsymbol b) \approx \prod_{i<j}q_{ij}(b_i,b_j)^{A_{ij}}\prod_iq_i(b_i)^{1-k_i}

where :math:`A_{ij}` is the `adjacency matrix
<https://en.wikipedia.org/wiki/Adjacency_matrix>`_, :math:`k_i` is the
degree of node :math:`i`, and

.. math::

   q_{ij}(r, s) = P(b_i = r, b_j = s|G)