.. _inference-howto:

Inferring network structure
===========================

``graph-tool`` includes algorithms to identify the large-scale structure
of networks in the :mod:`~graph_tool.inference` submodule. Here we
explain the basic functionality with self-contained examples.

Background: Nonparametric statistical inference
-----------------------------------------------

A common task when analyzing networks is to characterize their
structures in simple terms, often by dividing the nodes into modules or
"communities". A principled approach to perform this task is to
formulate `generative models
<https://en.wikipedia.org/wiki/Generative_model>`_ that include the idea
of "modules" in their descriptions, which then can be detected by
`inferring <https://en.wikipedia.org/wiki/Statistical_inference>`_ the
model parameters from data. More precisely, given the partition
:math:`\boldsymbol b = \{b_i\}` of the network into :math:`B` groups,
where :math:`b_i\in[0,B-1]` is the group membership of node :math:`i`,
we define a model that generates a network :math:`\boldsymbol G` with a
probability

.. math::
   :label: model-likelihood

   P(\boldsymbol G|\boldsymbol\theta, \boldsymbol b)

where :math:`\boldsymbol\theta` are additional model parameters.
Therefore, if we observe a network :math:`\boldsymbol G`, the
probability that it was generated by a given partition
:math:`\boldsymbol b` is obtained via the `Bayesian
<https://en.wikipedia.org/wiki/Bayesian_inference>`_ posterior

.. math::
   :label: model-posterior-sum

   P(\boldsymbol b | \boldsymbol G) = \frac{\sum_{\boldsymbol\theta}P(\boldsymbol G|\boldsymbol\theta, \boldsymbol b)P(\boldsymbol\theta, \boldsymbol b)}{P(\boldsymbol G)}

where :math:`P(\boldsymbol\theta, \boldsymbol b)` is the `prior
probability` of the model parameters, and

.. math::
   :label: model-evidence

   P(\boldsymbol G) = \sum_{\boldsymbol\theta,\boldsymbol b}P(\boldsymbol G|\boldsymbol\theta, \boldsymbol b)P(\boldsymbol\theta, \boldsymbol b)

is called the `model evidence`. The particular types of model considered
here have "hard constraints", i.e. there is only one choice for the
remaining parameters :math:`\boldsymbol\theta` that is compatible with
the generated network, so that Eq. :eq:`model-posterior-sum` simplifies
to

.. math::
   :label: model-posterior

   P(\boldsymbol b | \boldsymbol G) = \frac{P(\boldsymbol G|\boldsymbol\theta, \boldsymbol b)P(\boldsymbol\theta, \boldsymbol b)}{P(\boldsymbol G)}

with :math:`\boldsymbol\theta` above being the only choice compatible
with :math:`\boldsymbol G` and :math:`\boldsymbol b`. The inference
procedures considered here will consist in either finding a network
partition that maximizes Eq. :eq:`model-posterior`, or sampling
different partitions according to its posterior probability. As we will
show below, this approach also enables the comparison of `different`
models according to statistical evidence (a.k.a. `model selection`).

Minimum description length (MDL)
++++++++++++++++++++++++++++++++

We note that Eq. :eq:`model-posterior` can be written as

.. math::

   P(\boldsymbol b | \boldsymbol G) = \frac{\exp(-\Sigma)}{P(\boldsymbol G)}

where

.. math::
   :label: model-dl

   \Sigma = -\ln P(\boldsymbol G|\boldsymbol\theta, \boldsymbol b) - \ln P(\boldsymbol\theta, \boldsymbol b)

is called the **description length** of the network :math:`\boldsymbol
G`. It measures the amount of `information
<https://en.wikipedia.org/wiki/Information_theory>`_ required to
describe the data, if we `encode
<https://en.wikipedia.org/wiki/Entropy_encoding>`_ it using the
particular parametrization of the generative model given by
:math:`\boldsymbol\theta` and :math:`\boldsymbol b`, as well as the
parameters themselves. Therefore, maximizing the posterior probability
of Eq. :eq:`model-posterior` is fully equivalent to the so-called
`minimum description length
<https://en.wikipedia.org/wiki/Minimum_description_length>`_ method.
This approach corresponds to an implementation of `Occam's razor
<https://en.wikipedia.org/wiki/Occam%27s_razor>`_, where the `simplest`
model is selected, among all possibilities with the same explanatory
power. The selection is based on the statistical evidence available, and
therefore will not `overfit
<https://en.wikipedia.org/wiki/Overfitting>`_, i.e. mistake stochastic
fluctuations for actual structure.
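Since the description length of a given fit can be queried directly in
``graph-tool``, posterior odds between two candidate partitions follow
immediately from Eq. :eq:`model-dl`. The following is a minimal sketch
of this idea, using the :meth:`~graph_tool.inference.BlockState.entropy`
method described further below (the choice of network and of the trivial
alternative partition are ours, for illustration only):

.. code-block:: python

    import graph_tool.all as gt

    g = gt.collection.data["football"]

    b1 = gt.minimize_blockmodel_dl(g).get_blocks()  # an optimized partition
    b2 = g.new_vp("int", val=0)                     # trivial partition: one single group

    S1 = gt.BlockState(g, b=b1).entropy()           # description length of each fit
    S2 = gt.BlockState(g, b=b2).entropy()

    # By Eq. (model-dl), P(b1|G)/P(b2|G) = exp(S2 - S1)
    print("ln [P(b1|G)/P(b2|G)] =", S2 - S1)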
The stochastic block model (SBM)
--------------------------------

The `stochastic block model
<https://en.wikipedia.org/wiki/Stochastic_block_model>`_ is arguably the
simplest generative process based on the notion of groups of nodes
[holland-stochastic-1983]_. The `microcanonical
<https://en.wikipedia.org/wiki/Microcanonical_ensemble>`_ formulation
[peixoto-nonparametric-2016]_ of the basic or "traditional" version
takes as parameters the partition of the nodes into groups
:math:`\boldsymbol b` and a :math:`B\times B` matrix of edge counts
:math:`\boldsymbol e`, where :math:`e_{rs}` is the number of edges
between groups :math:`r` and :math:`s`. Given these constraints, the
edges are then placed randomly. Hence, nodes that belong to the same
group possess the same probability of being connected with other nodes
of the network.

An example of a possible parametrization is given in the following
figure.

.. testcode:: sbm-example
   :hide:

   import os
   try:
       os.chdir("demos/inference")
   except FileNotFoundError:
       pass
   g = gt.load_graph("blockmodel-example.gt.gz")
   gt.graph_draw(g, pos=g.vp.pos, vertex_size=10, vertex_fill_color=g.vp.bo,
                 vertex_color="#333333",
                 edge_gradient=g.new_ep("vector<double>", val=[0]),
                 output="sbm-example.svg")

   ers = g.gp.w

   from pylab import *
   figure()
   matshow(log(ers))
   xlabel("Group $r$")
   ylabel("Group $s$")
   gca().xaxis.set_label_position("top")
   savefig("sbm-example-ers.svg")

.. table::
   :class: figure

   +----------------------------------+------------------------------+
   |.. figure:: sbm-example-ers.svg   |.. figure:: sbm-example.svg   |
   |   :width: 300px                  |   :width: 300px              |
   |   :align: center                 |   :align: center             |
   |                                  |                              |
   |   Matrix of edge counts          |   Generated network.         |
   |   :math:`\boldsymbol e` between  |                              |
   |   groups.                        |                              |
   +----------------------------------+------------------------------+

.. note::

   We emphasize that no constraints are imposed on what `kind` of
   modular structure is allowed. Hence, we can detect the putatively
   typical pattern of `"community structure"
   <https://en.wikipedia.org/wiki/Community_structure>`_, i.e. when
   nodes are connected mostly to other nodes of the same group, if it
   happens to be the most likely network description, but we can also
   detect a large multiplicity of other patterns, such as
   `bipartiteness <https://en.wikipedia.org/wiki/Bipartite_graph>`_,
   core-periphery, and many others, all under the same inference
   framework.

Although quite general, the traditional model assumes that the edges are
placed randomly inside each group, and as such the nodes that belong to
the same group have very similar degrees. As it turns out, this is often
a poor model for many networks, which possess highly heterogeneous
degree distributions. A better model for such networks is called the
`degree-corrected` stochastic block model [karrer-stochastic-2011]_, and
it is defined just like the traditional model, with the addition of the
degree sequence :math:`\boldsymbol k = \{k_i\}` of the graph as an
additional set of parameters (assuming again a microcanonical
formulation [peixoto-nonparametric-2016]_).
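The generative nature of the model can be illustrated directly, by
sampling a network from a chosen parametrization. The following is a
minimal sketch, assuming a ``graph-tool`` version recent enough to
provide :func:`~graph_tool.generation.generate_sbm` (the particular
partition and edge counts below are arbitrary choices of ours):

.. code-block:: python

    import graph_tool.all as gt
    import numpy as np

    N, B = 300, 3
    b = np.random.randint(0, B, N)     # random partition into B groups

    e = np.full((B, B), 2.)            # few edges between different groups...
    np.fill_diagonal(e, 60.)           # ...many edges inside each group

    g = gt.generate_sbm(b, e)          # sample a graph with these constraints
    print(g.num_vertices(), g.num_edges())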
The nested stochastic block model
+++++++++++++++++++++++++++++++++

The regular SBM has a drawback when applied to very large networks.
Namely, it cannot be used to find relatively small groups, since the
maximum number of groups that can be found scales as
:math:`B_{\text{max}}\sim\sqrt{N}`, where :math:`N` is the number of
nodes in the network, if Bayesian inference is performed
[peixoto-parsimonious-2013]_. In order to circumvent this, we need to
replace the noninformative priors used by a hierarchy of priors and
hyperpriors, which amounts to a `nested SBM`, where the groups
themselves are clustered into groups, and the matrix
:math:`\boldsymbol e` of edge counts is itself generated from another
SBM, and so on recursively [peixoto-hierarchical-2014]_.

.. figure:: nested-diagram.*
   :width: 400px
   :align: center

   Example of a nested SBM with three levels.

In addition to being able to find small groups in large networks, this
model also provides a multilevel hierarchical description of the
network, which describes its structure at multiple scales.

Inferring the best partition
----------------------------

The simplest and most efficient approach is to find the best partition
of the network by maximizing Eq. :eq:`model-posterior` according to some
version of the model. This is obtained via the functions
:func:`~graph_tool.inference.minimize_blockmodel_dl` or
:func:`~graph_tool.inference.minimize_nested_blockmodel_dl`, which
employ an agglomerative multilevel `Markov chain Monte Carlo (MCMC)
<https://en.wikipedia.org/wiki/Markov_chain_Monte_Carlo>`_ algorithm
[peixoto-efficient-2014]_.

We focus first on the non-nested model, and we illustrate its use with a
network of American football teams, which we load from the
:mod:`~graph_tool.collection` module:

.. testsetup:: football

   import os
   try:
       os.chdir("demos/inference")
   except FileNotFoundError:
       pass
   gt.seed_rng(3)

.. testcode:: football

   g = gt.collection.data["football"]
   print(g)

which yields

.. testoutput:: football

   <Graph object, undirected, with 115 vertices and 613 edges at 0x...>

We then fit the `traditional` model by calling

.. testcode:: football

   state = gt.minimize_blockmodel_dl(g, deg_corr=False)

This returns a :class:`~graph_tool.inference.BlockState` object that
includes the inference results.

.. note::

   The inference algorithm used is stochastic by nature, and may return
   a slightly different answer each time it is run. This may be due to
   the fact that there are alternative partitions with similar
   probabilities, or that the optimum is difficult to find. Note that
   the inference problem here is, in general, `NP-hard
   <https://en.wikipedia.org/wiki/NP-hardness>`_, hence there is no
   efficient algorithm that is guaranteed to always find the best
   answer.

   Because of this, typically one would call the algorithm many times,
   and select the partition with the largest posterior probability of
   Eq. :eq:`model-posterior`, or equivalently, the minimum description
   length of Eq. :eq:`model-dl`. The description length of a fit can be
   obtained with the :meth:`~graph_tool.inference.BlockState.entropy`
   method. See also :ref:`sec_model_selection` below.

We may draw the obtained partition via the
:meth:`~graph_tool.inference.BlockState.draw` method, which functions as
a convenience wrapper to the :func:`~graph_tool.draw.graph_draw`
function

.. testcode:: football

   state.draw(pos=g.vp.pos, output="football-sbm-fit.svg")

which yields the following image.

.. figure:: football-sbm-fit.*
   :align: center
   :width: 400px

   Stochastic block model inference of a network of American college
   football teams. The colors correspond to the inferred group
   membership of the nodes.
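As suggested in the note above, since the algorithm is stochastic, in
practice one would repeat the minimization several times and keep the
best result. A minimal sketch of this practice (the number of attempts
below is an arbitrary choice):

.. code-block:: python

    # Run the agglomerative algorithm several times, and keep the fit
    # with the smallest description length, i.e. the largest posterior
    # probability.
    states = [gt.minimize_blockmodel_dl(g, deg_corr=False) for i in range(10)]
    state = min(states, key=lambda s: s.entropy())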
We can obtain the group memberships as a
:class:`~graph_tool.PropertyMap` on the vertices via the
:meth:`~graph_tool.inference.BlockState.get_blocks` method:

.. testcode:: football

   b = state.get_blocks()
   r = b[10]   # group membership of vertex 10
   print(r)

which yields:

.. testoutput:: football

   3

We may also access the matrix of edge counts between groups via
:meth:`~graph_tool.inference.BlockState.get_matrix`

.. testcode:: football

   e = state.get_matrix()

   matshow(e.todense())
   savefig("football-edge-counts.svg")

.. figure:: football-edge-counts.*
   :align: center

   Matrix of edge counts between groups.

We may obtain the same matrix of edge counts as a graph, which has
internal edge and vertex property maps with the edge and vertex counts,
respectively:

.. testcode:: football

   bg = state.get_bg()
   ers = bg.ep.count    # edge counts
   nr = bg.vp.count     # node counts

Hierarchical partitions
+++++++++++++++++++++++

The inference of the nested family of SBMs is done in a similar manner,
but we must use instead the
:func:`~graph_tool.inference.minimize_nested_blockmodel_dl` function. We
illustrate its use with the neural network of the `C. elegans
<https://en.wikipedia.org/wiki/Caenorhabditis_elegans>`_ worm:

.. testcode:: celegans

   g = gt.collection.data["celegansneural"]
   print(g)

which has 297 vertices and 2359 edges, as the output shows:

.. testoutput:: celegans

   <Graph object, directed, with 297 vertices and 2359 edges at 0x...>

A hierarchical fit of the degree-corrected model is performed as
follows.

.. testcode:: celegans

   state = gt.minimize_nested_blockmodel_dl(g, deg_corr=True)

The object returned is an instance of the
:class:`~graph_tool.inference.NestedBlockState` class, which
encapsulates the results. We can again draw the resulting hierarchical
clustering using the
:meth:`~graph_tool.inference.NestedBlockState.draw` method:

.. testcode:: celegans

   state.draw(output="celegans-hsbm-fit.svg")

.. figure:: celegans-hsbm-fit.*
   :align: center

   Most likely hierarchical partition of the neural network of the
   `C. elegans` worm, according to the nested degree-corrected SBM.

.. note::

   If the ``output`` parameter to
   :meth:`~graph_tool.inference.NestedBlockState.draw` is omitted, an
   interactive visualization is performed, where the user can re-order
   the hierarchy nodes using the mouse and pressing the ``r`` key.

A summary of the inferred hierarchy can be obtained with the
:meth:`~graph_tool.inference.NestedBlockState.print_summary` method,
which shows the number of nodes and groups in all levels:

.. testcode:: celegans

   state.print_summary()

.. testoutput:: celegans

   l: 0, N: 297, B: 13
   l: 1, N: 13, B: 5
   l: 2, N: 5, B: 2
   l: 3, N: 2, B: 1

The hierarchical levels themselves are represented by individual
:class:`~graph_tool.inference.BlockState` instances obtained via the
:meth:`~graph_tool.inference.NestedBlockState.get_levels` method:

.. testcode:: celegans

   levels = state.get_levels()
   for s in levels:
       print(s)

.. testoutput:: celegans

   <BlockState object with 13 blocks (13 nonempty), degree-corrected, for graph <Graph object, directed, with 297 vertices and 2359 edges at 0x...>, at 0x...>
   <BlockState object with 5 blocks (5 nonempty), for graph <Graph object, directed, with 13 vertices and ... edges at 0x...>, at 0x...>
   <BlockState object with 2 blocks (2 nonempty), for graph <Graph object, directed, with 5 vertices and ... edges at 0x...>, at 0x...>
   <BlockState object with 1 blocks (1 nonempty), for graph <Graph object, directed, with 2 vertices and ... edges at 0x...>, at 0x...>

This means that we can inspect the hierarchical partition just as
before:

.. testcode:: celegans

   r = levels[0].get_blocks()[46]    # group membership of node 46 in level 0
   print(r)
   r = levels[1].get_blocks()[r]     # group membership of node 46 in level 1
   print(r)
   r = levels[2].get_blocks()[r]     # group membership of node 46 in level 2
   print(r)

.. testoutput:: celegans

   2
   1
   0
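As the example above shows, the group labels at a higher level refer to
the groups of the level below, so obtaining a coarse-grained partition
of the `original` nodes at a given level requires composing the
memberships. A minimal sketch of this projection (the helper below is
our own construction, only illustrating the composition; graph-tool may
also provide this directly via a ``project_level`` method):

.. code-block:: python

    def project_partition(state, l):
        """Compose the group memberships of levels 0 to l, yielding the
        partition of the original nodes at hierarchy level l."""
        levels = state.get_levels()
        b = levels[0].get_blocks().copy()
        for li in range(1, l + 1):
            bl = levels[li].get_blocks()
            for v in b.get_graph().vertices():
                b[v] = bl[b[v]]
        return b

    b1 = project_partition(state, 1)   # level-1 group of every node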
.. _sec_model_selection:

Model selection
+++++++++++++++

As mentioned above, one can select the best model according to the
choice that yields the smallest description length. For instance, in the
case of the `C. elegans` network we have

.. testcode:: model-selection

   g = gt.collection.data["celegansneural"]

   state_ndc = gt.minimize_nested_blockmodel_dl(g, deg_corr=False)
   state_dc = gt.minimize_nested_blockmodel_dl(g, deg_corr=True)

   print("Non-degree-corrected DL:\t", state_ndc.entropy())
   print("Degree-corrected DL:\t", state_dc.entropy())

.. testoutput:: model-selection
   :options: +NORMALIZE_WHITESPACE

   Non-degree-corrected DL:  8507.97432099
   Degree-corrected DL:      8228.11609772

Since it yields the smallest description length, the degree-corrected
fit should be preferred. The statistical significance of the choice can
be assessed by inspecting the posterior odds ratio
[peixoto-nonparametric-2016]_

.. math::

   \Lambda &= \frac{P(\boldsymbol b, \mathcal{H}_\text{NDC} | \boldsymbol G)}{P(\boldsymbol b, \mathcal{H}_\text{DC} | \boldsymbol G)} \\
           &= \frac{P(\boldsymbol G, \boldsymbol b | \mathcal{H}_\text{NDC})}{P(\boldsymbol G, \boldsymbol b | \mathcal{H}_\text{DC})}\times\frac{P(\mathcal{H}_\text{NDC})}{P(\mathcal{H}_\text{DC})} \\
           &= \exp(-\Delta\Sigma)

where :math:`\mathcal{H}_\text{NDC}` and :math:`\mathcal{H}_\text{DC}`
correspond to the non-degree-corrected and degree-corrected model
hypotheses (assumed to be equally likely `a priori`), respectively, and
:math:`\Delta\Sigma` is the difference of the description length of both
fits. In our particular case, we have

.. testcode:: model-selection

   print("ln Λ: ", state_dc.entropy() - state_ndc.entropy())

.. testoutput:: model-selection
   :options: +NORMALIZE_WHITESPACE

   ln Λ:  -279.858223272

The precise threshold that should be used to decide when to `reject a
hypothesis <https://en.wikipedia.org/wiki/Hypothesis_testing>`_ is
subjective and context-dependent, but the value above implies that the
particular degree-corrected fit is around :math:`e^{280} \sim 10^{121}`
times more likely than the non-degree-corrected one, and hence it can be
safely concluded that it provides a substantially better fit.

Although it is often true that the degree-corrected model provides a
better fit for many empirical networks, there are also exceptions. For
example, for the American football network above, we have:

.. testcode:: model-selection

   g = gt.collection.data["football"]

   state_ndc = gt.minimize_nested_blockmodel_dl(g, deg_corr=False)
   state_dc = gt.minimize_nested_blockmodel_dl(g, deg_corr=True)

   print("Non-degree-corrected DL:\t", state_ndc.entropy())
   print("Degree-corrected DL:\t", state_dc.entropy())
   print("ln Λ:\t\t\t", state_ndc.entropy() - state_dc.entropy())

.. testoutput:: model-selection
   :options: +NORMALIZE_WHITESPACE

   Non-degree-corrected DL:  1751.86962605
   Degree-corrected DL:      1787.64676873
   ln Λ:                     -35.7771426724

Hence, with a posterior odds ratio of :math:`\Lambda \sim e^{-36} \sim
10^{-16}` in favor of the non-degree-corrected model, it seems like the
degree-corrected variant is an unnecessarily complex description for
this network.
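Since this comparison recurs often, it can be convenient to wrap it in a
small helper. A minimal sketch (the function below is our own
convenience, not part of the library):

.. code-block:: python

    def compare_deg_corr(g, nested=True):
        """Fit both variants and return ln Λ = S_dc - S_ndc, which is
        negative when the degree-corrected model is favored."""
        minimize = (gt.minimize_nested_blockmodel_dl if nested
                    else gt.minimize_blockmodel_dl)
        S_ndc = minimize(g, deg_corr=False).entropy()
        S_dc = minimize(g, deg_corr=True).entropy()
        return S_dc - S_ndc

    print("ln Λ:", compare_deg_corr(gt.collection.data["celegansneural"]))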
Averaging over models
---------------------

When analyzing empirical networks, one should be open to the possibility
that there will be more than one fit of the SBM with similar posterior
probabilities. In such situations, one should `sample` partitions from
the posterior distribution, instead of simply finding its maximum. One
can then compute quantities that are averaged over the different model
fits, weighted according to their posterior probabilities.

Full support for model averaging is implemented in ``graph-tool`` via an
efficient `Markov chain Monte Carlo (MCMC)
<https://en.wikipedia.org/wiki/Markov_chain_Monte_Carlo>`_ algorithm
[peixoto-efficient-2014]_. It works by attempting to move nodes into
different groups with specific probabilities, and `accepting or
rejecting
<https://en.wikipedia.org/wiki/Metropolis%E2%80%93Hastings_algorithm>`_
such moves such that, after a sufficiently long time, the partitions
will be observed with the desired posterior probability. The algorithm
is designed such that its run-time is independent of the number of
groups being used in the model, and hence is suitable for use on very
large networks.

In order to perform such moves, one needs again to operate with
:class:`~graph_tool.inference.BlockState` or
:class:`~graph_tool.inference.NestedBlockState` instances, and call
their :meth:`~graph_tool.inference.BlockState.mcmc_sweep` methods. For
example, the following will perform 1000 sweeps of the algorithm with
the network of characters in the novel Les Misérables, starting from a
random partition into 20 groups

.. testcode:: model-averaging

   g = gt.collection.data["lesmis"]

   state = gt.BlockState(g, B=20)   # This automatically initializes the
                                    # state with a random partition into
                                    # B=20 nonempty groups; the user could
                                    # also pass an arbitrary initial
                                    # partition using the 'b' parameter.

   # If we work with the above state object, we will be restricted to
   # partitions into at most B=20 groups. But since we want to consider
   # an arbitrary number of groups in the range [1, N], we transform it
   # into a state with B=N groups (where N-20 will be empty).

   state = state.copy(B=g.num_vertices())

   # Now we run 1,000 sweeps of the MCMC

   dS, nmoves = state.mcmc_sweep(niter=1000)

   print("Change in description length:", dS)
   print("Number of accepted vertex moves:", nmoves)

.. testoutput:: model-averaging

   Change in description length: -355.3963421220926
   Number of accepted vertex moves: 4561

.. note::

   Starting from a random partition is rarely the best option, since it
   may take a long time for it to equilibrate. It was done above simply
   as an illustration on how to initialize
   :class:`~graph_tool.inference.BlockState` by hand. Instead, a much
   better option in practice is to start from the "ground state"
   obtained with :func:`~graph_tool.inference.minimize_blockmodel_dl`,
   e.g.

   .. testcode:: model-averaging

      state = gt.minimize_blockmodel_dl(g)
      state = state.copy(B=g.num_vertices())

      dS, nmoves = state.mcmc_sweep(niter=1000)

      print("Change in description length:", dS)
      print("Number of accepted vertex moves:", nmoves)

   .. testoutput:: model-averaging

      Change in description length: 7.3423409719804855
      Number of accepted vertex moves: 3939
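The change in description length reported by each call to
:meth:`~graph_tool.inference.BlockState.mcmc_sweep` already allows a
crude, manual monitoring of equilibration. A minimal sketch (the chunk
sizes below are arbitrary choices of ours):

.. code-block:: python

    # Run the chain in chunks, and watch the description length: when it
    # merely fluctuates around a stable value, the chain has likely
    # equilibrated.
    for i in range(10):
        dS, nmoves = state.mcmc_sweep(niter=100)
        print("chunk %d: dS = %g, S = %g" % (i, dS, state.entropy()))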
Although the above is sufficient to implement model averaging, there is
a convenience function called
:func:`~graph_tool.inference.mcmc_equilibrate` that is intended to
simplify the detection of equilibration, by keeping track of the maximum
and minimum values of description length encountered and how many sweeps
have been made without a "record breaking" event. For example,

.. testcode:: model-averaging

   # We will accept equilibration if 10 sweeps are completed without a
   # record breaking event, 2 consecutive times.

   gt.mcmc_equilibrate(state, wait=10, nbreaks=2, mcmc_args=dict(niter=10),
                       verbose=True)

will output:

.. testoutput:: model-averaging
   :options: +NORMALIZE_WHITESPACE

   niter: 1  count: 0  breaks: 0  min_S: 709.95524  max_S: 726.36140  S: 726.36140  ΔS: 16.4062  moves: 57
   niter: 2  count: 1  breaks: 0  min_S: 709.95524  max_S: 726.36140  S: 721.68682  ΔS: -4.67459  moves: 67
   niter: 3  count: 0  breaks: 0  min_S: 709.37313  max_S: 726.36140  S: 709.37313  ΔS: -12.3137  moves: 47
   niter: 4  count: 1  breaks: 0  min_S: 709.37313  max_S: 726.36140  S: 711.61100  ΔS: 2.23787  moves: 57
   niter: 5  count: 2  breaks: 0  min_S: 709.37313  max_S: 726.36140  S: 716.08147  ΔS: 4.47047  moves: 28
   niter: 6  count: 3  breaks: 0  min_S: 709.37313  max_S: 726.36140  S: 712.93940  ΔS: -3.14207  moves: 47
   niter: 7  count: 4  breaks: 0  min_S: 709.37313  max_S: 726.36140  S: 712.38780  ΔS: -0.551596  moves: 46
   niter: 8  count: 5  breaks: 0  min_S: 709.37313  max_S: 726.36140  S: 718.00449  ΔS: 5.61668  moves: 40
   niter: 9  count: 0  breaks: 0  min_S: 709.37313  max_S: 731.89940  S: 731.89940  ΔS: 13.8949  moves: 50
   niter: 10  count: 0  breaks: 0  min_S: 707.07048  max_S: 731.89940  S: 707.07048  ΔS: -24.8289  moves: 45
   niter: 11  count: 1  breaks: 0  min_S: 707.07048  max_S: 731.89940  S: 711.91030  ΔS: 4.83982  moves: 31
   niter: 12  count: 2  breaks: 0  min_S: 707.07048  max_S: 731.89940  S: 726.56358  ΔS: 14.6533  moves: 56
   niter: 13  count: 3  breaks: 0  min_S: 707.07048  max_S: 731.89940  S: 731.77165  ΔS: 5.20807  moves: 72
   niter: 14  count: 4  breaks: 0  min_S: 707.07048  max_S: 731.89940  S: 707.08606  ΔS: -24.6856  moves: 57
   niter: 15  count: 0  breaks: 0  min_S: 707.07048  max_S: 735.85102  S: 735.85102  ΔS: 28.7650  moves: 65
   niter: 16  count: 1  breaks: 0  min_S: 707.07048  max_S: 735.85102  S: 707.29116  ΔS: -28.5599  moves: 43
   niter: 17  count: 0  breaks: 0  min_S: 702.18860  max_S: 735.85102  S: 702.18860  ΔS: -5.10256  moves: 39
   niter: 18  count: 1  breaks: 0  min_S: 702.18860  max_S: 735.85102  S: 716.40444  ΔS: 14.2158  moves: 55
   niter: 19  count: 2  breaks: 0  min_S: 702.18860  max_S: 735.85102  S: 703.51896  ΔS: -12.8855  moves: 32
   niter: 20  count: 3  breaks: 0  min_S: 702.18860  max_S: 735.85102  S: 714.30455  ΔS: 10.7856  moves: 34
   niter: 21  count: 4  breaks: 0  min_S: 702.18860  max_S: 735.85102  S: 707.26722  ΔS: -7.03733  moves: 25
   niter: 22  count: 5  breaks: 0  min_S: 702.18860  max_S: 735.85102  S: 730.23976  ΔS: 22.9725  moves: 21
   niter: 23  count: 6  breaks: 0  min_S: 702.18860  max_S: 735.85102  S: 730.56562  ΔS: 0.325858  moves: 59
   niter: 24  count: 0  breaks: 0  min_S: 702.18860  max_S: 738.45136  S: 738.45136  ΔS: 7.88574  moves: 60
   niter: 25  count: 0  breaks: 0  min_S: 702.18860  max_S: 740.29015  S: 740.29015  ΔS: 1.83879  moves: 88
   niter: 26  count: 1  breaks: 0  min_S: 702.18860  max_S: 740.29015  S: 720.86367  ΔS: -19.4265  moves: 68
   niter: 27  count: 2  breaks: 0  min_S: 702.18860  max_S: 740.29015  S: 723.60308  ΔS: 2.73941  moves: 48
   niter: 28  count: 3  breaks: 0  min_S: 702.18860  max_S: 740.29015  S: 732.81310  ΔS: 9.21002  moves: 44
   niter: 29  count: 4  breaks: 0  min_S: 702.18860  max_S: 740.29015  S: 729.62283  ΔS: -3.19028  moves: 62
   niter: 30  count: 5  breaks: 0  min_S: 702.18860  max_S: 740.29015  S: 730.15676  ΔS: 0.533935  moves: 59
   niter: 31  count: 6  breaks: 0  min_S: 702.18860  max_S: 740.29015  S: 728.27350  ΔS: -1.88326  moves: 65
   niter: 32  count: 7  breaks: 0  min_S: 702.18860  max_S: 740.29015  S: 732.19406  ΔS: 3.92056  moves: 57
   niter: 33  count: 8  breaks: 0  min_S: 702.18860  max_S: 740.29015  S: 730.53906  ΔS: -1.65500  moves: 72
   niter: 34  count: 9  breaks: 0  min_S: 702.18860  max_S: 740.29015  S: 725.59638  ΔS: -4.94268  moves: 72
   niter: 35  count: 0  breaks: 1  min_S: 733.07687  max_S: 733.07687  S: 733.07687  ΔS: 7.48049  moves: 54
   niter: 36  count: 0  breaks: 1  min_S: 728.56326  max_S: 733.07687  S: 728.56326  ΔS: -4.51361  moves: 57
   niter: 37  count: 0  breaks: 1  min_S: 728.56326  max_S: 755.55140  S: 755.55140  ΔS: 26.9881  moves: 83
   niter: 38  count: 0  breaks: 1  min_S: 728.56326  max_S: 761.09434  S: 761.09434  ΔS: 5.54294  moves: 96
   niter: 39  count: 0  breaks: 1  min_S: 713.60740  max_S: 761.09434  S: 713.60740  ΔS: -47.4869  moves: 71
   niter: 40  count: 1  breaks: 1  min_S: 713.60740  max_S: 761.09434  S: 713.98904  ΔS: 0.381637  moves: 67
   niter: 41  count: 2  breaks: 1  min_S: 713.60740  max_S: 761.09434  S: 729.22460  ΔS: 15.2356  moves: 68
   niter: 42  count: 3  breaks: 1  min_S: 713.60740  max_S: 761.09434  S: 724.70143  ΔS: -4.52317  moves: 69
   niter: 43  count: 0  breaks: 1  min_S: 703.51896  max_S: 761.09434  S: 703.51896  ΔS: -21.1825  moves: 40
   niter: 44  count: 0  breaks: 1  min_S: 702.85027  max_S: 761.09434  S: 702.85027  ΔS: -0.668696  moves: 33
   niter: 45  count: 1  breaks: 1  min_S: 702.85027  max_S: 761.09434  S: 722.46508  ΔS: 19.6148  moves: 49
   niter: 46  count: 2  breaks: 1  min_S: 702.85027  max_S: 761.09434  S: 714.77930  ΔS: -7.68578  moves: 62
   niter: 47  count: 3  breaks: 1  min_S: 702.85027  max_S: 761.09434  S: 722.04551  ΔS: 7.26621  moves: 55
   niter: 48  count: 4  breaks: 1  min_S: 702.85027  max_S: 761.09434  S: 708.96879  ΔS: -13.0767  moves: 37
   niter: 49  count: 5  breaks: 1  min_S: 702.85027  max_S: 761.09434  S: 714.84009  ΔS: 5.87130  moves: 37
   niter: 50  count: 6  breaks: 1  min_S: 702.85027  max_S: 761.09434  S: 718.28558  ΔS: 3.44549  moves: 55
   niter: 51  count: 7  breaks: 1  min_S: 702.85027  max_S: 761.09434  S: 720.86398  ΔS: 2.57840  moves: 44
   niter: 52  count: 8  breaks: 1  min_S: 702.85027  max_S: 761.09434  S: 710.93672  ΔS: -9.92726  moves: 45
   niter: 53  count: 9  breaks: 1  min_S: 702.85027  max_S: 761.09434  S: 735.06773  ΔS: 24.1310  moves: 28
   niter: 54  count: 10  breaks: 2  min_S: 702.85027  max_S: 761.09434  S: 738.16756  ΔS: 3.09983  moves: 115

Note that the value of ``wait`` above was made purposefully low so that
the output would not be overly long. The most appropriate value requires
experimentation, but a typically good value is ``wait=1000``.

The function :func:`~graph_tool.inference.mcmc_equilibrate` accepts a
``callback`` argument that takes an optional function to be invoked
after each call to :meth:`~graph_tool.inference.BlockState.mcmc_sweep`.
This function should accept a single parameter which will contain the
actual :class:`~graph_tool.inference.BlockState` instance. We will use
this in the example below to collect the posterior vertex marginals,
i.e. the posterior probability that a node belongs to a given group:

.. testcode:: model-averaging

   # We will first equilibrate the Markov chain
   gt.mcmc_equilibrate(state, wait=1000, mcmc_args=dict(niter=10))

   pv = None

   def collect_marginals(s):
       global pv
       pv = s.collect_vertex_marginals(pv)

   # Now we collect the marginals for exactly 100,000 sweeps
   gt.mcmc_equilibrate(state, force_niter=10000, mcmc_args=dict(niter=10),
                       callback=collect_marginals)

   # Now the node marginals are stored in property map pv. We can
   # visualize them as pie charts on the nodes:
   state.draw(pos=g.vp.pos, vertex_shape="pie", vertex_pie_fractions=pv,
              edge_gradient=None, output="lesmis-sbm-marginals.svg")

.. figure:: lesmis-sbm-marginals.*
   :align: center
   :width: 450px

   Marginal probabilities of group memberships of the network of
   characters in the novel Les Misérables, according to the
   degree-corrected SBM. The `pie fractions
   <https://en.wikipedia.org/wiki/Pie_chart>`_ on the nodes correspond
   to the probability of being in the group associated with the
   respective color.
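If a single point estimate is nevertheless desired after sampling, one
can, for example, assign each node to its most probable group according
to the collected marginals. A minimal sketch of this (our own
convenience, not a library function):

.. code-block:: python

    import numpy as np

    # pv[v] holds the (possibly unnormalized) marginal group counts of
    # node v collected during the MCMC; argmax picks the modal group.
    b = g.new_vp("int", vals=[np.argmax(pv[v]) for v in g.vertices()])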
We can also obtain a marginal probability on the number of groups
itself, as follows.

.. testcode:: model-averaging

   h = np.zeros(g.num_vertices() + 1)

   def collect_num_groups(s):
       B = s.get_nonempty_B()
       h[B] += 1

   # Now we collect the marginal distribution for exactly 100,000 sweeps
   gt.mcmc_equilibrate(state, force_niter=10000, mcmc_args=dict(niter=10),
                       callback=collect_num_groups)

.. testcode:: model-averaging
   :hide:

   figure()
   Bs = np.arange(len(h))
   idx = h > 0
   bar(Bs[idx] - .5, h[idx] / h.sum(), width=1, color="#ccb974")
   gca().set_xticks([6,7,8,9])
   xlabel("$B$")
   ylabel(r"$P(B|\boldsymbol G)$")
   savefig("lesmis-B-posterior.svg")

.. figure:: lesmis-B-posterior.*
   :align: center

   Marginal posterior probability of the number of nonempty groups for
   the network of characters in the novel Les Misérables, according to
   the degree-corrected SBM.
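The collected histogram can also be summarized by simple statistics,
e.g. the posterior mean number of nonempty groups. A minimal sketch:

.. code-block:: python

    import numpy as np

    # h[B] counts how often B nonempty groups were observed during the MCMC
    Bs = np.arange(len(h))
    print("Posterior mean of B:", (Bs * h).sum() / h.sum())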
Hierarchical partitions
+++++++++++++++++++++++

We can also perform model averaging using the nested SBM, which will
give us a distribution over hierarchies. The whole procedure is fairly
analogous, but now we make use of
:class:`~graph_tool.inference.NestedBlockState` instances.

.. note::

   When using :class:`~graph_tool.inference.NestedBlockState` instances
   to perform model averaging, they need to be constructed with the
   option ``sampling=True``.

Here we perform the sampling of hierarchical partitions using the same
network as above.

.. testcode:: nested-model-averaging

   g = gt.collection.data["lesmis"]

   state = gt.minimize_nested_blockmodel_dl(g)   # Initialize the Markov
                                                 # chain from the "ground
                                                 # state"

   # Before doing model averaging, we need to create a NestedBlockState
   # by passing sampling = True.

   # We also want to increase the maximum hierarchy depth to L = 10

   # We can do both of the above by copying.

   bs = state.get_bs()                      # Get hierarchical partition.
   bs += [np.zeros(1)] * (10 - len(bs))     # Augment it to L = 10 with
                                            # single-group levels.

   state = state.copy(bs=bs, sampling=True)

   # Now we run 1000 sweeps of the MCMC

   dS, nmoves = state.mcmc_sweep(niter=1000)

   print("Change in description length:", dS)
   print("Number of accepted vertex moves:", nmoves)

.. testoutput:: nested-model-averaging

   Change in description length: 6.222068...
   Number of accepted vertex moves: 7615

Similarly to the non-nested case, we can use
:func:`~graph_tool.inference.mcmc_equilibrate` to do most of the boring
work, and we can now obtain vertex marginals on all hierarchical levels:

.. testcode:: nested-model-averaging

   # We will first equilibrate the Markov chain
   gt.mcmc_equilibrate(state, wait=1000, mcmc_args=dict(niter=10))

   pv = [None] * len(state.get_levels())

   def collect_marginals(s):
       global pv
       pv = [sl.collect_vertex_marginals(pv[l]) for l, sl in
             enumerate(s.get_levels())]

   # Now we collect the marginals for exactly 100,000 sweeps
   gt.mcmc_equilibrate(state, force_niter=10000, mcmc_args=dict(niter=10),
                       callback=collect_marginals)

   # Now the node marginals for all levels are stored in property map
   # list pv. We can visualize the first level as pie charts on the nodes:
   state_0 = state.get_levels()[0]
   state_0.draw(pos=g.vp.pos, vertex_shape="pie",
                vertex_pie_fractions=pv[0], edge_gradient=None,
                output="lesmis-nested-sbm-marginals.svg")

.. figure:: lesmis-nested-sbm-marginals.*
   :align: center
   :width: 450px

   Marginal probabilities of group memberships of the network of
   characters in the novel Les Misérables, according to the nested
   degree-corrected SBM. The pie fractions on the nodes correspond to
   the probability of being in the group associated with the respective
   color.

We can also obtain a marginal probability of the number of groups
itself, as follows.

.. testcode:: nested-model-averaging

   h = [np.zeros(g.num_vertices() + 1) for s in state.get_levels()]

   def collect_num_groups(s):
       for l, sl in enumerate(s.get_levels()):
           B = sl.get_nonempty_B()
           h[l][B] += 1

   # Now we collect the marginal distribution for exactly 100,000 sweeps
   gt.mcmc_equilibrate(state, force_niter=10000, mcmc_args=dict(niter=10),
                       callback=collect_num_groups)

.. testcode:: nested-model-averaging
   :hide:

   figure()
   f, ax = plt.subplots(1, 5, figsize=(10, 3))
   for i, h_ in enumerate(h[:5]):
       Bs = np.arange(len(h_))
       idx = h_ > 0
       ax[i].bar(Bs[idx] - .5, h_[idx] / h_.sum(), width=1, color="#ccb974")
       ax[i].set_xticks(Bs[idx])
       ax[i].set_xlabel("$B_{%d}$" % i)
       ax[i].set_ylabel(r"$P(B_{%d}|\boldsymbol G)$" % i)
       locator = MaxNLocator(prune='both', nbins=5)
       ax[i].yaxis.set_major_locator(locator)
   tight_layout()
   savefig("lesmis-nested-B-posterior.svg")

.. figure:: lesmis-nested-B-posterior.*
   :align: center

   Marginal posterior probability of the number of nonempty groups
   :math:`B_l` at each hierarchy level :math:`l` for the network of
   characters in the novel Les Misérables, according to the nested
   degree-corrected SBM.

Below we obtain some hierarchical partitions sampled from the posterior
distribution.

.. testcode:: nested-model-averaging

   for i in range(10):
       state.mcmc_sweep(niter=1000)
       state.draw(output="lesmis-partition-sample-%i.svg" % i,
                  empty_branches=False)

.. image:: lesmis-partition-sample-0.svg
   :width: 200px

.. image:: lesmis-partition-sample-1.svg
   :width: 200px

.. image:: lesmis-partition-sample-2.svg
   :width: 200px

.. image:: lesmis-partition-sample-3.svg
   :width: 200px

.. image:: lesmis-partition-sample-4.svg
   :width: 200px

.. image:: lesmis-partition-sample-5.svg
   :width: 200px

.. image:: lesmis-partition-sample-6.svg
   :width: 200px

.. image:: lesmis-partition-sample-7.svg
   :width: 200px

.. image:: lesmis-partition-sample-8.svg
   :width: 200px

.. image:: lesmis-partition-sample-9.svg
   :width: 200px

Model class selection
+++++++++++++++++++++

When averaging over partitions, we may be interested in evaluating which
**model class** provides a better fit of the data, considering all
possible parameter choices. This is done by evaluating the model
evidence [peixoto-nonparametric-2016]_

.. math::

   P(\boldsymbol G) = \sum_{\boldsymbol\theta,\boldsymbol b}P(\boldsymbol G,\boldsymbol\theta, \boldsymbol b) = \sum_{\boldsymbol b}P(\boldsymbol G,\boldsymbol b).

This quantity is analogous to a `partition function
<https://en.wikipedia.org/wiki/Partition_function_(statistical_mechanics)>`_
in statistical physics, which we can write more conveniently as a
negative `free energy
<https://en.wikipedia.org/wiki/Thermodynamic_free_energy>`_ by taking
its logarithm

.. math::
   :label: free-energy

   \ln P(\boldsymbol G) = \underbrace{\sum_{\boldsymbol b}q(\boldsymbol b)\ln P(\boldsymbol G,\boldsymbol b)}_{-\left<\Sigma\right>}\;
          \underbrace{- \sum_{\boldsymbol b}q(\boldsymbol b)\ln q(\boldsymbol b)}_{\mathcal{S}}

where

.. math::

   q(\boldsymbol b) = \frac{P(\boldsymbol G,\boldsymbol b)}{\sum_{\boldsymbol b'}P(\boldsymbol G,\boldsymbol b')}

is the posterior probability of partition :math:`\boldsymbol b`. The
first term of Eq. :eq:`free-energy` (the "negative energy") is minus the
average description length :math:`\left<\Sigma\right>`, weighted
according to the posterior distribution. The second term
:math:`\mathcal{S}` is the `entropy
<https://en.wikipedia.org/wiki/Entropy_(information_theory)>`_ of the
posterior distribution, and measures, in a sense, the "quality of fit"
of the model: if the posterior is very "peaked", i.e. dominated by a
single partition with a very large probability, the entropy will tend to
zero. However, if there are many partitions with similar probabilities
--- meaning that there is no single partition that describes the network
uniquely well --- it will take a large value instead.
Since the MCMC algorithm samples partitions from the distribution
:math:`q(\boldsymbol b)`, it can be used to compute
:math:`\left<\Sigma\right>` easily, simply by averaging the description
length values encountered by sampling from the posterior distribution
many times. The computation of the posterior entropy
:math:`\mathcal{S}`, however, is significantly more difficult, since it
involves measuring the precise value of :math:`q(\boldsymbol b)`. A
direct "brute force" computation of :math:`\mathcal{S}` is implemented
via :meth:`~graph_tool.inference.BlockState.collect_partition_histogram`
and :func:`~graph_tool.inference.microstate_entropy`, however this is
only feasible for very small networks. For larger networks, we are
forced to perform approximations. The simplest is a "mean field" one,
where we assume the posterior factorizes as

.. math::

   q(\boldsymbol b) \approx \prod_i{q_i(b_i)}

where

.. math::

   q_i(r) = P(b_i = r | \boldsymbol G)

is the marginal group membership distribution of node :math:`i`. This
yields an entropy value given by

.. math::

   S \approx -\sum_i\sum_rq_i(r)\ln q_i(r).

This approximation should be seen as an upper bound, since any existing
correlation between the nodes (which is ignored here) will yield smaller
entropy values.
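This mean-field estimate can be written down directly from the marginals
collected during the MCMC. The library provides this functionality via
:func:`~graph_tool.inference.mf_entropy`, used further below; the helper
here is only a minimal sketch illustrating the expression above:

.. code-block:: python

    import numpy as np

    def mean_field_entropy(g, pv):
        """Mean-field posterior entropy from (possibly unnormalized)
        vertex marginals pv, as collected with collect_vertex_marginals()."""
        S = 0
        for v in g.vertices():
            q = np.array(pv[v], dtype="float")
            q /= q.sum()                       # normalize the marginal counts
            S -= sum(p * np.log(p) for p in q if p > 0)
        return S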
A more accurate assumption is called the `Bethe approximation`
[mezard-information-2009]_, and takes into account the correlation
between adjacent nodes in the network,

.. math::

   q(\boldsymbol b) \approx \prod_{i<j}q_{ij}(b_i,b_j)^{A_{ij}}\prod_iq_i(b_i)^{1-k_i}

where :math:`A_{ij}` is the `adjacency matrix
<https://en.wikipedia.org/wiki/Adjacency_matrix>`_, :math:`k_i` is the
degree of node :math:`i`, and

.. math::

   q_{ij}(r, s) = P(b_i = r, b_j = s|\boldsymbol G)

is the joint group membership distribution of nodes :math:`i` and
:math:`j` (a.k.a. the `edge marginals`). This yields an entropy value
given by

.. math::

   S \approx -\sum_{i<j}A_{ij}\sum_{rs}q_{ij}(r,s)\ln q_{ij}(r,s) - \sum_i(1-k_i)\sum_rq_i(r)\ln q_i(r).

The edge marginals can be collected during the MCMC via
:meth:`~graph_tool.inference.BlockState.collect_edge_marginals`, and the
corresponding entropy computed with
:func:`~graph_tool.inference.bethe_entropy` (with
:func:`~graph_tool.inference.mf_entropy` playing the same role in the
mean-field case). The same approach can be used with the nested model,
but for the hierarchical levels :math:`l>0` only the mean-field
approximation is applicable, since the adjacency matrix of the higher
layers is not constant. We show below the approach for the same network,
using the nested model.

.. testcode:: model-evidence

   g = gt.collection.data["lesmis"]

   L = 10

   for deg_corr in [True, False]:
       state = gt.minimize_nested_blockmodel_dl(g, deg_corr=deg_corr)  # Initialize the Markov
                                                                       # chain from the "ground
                                                                       # state"
       bs = state.get_bs()                     # Get hierarchical partition.
       bs += [np.zeros(1)] * (L - len(bs))     # Augment it to L = 10 with
                                               # single-group levels.

       state = state.copy(bs=bs, sampling=True)

       dls = []                                # description length history
       vm = [None] * len(state.get_levels())   # vertex marginals
       em = None                               # edge marginals

       def collect_marginals(s):
           global vm, em
           levels = s.get_levels()
           vm = [sl.collect_vertex_marginals(vm[l]) for l, sl in
                 enumerate(levels)]
           em = levels[0].collect_edge_marginals(em)
           dls.append(s.entropy())

       # Now we collect the marginal distributions for exactly 200,000 sweeps
       gt.mcmc_equilibrate(state, force_niter=20000, mcmc_args=dict(niter=10),
                           callback=collect_marginals)

       S_mf = [gt.mf_entropy(sl.g, vm[l]) for l, sl in
               enumerate(state.get_levels())]
       S_bethe = gt.bethe_entropy(g, em)[0]

       ln_P = -mean(dls)   # -<Σ>; note we do not reuse the name L here,
                           # which still holds the hierarchy depth

       print("Model evidence for deg_corr = %s:" % deg_corr,
             ln_P + sum(S_mf), "(mean field),",
             ln_P + S_bethe + sum(S_mf[1:]), "(Bethe)")

.. testoutput:: model-evidence

   Model evidence for deg_corr = True: -358.493559653 (mean field), -649.40897099 (Bethe)
   Model evidence for deg_corr = False: -372.104532802 (mean field), -561.973406506 (Bethe)

The results are similar: if we consider the most accurate approximation,
the non-degree-corrected model possesses the largest evidence. Note also
that the evidence values for the nested model are better than those
obtained for the non-nested model (not shown here) --- which is not
quite surprising, since the non-nested model is a special case of the
nested one.

Edge layers and covariates
--------------------------

In many situations, the edges of the network may possess discrete
covariates on them, or they may be distributed in discrete "layers".
Extensions to the SBM may be defined for such data, and they can be
inferred using the exact same interface shown above, except one should
use the :class:`~graph_tool.inference.LayeredBlockState` class, instead
of :class:`~graph_tool.inference.BlockState`. This class takes two
additional parameters: the ``ec`` parameter, that must correspond to an
edge :class:`~graph_tool.PropertyMap` with the layer/covariate values on
the edges, and the Boolean ``layers`` parameter, which if ``True``
specifies a layered model, otherwise one with edge covariates.

If we use :func:`~graph_tool.inference.minimize_blockmodel_dl`, this can
be achieved simply by passing the option ``layers=True`` as well as the
appropriate value of ``state_args``, which will be propagated to
:class:`~graph_tool.inference.LayeredBlockState`'s constructor.

For example, consider again the Les Misérables network, where we
consider the number of co-appearances between characters as edge
covariates.

.. testsetup:: layered-model

   import os
   try:
       os.chdir("demos/inference")
   except FileNotFoundError:
       pass

.. testcode:: layered-model

   g = gt.collection.data["lesmis"]

   # Note the different meaning of the two 'layers' parameters below: The
   # first enables the use of LayeredBlockState, and the second selects
   # the 'edge covariates' version.
   state = gt.minimize_blockmodel_dl(g, deg_corr=False, layers=True,
                                     state_args=dict(ec=g.ep.value, layers=False))

   state.draw(pos=g.vp.pos, edge_color=g.ep.value, edge_gradient=None,
              output="lesmis-sbm-edge-cov.svg")

.. figure:: lesmis-sbm-edge-cov.*
   :align: center
   :width: 350px

   Best fit of the non-degree-corrected SBM with edge covariates for the
   network of characters in the novel Les Misérables, using the number
   of co-appearances as edge covariates. The edge colors correspond to
   the edge covariates.

In the case of the nested model, we still should use the
:class:`~graph_tool.inference.NestedBlockState` class, but it must be
initialized with the parameter ``base_type = LayeredBlockState``. But if
we use :func:`~graph_tool.inference.minimize_nested_blockmodel_dl`, it
works identically to the above:

.. testcode:: layered-model

   state = gt.minimize_nested_blockmodel_dl(g, deg_corr=False, layers=True,
                                            state_args=dict(ec=g.ep.value, layers=False))

   state.draw(eprops=dict(color=g.ep.value, gradient=None),
              output="lesmis-nested-sbm-edge-cov.svg")

.. figure:: lesmis-nested-sbm-edge-cov.*
   :align: center
   :width: 350px

   Best fit of the nested non-degree-corrected SBM with edge covariates
   for the network of characters in the novel Les Misérables, using the
   number of co-appearances as edge covariates. The edge colors
   correspond to the edge covariates.

It is possible to perform model averaging of all layered variants
exactly like for the regular SBMs, as was shown above.
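For the "layered" flavor of the model, the interface is the same; only
the second ``layers`` parameter changes. A minimal sketch, reusing the
same covariates as discrete layer labels:

.. code-block:: python

    # Treat each distinct covariate value as a separate edge layer,
    # instead of an edge covariate, by setting layers=True in state_args.
    state = gt.minimize_blockmodel_dl(g, deg_corr=False, layers=True,
                                      state_args=dict(ec=g.ep.value, layers=True))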
Predicting spurious and missing edges
-------------------------------------

An important application of generative models is to be able to
generalize from observations and make predictions that go beyond what is
seen in the data. This is particularly useful when the network we
observe is incomplete, or contains errors, i.e. some of the edges are
either missing or are outcomes of mistakes in measurement. In this
situation, the fit we make of the observed network can help us predict
missing or spurious edges in the network [clauset-hierarchical-2008]_
[guimera-missing-2009]_.

We do so by dividing the edges into two sets :math:`\boldsymbol G` and
:math:`\delta \boldsymbol G`, where the former corresponds to the
observed network and the latter either to the missing or spurious edges.
In the case of missing edges, we may compute the posterior of
:math:`\delta \boldsymbol G` as

.. math::
   :label: posterior-missing

   P(\delta \boldsymbol G | \boldsymbol G) = \frac{\sum_{\boldsymbol b}P(\boldsymbol G+\delta \boldsymbol G | \boldsymbol b)P(\boldsymbol b | \boldsymbol G)}{P_{\delta}(\boldsymbol G)}

where

.. math::

   P_{\delta}(\boldsymbol G) = \sum_{\delta \boldsymbol G}\sum_{\boldsymbol b}P(\boldsymbol G+\delta \boldsymbol G | \boldsymbol b)P(\boldsymbol b | \boldsymbol G)

is a normalization constant. Although the value of
:math:`P_{\delta}(\boldsymbol G)` is difficult to obtain in general
(since we need to perform a sum over all possible spurious/missing
edges), the numerator of Eq. :eq:`posterior-missing` can be computed by
sampling partitions from the posterior, and then inserting or deleting
edges from the graph and computing the new likelihood. This means that
we can easily compare alternative predictive hypotheses :math:`\{\delta
\boldsymbol G_i\}` via their likelihood ratios

.. math::

   \lambda_i = \frac{P(\delta \boldsymbol G_i | \boldsymbol G)}{\sum_j P(\delta \boldsymbol G_j | \boldsymbol G)}
             = \frac{\sum_{\boldsymbol b}P(\boldsymbol G+\delta \boldsymbol G_i | \boldsymbol b)P(\boldsymbol b | \boldsymbol G)}
                    {\sum_j \sum_{\boldsymbol b}P(\boldsymbol G+\delta \boldsymbol G_j | \boldsymbol b)P(\boldsymbol b | \boldsymbol G)}

which do not depend on the value of :math:`P_{\delta}(\boldsymbol G)`.
The values :math:`P(\boldsymbol G+\delta \boldsymbol G | \boldsymbol b)`
can be computed with
:meth:`~graph_tool.inference.BlockState.get_edges_prob`. Hence, we can
compute spurious/missing edge probabilities just as if we were
collecting marginal distributions when doing model averaging.

Below is an example for predicting the following two edges in the
football network, using the nested model (for which we need to replace
:math:`\boldsymbol b` by :math:`\{\boldsymbol b_l\}` in the equations
above).

.. testcode:: missing-edges
   :hide:

   g = gt.collection.data["football"].copy()
   color = g.new_vp("string", val="#cccccc")
   ecolor = g.new_ep("string", val="#cccccc")
   e = g.add_edge(101, 102)
   ecolor[e] = "#a40000"
   e = g.add_edge(17, 56)
   ecolor[e] = "#a40000"
   eorder = g.edge_index.copy("int")
   gt.graph_draw(g, pos=g.vp.pos, vertex_color=color,
                 vertex_fill_color=color, edge_color=ecolor,
                 eorder=eorder, output="football_missing.svg")

.. figure:: football_missing.*
   :align: center
   :width: 350px

   Two non-existing edges in the football network (in red):
   :math:`(101,102)` in the middle, and :math:`(17,56)` in the upper
   right region of the figure.
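Before averaging over partitions, note that for a single fit the
relative plausibility of candidate edges can already be compared
directly from the conditional likelihoods. A minimal sketch:

.. code-block:: python

    # Conditional ln P(G + δG | b) for two candidate edges, given a
    # single partition; their difference gives the log of the likelihood
    # ratio under this one fit.
    state = gt.minimize_blockmodel_dl(g)
    p1 = state.get_edges_prob([(101, 102)])
    p2 = state.get_edges_prob([(17, 56)])
    print("ln [P(101,102)/P(17,56)] =", p1 - p2)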
.. testcode:: missing-edges

   g = gt.collection.data["football"]

   missing_edges = [(101, 102), (17, 56)]

   L = 10

   state = gt.minimize_nested_blockmodel_dl(g, deg_corr=True)

   bs = state.get_bs()                     # Get hierarchical partition.
   bs += [np.zeros(1)] * (L - len(bs))     # Augment it to L = 10 with
                                           # single-group levels.

   state = state.copy(bs=bs, sampling=True)

   probs = ([], [])

   def collect_edge_probs(s):
       p1 = s.get_edges_prob([missing_edges[0]], entropy_args=dict(partition_dl=False))
       p2 = s.get_edges_prob([missing_edges[1]], entropy_args=dict(partition_dl=False))
       probs[0].append(p1)
       probs[1].append(p2)

   # Now we collect the probabilities for exactly 10,000 sweeps
   gt.mcmc_equilibrate(state, force_niter=1000, mcmc_args=dict(niter=10),
                       callback=collect_edge_probs)

   def get_avg(p):
       p = np.array(p)
       pmax = p.max()
       p -= pmax
       return pmax + log(exp(p).mean())

   p1 = get_avg(probs[0])
   p2 = get_avg(probs[1])

   p_sum = get_avg([p1, p2]) + log(2)

   l1 = p1 - p_sum
   l2 = p2 - p_sum

   print("likelihood-ratio for %s: %g" % (missing_edges[0], exp(l1)))
   print("likelihood-ratio for %s: %g" % (missing_edges[1], exp(l2)))

.. testoutput:: missing-edges

   likelihood-ratio for (101, 102): 0.372308
   likelihood-ratio for (17, 56): 0.627692

From this we can conclude that the edge :math:`(17, 56)` is around twice
as likely as :math:`(101, 102)` to be a missing edge.

The prediction using the non-nested model can be performed in an
entirely analogous fashion.

References
----------

.. [holland-stochastic-1983] Paul W. Holland, Kathryn Blackmond Laskey,
   Samuel Leinhardt, "Stochastic blockmodels: First steps", Social
   Networks 5, 109-137 (1983), :doi:`10.1016/0378-8733(83)90021-7`

.. [karrer-stochastic-2011] Brian Karrer, M. E. J. Newman, "Stochastic
   blockmodels and community structure in networks", Phys. Rev. E 83,
   016107 (2011), :doi:`10.1103/PhysRevE.83.016107`, :arxiv:`1008.3926`

.. [peixoto-nonparametric-2016] Tiago P. Peixoto, "Nonparametric
   Bayesian inference of the microcanonical stochastic block model",
   :arxiv:`1610.02703`

.. [peixoto-parsimonious-2013] Tiago P. Peixoto, "Parsimonious module
   inference in large networks", Phys. Rev. Lett. 110, 148701 (2013),
   :doi:`10.1103/PhysRevLett.110.148701`, :arxiv:`1212.4794`

.. [peixoto-hierarchical-2014] Tiago P. Peixoto, "Hierarchical block
   structures and high-resolution model selection in large networks",
   Phys. Rev. X 4, 011047 (2014), :doi:`10.1103/PhysRevX.4.011047`,
   :arxiv:`1310.4377`

.. [peixoto-model-2016] Tiago P. Peixoto, "Model selection and
   hypothesis testing for large-scale network models with overlapping
   groups", Phys. Rev. X 5, 011033 (2015),
   :doi:`10.1103/PhysRevX.5.011033`, :arxiv:`1409.3059`

.. [peixoto-efficient-2014] Tiago P. Peixoto, "Efficient Monte Carlo and
   greedy heuristic for the inference of stochastic block models", Phys.
   Rev. E 89, 012804 (2014), :doi:`10.1103/PhysRevE.89.012804`,
   :arxiv:`1310.4378`

.. [clauset-hierarchical-2008] Aaron Clauset, Cristopher Moore, M. E. J.
   Newman, "Hierarchical structure and the prediction of missing links
   in networks", Nature 453, 98-101 (2008), :doi:`10.1038/nature06830`

.. [guimera-missing-2009] Roger Guimerà, Marta Sales-Pardo, "Missing and
   spurious interactions and the reconstruction of complex networks",
   PNAS 106 (52) (2009), :doi:`10.1073/pnas.0908366106`

.. [mezard-information-2009] Marc Mézard, Andrea Montanari,
   "Information, Physics, and Computation", Oxford University Press,
   2009, :doi:`10.1093/acprof:oso/9780198570837.001.0001`