Commit 3d9964ab authored by Tiago Peixoto

Implement posterior sampling for layered networks

This includes support for sampling from posteriors of layered models
that are also overlapping and/or with edge covariates.

This fixes issue #325
parent ce270193
.. _inference-howto:

Inferring modular network structure
===================================
``graph-tool`` includes algorithms to identify the large-scale structure
of networks in the :mod:`~graph_tool.inference` submodule. Here we
@@ -31,9 +31,11 @@ probability
P(\boldsymbol G|\boldsymbol\theta, \boldsymbol b)
where :math:`\boldsymbol\theta` are additional model parameters that
control how the node partition affects the structure of the
network. Therefore, if we observe a network :math:`\boldsymbol G`, the
likelihood that it was generated by a given partition :math:`\boldsymbol
b` is obtained via the `Bayesian
<https://en.wikipedia.org/wiki/Bayesian_inference>`_ posterior
.. math::
@@ -50,10 +52,11 @@ model parameters, and
P(\boldsymbol G) = \sum_{\boldsymbol\theta,\boldsymbol b}P(\boldsymbol G|\boldsymbol\theta, \boldsymbol b)P(\boldsymbol\theta, \boldsymbol b)
is called the `evidence`. The particular types of model that will be
considered here have "hard constraints", such that there is only one
choice for the remaining parameters :math:`\boldsymbol\theta` that is
compatible with the generated network, so that
Eq. :eq:`model-posterior-sum` simplifies to
.. math::
:label: model-posterior
@@ -86,16 +89,16 @@ where
\Sigma = -\ln P(\boldsymbol G|\boldsymbol\theta, \boldsymbol b) - \ln P(\boldsymbol\theta, \boldsymbol b)
is called the **description length** of the network :math:`\boldsymbol
G`. It measures the amount of `information
<https://en.wikipedia.org/wiki/Information_theory>`_ required to
describe the data, if we `encode
<https://en.wikipedia.org/wiki/Entropy_encoding>`_ it using the
particular parametrization of the generative model given by
:math:`\boldsymbol\theta` and :math:`\boldsymbol b`, as well as the
parameters themselves. Therefore, if we choose to maximize the posterior
distribution of Eq. :eq:`model-posterior` it will be fully equivalent to
the so-called `minimum description length
<https://en.wikipedia.org/wiki/Minimum_description_length>`_
method. This approach corresponds to an implementation of `Occam's razor
<https://en.wikipedia.org/wiki/Occam%27s_razor>`_, where the `simplest`
@@ -103,7 +106,10 @@ model is selected, among all possibilities with the same explanatory
power. The selection is based on the statistical evidence available, and
therefore will not `overfit
<https://en.wikipedia.org/wiki/Overfitting>`_, i.e. mistake stochastic
fluctuations for actual structure. In particular, this means that we
will not find modules in networks if they could have arisen simply
because of stochastic fluctuations, as they do in fully random graphs
[guimera-modularity-2004]_.
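In ``graph-tool``, the description length of a fit can be queried
directly via the ``entropy()`` method of a state object. A minimal
sketch (the network below is an arbitrary example):

.. code-block:: python

    import graph_tool.all as gt

    g = gt.collection.data["polbooks"]    # any example network
    state = gt.minimize_blockmodel_dl(g)  # minimize the description length over partitions
    print(state.entropy())                # the description length, in nats
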
The stochastic block model (SBM)
--------------------------------
@@ -213,8 +219,9 @@ counts are generated from another SBM, and so on recursively
With this model, the maximum number of groups that can be inferred
scales as :math:`B_{\text{max}}=O(N/\log(N))`. In addition to being able
to find small groups in large networks, this model also provides a
multilevel hierarchical description of the network. With such a
description, we can uncover structural patterns at multiple scales,
representing different levels of coarse-graining.
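A minimal sketch of how such a nested fit can be obtained and inspected
(the network below is an arbitrary example):

.. code-block:: python

    import graph_tool.all as gt

    g = gt.collection.data["celegansneural"]
    state = gt.minimize_nested_blockmodel_dl(g)  # fit the nested (hierarchical) SBM
    state.print_summary()                        # number of groups at each hierarchy level
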
Inferring the best partition
----------------------------
@@ -465,8 +472,8 @@ case of the `C. elegans` network we have
.. testoutput:: model-selection
:options: +NORMALIZE_WHITESPACE
Non-degree-corrected DL: 8557.215637...
Degree-corrected DL: 8229.533915...
Since it yields the smallest description length, the degree-corrected
fit should be preferred. The statistical significance of the choice can
@@ -492,12 +499,12 @@ fits. In our particular case, we have
.. testoutput:: model-selection
:options: +NORMALIZE_WHITESPACE
ln Λ: -327.681722...
The precise threshold that should be used to decide when to `reject a
hypothesis <https://en.wikipedia.org/wiki/Hypothesis_testing>`_ is
subjective and context-dependent, but the value above implies that the
particular degree-corrected fit is around :math:`\mathrm{e}^{327} \approx 10^{142}`
times more likely than the non-degree-corrected one, and hence it can be
safely concluded that it provides a substantially better fit.
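The comparison above can be reproduced along the following lines (a
minimal sketch; the variable names are illustrative, and the exact
figures depend on the RNG seed):

.. code-block:: python

    import graph_tool.all as gt

    g = gt.collection.data["celegansneural"]
    state_ndc = gt.minimize_nested_blockmodel_dl(g, deg_corr=False)  # non-degree-corrected
    state_dc = gt.minimize_nested_blockmodel_dl(g, deg_corr=True)    # degree-corrected
    # ln Λ; negative values favor the degree-corrected model
    print("ln Λ:", state_dc.entropy() - state_ndc.entropy())
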
@@ -519,12 +526,12 @@ example, for the American football network above, we have:
.. testoutput:: model-selection
:options: +NORMALIZE_WHITESPACE
Non-degree-corrected DL: 1755.860047...
Degree-corrected DL: 1780.576716...
ln Λ: -24.716669...

Hence, with a posterior odds ratio of :math:`\Lambda \approx \mathrm{e}^{-24} \approx
10^{-10}` in favor of the non-degree-corrected model, it seems like the
degree-corrected variant is an unnecessarily complex description for
this network.
@@ -549,9 +556,11 @@ different groups with specific probabilities, and `accepting or
rejecting
<https://en.wikipedia.org/wiki/Metropolis%E2%80%93Hastings_algorithm>`_
such moves so that, after a sufficiently long time, the partitions will
be observed with the desired posterior probability. The algorithm is
designed so that its run-time (i.e. each sweep of the MCMC) is linear in
the number of edges in the network, and independent of the number of
groups being used in the model, and hence it is suitable for use on very
large networks.
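A minimal sketch of such a sampling run (the network and the number of
sweeps are arbitrary placeholders; depending on the ``graph-tool``
version, ``mcmc_sweep()`` may return further values besides the change
in description length):

.. code-block:: python

    import graph_tool.all as gt

    gt.seed_rng(42)                       # for reproducibility
    g = gt.collection.data["football"]
    state = gt.minimize_blockmodel_dl(g)  # starting point for the Markov chain
    ret = state.mcmc_sweep(niter=1000)    # 1000 sweeps of the Metropolis-Hastings moves
    print("Change in description length:", ret[0])
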
In order to perform such moves, one needs again to operate with
:class:`~graph_tool.inference.BlockState` or
@@ -820,8 +829,8 @@ network as above.
.. testoutput:: nested-model-averaging
Change in description length: 23.368680...
Number of accepted vertex moves: 46167
Similarly to the non-nested case, we can use
:func:`~graph_tool.inference.mcmc_equilibrate` to do most of the boring
@@ -1187,15 +1196,15 @@ weights
.. math::
P(\boldsymbol x|\boldsymbol G,\boldsymbol b) =
\prod_{r\le s}\int P({\boldsymbol x}_{rs}|\gamma)P(\gamma)\,\mathrm{d}\gamma,
where :math:`P({\boldsymbol x}_{rs}|\gamma)` is some model for the weights
:math:`{\boldsymbol x}_{rs}` between groups :math:`(r,s)`, conditioned on
some parameter :math:`\gamma`, sampled from its prior
:math:`P(\gamma)`. A hierarchical version of the model can also be
implemented by replacing this prior by a nested sequence of priors and
hyperpriors, as described in [peixoto-weighted-2017]_. The posterior
partition distribution is then simply
.. math::
@@ -1244,11 +1253,10 @@ and any of the other discrete distributions for the magnitude,
The support for weighted networks is activated by passing the parameters
``recs`` and ``rec_types`` to :class:`~graph_tool.inference.BlockState`
(or :class:`~graph_tool.inference.OverlapBlockState`), that specify the
edge covariates (an edge :class:`~graph_tool.PropertyMap`) and their
types (a string from the table above), respectively. Note that these
parameters expect *lists*, so that multiple edge weights can be used
simultaneously.
For example, let us consider a network of suspected terrorists involved
@@ -1280,7 +1288,7 @@ the weights, as follows:
state = gt.minimize_nested_blockmodel_dl(g, state_args=dict(recs=[g.ep.weight],
rec_types=["discrete-binomial"]))
state.draw(edge_color=g.ep.weight, ecmap=(matplotlib.cm.inferno, .6),
eorder=g.ep.weight, edge_pen_width=gt.prop_to_size(g.ep.weight, 1, 4, power=1),
edge_gradient=[], output="moreno-train-wsbm.svg")
@@ -1297,8 +1305,8 @@ Model selection
In order to select the best weighted model, we proceed in the same
manner as described in Sec. :ref:`model_selection`. However, when using
transformations on continuous weights, we must include the associated
scaling of the probability density, as described in
[peixoto-weighted-2017]_.
For example, consider a `food web
@@ -1317,7 +1325,7 @@ follows:
os.chdir("demos/inference")
except FileNotFoundError:
pass
gt.seed_rng(44)
.. testcode:: food-web
@@ -1330,7 +1338,7 @@ follows:
state = gt.minimize_nested_blockmodel_dl(g, state_args=dict(recs=[g.ep.weight],
rec_types=["real-exponential"]))
state.draw(edge_color=gt.prop_to_size(g.ep.weight, power=1, log=True), ecmap=(matplotlib.cm.inferno, .6),
eorder=g.ep.weight, edge_pen_width=gt.prop_to_size(g.ep.weight, 1, 4, power=1, log=True),
edge_gradient=[], output="foodweb-wsbm.svg")
@@ -1342,7 +1350,7 @@ follows:
web, using the biomass flow as edge covariates (indicated by the edge
colors and widths).
Alternatively, we may consider a transformation of the type
.. math::
:label: log_transform
@@ -1352,9 +1360,9 @@ Alternatively, we may consider a transform of the type
so that :math:`y_{ij} \in [-\infty,\infty]`. If we use a model
``"real-normal"`` for :math:`\boldsymbol y`, it amounts to a `log-normal
<https://en.wikipedia.org/wiki/Log-normal_distribution>`_ model for
:math:`\boldsymbol x`. This can be a better choice if the weights are
distributed across many orders of magnitude, or show multi-modality. We
can fit this alternative model simply by using the transformed weights:
.. testcode:: food-web
@@ -1365,7 +1373,7 @@ simply by using the transformed weights:
state_ln = gt.minimize_nested_blockmodel_dl(g, state_args=dict(recs=[y],
rec_types=["real-normal"]))
state_ln.draw(edge_color=gt.prop_to_size(g.ep.weight, power=1, log=True), ecmap=(matplotlib.cm.inferno, .6),
eorder=g.ep.weight, edge_pen_width=gt.prop_to_size(g.ep.weight, 1, 4, power=1, log=True),
edge_gradient=[], output="foodweb-wsbm-lognormal.svg")
@@ -1377,8 +1385,8 @@ simply by using the transformed weights:
web, using the biomass flow as edge covariates (indicated by the edge
colors and widths).
At this point, we ask ourselves which of the above models yields the
best fit to the data. This is answered by performing model selection via
posterior odds ratios just like in Sec. :ref:`model_selection`. However,
here we need to take into account the scaling of the probability density
incurred by the variable transformation, i.e.
@@ -1408,11 +1416,12 @@ Therefore, we can compute the posterior odds ratio between both models as:
.. testoutput:: food-web
:options: +NORMALIZE_WHITESPACE
ln Λ: -43.189790...
A value of :math:`\Lambda \approx \mathrm{e}^{-43} \approx 10^{-19}` in
favor of the exponential model indicates that the log-normal model does
not provide a better fit for this particular data. Based on this, we
conclude that the exponential model should be preferred in this case.
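The elided computation above might look like the following sketch
(assuming the fitted states ``state`` and ``state_ln`` from the previous
snippets; the :math:`\sum_e \ln x_e` term implements the scaling of the
probability density under the transformation of Eq. :eq:`log_transform`):

.. code-block:: python

    from numpy import log

    L1 = -state.entropy()                                # exponential model
    L2 = -state_ln.entropy() - log(g.ep.weight.a).sum()  # log-normal model, with Jacobian
    print("ln Λ:", L2 - L1)
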
Posterior sampling
------------------
@@ -1476,7 +1485,7 @@ separating these two types of interactions in two layers:
# The edge types are stored in the edge property map "weights".
# Note the different meanings of the two 'layers' parameters below: The
# first enables the use of LayeredBlockState, and the second selects
# the 'edge layers' version (instead of 'edge covariates').
@@ -1484,7 +1493,7 @@ separating these two types of interactions in two layers:
state_args=dict(ec=g.ep.weight, layers=True))
state.draw(edge_color=g.ep.weight, edge_gradient=[],
ecmap=(matplotlib.cm.coolwarm_r, .6), edge_pen_width=5,
output="tribes-sbm-edge-layers.svg")
.. figure:: tribes-sbm-edge-layers.*
@@ -1520,7 +1529,7 @@ edges. We may compute the posterior of :math:`\delta \boldsymbol G` as
:label: posterior-missing
P(\delta \boldsymbol G | \boldsymbol G) \propto
\sum_{\boldsymbol b}\frac{P(\boldsymbol G \cup \delta\boldsymbol G| \boldsymbol b)}{P(\boldsymbol G| \boldsymbol b)}P(\boldsymbol b | \boldsymbol G)
up to a normalization constant. Although the normalization constant is
difficult to obtain in general (since we need to perform a sum over all
@@ -1584,7 +1593,7 @@ above).
.. testsetup:: missing-edges
gt.seed_rng(7)
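The ratio inside the sum of Eq. :eq:`posterior-missing` can be evaluated
for a given partition via
:meth:`~graph_tool.inference.BlockState.get_edges_prob`. A minimal
sketch (the candidate edge and the fitted ``state`` are hypothetical
placeholders):

.. code-block:: python

    # Relative log-probability of a candidate missing edge (u, v), given
    # a previously fitted state; values are defined up to normalization.
    u, v = 0, 1
    print(state.get_edges_prob([(u, v)]))
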
.. testcode:: missing-edges
@@ -1712,6 +1721,11 @@ References
Physics, and Computation", Oxford Univ Press (2009).
:doi:`10.1093/acprof:oso/9780198570837.001.0001`
.. [guimera-modularity-2004] Roger Guimerà, Marta Sales-Pardo, and
Luís A. Nunes Amaral, "Modularity from fluctuations in random graphs
and complex networks", Phys. Rev. E 70, 025101(R) (2004),
:doi:`10.1103/PhysRevE.70.025101`
.. [hayes-connecting-2006] Brian Hayes, "Connecting the dots. Can the
tools of graph theory and social-network studies unravel the next big
plot?", American Scientist, 94(5):400-404, 2006.
@@ -155,7 +155,8 @@ void export_blockmodel_state()
.def_readwrite("partition_dl", &entropy_args_t::partition_dl)
.def_readwrite("degree_dl", &entropy_args_t::degree_dl)
.def_readwrite("degree_dl_kind", &entropy_args_t::degree_dl_kind)
.def_readwrite("edges_dl", &entropy_args_t::edges_dl);
.def_readwrite("edges_dl", &entropy_args_t::edges_dl)
.def_readwrite("recs_dl", &entropy_args_t::recs_dl);
enum_<deg_dl_kind>("deg_dl_kind")
.value("ent", deg_dl_kind::ENT)
@@ -86,11 +86,11 @@ vector<std::reference_wrapper<Type>> from_any_list(boost::python::object list)
};
void split_layers(GraphInterface& gi, boost::any& aec, boost::any& ab,
boost::python::object& arec, boost::python::object& adrec,
boost::any& aeweight, boost::any& avweight, boost::any& avc,
boost::any& avmap, boost::any& alweight,
boost::python::object& ous, boost::python::object& oub,
boost::python::object& ourec, boost::python::object& oudrec,
boost::python::object& oueweight,
boost::python::object& ouvweight, vbmap_t& block_map,
boost::python::object& obrmap, boost::python::object& ouvmap)
@@ -98,12 +98,12 @@ void split_layers(GraphInterface& gi, boost::any& aec, boost::any& ab,
typedef vprop_map_t<int32_t>::type vmap_t;
typedef vprop_map_t<vector<int32_t>>::type vvmap_t;
typedef eprop_map_t<int32_t>::type emap_t;
typedef eprop_map_t<double>::type remap_t;
emap_t& ec = any_cast<emap_t&>(aec);
vmap_t& b = any_cast<vmap_t&>(ab);
auto rec = from_any_list<remap_t>(arec);
auto drec = from_any_list<remap_t>(adrec);
vmap_t& vweight = any_cast<vmap_t&>(avweight);
emap_t& eweight = any_cast<emap_t&>(aeweight);
vvmap_t& vc = any_cast<vvmap_t&>(avc);
@@ -112,8 +112,11 @@ void split_layers(GraphInterface& gi, boost::any& aec, boost::any& ab,
auto us = from_rlist<GraphInterface>(ous);
auto ub = from_any_list<vmap_t>(oub);
vector<vector<std::reference_wrapper<remap_t>>> urec, udrec;
for (int i = 0; i < boost::python::len(ourec); ++i)
    urec.push_back(from_any_list<remap_t>(ourec[i]));
for (int i = 0; i < boost::python::len(oudrec); ++i)
    udrec.push_back(from_any_list<remap_t>(oudrec[i]));
auto uvweight = from_any_list<vmap_t>(ouvweight);
auto ueweight = from_any_list<emap_t>(oueweight);
@@ -150,9 +153,14 @@ void split_layers(GraphInterface& gi, boost::any& aec, boost::any& ab,
vmap[v].insert(vmap[v].begin() + pos, u);
uvmap[l].get()[u] = v;
if (lw[v].empty())
{
uvweight[l].get()[u] = vweight[v];
}
else
{
assert(lw[v].find(l) != lw[v].end());
uvweight[l].get()[u] = lw[v][l];
}
size_t r = b[v];
size_t u_r;
@@ -190,8 +198,104 @@ void split_layers(GraphInterface& gi, boost::any& aec, boost::any& ab,
auto u_t = get_v(t, l);
auto ne = add_edge(u_s, u_t, us[l].get().get_graph()).first;
ueweight[l].get()[ne] = eweight[e];
for (size_t i = 0; i < rec.size(); ++i)
{
    urec[l][i].get()[ne] = rec[i].get()[e];
    udrec[l][i].get()[ne] = drec[i].get()[e];
}
assert(uvweight[l].get()[u_s] > 0 || eweight[e] == 0);
assert(uvweight[l].get()[u_t] > 0 || eweight[e] == 0);
}
})();
}
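// Split the global node partition into per-layer partitions: vertices
// with nonzero weight in a layer receive a layer-local block label, with
// forward (block_map) and reverse (block_rmap) label mappings recorded.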
void split_groups(boost::any& ab, boost::any& avc, boost::any& avmap,
boost::python::object& ous, boost::python::object& oub,
boost::python::object& ouvweight,
vbmap_t& block_map, boost::python::object& obrmap,
boost::python::object& ouvmap)
{
typedef vprop_map_t<int32_t>::type vmap_t;
typedef vprop_map_t<vector<int32_t>>::type vvmap_t;
vmap_t& b = any_cast<vmap_t&>(ab);
vvmap_t& vc = any_cast<vvmap_t&>(avc);
vvmap_t& vmap = any_cast<vvmap_t&>(avmap);
auto us = from_rlist<GraphInterface>(ous);
auto ub = from_any_list<vmap_t>(oub);
auto block_rmap = from_any_list<vmap_t>(obrmap);
auto uvmap = from_any_list<vmap_t>(ouvmap);
auto uvweight = from_any_list<vmap_t>(ouvweight);
block_map.resize(us.size());
for (size_t l = 0; l < us.size(); ++l)
{
auto& u = us[l].get().get_graph();
auto& uvmap_l = uvmap[l].get();
auto& ub_l = ub[l].get();
auto& bmap = block_map[l];
auto& brmap = block_rmap[l].get();
auto& vweight_l = uvweight[l].get();
for (auto w : vertices_range(u))
{
if (vweight_l[w] == 0)
continue;
auto v = uvmap_l[w];
vc[v].push_back(l);
vmap[v].push_back(w);
size_t r = b[v];
size_t u_r;
auto riter = bmap.find(r);
if (riter == bmap.end())
{
u_r = bmap.size();
bmap[r] = u_r;
brmap[u_r] = r;
}
else
{
u_r = riter->second;
}
ub_l[w] = u_r;
}
}
}
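// Rebuild the per-layer reverse vertex maps, i.e. for every global
// vertex, record it as the image of its local counterpart in each layer.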
void get_rvmap(GraphInterface& gi, boost::any& avc, boost::any& avmap,
boost::python::object& ouvmap)
{
typedef vprop_map_t<int32_t>::type vmap_t;
typedef vprop_map_t<vector<int32_t>>::type vvmap_t;
vvmap_t& vc = any_cast<vvmap_t&>(avc);
vvmap_t& vmap = any_cast<vvmap_t&>(avmap);
auto uvmap = from_any_list<vmap_t>(ouvmap);
run_action<>()(gi,
[&](auto& g)
{
for (auto v : vertices_range(g))
{
auto& ls = vc[v];
auto& vs = vmap[v];
for (size_t i = 0; i < ls.size(); ++i)
{
auto l = ls[i];
auto w = vs[i];
uvmap[l].get()[w] = v;
}
}
})();
}
@@ -445,6 +549,8 @@ void export_layered_blockmodel_state()
def("make_layered_block_state", &make_layered_block_state);
def("split_layers", &split_layers);
def("split_groups", &split_groups);
def("get_rvmap", &get_rvmap);
def("get_layered_block_degs", &get_layered_block_degs);
def("get_mapped_block_degs", &get_mapped_block_degs);
def("get_ldegs", &get_ldegs);
@@ -41,6 +41,9 @@ void export_lsbm()
{
using namespace boost::python;
class_<LayeredBlockStateVirtualBase, boost::noncopyable>
("LayeredBlockStateVirtualBase", no_init);
block_state::dispatch
([&](auto* bs)
{
@@ -68,9 +71,13 @@ void export_lsbm()
&state_t::remove_vertices;
void (state_t::*add_vertices)(python::object, python::object) =
&state_t::add_vertices;
void (state_t::*couple_state)(LayeredBlockStateVirtualBase&,
entropy_args_t) =
&state_t::couple_state;
class_<state_t, bases<LayeredBlockStateVirtualBase>>
    c(name_demangle(typeid(state_t).name()).c_str(),
      no_init);
c.def("remove_vertex", &state_t::remove_vertex)
.def("add_vertex", &state_t::add_vertex)
.def("move_vertex", &state_t::move_vertex)
@@ -85,16 +92,32 @@
.def("get_partition_dl", &state_t::get_partition_dl)
.def("get_deg_dl", &state_t::get_deg_dl)
.def("get_move_prob", get_move_prob)
.def("couple_state", couple_state)
.def("decouple_state",
&state_t::decouple_state)
.def("get_B_E",
&state_t::get_B_E)
.def("get_B_E_D",
&state_t::get_B_E_D)
.def("get_layer",
+[](state_t& state, size_t l) -> python::object
{
return python::object(block_state_t(state.get_layer(l)));
})
.def("enable_partition_stats",
&state_t::enable_partition_stats)
.def("disable_partition_stats",
&state_t::disable_partition_stats)
.def("is_partition_stats_enabled",
&state_t::is_partition_stats_enabled)
.def("clear_egroups",
&state_t::clear_egroups)
.def("rebuild_neighbour_sampler",
&state_t::rebuild_neighbour_sampler)
.def("sync_emat",
&state_t::sync_emat)
.def("sync_bclabel",
&state_t::sync_bclabel);
});
});
}
@@ -85,6 +85,9 @@ void export_layered_overlap_blockmodel_state()
void (state_t::*move_vertices)(python::object,
python::object) =
&state_t::move_vertices;
void (state_t::*couple_state)(LayeredBlockStateVirtualBase&,
entropy_args_t) =
&state_t::couple_state;
class_<state_t> c(name_demangle(typeid(state_t).name()).c_str(),
no_init);
@@ -98,16 +101,32 @@
.def("get_partition_dl", &state_t::get_partition_dl)