parallel minimization crashes
Bug reports:
Please follow the general troubleshooting steps first:
- Are you running the latest graph-tool version?
- Do you observe the problem with the current git version?
- Are you using Macports or Homebrew? If yes, please submit an issue there instead: https://github.com/Homebrew/brew/issues and https://trac.macports.org/newticket
- Did you compile graph-tool manually?
- If you answered yes above, did you use the exact same compiler to build graph-tool, boost-python and Python?
I am aware that parallel minimization for the SBM is not supported (yet?), but since a parallel option is available in multilevel_mcmc_sweep, why not test it? After all, there is a (closed) issue along the same lines (#252 (closed)). Anyhow, whenever the option is enabled, I obtain this error:
terminate called after throwing an instance of 'std::bad_any_cast'
what(): bad any_cast
terminate called recursively
terminate called recursively
Aborted (core dumped)
It's easy to reproduce:
import graph_tool.all as gt
g = gt.collection.data['celegansneural']
state = gt.minimize_blockmodel_dl(g, multilevel_mcmc_args=dict(parallel=True))
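In case it helps to narrow things down, the same option can also be exercised by calling multilevel_mcmc_sweep directly. This is only a minimal sketch, assuming a plain gt.BlockState with default parameters and that a single sweep (niter=1) is enough to hit the parallel code path:

import graph_tool.all as gt

g = gt.collection.data['celegansneural']
state = gt.BlockState(g)  # plain SBM state, default parameters
# attempt a single multilevel sweep with the parallel option enabled
ret = state.multilevel_mcmc_sweep(niter=1, parallel=True)
print(ret)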
I have verified that the crash does not occur when the number of threads is set to 1, so the following works:
import graph_tool.all as gt
g = gt.collection.data['celegansneural']
gt.openmp_set_num_threads(1)
state = gt.minimize_blockmodel_dl(g, multilevel_mcmc_args=dict(parallel=True))
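As a stopgap until this is fixed, the thread limit can be applied only around the minimization and then restored for the rest of the analysis. A rough sketch, assuming gt.openmp_get_num_threads() reports the current setting:

import graph_tool.all as gt

g = gt.collection.data['celegansneural']

nthreads = gt.openmp_get_num_threads()   # remember the current thread count
gt.openmp_set_num_threads(1)             # single thread avoids the crash
try:
    state = gt.minimize_blockmodel_dl(g, multilevel_mcmc_args=dict(parallel=True))
finally:
    gt.openmp_set_num_threads(nthreads)  # restore threads afterwards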
$ uname -a
Linux srcn15 5.14.0-503.14.1.el9_5.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Nov 15 12:04:32 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
$ python --version
Python 3.12.9
graph-tool version: 2.97 (commit 4f33d8da, ), installed via conda-forge