
Opened Mar 26, 2018 by Katharina Baum @kbaum

get_edges_prob() alters state entropy with real-normal edge covariates

Bug report:

Experienced in version 2.26, under Python 2.7 and 3.6, as well as with the latest Docker image (18-03-26).

Bug description

Calling get_edges_prob() alters the state object and gives inconsistent results when using a real-normal edge prior (apparently not with a real-exponential prior or with models without edge covariates).

Example illustrating the problem

import graph_tool.all as gt

g = gt.collection.data['celegansneural']
state = gt.minimize_blockmodel_dl(
    g, state_args=dict(recs=[g.ep.value], rec_types=['real-normal']))
original_entropy = state.entropy()

edge_prob = []
for i in range(10000):
    edge_prob.append(state.get_edges_prob(missing=[], spurious=[(0, 2)]))

original_entropy - state.entropy()  # this is not zero...
edge_prob[0] - edge_prob[-1]        # this is not zero...
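The failing invariant can be stated generically: a read-only query should leave the state's entropy unchanged and return the same value on every call. The sketch below demonstrates that check on a toy stand-in class (hypothetical, not graph-tool) so the expected behaviour of get_edges_prob() is explicit:

```python
import math


class ToyState:
    """Toy stand-in for a blockmodel state (hypothetical, not graph-tool)."""

    def __init__(self, weights):
        self.weights = list(weights)

    def entropy(self):
        # A deterministic summary of the internal state, analogous to
        # the description length returned by state.entropy().
        return sum(w * math.log(w) for w in self.weights)

    def query(self):
        # A correctly side-effect-free query: returns a value without
        # mutating self.weights.
        return min(self.weights)


def is_side_effect_free(state, query, n=1000):
    """Check that repeated calls neither alter the entropy nor drift in value."""
    before = state.entropy()
    results = [query(state) for _ in range(n)]
    after = state.entropy()
    return after == before and results[0] == results[-1]


state = ToyState([1.0, 2.0, 3.0])
print(is_side_effect_free(state, ToyState.query))  # expected: True
```

Applied to the reproduction above, both checks fail for get_edges_prob() with a real-normal prior: the entropy difference and the first-vs-last result difference are nonzero.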
Reference: count0/graph-tool#452