get_edges_prob() alters state entropy with real-normal edge covariates
Bug report:
Observed in version 2.26, under both Python 2.7 and 3.6, as well as with the latest Docker image (18-03-26).
Bug description
Calling get_edges_prob() alters the state object in place, so repeated calls give inconsistent results when a 'real-normal' edge covariate prior is used (the problem apparently does not occur with the 'real-exponential' prior or with models without edge covariates).
Example illustrating the problem
import graph_tool.all as gt

g = gt.collection.data["celegansneural"]
state = gt.minimize_blockmodel_dl(g, state_args=dict(recs=[g.ep.value],
                                                     rec_types=["real-normal"]))
original_entropy = state.entropy()

edge_prob = []
for i in range(10000):
    edge_prob.append(state.get_edges_prob(missing=[], spurious=[(0, 2)]))

original_entropy - state.entropy()  # this is not zero...
edge_prob[0] - edge_prob[-1]        # this is not zero...
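Until the underlying mutation is fixed, a possible mitigation is to run each query on a throwaway copy of the state so the original is never touched. The sketch below illustrates the pattern with a toy stand-in class (not graph-tool itself; the buggy cache is invented purely to mimic the observed symptom):

```python
import copy

class ToyState:
    """Toy stand-in for a BlockState whose query method leaks side effects."""
    def __init__(self):
        self._cache = 0.0

    def entropy(self):
        return 100.0 + self._cache

    def get_edges_prob(self):
        # buggy: mutates internal state as a side effect of a read-only query
        self._cache += 0.001
        return -5.0 - self._cache

state = ToyState()
before = state.entropy()

# workaround: query deep copies, leaving the original state untouched
edge_prob = [copy.deepcopy(state).get_edges_prob() for _ in range(10000)]

print(state.entropy() == before)        # True: original state unchanged
print(edge_prob[0] == edge_prob[-1])    # True: results now consistent
```

With graph-tool one would copy the real state object instead (e.g. via its copy method) before each get_edges_prob() call; this trades extra memory/time per query for reproducible results.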