I have a graph with 20 vertices, indexed 0, 1, 2, ..., 19. The vertex with
index 12 is connected to all other vertices except 3 and 15. I want to
remove vertex 12 and all its neighbors, i.e. keep only vertices 3 and 15 in
the graph. What should I do? I tried the remove_vertex() API, but something
seems to go wrong. Any help would be appreciated.
--
Sent from: http://main-discussion-list-for-the-graph-tool-project.982480.n3.nabble.com/

I have an edge property for weight. The weight of each edge is an integer between 1 and 4. The unweighted global clustering coefficient is about 0.45, but the weighted global clustering coefficient is -0.61. Are negative values within the expected range?
Additionally, if I filter on edge weight and then calculate the weighted clustering, I get values > 1.0. If all weights are 3, for example, would we not expect the weighted and unweighted coefficients to be the same? The unweighted coefficient for that example is reasonable.
The graph has 22K vertices and 3.7M edges.
I suspect this is something I have overlooked, as I am new to graph-tool.
Thank you.

Hi,
I've calculated betweenness for each node in a network of approximately
100,000 nodes and one million edges. For almost all nodes, betweenness
values range from 0 to 9,149.8877, in gradual increments. However, the
highest nine values are as follows: 3 nodes have a value of 10,000, 5
nodes have a betweenness of 20,000, and one node has a value of 40,000.
These nodes do have relatively high out- and in-degree, but there is nothing
else unusual about them.
I calculated these values in graph-tool about a year ago.
Is this normal for the betweenness calculation (i.e. values >= 10,000 going
up in increments of 10,000), or has something gone wrong?

Hello Tiago,
> you cannot replace the underlying graph from under the hood like this.
> What you can do, however, is to work with the same filtered graph the
> whole time, and then change the values of the filter property map
> dynamically.
Thanks for the hint, this is exactly what I need!
In case anyone else is interested in this great feature, I've written a
small minimal working example, see below.
Best regards
Rolf
##############################################################################
import graph_tool.all as gt
def mykeypressed(self, g, keyval, picked, pos, vprops, eprops):
    if chr(keyval) == '1':
        g.vp.myfilter.a = True          # show all vertices again
    elif chr(keyval) == '2':
        g.vp.myfilter.a = [n % 2 == 0 for n in g.vertex_index]
    elif chr(keyval) == '3':
        g.vp.myfilter.a = [n % 3 == 0 for n in g.vertex_index]
    else:
        return
    self.fit_to_window()
    self.regenerate_surface(reset=True)
    self.queue_draw()

g1 = gt.random_graph(10, lambda: (3, 3))
g1.vp.myfilter = g1.new_vertex_property('bool')
g1.vp.myfilter.a = True
g1.set_vertex_filter(g1.vp.myfilter)
gt.interactive_window(g1, key_press_callback=mykeypressed,
                      vertex_text=g1.vertex_index)
##############################################################################
--
-----------------------------------------------------------------------
Rolf Sander phone: [+49] 6131/305-4610
Max-Planck Institute of Chemistry email: rolf.sander(a)mpic.de
PO Box 3060, 55020 Mainz, Germany homepage: www.rolf-sander.net
-----------------------------------------------------------------------
https://www.encyclopedia-of-geosciences.net
https://www.geoscientific-model-development.net
-----------------------------------------------------------------------

Hi,
I am not quite sure whether this is a bug or a design decision (it seems
quite inconvenient, so I guess it may be a bug), but the `collection` module
is not accessible from the top-level module object.
More specifically, this code does not run:
import graph_tool as gt
gt.collection.data['karate']
----
AttributeError: module 'graph_tool' has no attribute 'collection'
----
And according to the documentation it should
(https://graph-tool.skewed.de/static/doc/collection.html)
As a matter of fact, the same problem also affects other submodules; for
instance, `spectral` also cannot be accessed as an attribute of the
top-level module `gt` after importing.
Graph-tool version I am using is: '2.31 (commit b1411e3e, Sun Mar 29
21:53:41 2020 +0200)'
I installed it a few days ago from conda-forge, so I guess it is up-to-date.
Thanks in advance for any help with this problem (maybe I am doing something
wrong?). And thanks for this great package!
Best,
Szymon.

Hi,
I am using the minimize_blockmodel_dl() function and the LayeredBlockState
class to conduct community detection with metadata. I would like to know
whether it is possible to specify the block-number limits separately for the
user clusters and the metadata clusters (instead of a single limit on the
total number of blocks) during the SBM inference, since we want different
granularity for user clusters and metadata clusters.
(P.S. I tried using the nested SBM and merging the user clusters and the
metadata clusters separately. It worked to some extent, but it seems
impossible to accurately control the final number of clusters, which we
might need.)
Thanks!
Yan

Hi,
I am working on a very large graph of companies and wanted to make some
functions to easily filter out certain subgraphs I would need for some
calculations.
So I made a graph G and populated it with nodes and edges and some internal
property maps because I don't want to always remake this graph. The point is
to just write the complete graph out to a file once I get through all the
data cleaning and have a final graph in a file.
So this gives me:
G, a <Graph object, directed, with 11944189 vertices and 7828750 edges at
0x7f49254e90f0>
with
G.list_properties()
ID (vertex) (type: string)
company_country (edge) (type: string)
shareholder_country (edge) (type: string)
shareholderdirect (edge) (type: double)
Now I just want to do a filtering based on these properties as suggested
earlier in this forum:
g_AT = GraphView(G, efilt=G.ep.company_country.a == 'AT')
But as mentioned in the docs for internal properties: "Internal graph
property maps behave slightly differently. Instead of returning the property
map object, the value itself is returned from the dictionaries"
Which I guess is why running G.ep.company_country.a gives me None, while
running G.ep['company_country'][next(G.edges())] gives me 'AT'.
So for filtering I now do:
ep_filter = G.new_ep('bool')
for e in G.edges():
    ep_filter[e] = G.ep['company_country'][e] == 'AT'
But I was wondering if there is some way to avoid going through this edge by
edge and instead get the whole PropertyArray back, which would be more
elegant and avoid constantly making new boolean properties.
PS: for the double type, G.ep.shareholderdirect.a does give me a nice
PropertyArray which I can use directly, in the form
G.ep.shareholderdirect.a > .5, which gives me an easy-to-use filtering array
to pass into GraphViews.
Thanks in advance,
Milan

Hello,
I am trying to spread an infection over a random graph where the beta is an
edge property. I want to implement something that could simulate the
lockdown, this is what I wrote:
import matplotlib.pyplot as plt
from graph_tool.all import *
import numpy as np
import random

# graph parameters
N = 6000          # number of vertices
max_count = 100   # number of time steps
beta = 0.01
time_cut = 10

def deg_sample(k):
    return np.random.poisson(k)

def evolution(G, beta, counts, perc):
    eprop = G.new_edge_property("double")
    eprop.a = beta
    state = SIState(G, beta=eprop, constant_beta=False)
    infected = [state.get_state().fa.sum()]
    for i in range(counts):
        state.iterate_sync()
        infected.append(state.get_state().fa.sum())
        if i == time_cut and perc != 0:
            # set beta to zero on a random fraction perc of the edges
            n = np.array(random.sample(range(G.num_edges()),
                                       int(G.num_edges() * perc / 100)))
            eprop.a[n] = 0
    return infected

G = random_graph(N, lambda: deg_sample(5), directed=False)
G = extract_largest_component(G)
graph_draw(G, output='network_layout.pdf')
x = evolution(G, beta, max_count, 100)
plt.plot(x)
plt.xlabel(r"Time")
plt.ylabel(r"Infectious nodes")
plt.title('infected vs time with all edges cut at time=%d' % time_cut)
plt.tight_layout()
plt.show()
In evolution() I change the beta of all edges of the graph at a given
time step, so I would expect the infection to stop spreading after I change
the edge property map. But that doesn't happen: the infection continues to
spread through the network.
I would like to understand better what is happening inside the SIState
function.
Regards,
BH.

Hi, everyone!
I ran into a problem while learning how to use graph-tool. I read the paper
"Network reconstruction and community detection from dynamics" and am trying
to reproduce its results. When I followed the same settings for real
networks with synthetic dynamics, the similarities were only about 0.2. I
have a question about how to control the number of infection events per
node, a, for the first model, and the number of micro-states, M, for the
second model. The whole process is shown below.
import graph_tool.all as gt
import numpy as np

g = gt.collection.konect_data["openflights"]  # airport network with SIS dynamics
gt.remove_parallel_edges(g)
g = gt.extract_largest_component(g, prune=False)

# Simulation of an empirical dynamical model.
# The algorithm accepts multiple independent time series for the
# reconstruction. We will generate 100 SIS cascades starting from a
# random node each time, with uniform infection probability beta=0.2.
ss = []
for i in range(100):
    si_state = gt.SISState(g, beta=.2)
    s = [si_state.get_state().copy()]
    for j in range(10):
        si_state.iterate_sync()
        s.append(si_state.get_state().copy())
    # Each time series should be represented as a single vector-valued
    # vertex property map with the states for each node at each time.
    s = gt.group_vector_property(s)
    ss.append(s)

# Prepare the initial state of the reconstruction as an empty graph
u = g.copy()
u.clear_edges()
ss = [u.own_property(s) for s in ss]  # time-series properties need to be
                                      # 'owned' by graph u

# Create reconstruction state
rstate = gt.EpidemicsBlockState(u, s=ss, beta=None, r=1e-6, global_beta=.2,
                                state_args=dict(B=20), nested=False,
                                aE=g.num_edges())

# Now we collect the marginals for exactly 10,000 sweeps, at
# intervals of 10 sweeps:
gm = None
bm = None
betas = []

def collect_marginals(s):
    global gm, bm
    gm = s.collect_marginal(gm)
    b = gt.perfect_prop_hash([s.bstate.b])[0]
    bm = s.bstate.collect_vertex_marginals(bm, b=b)
    betas.append(s.params["global_beta"])

gt.mcmc_equilibrate(rstate, force_niter=1000,
                    mcmc_args=dict(niter=10, xstep=0),
                    callback=collect_marginals)

print("Posterior similarity: ",
      gt.similarity(g, gm, g.new_ep("double", 1), gm.ep.eprob))
print("Inferred infection probability: %g ± %g" % (np.mean(betas),
                                                   np.std(betas)))

##########################################################

# A food-web network with Ising dynamics.
g = gt.GraphView(gt.collection.konect_data["maayan-foodweb"], directed=True)
gt.remove_parallel_edges(g)

# The algorithm accepts multiple independent time series for the
# reconstruction. We will generate 1000 Ising runs, each starting from a
# random state, with uniform inverse temperature beta=0.1.
ss = []
for i in range(1000):
    si_state = gt.IsingGlauberState(g, beta=.1)
    s = [si_state.get_state().copy()]
    si_state.iterate_async(niter=1000)
    s.append(si_state.get_state().copy())
    # Each time series should be represented as a single vector-valued
    # vertex property map with the states for each node at each time.
    s = gt.group_vector_property(s)
    ss.append(s)

u = g.copy()
u.clear_edges()
ss = [u.own_property(s) for s in ss]
# The reconstruction starts from the empty graph u, as above.
rstate = gt.PseudoIsingBlockState(u, s=ss, beta=0.1, state_args=dict(B=1),
                                  nested=False, aE=g.num_edges())

gm = None
bm = None
betas = []

def collect_marginals(s):
    global gm, bm
    gm = s.collect_marginal(gm)
    b = gt.perfect_prop_hash([s.bstate.b])[0]
    bm = s.bstate.collect_vertex_marginals(bm, b=b)
    betas.append(s.params["beta"])

gt.mcmc_equilibrate(rstate, force_niter=1000,
                    mcmc_args=dict(niter=10, xstep=0),
                    callback=collect_marginals)

print("Posterior similarity: ",
      gt.similarity(g, gm, g.new_ep("double", 1), gm.ep.eprob))
print("Inverse temperature: %g ± %g" % (np.mean(betas), np.std(betas)))
Moreover, I also wonder how to do a nested version for the same network.
Please let me know if you need more information. Otherwise, I hope to hear
how this can be achieved using graph-tool.
Thanks,
Gege Hou