Sometimes, when I delete the graph, create a new one, and then plot it, the
plotted graph is shown upside down (in terms of the node text). How can we
control the orientation of the graph? Or which parameter should we use to
always make the node text appear in the correct orientation? Thanks a lot.
View this message in context: http://main-discussion-list-for-the-graph-tool-project.982480.n3.nabble.com…
Sent from the Main discussion list for the graph-tool project mailing list archive at Nabble.com.
I have a directed graph of about half a million nodes and approximately a
million edges, following scale-free behaviour and a power-law degree
distribution. To test some of my hypotheses, I would like to generate smaller
random graphs (about 50 up to 200 nodes) that are representative of the big one.
When I used a sample function that samples directly from the real degree
distribution of the big network, I ran into the following problems:
- I generate unconnected nodes with both in- and out-degree 0.
- I generate small sub-parts of a few nodes that are not connected to the
rest of the graph.
- If I sample only from nodes with degree at least 1, the generated graph is
coherent, but no longer representative, as I need a big portion of nodes
with either only one in-degree or one out-degree.
Here is the part of my script I used for that, where samples are drawn from
dictionaries of the degrees:
k_in = in_degrees[a]
g = gt.random_graph(N, lambda: (sample_in(), sample_out()), directed=True)
I also tried sampling from a list of tuples, as you have mentioned before in
the forum, but I didn't get any results, as the tuples randomly drawn
from my list might not be combinable into a valid degree sequence.
g = gt.random_graph(4, lambda i: degs[i], directed=True)
- Is there any option I could activate that would help me in those cases?
- Is there a better way how to create representative small networks?
Any help on that issue will be much appreciated.
I'm writing a small package that builds on graph-tool, but not on its graphics capabilities (also because I have to represent other things than the graph itself). Still, I could use some of its functions "under the hood" for my purposes. I have a question about gt.draw.get_hierarchy_control_points(): the function returns the Bézier spline control points for the edges in a given graph, but I'm having difficulties understanding how this information is encoded. For a single edge in the graph, I get dozens of values as control points (a multiple of six, plus 2), hence I suspect that all splines going from node A to the root of the hierarchy and back to node B are encoded there, and that the control points should be taken 6 by 6 (3x2 by 3x2 coordinates?). How are the (x, y) control points encoded then: (x, x, x, y, y, y) or (x, y, x, y, x, y)? What are the 2 additional values I get for each vector? Also, are the values absolute, or relative to one node in particular (A, B, or the root...)?
From the documentation for random_rewire : "If parallel_edges = False, parallel edges are not placed during rewiring. In this case, the returned graph will be a uncorrelated sample from the desired ensemble only if n_iter is sufficiently large."
If I set "model = 'configuration'" and "parallel_edges = True, self_loops = True", will "n_iter = 1" suffice?
I'm using the Python graph_tool library. I see there is the pseudo_diameter
method, but I was looking for something that can give me the exact diameter,
and I cannot seem to find it.
Could anyone shed some light on where this is (or why it is not included)?
Hi graph-tool community!
I am currently stuck on a timing problem.
I need to execute this function:
def evolutionSIR(G, ep_beta, count, v0):
    state = SIRState(G, beta=ep_beta, gamma=gamma, v0=v0, constant_beta=True)
    s = state.get_state()
    temp_state = np.zeros((count + 1, G.num_vertices()), dtype=int)
    temp_state[0, :] = s.a[:]
    for i in range(count):
        state.iterate_sync()  # advance one step (this call seems to have been lost in the paste)
        s = state.get_state()
        temp_state[i + 1, :] = s.a[:]
    return temp_state
After this, I use a function to determine the timestep at which each community
gets infected.
My PC runs the *evolutionSIR* function in 1.1 s ± 48 ms, and the total time
to run both functions is 1.16 s ± 48 ms. For my study I need to run the two
functions O(1E6) times, and that takes a long time on my personal computer.
Reading the documentation, I noticed that *iterate_sync* has the *niter*
parameter and I tried it like this:
v0 = np.random.choice(G.get_vertices(), 1)
state = SIRState(G, beta=ep_beta, gamma=gamma, v0=v0, constant_beta=True)
state.iterate_sync(niter=count)
Doing this, it turns out to be almost 2.85 times faster (385 ms ± 39.3 ms)
than the *evolutionSIR* function I wrote before, but the problem is that I
can't get the states at each evolution step.
Is there a way to get the history of the diffusion using *iterate_sync*?
Or is there a way to optimize the *evolutionSIR* function?
Hello everyone, I'm having a hard time dealing with multiple edges in a graph
when using gt.shortest_path with negative weights. Here is a simple piece of
code that creates a simple graph to show my problem:
g70 = gt.Graph()
edge_weight = g70.new_edge_property("double")
g70.edge_properties["weight"] = edge_weight
edges = [[0, 1], [1, 2], [0, 2], [0, 2]]
weights = [-1, 0, -2, 0]
for i in range(len(edges)):
    e = g70.add_edge(edges[i][0], edges[i][1])
    g70.ep.weight[e] = weights[i]
for path in gt.shortest_path(g70, 0, 2, weights=g70.ep.weight,
                             negative_weights=True):
    print(path)
gt.graph_draw(g70, vertex_text=g70.vertex_index, edge_text=g70.ep.weight)
As you can see in the image, there are two edges from node 0 to node 2. The
solution that appears before the image says: <Edge object with source
'0' and target '2' at 0x7f2b70d49930>, meaning that the shortest path from
node 0 to node 2 is an edge from node 0 to node 2. However, it doesn't
specify which of the two edges, (0,2) with weight 0 or (0,2) with weight -2,
is the solution.
Since this is a small part of another algorithm I'm writing, I also need
to know the total weight of the path (-2 in this case), because I'm using
Bellman-Ford as a solution to linear inequalities. So I tried accessing the
edge through its endpoints, like g70.ep.weight[g70.edge(path[node], path[node+1])],
but because the path doesn't specify which of the two edges is the solution, I
can't seem to find the SPECIFIC edge that appears in the path. (In this case
it was simple, (0->2); however, in my program a path can be, for example,
(0->4->5->6), and I have two edges (5->6).)
TL;DR: I have a directed graph with multiple edges and negative weights. I
plan to use Bellman Ford to solve a small system of linear inequalities.
After using gt.shortest_path, how can I access each EDGE of the path in
order to sum each weight of the specific edge that appears in the path?
We noticed that, with certain weighted graphs, minimize_blockmodel_dl() tends to put hubs (vertices with many edges) into the same cluster. Please find a minimal example below, which produces the clustered graph in the attached plot. This happens even if the edge weights are distributed uniformly over the edges. Is this intended behavior?
We wonder if a possible explanation could be that the WSBM is fit to predict edge weights *as well as edge probabilities*. (Compare to formulas (1) and (4) in .) Hence, vertices with similar degrees tend to end up in the same cluster, if the edge weights do not contradict this. Is this correct?
In case the above makes any sense, is there a way to suppress the likelihood of the edge probabilities as in  where the alpha-parameter can be used to fit "only to the weight information"? (Compare to formula (4) in .)
This is also related to the question we asked here:
Dominik (and Enrique)
 Tiago Peixoto. 2017. Nonparametric weighted stochastic block models. Physical Review E, 97.
 C. Aicher, A. Z. Jacobs, and A. Clauset. 2014. Learning latent block structure in weighted networks. Journal of Complex Networks, 3(2):221–248.
import graph_tool.all as gt

h = gt.Graph(directed=False)
# ... (graph construction omitted in the original post) ...
ew = h.new_edge_property("double")
ew.a = [1.0] * len(h.get_edges())  # constant weights; the actual value was lost in the post
h.ep['weight'] = ew
state = gt.minimize_blockmodel_dl(
    h, state_args=dict(recs=[ew],
                       rec_types=["real-exponential"]))  # assumed WSBM call; the original arguments were truncated
b = state.get_blocks()
b = gt.perfect_prop_hash([b])[0]