Hi Tiago,
sometimes, when I delete the graph, create a new one, and then plot it, the
plotted graph is shown upside down (in terms of the node text). How can we
control the orientation of the graph? Or which parameter should we use to
always show the node text in the correct orientation? Thanks a lot.
--
View this message in context: http://main-discussion-list-for-the-graph-tool-project.982480.n3.nabble.com…
Sent from the Main discussion list for the graph-tool project mailing list archive at Nabble.com.

Hi,
I'm suffering from the same issue mentioned in this post:
https://git.skewed.de/count0/graph-tool/issues/174
Namely, I'm trying to draw a graph that includes a lot of self-looping
edges, and my labels are being printed upside down. If I remove the
self-loops the labels are shown the right way up.
Is there a fix for it?
Thanks,
Charlie

Dear Tiago,
I have a directed graph of about half a million nodes and approximately a
million edges, following scale-free behaviour with a power-law degree
distribution. To test some of my hypotheses, I would like to generate smaller
random graphs (about 50 up to 200 nodes) that are representative of the big one.
When I use a sampling function that draws directly from the real degree
distribution of the big network, I have the following problems:
- I generate unconnected nodes with both 0 in AND out degree.
- I generate small subgraphs of a few nodes that are not connected to the
main graph.
- If I sample only from nodes with degree at least 1, the generated graph is
coherent, but no longer representative, as I need a large portion of nodes
with either only one in-degree or only one out-degree.
Here is the part of my script I used for that, where samples are drawn from
dictionaries of the degrees:

def sample_in():
    a = np.random.randint(num)
    k_in = in_degrees[a]
    return k_in

def sample_out():
    if sample_in() == 0:  # note: this draws a fresh, independent in-degree sample
        b = np.random.randint(num_out)
        k_out = list(out_zero_zeros.values())[b]  # dict views are not indexable in Python 3
        return k_out
    else:
        b = np.random.randint(num)
        k_out = out_degrees[b]
        return k_out
N = 200
g = gt.random_graph(N, lambda: (sample_in(), sample_out()),
                    model="constrained-configuration", directed=True)
I also tried sampling from a list of tuples, as you have mentioned before in
the forum, but I did not get any results, as the tuples randomly drawn
from my list might not be combinable into a valid degree sequence:

degs = [(7,1), (4,3), (5,6), (2,4), (6,8), (2,0), (3,5), (0,3), (2,7), (2,1)]
g = gt.random_graph(4, lambda i: degs[i], directed=True)
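For what it's worth, one way to avoid both isolated nodes and infeasible pairs is to sample the (in, out) degrees jointly from the empirical joint distribution, rejecting fully isolated nodes. This is only a sketch under assumed inputs: `in_degrees` and `out_degrees` here are small made-up per-node arrays standing in for the real network's degrees.

```python
import numpy as np

rng = np.random.default_rng(42)

# made-up per-node degree arrays standing in for the real network
in_degrees = np.array([0, 1, 2, 0, 3, 1, 0, 2, 1, 4])
out_degrees = np.array([1, 0, 2, 1, 0, 3, 0, 0, 1, 1])

def sample_joint_degrees(n):
    """Sample n (in, out) pairs jointly from the empirical joint
    distribution, rejecting (0, 0) pairs (fully isolated nodes)."""
    pairs = []
    while len(pairs) < n:
        i = rng.integers(len(in_degrees))
        k_in, k_out = int(in_degrees[i]), int(out_degrees[i])
        if k_in == 0 and k_out == 0:
            continue  # never emit a fully isolated node
        pairs.append((k_in, k_out))
    return pairs

degs = sample_joint_degrees(200)
# g = gt.random_graph(200, lambda i: degs[i], directed=True)
```

Since each pair is drawn from a single real node, in/out degree correlations are preserved. Note that a directed random graph still needs the total in-degree to equal the total out-degree, so in practice the whole sequence may have to be resampled until the sums match.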
- Is there any option I could activate that would help in the cases I
described above?
- Is there a better way to create representative small networks?
Any help on that issue will be much appreciated.
Best wishes,
Jana

Hi again,
I'm writing a small package that builds on graph-tool, but not on its graphics capabilities (also because I have to represent things other than the graph itself). Still, I could use some of its functions "under the hood" for my purposes. I have a question about gt.draw.get_hierarchy_control_points(): the function returns the Bézier spline control points for the edges of a given graph, but I'm having difficulty understanding how this information is encoded. For a single edge in the graph, I get dozens of values as control points (a multiple of six, plus 2), so I suspect that all the splines going from node A up to the root of the hierarchy and back down to node B are encoded there, and that the control points should be taken 6 at a time (3x2 coordinates?). How are the (x, y) coordinates of the control points encoded then: (x, x, x, y, y, y) or (x, y, x, y, x, y)? What are the 2 additional values I get in each vector? Also, are the values absolute, or relative to one node in particular (A, B, or the root...)?
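While waiting for a definitive answer, a tiny pure-Python helper makes it easy to try both candidate layouts side by side and compare against the rendered edges. This is just a sketch: treating the 2 extra leading values as a header, and the two decodings below, are guesses, not the documented format.

```python
def decode_interleaved(cp, skip=2):
    """Read cp[skip:] as interleaved (x, y, x, y, ...) pairs."""
    body = cp[skip:]
    return list(zip(body[0::2], body[1::2]))

def decode_blocked(cp, skip=2):
    """Read cp[skip:] as blocked (x, x, ..., y, y, ...) halves."""
    body = cp[skip:]
    half = len(body) // 2
    return list(zip(body[:half], body[half:]))

# toy flat vector: 2 (assumed) header values followed by 3 points
cp = [0.0, 1.0, 10, 20, 30, 40, 50, 60]
print(decode_interleaved(cp))  # [(10, 20), (30, 40), (50, 60)]
print(decode_blocked(cp))      # [(10, 40), (20, 50), (30, 60)]
```

Plotting both decodings over a drawing of the graph should immediately reveal which interpretation (and which reference frame) is the right one.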
Thanks
d

Hi, all,
I am working on time-series networks. I have one network per year, 5 networks
in total, for 1970, 1980, 1990, 2000, and 2010. I then use the layered model
to generate communities:

state_G_layers = graph_tool.all.minimize_blockmodel_dl(
    G_layers, layers=True, deg_corr=True,
    state_args=dict(ec=G_layers.ep.layer, recs=[G_layers.ep.weight],
                    rec_types=["real-exponential"], layers=True))
Then I got 10 communities. My questions:
1. Does this mean that there are 10 communities in each year, i.e., that the
number of communities remains constant over time?
2. If so, how can we detect changes in the number of communities as time goes
by? Over time, some communities may disappear or merge with other
communities, or the whole network may become more homogeneous and form a
single community.
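As intuition for question 2 (not a statement about what the layered model itself reports): since the fit gives one global partition, one way to see per-year variation is to count which blocks are actually "active", i.e. touched by at least one edge, in each year. A pure-Python sketch; `blocks`, `edges`, and `years` are hypothetical stand-ins for the fitted labels and the layered edge list.

```python
import numpy as np

# hypothetical fitted data: block label per node, edges tagged by year
blocks = np.array([0, 0, 1, 1, 2, 2])      # node -> community label
edges = [(0, 1), (2, 3), (4, 5), (0, 2)]   # (u, v) pairs
years = [1970, 1970, 1980, 1980]           # layer (year) of each edge

def communities_per_layer(blocks, edges, years):
    """Count the distinct communities touched by each layer's edges."""
    active = {}
    for (u, v), y in zip(edges, years):
        active.setdefault(y, set()).update((int(blocks[u]), int(blocks[v])))
    return {y: len(s) for y, s in sorted(active.items())}

print(communities_per_layer(blocks, edges, years))
# {1970: 2, 1980: 3}
```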
Any help would be much appreciated.
Best regards,
Jianjian

Hi Tiago,
From the documentation for random_rewire : "If parallel_edges = False, parallel edges are not placed during rewiring. In this case, the returned graph will be a uncorrelated sample from the desired ensemble only if n_iter is sufficiently large."
If I set "model = 'configuration'" and "parallel_edges = True, self_loops = True", will "n_iter = 1" suffice?
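For context on why that regime is special (this is the textbook configuration model, not a claim about graph-tool's internals): when parallel edges and self-loops are both allowed, a uniform sample can be generated in a single pass by randomly pairing half-edge "stubs", with no rewiring sweeps at all. A pure-Python sketch:

```python
import random
from collections import Counter

def configuration_sample(degrees, seed=0):
    """One uniform draw from the configuration model (parallel edges
    and self-loops allowed): shuffle half-edge stubs, pair them up."""
    rng = random.Random(seed)
    stubs = [v for v, k in enumerate(degrees) for _ in range(k)]
    rng.shuffle(stubs)
    # pair consecutive stubs into edges
    return [(stubs[i], stubs[i + 1]) for i in range(0, len(stubs), 2)]

edges = configuration_sample([3, 2, 2, 1])

# the degree sequence is preserved exactly
deg = Counter()
for u, v in edges:
    deg[u] += 1
    deg[v] += 1
print([deg[v] for v in range(4)])  # [3, 2, 2, 1]
```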
Thank you
SS

Hi Tiago,
Thanks for an amazing library! I'm trying to use the multilayer aspect of
graph-tool and I have a few questions that I haven't found answers to
online:
1) How do you access parameters inferred for independent models on separate
layers, say for reproducing Figure 5(b) from the paper? Unless you're just
masking the graph and only drawing edges applicable to each layer?
2) I've seen you say several times that it's simple to incorporate binning
of layers into the model, but I haven't been able to figure out how to do so
- a brief example of how to do so would be hugely appreciated! (or some
initial pointers if you're pressed for time)
3) In this binning procedure, is it possible to keep subsets of layers
separate? Each of my layers is actually defined by a (property X, property
Y) combo, and I would like to investigate how property X affects the
grouping of property Y, as well as whether it noticeably impacts the
network's structure.
Apologies if I've missed the obvious! I've been trying to familiarise myself
quickly with the package, and have hugely appreciated that you've coded this
all yourself in the first place!
Thanks,
John

Hi there,
Layered overlapping models aren't working for GraphViews, due to the
following problematic lines in LayeredBlockState:
if overlap and self.ec is not None:
    self.base_ec = self.base_g.own_property(ec.copy())
    ec = agg_state.eindex.copy()
    pmap(ec, self.ec)  # <-- this line won't work, as the GraphView inherits eindex
    self.ec = ec.copy("int")
Is there an easy fix? Minimum working example below:
import numpy.random as rand
import graph_tool.all as gt

N = 20
L = 3
tg = gt.price_network(N)  # generate random network of size N
elayer = tg.new_ep('int')
E = len(tg.get_edges())
elayer.a = rand.randint(0, high=L, size=E)  # randomly assign each edge to one of L layers
tg.ep['elayer'] = elayer
utg = gt.GraphView(tg, rand.choice([True, False], size=N))  # take random subsample of graph
stest = gt.minimize_nested_blockmodel_dl(utg, layers=True, overlap=True,
                                         state_args=dict(ec=utg.ep.elayer, layers=True))
Thanks,
John

dear graph-tool mailing list,
do you have any recommendations for modelling highly skewed distributions of
discrete edge weights?
my network is a multigraph, which i collapse to a simple graph with edge
weights representing the number of edges in the multigraph between two
vertices.
in my data, the modal edge weight is equal to 1, but the max is above 2000.
if i fit a degree-corrected Poisson SBM to the multigraph, every pair of
firms with a large number of edges between them is grouped together in its
own block. this makes sense, since the poisson model will assign very low
probability to those edges for any value of the poisson parameter that can
rationalize the otherwise sparse rate of edge formation.
while this is not necessarily a problem per se, the large number of blocks
that this creates complicates my analysis considerably, and it would be
useful to use edge-covariates with a distribution that can account for the
skewness to get a smaller number of blocks.
wondering if Tiago or anyone else on the list can suggest a
transformation-distribution combination that might help. i tried (without
thinking too deeply) the transformation weight = log(weight) + 1 with
real-geometric weights, but minimize_blockmodel_dl() was taking an unusually
long time to fit, so i stopped it.
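as a quick pure-numpy sanity check (with synthetic Zipf-distributed weights standing in for the real counts, since the data can't be shared), a log transform does tame the skewness considerably:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic heavy-tailed discrete weights: mode 1, max in the
# thousands (a rough stand-in for the collapsed multigraph counts)
w = rng.zipf(1.8, size=10_000).astype(float)

def skewness(x):
    """Sample skewness (third standardized moment)."""
    x = np.asarray(x, dtype=float)
    return ((x - x.mean()) ** 3).mean() / x.std() ** 3

print(skewness(w))          # very large
print(skewness(np.log(w)))  # far smaller
```

with recs set to the log of the weights, a "real-exponential" edge covariate (as in the layered example elsewhere in this list) may then be better behaved than modelling the raw counts; whether that actually reduces the number of blocks is of course an empirical question.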
the other option that came to my mind was to use a hierarchical SBM and
choose a higher level where the blocks are merged. i haven't read the papers
on hierarchical SBM or used them in graph-tool yet.
thx,
-sam

When a PyDev project includes an import of graph_tool, Eclipse consumes a
very high percentage of CPU cycles.
Steps to reproduce:
1. Open the system monitor.
2. Create the project.
3. Open the source in the editor.
4. On a new line, type "g.".
The Eclipse Java process then climbs to a very high percentage of CPU usage.
This might take a minute to occur; once it happens, it continues until
Eclipse is closed, and the code analysis never seems to complete.
When this problem occurred inside a VirtualBox VM, typing became so slow as
to be useless; running in a non-virtual environment, the entire system
became slow.
Source code in pydev project module:
from graph_tool.all import *
g = Graph()
v1 = g.add_vertex()
v2 = g.add_vertex()
e12 = g.add_edge(v1, v2)
print(g)
g.add
OS: Ubuntu 18.04
Eclipse:
Version: 2020-06 (4.16.0)
Build id: 20200615-1200
PyDev for Eclipse 7.7.0.202008021154 org.python.pydev.feature.feature.group
Python interpreter: python3.8 in venv (created with pipenv)
graph_tool requires matplotlib, numpy, and scipy in order to prevent
warnings, but these might not be needed to reproduce the problem.
<https://nabble.skewed.de/file/t496263/processes-detail-cr-2k.png>
<https://nabble.skewed.de/file/t496263/resources.png>