Hi Tiago,
I am trying to calculate the shortest distances in a graph after applying a
filter. My code looks like this:
g = gt.load_graph("myGraph.xml", format="xml")
# for later use
distances = gt.shortest_distance(g)
# extract the components of the graph
comp, hist = gt.label_components(g)
# this splits the graph into several components;
# I want to calculate the shortest distances
# for component 2, for example
filtering = g.new_vertex_property("boolean")
for v in g.vertices():
    if comp[v] == 2:
        filtering[v] = True
    else:
        filtering[v] = False
# set the vertex filter
g.set_vertex_filter(filtering)
distances_comp = gt.shortest_distance(g)
The last line of code raises a segmentation fault. I have plotted the
filtered graph and it is correct, and I can also calculate the local
clustering coefficient without problems. Am I doing something wrong?
Is there another way to filter the graph and calculate the shortest
distances? Or is this a bug?
Thanks so much,
Juan

I am trying to use graph-tool together with the multiprocessing library in
Python 2.6, but I keep running into issues when trying to share a full graph
object. I can store the graph in a multiprocessing.Namespace, but it doesn't
keep the dicts of properties. Example:
def initNS(ns):
    _g = Graph(directed=False)
    ns.graph = _g
    ns.edge_properties = {
        'genres': _g.new_edge_property("vector<string>"),
        'movieid': _g.new_edge_property("int"),
    }
    ns.vertex_properties = {
        'personid': _g.new_vertex_property("int32_t")
    }
    """
    Build property maps for edges and vertices to hold our data.
    The graph vertices represent actors, whereas movies represent edges.
    """
    # Edges
    _g.edge_properties["genres"] = ns.edge_properties['genres']
    _g.edge_properties["movieid"] = ns.edge_properties['movieid']
    # Vertices
    _g.vertex_properties["personid"] = ns.vertex_properties['personid']
    ns.graph = _g
##########
This initializes 'ns', which is a multiprocessing.Namespace. The problem is
that, for example, ns.edge_properties[ * ] tells me that the type isn't
picklable. I tried to skip that and use _g.edge_properties to access them
instead, but those dicts aren't carried over to the other processes in the
pool, presumably because they aren't picklable either.
Any thoughts about how to fix this?
(For those interested, I'm attempting to use the IMDbPy library to do some
graph analysis of the relationships among actors and movies. Each process
has its own DB connection, and I'm trying to populate the graph with actor
and movie information in parallel, since it's a pretty large and dense
graph: somewhere in the neighborhood of 250,000+ vertices for a depth of
just three relationships.)
Thanks,
--
Derek

Hi!
I was wondering whether it is possible to overlay two drawings of the same
graph, each using differently color-coded vertex sizes. These colors would
need to be transparent, and the graph layout must match in each drawing. Is
there an easy way of doing this with graph-tool?
Thanks!
Regards,
--
Sebastian Weber
Group of Cell Communication and Control
Freiburg Institute for Advanced Studies - FRIAS
School of Life Sciences - LIFENET
Albert-Ludwigs-Universität Freiburg
Albertstr. 19
79104 Freiburg
T.: +49-761-203-97237
Fax:+49-761-203-97334

Hi, with ./configure I get this error
checking python2.7 module: numpy... yes
checking for
/usr/lib/python2.7/dist-packages/numpy/core/include/numpy/arrayobject.h...
no
configure: error: Numpy extension header not found
I am using the latest numpy, built from source.
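In case it helps: two things that usually resolve this (both are assumptions about the setup, not verified against this machine) are installing the distribution's numpy package, which ships the headers, or pointing configure directly at the include directory that numpy itself reports:

```shell
# Debian/Ubuntu: the headers ship with the numpy package (name assumed)
sudo apt-get install python-numpy

# or point configure at the headers of a (possibly source-built) numpy
./configure CPPFLAGS="-I$(python -c 'import numpy; print(numpy.get_include())')"
```

The second form works regardless of where numpy was installed, since numpy.get_include() always reports the active installation.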
--
View this message in context: http://main-discussion-list-for-the-graph-tool-project.982480.n3.nabble.com…
Sent from the Main discussion list for the graph-tool project mailing list archive at Nabble.com.

Hi!
I just discovered quite a strange behavior in graph-tool. I want to keep only the largest component of my network, and the following code
comp = label_largest_component(G)
lc = comp.a == 1
Gv = Graph(GraphView(G, vfilt=lc), prune=True)
simply gives wrong results: it deletes vertices that are in the largest component and have comp[v] == 1!
Instead, the much slower
G.remove_vertex_if(lambda v: comp[v] == 0)
call works just fine. My version of graph-tool is close to 2.2.12, but I haven't seen any changes on git that seem to relate to this issue. It is quite scary that vertex deletion appears somewhat unpredictable; or am I doing something wrong here?
Any help would be great here.
Cheers,
--
Sebastian Weber

Hi,
I can't get past the ./configure because it can't find boost, but I
installed boost under /usr/local/boost... where does it look for boost?
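configure only searches the standard prefixes by default; for a Boost installed under /usr/local/boost it can usually be pointed there explicitly. The exact flags depend on the configure script in use, so treat these as a sketch:

```shell
# pass the custom Boost prefix to the compiler and linker
./configure CXXFLAGS="-I/usr/local/boost/include" \
            LDFLAGS="-L/usr/local/boost/lib"

# some autoconf setups also accept a dedicated option
./configure --with-boost=/usr/local/boost
```

If the libraries live in a non-standard directory, the runtime linker may also need LD_LIBRARY_PATH (or an rpath) pointing at the same lib directory.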

Hi,
it seems that I have a problem with the graph_tool.draw.random_layout(g,
shape=None, pos=None, dim=2) function.
I wrote a script that loads a graph from a dot file and then tries to
assign a random position to each vertex.
I found that the random_layout function should do that, but it doesn't work
for me.
Here is the code:
g = load_graph(infile+".dot")
pos = gt.graph_draw(g, output=None) # initial configuration
pos = gt.random_layout(g, shape=tuple, pos=pos, dim=2)
the error is:
  File "searchA.py", line 39, in <module>
    pos = gt.random_layout(g, shape=tuple, pos=pos, dim=2)
  File "/usr/local/lib/python2.6/dist-packages/graph_tool/draw/__init__.py", line 531, in random_layout
    pos = ungroup_vector_property(pos)
TypeError: ungroup_vector_property() takes at least 2 arguments (1 given)
Could you help me solve this problem?
I'd like to assign a random position to each vertex of my graph.
Thank you in advance :)

Apologies if I'm missing something, but the PageRank algorithm doesn't seem
to be normalizing by the number of nodes in the graph, as the definition
suggests.
This can be easily seen even in the example in the documentation
(http://projects.skewed.de/graph-tool/doc/centrality.html#graph_tool.central…):
nodes with no in-links should have PageRank centrality of (1 - d) / N (the
Γ⁻(v) set is empty, so the sum term is zero). Therefore, in a 100-node
graph, as in the example, such nodes should have centrality of
(1 - 0.8) / 100 = 0.002; however, the lowest centrality of any node in the
graph is 0.2 (which occurs six times), which leads me to believe that the
algorithm is not dividing by N as in the definition.
Please let me know if I'm making a mistake, and thanks for your wonderful
package!