The algorithms in graph-tool are written in C++, just like the algorithms
in igraph are written in C. In both cases, the C++/C code gets compiled.
Graph-tool's Python library, like the higher-level libraries for igraph,
is only an interface to the underlying compiled code.
Therefore, there is no relationship between the "filtering capabilities"
increasing compile time and the runtime speed or memory consumption of
the algorithms.
For the kind of tasks you describe, if igraph does it within your memory
limits, graph-tool should do it as well. And for the numbers you give, the
memory usage should be orders of magnitude lower than what you're expecting.
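To make that concrete, here is a rough back-of-envelope estimate in plain Python for a graph of the size mentioned below (~400,000 vertices, ~300,000 edges). The byte counts are illustrative assumptions (64-bit indices, doubles for a centrality vector); real containers add per-structure overhead, but not four orders of magnitude of it:

```python
# Back-of-envelope memory estimate for a graph of roughly the size
# described in this thread. All sizes are illustrative assumptions:
# 8-byte (64-bit) integers for vertex indices, 8-byte doubles for a
# per-vertex centrality score.

N_VERTICES = 400_000
N_EDGES = 300_000
BYTES_PER_INT = 8     # assumed 64-bit vertex index
BYTES_PER_DOUBLE = 8  # one score per vertex

# Edge list: two endpoint indices per edge.
edge_list_bytes = N_EDGES * 2 * BYTES_PER_INT

# One double per vertex, e.g. for an eigenvector-centrality vector.
centrality_bytes = N_VERTICES * BYTES_PER_DOUBLE

total_mb = (edge_list_bytes + centrality_bytes) / 1e6
print(f"edge list:         {edge_list_bytes / 1e6:.1f} MB")
print(f"centrality vector: {centrality_bytes / 1e6:.1f} MB")
print(f"total:             {total_mb:.1f} MB")
```

Even allowing a generous constant factor for library bookkeeping, the working set stays in the tens of megabytes, nowhere near 90-100 GB.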
On Tue, Feb 9, 2021 at 12:12 AM nikhrao <nikhrao(a)umich.edu> wrote:
Thanks for the response! Yes, sorry for the confusion. I think I have a key
misunderstanding (I haven't used python before). When you refer to
compilation, does python compile the code I write as I write it? Or rather,
is the code compiled once I decide to run it, and it's at this step that
memory usage is high due to filtering?
For context, the networks that I'm working with have around 400,000 vertices
and 300,000 edges. If I tried fairly rudimentary tasks such as computing the
degree of each vertex or computing eigenvector centrality, would these
require large amounts of RAM (say, 90-100 GB) due to the filtering
capabilities that graph-tool has? I'm just trying to get an idea of the upper
bound of the memory requirements that I may need on my server.
graph-tool mailing list