Ni!

The algorithms in graph-tool are written in C++, just like the algorithms in igraph are written in C. In both cases, the C++/C code is compiled before installation.

Graph-tool's Python library, like the higher-level libraries for igraph, is only an interface to the underlying compiled code. The compilation happens once, when the library is built and installed, not when you write or run your Python scripts.

Therefore, the "filtering capabilities" that increase compile time have no bearing on graph-tool's runtime speed or memory consumption.
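
To illustrate (a minimal sketch; the graph and filter below are made up for the example), filtering at runtime is just a boolean property map consulted by already-compiled code:

import graph_tool.all as gt

# Small throwaway graph (hypothetical data).
g = gt.Graph(directed=False)
g.add_edge_list([(0, 1), (1, 2), (2, 3), (3, 0)])

# A filter is a boolean property map; keep vertices with index < 3.
keep = g.new_vertex_property("bool")
for v in g.vertices():
    keep[v] = int(v) < 3

view = gt.GraphView(g, vfilt=keep)
print(view.num_vertices())  # prints 3; nothing is recompiled here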

For the kind of things you describe, if igraph does it within your memory limits, graph-tool should do it as well. And for the numbers you give, the memory usage should be orders of magnitude lower than what you're expecting.
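
As a rough sketch (the edge list below is synthetic, just matching the sizes you mention), both tasks are one-liners in graph-tool and should need memory on the order of tens of MB, not tens of GB:

import numpy as np
import graph_tool.all as gt

# Synthetic graph with ~400,000 vertices and 300,000 edges.
N, E = 400_000, 300_000
rng = np.random.default_rng(42)
edges = rng.integers(0, N, size=(E, 2))

g = gt.Graph(directed=False)
g.add_vertex(N)
g.add_edge_list(edges)

deg = g.degree_property_map("total")  # degree of each vertex
ev_val, ev_map = gt.eigenvector(g)    # eigenvector centrality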

Cheers,
.~´


On Tue, Feb 9, 2021 at 12:12 AM nikhrao <nikhrao@umich.edu> wrote:
Hi Alexandre,

Thanks for the response! Yes, sorry for the confusion. I think I have a key
misunderstanding (I haven't used python before). When you refer to
compilation, does python compile the code I write as I write it? Or rather,
is the code compiled once I decide to run it, and it's at this step that the
memory usage is high due to filtering?

For context, the networks that I'm working with have around 400,000 vertices
and 300,000 edges. If I tried fairly rudimentary tasks such as computing the
degree for each vertex or computing eigenvector centrality, would these
require large amounts of RAM (say, 90-100 GB) due to the filtering capabilities
that graph-tool has? I'm just trying to get an idea of the upper bound of
memory requirements that I may need on my server.

Thanks,
N


