Hi Tiago,
sometimes, when I delete a graph, create a new one and then plot it, the
plotted graph is shown upside down (in terms of the node text). How can
we control the orientation of the graph, or which parameter should we use
to always show the node text in the correct orientation? Thanks a lot.
--
View this message in context: http://main-discussion-list-for-the-graph-tool-project.982480.n3.nabble.com…
Sent from the Main discussion list for the graph-tool project mailing list archive at Nabble.com.

Hi,
I'm suffering from the same issue mentioned in this post:
https://git.skewed.de/count0/graph-tool/issues/174
Namely, I'm trying to draw a graph that includes a lot of self-looping
edges, and my labels are being printed upside down. If I remove the
self-loops the labels are shown the right way up.
Is there a fix for it?
Thanks,
Charlie

Dear Tiago,
I have a directed graph of about half a million nodes and approximately a
million edges, following scale-free behaviour with a power-law degree
distribution. To test some of my hypotheses, I would like to generate smaller
random graphs (from about 50 up to 200 nodes) that are representative of the
big one. When I use a sampling function that samples directly from the real
degree distribution of the big network, I run into the following problems:
- I generate unconnected nodes, with both in- and out-degree equal to 0.
- I generate small components of a few nodes that are not connected to the
main graph.
- If I sample only from nodes with degree at least 1, the generated graph is
connected, but no longer representative, as I need a large portion of nodes
with only a single in- or out-degree.
Here is the part of my script I use for this, where samples are drawn from
dictionaries of the degrees:

def sample_in():
    a = np.random.randint(num)
    k_in = in_degrees[a]
    return k_in

def sample_out():
    if sample_in() == 0:
        b = np.random.randint(num_out)
        # dict views are not indexable in Python 3, hence the list()
        k_out = list(out_zero_zeros.values())[b]
        return k_out
    else:
        b = np.random.randint(num)
        k_out = out_degrees[b]
        return k_out

N = 200
g = gt.random_graph(N, lambda: (sample_in(), sample_out()),
                    model="constrained-configuration", directed=True)
I also tried sampling from a list of tuples, as you have mentioned before in
the forum, but I did not get any results, as the tuples drawn at random from
my list might not form a realizable degree sequence.
degs=[(7,1),(4,3),(5,6),(2,4),(6,8),(2,0),(3,5),(0,3),(2,7),(2,1)]
g = gt.random_graph(4, lambda i: degs[i], directed=True)
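As a side note (not from the original post): one way to preserve the in/out-degree correlations is to sample joint (in, out) pairs from the empirical list and resample the whole sequence until the in- and out-degree totals match, so that the sequence is realizable by the configuration model. A numpy-only sketch, with a hypothetical degs array standing in for the real joint-degree data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical empirical joint (in, out) degree pairs from the large graph
degs = np.array([(7, 1), (4, 3), (5, 6), (2, 4), (6, 8),
                 (2, 0), (3, 5), (0, 3), (2, 7), (2, 1)])

def sample_joint_degrees(degs, N, max_tries=10000):
    """Draw N (in, out) pairs whose in- and out-degree totals match."""
    for _ in range(max_tries):
        sample = degs[rng.integers(len(degs), size=N)]
        if sample[:, 0].sum() == sample[:, 1].sum():
            return sample
    raise RuntimeError("no realizable degree sequence found")

sample = sample_joint_degrees(degs, 50)
# The resulting sequence could then be fed to gt.random_graph via
# lambda i: tuple(sample[i])
```

Rejection sampling is wasteful but keeps the empirical joint distribution intact, which per-marginal sampling does not.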
- Is there any option I could activate that would help in the cases I
described above?
- Is there a better way to create representative small networks?
Any help on that issue will be much appreciated.
Best wishes,
Jana

Hello, I need some hints about generating graphs using gt.generate_sbm. I have used it before to generate a random graph with a predefined modular structure, by passing the blocks and their connectivities.
I now have a more complicated problem: I am inferring communities from a complete weighted graph, and I want to generate random graphs using the inferred structure. Is it possible to generate the graph with edge weights modeled after the types used to define the original BlockState?
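For what it's worth, I am not aware of generate_sbm sampling edge weights for you. One pragmatic sketch (all names hypothetical, numpy only) is to fit a simple weight model per block pair from the original graph and draw new weights from it after generating the topology:

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)

# Hypothetical inputs: block label per node, original edges and weights
b = np.array([0, 0, 0, 1, 1, 1])
edges = [(0, 1), (1, 2), (0, 3), (2, 4), (3, 4), (4, 5)]
weights = np.array([1.0, 1.2, 5.0, 4.8, 0.9, 1.1])

# Group observed weights by (block, block) pair
pair_w = defaultdict(list)
for (s, t), w in zip(edges, weights):
    pair_w[(b[s], b[t])].append(w)

def sample_weight(s, t):
    # Fit a normal distribution per block pair and sample from it
    # (the distributional form is an assumption, not graph-tool's model)
    w = np.asarray(pair_w[(b[s], b[t])])
    return rng.normal(w.mean(), w.std())

# Weights for the edges of a newly generated graph (e.g. from gt.generate_sbm)
new_edges = [(0, 2), (1, 4), (3, 5)]
new_weights = [sample_weight(s, t) for s, t in new_edges]
```

In graph-tool itself, the weighted SBM (the recs/rec_types arguments to BlockState) models the weights during inference, but as far as I know the generation step for weights is still up to you.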
Best,
d

Hi,
I have a question about edge attributes. Currently, I am using graphs that have three edge properties: the first is a float which serves as the edge weight, the second is a string that can take only one of two values, and the third is an edge id.
Does the nested stochastic block model use the weight and the other binary string property for finding the hierarchies?
Best,
Hristo

Hi. I have a device with an Apple M1 chip, and I'm trying to build a graph-tool .deb package in a Debian Docker container on the arm64 architecture. I am using the (https://git.skewed.de/count0/graph-tool/-/blob/master/release/debian/Docker…) image as the base of the Dockerfile. The dpkg-buildpackage command runs and the C++ files start compiling, but the process consumes a great deal of memory (about 10 GB). My device does not have that much memory, so the build crashes. Can we expect a ready-made package for the arm64 architecture in the near future, or can you suggest steps to reduce the build's memory usage?
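Not an official recipe, but two build-time knobs that usually lower peak memory on small machines (assumptions: the package honours dpkg-buildflags, and a slower, less optimized build is acceptable):

```shell
# One compiler job at a time: parallel jobs multiply peak memory use
export DEB_BUILD_OPTIONS="parallel=1"

# Lower optimization and make GCC collect its internal garbage sooner;
# -O1 typically needs far less RAM than the default -O2
export DEB_CXXFLAGS_APPEND="-O1 --param ggc-min-expand=10 --param ggc-min-heapsize=32768"

dpkg-buildpackage -us -uc
```

Adding swap on the container's host is another blunt but often effective option.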

hi,
I'm writing to report several issues with the multicanonical sampler, using graph-tool version 2.37. (I haven't been able to log in to the GitLab tracker.)
First, with NestedBlockState:
g = gt.collection.data["celegansneural"]
state = gt.NestedBlockState(g)
nbins = 100
S0 = state.entropy()
Smin, Smax = S0 * 0.90, S0 * 1.1
ms = gt.MulticanonicalState(state, Smin, Smax, nbins=nbins)
gt.multicanonical_equilibrate(ms)
This returns:
/usr/lib/python3/dist-packages/graph_tool/inference/mcmc.py in sweep(self, **kwargs)
426
427 def sweep(self, **kwargs):
--> 428 self._state.multicanonical_sweep(self, **kwargs)
429
430 def get_energies(self):
TypeError: multicanonical_sweep() takes 1 positional argument but 2 were given
Then with BlockState:
state = gt.BlockState(g)
nbins = 100
S0 = state.entropy()
Smin, Smax = S0 * 0.90, S0 * 1.1
ms = gt.MulticanonicalState(state, Smin, Smax, nbins=nbins)
gt.multicanonical_equilibrate(ms)  # THIS IS OK
ds, nattempts, nmoves = state.multicanonical_sweep(ms, niter=10)
The last line fails with the following output:
/usr/lib/python3/dist-packages/graph_tool/inference/blockmodel.py in _multicanonical_sweep_dispatch(self, multicanonical_state)
1702 _get_rng())
1703 else:
-> 1704 return libinference.multicanonical_sweep(multicanonical_state,
1705 self._state, _get_rng())
1706
TypeError: No registered converter was able to extract a C++ reference to type boost::any from this Python object of type NoneType
Thanks for this wonderful module!

Hi team.
I'm wondering whether you could help me understand what is happening with your reduced_mutual_information() function, because of several mismatching outputs I found with this implementation.
1. RMI is supposed to be a value in [0, 1], so why is the output in your example negative when I compare two partitions?
x = np.random.randint(0, 10, 1000)
y = np.random.randint(0, 10, 1000)
gt.reduced_mutual_information(x, y)
-0.065562...
2. In your example, you create two partitions from a random distribution. Isn't this exactly the case in which RMI should be zero, or very close to zero?
3. When I use the exact partitions Newman offers in his own code (wine.txt), your function gives
0.7890319931250596
But the Newman function gives
Reduced mutual information M = 1.21946279985 bits per object
Why are these results so different, and how can they be related?
4. Finally, what is (or where is) the description of the format in which the partitions must be passed to the function?
I mean, I'm confused about how the x (and y) variables should be arranged. Is each index the node label? If so, how does one express nodes belonging to several groups?
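On the format question, my understanding (an assumption on my part, not from documentation quoted here) is that the partitions are flat arrays where x[i] is the single group label of node i, so overlapping memberships cannot be expressed in this form. On point 3, when comparing against Newman's code, note that a value reported in bits per object differs from a total in nats by a factor of N·ln 2:

```python
import numpy as np

# x[i] is the group label of node i; one label per node, so
# overlapping group memberships cannot be expressed this way
N = 1000
x = np.random.randint(0, 10, N)

# Converting a hypothetical total value in nats to bits per object
total_nats = 546.9  # placeholder number, not a real RMI output
bits_per_object = total_nats / (N * np.log(2))
```

Whether the two implementations also differ in normalization is a separate question worth checking.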
Thanks in advance for your answers, and congratulations on creating this tool!
JM