Hi Tiago,
sometimes, when I delete the graph, create a new one, and then plot it, the
plotted graph is shown upside down (in terms of the node text). How can we
control the orientation of the graph, or which parameter should we use so
that the node text is always shown in the correct orientation? Thanks a lot.
--
View this message in context: http://main-discussion-list-for-the-graph-tool-project.982480.n3.nabble.com…
Sent from the Main discussion list for the graph-tool project mailing list archive at Nabble.com.

Hi,
I'm suffering from the same issue mentioned in this post:
https://git.skewed.de/count0/graph-tool/issues/174
Namely, I'm trying to draw a graph that includes a lot of self-looping
edges, and my labels are being printed upside down. If I remove the
self-loops the labels are shown the right way up.
Is there a fix for it?
Thanks,
Charlie

Dear Tiago,
I have a directed graph of about half a million nodes and approximately a
million edges, exhibiting scale-free behaviour with a power-law degree
distribution. To test some of my hypotheses, I would like to generate smaller
random graphs (about 50 to 200 nodes) that are representative of the big one.
When I use a sampling function that draws directly from the real degree
distribution of the big network, I have the following problems:
- I generate isolated nodes with both in- and out-degree 0.
- I generate small components of a few nodes that are not connected to the
main graph.
- If I sample only from nodes with degree at least 1, the generated graph is
connected, but no longer representative, as I need a large portion of nodes
with either only one in-edge or only one out-edge.
Here is the relevant part of my script, where the samples are drawn from
dictionaries of the degrees:
def sample_in():
    a = np.random.randint(num)
    return in_degrees[a]

def sample_out():
    if sample_in() == 0:
        b = np.random.randint(num_out)
        # dict views are not indexable in Python 3, so convert to a list first
        return list(out_zero_zeros.values())[b]
    else:
        b = np.random.randint(num)
        return out_degrees[b]

N = 200
g = gt.random_graph(N, lambda: (sample_in(), sample_out()),
                    model="constrained-configuration", directed=True)
I also tried sampling from a list of tuples, as you have mentioned before in
the forum, but I did not get any results, as the tuples randomly drawn
from my list might not be combinable.
degs=[(7,1),(4,3),(5,6),(2,4),(6,8),(2,0),(3,5),(0,3),(2,7),(2,1)]
g = gt.random_graph(4, lambda i: degs[i], directed=True)
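To illustrate the consistency issue, here is a plain-Python sketch (no graph-tool calls; the helper function and its parameters are my own invention): a directed degree sequence can only be realized if the total in-degree equals the total out-degree, so one can resample the pairs until the totals agree.

```python
import random

# Hypothetical helper: draw n (in, out) degree pairs from an empirical list
# of pairs, resampling until the total in-degree equals the total out-degree,
# a necessary condition for a realizable directed degree sequence.
def sample_degree_sequence(pairs, n, max_tries=100000, seed=42):
    rng = random.Random(seed)
    for _ in range(max_tries):
        sample = [rng.choice(pairs) for _ in range(n)]
        if sum(k_in for k_in, _ in sample) == sum(k_out for _, k_out in sample):
            return sample
    raise RuntimeError("no consistent sample found; increase max_tries")

degs = [(7, 1), (4, 3), (5, 6), (2, 4), (6, 8), (2, 0), (3, 5), (0, 3), (2, 7), (2, 1)]
seq = sample_degree_sequence(degs, 20)
# seq now has equal in- and out-degree totals
```

The resulting sequence could then be passed to gt.random_graph via lambda i: seq[i], though note that matching totals is necessary but not sufficient for the sequence to be graphical without parallel edges or self-loops.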
- Is there any option I could activate that would help me in the cases I
described above?
- Is there a better way to create representative small networks?
Any help on that issue will be much appreciated.
Best wishes,
Jana

Hi team.
I'm wondering whether you could help me understand what is happening with your reduced_mutual_information() function, because of several mismatching outputs I found with this implementation.
1. RMI is supposed to be a value in [0, 1], so why is the output negative in your example when I compare two partitions?
x = np.random.randint(0, 10, 1000)
y = np.random.randint(0, 10, 1000)
gt.reduced_mutual_information(x, y)
-0.065562...
2. In your example, you create two partitions from a random distribution. Is this not precisely the case where RMI should be zero, or very close to zero?
3. When I use the exact partitions Newman offers in his own code (wine.txt), your function gives
0.7890319931250596
But the Newman function gives
Reduced mutual information M = 1.21946279985 bits per object
Why are these results so different, and how can we relate them?
4. Finally, what is (or where is) the description of the format in which the partitions must be passed to the function?
I'm confused about how the x (or y) variables should be arranged. Is each index the node label? If so, how do I represent nodes that belong to several groups?
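For reference, here is the format I am assuming (my assumption, not taken from the docs): x is a flat array where position i holds the group label of node i. The small check below, using plain (unreduced) mutual information computed from the contingency counts, shows that only the grouping matters, not the label values.

```python
from math import log
from collections import Counter

# Assumed partition format: position i holds the group label of node i.
x = [0, 0, 1, 1, 2]   # nodes 0-1 in group 0, nodes 2-3 in group 1, node 4 in group 2
y = [1, 1, 0, 0, 2]   # the same grouping with the labels permuted

# Plain (unreduced) mutual information from the contingency counts.
def mutual_information(x, y):
    n = len(x)
    nxy = Counter(zip(x, y))
    nx, ny = Counter(x), Counter(y)
    return sum(c / n * log(n * c / (nx[a] * ny[b]))
               for (a, b), c in nxy.items())

# Permuting the group labels leaves the value unchanged.
```

If that is the correct convention, overlapping memberships would presumably need a different encoding.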
Thanks in advance for your answers, and congratulations on creating this tool!
JM

> I am going to compare NestedBlockState.entropy() of the two runs, but I am not sure this is correct.
> How should I take into account the fact that the networks are slightly different?
Would normalization make the two entropies comparable? I'd be interested to hear opinions about using, for normalization, the entropy of a NestedBlockState where each node is in its own group.
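To make the proposal concrete, here is a plain-arithmetic sketch (all numbers invented for illustration): divide each run's entropy by the entropy of the trivial state in which every node is its own group, so the two values become dimensionless and, one would hope, comparable across the slightly different networks.

```python
# Invented example values: S_a, S_b would come from NestedBlockState.entropy()
# of the two runs, and S_triv_* from states with each node in its own group.
S_a, S_b = 12345.0, 13010.0
S_triv_a, S_triv_b = 20480.0, 21930.0

norm_a = S_a / S_triv_a  # dimensionless description length of run A
norm_b = S_b / S_triv_b  # dimensionless description length of run B
# The normalized values could then be compared between the two networks.
```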
Best
Haiko

Hi all,
I have a PartitionModeState for multiple NestedBlockState models. Using pmode.get_marginal(g) I can retrieve the 2D matrix of node marginals for the deepest level of the nested hierarchy. If M contains the marginals for level 0, and C is a binary matrix with the correspondences between level 0 (rows) and any other level (columns), then M @ C gives me the nodes' marginals for any other level in the hierarchy. As the NSBM groups blocks hierarchically, how can I get the marginals of the blocks identified at one level with respect to the containing level? E.g., if I have 150 blocks at level 0, what are their marginals over the 33 blocks at level 1? Does this make any sense?
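To make the lifting step concrete, here is a toy numpy sketch (made-up numbers, not graph-tool output): M holds per-node marginal counts over three level-0 blocks, C maps each level-0 block to its level-1 parent, and M @ C gives the marginals over the two level-1 blocks.

```python
import numpy as np

# Toy marginal counts: rows are nodes, columns are level-0 blocks.
M = np.array([[3, 1, 0],    # node 0 sits mostly in block 0
              [0, 2, 2],    # node 1 is split between blocks 1 and 2
              [1, 0, 3]])   # node 2 sits mostly in block 2

# Binary correspondence: level-0 blocks 0 and 1 merge into level-1 block 0,
# and level-0 block 2 maps to level-1 block 1.
C = np.array([[1, 0],
              [1, 0],
              [0, 1]])

M1 = M @ C  # per-node marginal counts over the level-1 blocks
# Row sums are preserved, since each level-0 block has exactly one parent.
```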
Best,
d
_______________________________________________
graph-tool mailing list -- graph-tool(a)skewed.de
To unsubscribe send an email to graph-tool-leave(a)skewed.de

Hi.
I'm trying to create an interactive web plot from the result obtained with minimize_nested_blockmodel_dl.
So far, I'm able to draw the edge and node positions in WebGL. However, I have an issue:
how can I get the node positions and edges of the dendrogram graph?
graph-tool result https://ibb.co/hLKrcxs
webgl result https://ibb.co/YLnhy83
```python
state = gt.inference.minimize.minimize_nested_blockmodel_dl(g,
...
pos, tg, tpos = state.draw(**options)

# bezier control points
cts = gt.draw.get_hierarchy_control_points(g, tg, tpos)
```
Thanks

Hello there,
I'm having issues installing GTK3 on a machine, but I would like to use graph-tool without any plotting support; I only need to fit very large models.
I wonder if it is (or will be) possible to import graph_tool in a "light" way, without plotting support.
Thanks
d

Dear Community,
I've been working on community detection for some time already (Louvain/Leiden algorithms), but I got started with graph-tool only recently. After spending some time scrolling through the HOWTO, I still haven't found answers to my two questions, which concern the applicability of the SBM to my data.
I'm analysing political coalitions by combining data from a survey, newspapers, and Twitter. To study coalitions across the three weighted networks (or more, if I include temporal slices), I am using LayeredBlockState through NestedBlockState.
Question 1:
I haven't found an answer to whether it is possible to run the SBM on layered networks with edge weights. I can work around this by extracting the noise-corrected 'backbone' of each network, which yields simple graphs, but ideally I would not do so much violence to the data and could instead use the raw weighted networks as input. Is it possible to run LayeredBlockState with edge weights?
Question 2:
Even if LayeredBlockState does not (yet) support weighted networks, I also have another, more fundamental question. As my three layers come from different data-generation processes, they do not share exactly the same set of nodes. For example, an organisation that responded to the survey does not necessarily appear in the newspaper data. Is it possible to specify constraints for certain nodes in certain layers that would tell LayeredBlockState not to consider layer-specific isolates?
This is what my current code looks like, but I am not sure I can trust the results due to the issue outlined in Q2.
# g is a binary multigraph with three layers stored in g.ep.layer
state = gt.inference.nested_blockmodel.NestedBlockState(g, base_type=LayeredBlockState, state_args=dict(ec=g.ep.layer, layers=True))
After running this, I sample from the posterior distribution.
Any help is much appreciated.