On 27.04.21 at 11:50, Lietz, Haiko wrote:
I am going to compare NestedBlockState.entropy() of the two runs, but I am not sure this is correct.
How should I take into account the fact that the networks are slightly different?
Would normalization make the two entropies comparable? I'd be interested to hear opinions on using, as the normalization, the entropy of a NestedBlockState in which each node is in its own group.
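For concreteness, a minimal sketch of what such a comparison could look like in graph-tool (the graphs below are hypothetical stand-ins for the two networks, and the trivial-state construction is only one way to encode the normalization idea above, not an established procedure):

    import numpy as np
    import graph_tool.all as gt

    # Hypothetical stand-ins for the two slightly different networks.
    g1 = gt.collection.data["football"]
    g2 = gt.collection.data["polbooks"]

    # Fit a nested SBM to each network; entropy() returns the
    # description length (DL) in nats.
    state1 = gt.minimize_nested_blockmodel_dl(g1)
    state2 = gt.minimize_nested_blockmodel_dl(g2)
    dl1 = state1.entropy()
    dl2 = state2.entropy()

    # The normalization idea from the question: the DL of a trivial
    # hierarchy where each node is its own group at the base level,
    # merged into a single group at the top.
    N = g1.num_vertices()
    trivial = gt.NestedBlockState(g1, bs=[np.arange(N), np.zeros(N, dtype=int)])
    print(dl1 / trivial.entropy())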
The description length (DL) tells you how much information is needed to encode both the network and the model parameters. If we compare the DL for the same network but different models, this tells us which model compresses the data the most. But if we compare two different networks with two different models, this tells us very little, because it mixes a comparison of how regular each network is with the quality of fit of each model.

The result of this kind of comparison is often trivial: the more nodes and edges, the higher the DL will be. You *could* compute something like the DL per edge in order to compare two networks, but since the DL is not a linear function of the number of nodes or edges, it is difficult to put this evaluation on solid statistical grounds.

Best,
Tiago

--
Tiago de Paula Peixoto <tiago@skewed.de>
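For what it's worth, a minimal sketch of the per-edge DL heuristic mentioned above, with the same caveat that the DL is not linear in the number of nodes or edges (the graphs are again hypothetical stand-ins):

    import graph_tool.all as gt

    # Hypothetical stand-ins for the two networks under comparison.
    g1 = gt.collection.data["football"]
    g2 = gt.collection.data["polbooks"]

    state1 = gt.minimize_nested_blockmodel_dl(g1)
    state2 = gt.minimize_nested_blockmodel_dl(g2)

    # DL per edge -- a rough heuristic only, not a statistically
    # well-founded comparison.
    print(state1.entropy() / g1.num_edges())
    print(state2.entropy() / g2.num_edges())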