Hi Tiago,

Thanks for the explanation. I have another question:

In the "Inferring the mesoscale structure of layered, edge-valued and time-varying networks" paper, you compared two ways of constructing layered structures: in the first approach, you assumed an independent adjacency matrix in each layer; in the second, the collapsed graph is obtained by merging all the adjacency matrices together.

I am wondering how I can use graph_tool for the first method. Which method or class should I use? If there is such a class, is it still possible to consider a graph with weighted edges?

Thanks again.

Regards,

Zahra

On Mon, Jul 16, 2018 at 4:38 PM, Tiago de Paula Peixoto <tiago@skewed.de> wrote:

On 16.07.2018 at 15:15, Zahra Sheikhbahaee wrote:

> For the non-parametric weighted SBMs, how can I extract the "description
> length" from the state.entropy() method? Is it also equivalent to having
> the maximum entropy value after running the algorithm multiple times?

The entropy() method returns the negative joint log-likelihood of the data
and model parameters. For discrete data and model parameters, this equals
the description length.

For the weighted SBM with continuous covariates, the data and model are no
longer discrete, so this value can no longer be called a description length,
although it plays the same role. However, for discrete covariates, it is the
description length.

> I also have a theoretical question: I read most of your recent papers and I
> see this statement, but I could not find an explanation of why it is the case.
> Why do you use the "micro-canonical formulation"? You stated that "it
> approaches the canonical distribution asymptotically". In case you have
> explained it in one of your papers, would you kindly refer me to the right
> paper?

The microcanonical model is identical to the canonical model, if the latter
is integrated over its continuous parameters using uninformative priors, as
explained in detail here:

https://arxiv.org/abs/1705.10225

Therefore, in a Bayesian setting, it makes no difference which one is used,
as they yield the same posterior distribution.

The main reason to use the microcanonical formulation is that it makes it
easier to extend the Bayesian hierarchy, i.e. to include deeper priors and
hyperpriors, thus achieving more robust models without a resolution limit,
accommodating arbitrary group sizes and degree distributions, etc. Within the
canonical formulation, this is technically more difficult.

_______________________________________________
graph-tool mailing list
graph-tool@skewed.de
https://lists.skewed.de/mailman/listinfo/graph-tool