Thanks for the explanation. I have another question:
In "Inferring the mesoscale structure of layered, edge-valued and time-varying networks", you compare two ways of constructing layered structures. In the first approach, each layer is assumed to have its own independent adjacency matrix. In the second, the collapsed graph is obtained by merging all of the adjacency matrices together.
I am wondering how to use graph_tool for the first method. Which method or class should I use? And if there is such a class, is it still possible to use a graph with weighted edges?
On Mon, Jul 16, 2018 at 4:38 PM, Tiago de Paula Peixoto email@example.com wrote:
Am 16.07.2018 um 15:15 schrieb Zahra Sheikhbahaee:
For the non-parametric weighted SBMs, how can I extract the "description length" from the state.entropy() method? Is it also equivalent to the maximum entropy value obtained after running the algorithm multiple times?
The entropy() method returns the negative joint log-likelihood of the data and model parameters. For discrete data and model parameters, this equals the description length.
For the weighted SBM with continuous covariates, the data and model are no longer discrete, so this value can no longer be called a description length, although it plays the same role. However, for discrete covariates, it is the description length.
I also have a theoretical question: why do you use the "micro-canonical formulation"? I read most of your recent papers and see the statement that "it approaches the canonical distributions asymptotically", but I could not find a more detailed explanation of why this is the case. In case you explained it in one of your papers, would you kindly refer me to the relevant one?
The microcanonical model is identical to the canonical model, if the latter is integrated over its continuous parameters using uninformative priors, as explained in detail here:
Therefore, in a Bayesian setting, it makes no difference which one is used, as they yield the same posterior distribution.
The main reason to use the microcanonical formulation is that it makes it easier to extend the Bayesian hierarchy, i.e. to include deeper priors and hyperpriors, thus achieving more robust models without a resolution limit, accommodating arbitrary group sizes and degree distributions, etc. Within the canonical formulation, this is technically more difficult.
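Schematically, the equivalence mentioned above can be written as follows (the symbols here are illustrative shorthand, not the exact notation of any one paper):

```latex
% Canonical (e.g. Poisson) likelihood, integrated over its continuous
% rate parameters \lambda with uninformative priors, yields a sum over
% discrete edge counts e between groups:
P(\bm{A}\mid\bm{b})
  = \int P(\bm{A}\mid\bm{\lambda},\bm{b})\,P(\bm{\lambda})\,\mathrm{d}\bm{\lambda}
  = \sum_{\bm{e}} P(\bm{A}\mid\bm{e},\bm{b})\,P(\bm{e}\mid\bm{b}),
```

where the right-hand side is a microcanonical likelihood with its (discrete) parameter prior, so both routes lead to the same marginal posterior over the partitions $\bm{b}$.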
-- Tiago de Paula Peixoto firstname.lastname@example.org
graph-tool mailing list email@example.com https://lists.skewed.de/mailman/listinfo/graph-tool