> It's possible that the compiler is able to compile this in less time and memory than what is currently done with MPL + boost::any (which predates the existence of std::variant). It's something to be considered.
I'm sorry to say that in my experiments it made only a small difference. I think the main benefit would be felt at runtime. It may be worth it for that, though.
> I don't find this kind of modification interesting, because the trade-off goes precisely in the opposite direction of the library's intention: you're sacrificing run time for compile time. In particular, you replace _every_ lookup of property maps with a variant visit, including type conversion. Sure, that compiles faster, because it reduces the type combinations, but it just shifts the burden to run time.
Quite true! Thanks for considering it, though.

Best,
Jeff

On Fri, Feb 7, 2020 at 12:51 PM Tiago de Paula Peixoto <tiago@skewed.de> wrote:
> On 07.02.20 at 20:28, Jeff Trull wrote:
>>> But that is not the relevant pattern. What we require is the *joint* instantiation of the A, B, and C types, and a dispatch that is specialized for this joint combination, and hence is as fast as possible.
>> The joint combination is straightforward:
>>
>>     template <class A, class B, class C>
>>     void fooImpl(A a, B b, C c);
>>
>>     void foo(std::variant<A1..> avar,
>>              std::variant<B1..> bvar,
>>              std::variant<C1..> cvar)
>>     {
>>         std::visit([](auto a, auto b, auto c) { fooImpl(a, b, c); },
>>                    avar, bvar, cvar);
>>     }
>> but in this case, as you pointed out, we don't get much compile-time advantage, though we do still enjoy the fast dispatch.
> It's possible that the compiler is able to compile this in less time and memory than what is currently done with MPL + boost::any (which predates the existence of std::variant). It's something to be considered.
>>> What you are describing (independent dispatching for every type) is not very different from dynamic typing, unless the code can be divided into clearly disjoint blocks as in your example, which is not the case for most algorithms in graph-tool.
>> I will defer to your judgment on that one, though perhaps surprisingly I found this worked in the first (and only) algorithm I tried applying it to: assortativity. I selected it based on how long it took to build. The results were:
>>     boost::any + typelist:               177 s, 4.5 GB memory
>>     std::variant for edge weights only:   37 s, 1.74 GB memory
>> The memory reduction is very useful in that it enables parallel builds.
>> The prototype can be found here:
>> https://git.skewed.de/jaafar/graph-tool/compare/master...feature%2Fvariant-c...
> I don't find this kind of modification interesting, because the trade-off goes precisely in the opposite direction of the library's intention: you're sacrificing run time for compile time. In particular, you replace _every_ lookup of property maps with a variant visit, including type conversion. Sure, that compiles faster, because it reduces the type combinations, but it just shifts the burden to run time. This could also be achieved in the traditional way of doing run-time polymorphism, virtual functions, etc.
> Best,
> Tiago
> --
> Tiago de Paula Peixoto <tiago@skewed.de>
> _______________________________________________
> graph-tool mailing list
> graph-tool@skewed.de
> https://lists.skewed.de/mailman/listinfo/graph-tool