Commit a8504509 authored by Alexandre Duret-Lutz

introduce output_aborter, and use it in ltlcross

* spot/twaalgos/alternation.cc, spot/twaalgos/alternation.hh,
spot/twaalgos/complement.cc, spot/twaalgos/complement.hh,
spot/twaalgos/determinize.cc, spot/twaalgos/determinize.hh,
spot/twaalgos/minimize.cc, spot/twaalgos/minimize.hh,
spot/twaalgos/postproc.cc, spot/twaalgos/postproc.hh,
spot/twaalgos/powerset.cc, spot/twaalgos/powerset.hh,
spot/twaalgos/product.cc, spot/twaalgos/product.hh: Use an
output_aborter argument to abort if the output is too large.
* bin/ltlcross.cc: Use complement() with an output_aborter
so that ltlcross will not attempt to build complements larger
than 500 states or 5000 edges.  Add --determinize-max-states
and --determinize-max-edges options.
* tests/core/ltlcross3.test, tests/core/ltlcrossce2.test,
tests/core/sccsimpl.test, tests/core/wdba2.test,
tests/python/stutter-inv.ipynb: Adjust test cases.
* NEWS: Document this.
* bin/spot-x.cc: Add documentation for postprocessor's
det-max-states and det-max-edges arguments.
* doc/org/ltlcross.org: Update description.
parent 5c3a33f7
Pipeline #9527 passed with stages in 158 minutes and 17 seconds
......@@ -8,6 +8,19 @@ New in spot 2.7.4.dev (not yet released)
- ltldo, ltlcross, and autcross now prefer posix_spawn()
over fork()+exec() when available.
- ltlcross has new options --determinize-max-states=N and
--determinize-max-edges=M to restrict the use of
determinization-based complementation to cases where it produces
automata with at most N states and M edges. By default
determinization is now attempted up to 500 states and 5000
edges. This is an improvement over the previous default where
determinization-based complementation was not performed at all,
unless -D was specified.
- ltlcross will now skip unnecessary cross-checks and
consistency-checks (they are unnecessary when all automata
could be complemented and statistics were not required).
Library:
- Add generic_accepting_run() as a variant of generic_emptiness_check() that
......@@ -32,7 +45,7 @@ New in spot 2.7.4.dev (not yet released)
allows "autfilt [-D] --small" to minimize very-weak automata
whenever they are found to represent obligation properties.)
- There is a new spot::scc_and_mark_filter objet that simplify the
- There is a new spot::scc_and_mark_filter object that simplifies the
creation of filters to restrict spot::scc_info to some particular
SCC while cutting new SCCs on given acceptance sets. This is used
by spot::generic_emptiness_check() when processing SCCs
......@@ -51,6 +64,20 @@ New in spot 2.7.4.dev (not yet released)
acceptance condition. The output can be alternating only if the
input was alternating.
- There is a new class output_aborter that is used to specify
upper bounds on the size of automata produced by some algorithms.
Several functions have been changed to accept an output_aborter.
This includes:
* tgba_determinize()
* tgba_powerset()
* minimize_obligation()
* minimize_wdba()
* remove_alternation()
* product()
* the new complement()
* the postprocessor class, via the "det-max-states" and
"det-max-edges" options.
- SVA's first_match operator can now be used in SERE formulas and
that is supported by the ltl_to_tgba_fm() translation. See
doc/tl/tl.pdf for the semantics. *WARNING* Because this adds a
......@@ -66,7 +93,7 @@ New in spot 2.7.4.dev (not yet released)
terms of the existing PSL operators. ##[+] and ##[*] are sugar
for ##[1:$] and ##[0:$].
- spot::relabel_apply() make it easier to reverse the effect
- spot::relabel_apply() makes it easier to reverse the effect
of spot::relabel() or spot::relabel_bse() on formulas.
- The LTL simplifier learned the following optional rules:
......
// -*- coding: utf-8 -*-
// Copyright (C) 2012-2018 Laboratoire de Recherche et Développement
// Copyright (C) 2012-2019 Laboratoire de Recherche et Développement
// de l'Epita (LRDE).
//
// This file is part of Spot, a model checking library.
......@@ -55,7 +55,7 @@
#include <spot/twaalgos/sccinfo.hh>
#include <spot/twaalgos/isweakscc.hh>
#include <spot/twaalgos/word.hh>
#include <spot/twaalgos/dualize.hh>
#include <spot/twaalgos/complement.hh>
#include <spot/twaalgos/cleanacc.hh>
#include <spot/twaalgos/alternation.hh>
#include <spot/misc/formater.hh>
......@@ -86,6 +86,8 @@ enum {
OPT_BOGUS,
OPT_CSV,
OPT_DENSITY,
OPT_DET_MAX_STATES,
OPT_DET_MAX_EDGES,
OPT_DUPS,
OPT_FAIL_ON_TIMEOUT,
OPT_GRIND,
......@@ -118,8 +120,18 @@ static const argp_option options[] =
{ "no-complement", OPT_NOCOMP, nullptr, 0,
"do not complement deterministic automata to perform extra checks", 0 },
{ "determinize", 'D', nullptr, 0,
"determinize non-deterministic automata so that they"
"always determinize non-deterministic automata so that they"
" can be complemented; also implicitly sets --products=0", 0 },
{ "determinize-max-states", OPT_DET_MAX_STATES, "N", 0,
"attempt to determinize non-deterministic automata so they can be "
"complemented, unless the deterministic automaton would have more "
"than N states. Without this option or -D, determinizations "
"are attempted up to 500 states.", 0 },
{ "determinize-max-edges", OPT_DET_MAX_EDGES, "N", 0,
"attempt to determinize non-deterministic automata so they can be "
"complemented, unless the deterministic automaton would have more "
"than N edges. Without this option or -D, determinizations "
"are attempted up to 5000 edges.", 0 },
{ "stop-on-error", OPT_STOP_ERR, nullptr, 0,
"stop on first execution error or failure to pass"
" sanity checks (timeouts are OK)", 0 },
......@@ -195,6 +207,10 @@ static bool allow_dups = false;
static bool no_checks = false;
static bool no_complement = false;
static bool determinize = false;
static bool max_det_states_given = false;
static bool max_det_edges_given = false;
static unsigned max_det_states = 500U;
static unsigned max_det_edges = 5000U;
static bool stop_on_error = false;
static int seed = 0;
static unsigned products = 1;
......@@ -427,6 +443,28 @@ parse_opt(int key, char* arg, struct argp_state*)
case 'D':
determinize = true;
products = 0;
max_det_states = -1U;
max_det_edges = -1U;
if (max_det_states_given)
error(2, 0, "Options --determinize-max-states and "
"--determinize are incompatible.");
if (max_det_edges_given)
error(2, 0, "Options --determinize-max-edges and "
"--determinize are incompatible.");
break;
case OPT_DET_MAX_EDGES:
max_det_edges_given = true;
max_det_edges = to_pos_int(arg, "--determinize-max-edges");
if (determinize)
error(2, 0, "Options --determinize-max-edges and "
"--determinize are incompatible.");
break;
case OPT_DET_MAX_STATES:
max_det_states_given = true;
max_det_states = to_pos_int(arg, "--determinize-max-states");
if (determinize)
error(2, 0, "Options --determinize-max-states and "
"--determinize are incompatible.");
break;
case ARGP_KEY_ARG:
if (arg[0] == '-' && !arg[1])
......@@ -434,6 +472,9 @@ parse_opt(int key, char* arg, struct argp_state*)
else
tools_push_trans(arg);
break;
case OPT_AMBIGUOUS:
opt_ambiguous = true;
break;
case OPT_AUTOMATA:
opt_automata = true;
break;
......@@ -466,16 +507,6 @@ parse_opt(int key, char* arg, struct argp_state*)
want_stats = true;
json_output = arg ? arg : "-";
break;
case OPT_PRODUCTS:
if (*arg == '+')
{
products_avg = false;
++arg;
}
products = to_pos_int(arg, "--products");
if (products == 0)
products_avg = false;
break;
case OPT_NOCHECKS:
no_checks = true;
no_complement = true;
......@@ -486,6 +517,16 @@ parse_opt(int key, char* arg, struct argp_state*)
case OPT_OMIT:
opt_omit = true;
break;
case OPT_PRODUCTS:
if (*arg == '+')
{
products_avg = false;
++arg;
}
products = to_pos_int(arg, "--products");
if (products == 0)
products_avg = false;
break;
case OPT_REFERENCE:
tools_push_trans(arg, true);
break;
......@@ -501,9 +542,6 @@ parse_opt(int key, char* arg, struct argp_state*)
case OPT_STRENGTH:
opt_strength = true;
break;
case OPT_AMBIGUOUS:
opt_ambiguous = true;
break;
case OPT_VERBOSE:
verbose = true;
break;
......@@ -1094,6 +1132,8 @@ namespace
}
}
bool missing_complement = true;
if (!no_checks)
{
std::cerr << "Performing sanity checks and gathering statistics..."
......@@ -1122,7 +1162,7 @@ namespace
return smallest_ref;
};
// This are not our definitive choice for reference
// These are not our definitive choice for reference
// automata, because the sizes might change after we remove
// alternation and Fin acceptance. But we need to know now
// if we will have a pair of reference automata in order to
......@@ -1158,32 +1198,27 @@ namespace
std::cerr << ")\n";
}
};
auto complement = [&](const std::vector<spot::twa_graph_ptr>& x,
std::vector<spot::twa_graph_ptr>& comp,
unsigned i)
{
if (!no_complement && x[i] && is_universal(x[i]))
comp[i] = dualize(x[i]);
};
// Remove alternation
for (size_t i = 0; i < m; ++i)
{
unalt(pos, i, 'P');
unalt(neg, i, 'N');
// Do not complement reference automata if we have a
// reference pair.
if (smallest_pos_ref >= 0 && tools[i].reference)
continue;
complement(pos, comp_pos, i);
complement(neg, comp_neg, i);
}
if (determinize && !no_complement)
// Complement
if (!no_complement)
{
spot::output_aborter aborter_(max_det_states,
max_det_edges);
spot::output_aborter* aborter = nullptr;
if (max_det_states != -1U || max_det_edges != -1U)
aborter = &aborter_;
print_first = verbose;
auto tmp = [&](std::vector<spot::twa_graph_ptr>& from,
std::vector<spot::twa_graph_ptr>& to, int i,
char prefix)
auto comp = [&](std::vector<spot::twa_graph_ptr>& from,
std::vector<spot::twa_graph_ptr>& to, int i,
char prefix)
{
if (from[i] && !to[i])
{
......@@ -1193,29 +1228,41 @@ namespace
return;
if (print_first)
{
std::cerr << "info: complementing non-deterministic "
"automata via determinization...\n";
std::cerr << "info: complementing automata...\n";
print_first = false;
}
spot::postprocessor p;
p.set_type(spot::postprocessor::Generic);
p.set_pref(spot::postprocessor::Deterministic);
p.set_level(spot::postprocessor::Low);
to[i] = dualize(p.run(from[i]));
if (verbose)
std::cerr << "info: " << prefix << i;
if (aborter && aborter->too_large(from[i])
&& !spot::is_universal(from[i]))
missing_complement = true;
else
to[i] = spot::complement(from[i], aborter);
if (verbose)
{
std::cerr << "info: " << prefix << i << "\t(";
printsize(from[i]);
std::cerr << ") -> (";
printsize(to[i]);
std::cerr << ")\tComp(" << prefix << i << ")\n";
if (to[i])
{
std::cerr << "\t(";
printsize(from[i]);
std::cerr << ") -> (";
printsize(to[i]);
std::cerr << ")\tComp(" << prefix << i << ")\n";
}
else
{
std::cerr << "\tnot complemented";
if (aborter)
aborter->print_reason(std::cerr << " (") << ')';
std::cerr << '\n';
}
}
}
};
missing_complement = false;
for (unsigned i = 0; i < m; ++i)
{
tmp(pos, comp_pos, i, 'P');
tmp(neg, comp_neg, i, 'N');
comp(pos, comp_pos, i, 'P');
comp(neg, comp_neg, i, 'N');
}
}
......@@ -1362,15 +1409,41 @@ namespace
(*nstats)[i].product_scc.reserve(products);
}
}
for (unsigned p = 0; p < products; ++p)
// Decide if we need products with state-space.
unsigned actual_products = products;
if (actual_products)
{
if (missing_complement)
{
if (verbose)
std::cerr
<< ("info: complements were not computed for some automata\ninfo: "
"continuing with cross_checks and consistency_checks\n");
}
else if (want_stats)
{
if (verbose)
std::cerr
<< ("info: running cross_checks and consistency_checks "
"just for statistics\n");
}
else
{
if (verbose)
std::cerr
<< "info: cross_checks and consistency_checks unnecessary\n";
actual_products = 0;
}
}
for (unsigned p = 0; p < actual_products; ++p)
{
// build a random state-space.
spot::srand(seed);
if (verbose)
std::cerr << "info: building state-space #" << p << '/' << products
<< " of " << states << " states with seed " << seed
<< '\n';
std::cerr << "info: building state-space #" << (p+1) << '/'
<< products << " of " << states
<< " states with seed " << seed << '\n';
auto statespace = spot::random_graph(states, density, ap, dict);
......@@ -1408,7 +1481,7 @@ namespace
std::cerr << ("warning: not enough memory to build "
"product of P") << i << " with state-space";
if (products > 1)
std::cerr << " #" << p << '/' << products << '\n';
std::cerr << " #" << (p+1) << '/' << products << '\n';
std::cerr << '\n';
++oom_count;
}
......
......@@ -103,6 +103,14 @@ for the presence of an accepting self-loop.") },
{ DOC("degen-remscc", "If non-zero (the default), make sure the output \
of the degenalization has as many SCCs as the input, by removing superfluous \
ones.") },
{ DOC("det-max-states", "When defined to a positive integer N, \
determinizations will be aborted whenever the number of generated \
states would exceed N. In this case a non-deterministic automaton \
will be returned.")},
{ DOC("det-max-edges", "When defined to a positive integer N, \
determinizations will be aborted whenever the number of generated \
edges would exceed N. In this case a non-deterministic automaton \
will be returned.")},
{ DOC("det-scc", "Set to 0 to disable scc-based optimizations in \
the determinization algorithm.") },
{ DOC("det-simul", "Set to 0 to disable simulation-based optimizations in \
......
......@@ -15,34 +15,37 @@ The main differences with LBTT are:
- *support for PSL formulas in addition to LTL*
- support for (non-alternating) automata with *any type of acceptance condition*,
- support for *weak alternating automata*,
- additional intersection *checks with the complement* (option =-D=), allowing
to check equivalence of automata more precisely,
- additional intersection *checks with the complement*, allowing
equivalence of automata to be checked more precisely,
- *more statistics*, especially:
- the number of logical transitions represented by each physical edge,
- the number of deterministic states and automata
- the number of SCCs with their various strengths (nonaccepting, terminal, weak, strong)
- the number of terminal, weak, and strong automata
- an option to *reduce counterexample* by attempting to mutate and
- an option to *reduce counterexamples* by attempting to mutate and
shorten troublesome formulas (option =--grind=),
- statistics output in *CSV* for easier post-processing,
- *more precise time measurement* (LBTT was only precise to
1/100 of a second, reporting most times as "0.00s").
Although =ltlcross= performs the same sanity checks as LBTT, it does
Although =ltlcross= performs sanity checks similar to LBTT's, it does
not implement any of the interactive features of LBTT. In our almost
10-year usage of LBTT, we never had to use its interactive features to
understand bugs in our translation. Therefore =ltlcross= will report
problems, maybe with a counterexample, but you will be on your own to
investigate and fix them.
investigate and fix them (the =--grind= option may help you reduce the
problem to a shorter formula).
The core of =ltlcross= is a loop that does the following steps:
- Input a formula
- Translate the formula and its negation using each configured translator.
If there are 3 translators, the positive and negative translations
will be denoted =P0=, =N0=, =P1=, =N1=, =P2=, =N2=. Optionally
build complemented automata denoted =Comp(P0)=, =Comp(N0)=, etc.
will be denoted =P0=, =N0=, =P1=, =N1=, =P2=, =N2=.
- Optionally build complemented automata denoted =Comp(P0)=, =Comp(N0)=, etc.
(By default, this is done only for small automata, but see options =-D=,
=--determinize-max-states= and =--determinize-max-edges=.)
- Perform sanity checks between all these automata to detect any problem.
- Build the products of these automata with a random state-space (the same
- Optionally build the products of these automata with a random state-space (the same
state-space for all translations). (If the =--products=N= option is given,
=N= products are performed instead.)
- Gather statistics if requested.
......@@ -282,29 +285,34 @@ positive and negative formulas by the ith translator).
The cycle part is denoted by =cycle{...}=.
- Complemented intersection check. If $P_i$ and $N_i$ are
deterministic, =ltlcross= builds their complements, $Comp(P_i)$
and $Comp(N_i)$, and then ensures that $Comp(P_i)\otimes
Comp(N_i)$ is empty. If only one of them is deterministic, for
instance $P_i$, we check that $P_j\otimes Comp(P_i)$ for all $j
\ne i$; likewise if it's $N_i$ that is deterministic.
By default this check is only done for deterministic automata,
because complementation is relatively cheap is that case (at least
it is cheap for simple acceptance conditions). Using option
=--determinize=, =ltlcross= can be instructed to perform
complementation of non-deterministic automata as well, ensuring
precise equivalence checks between all automata. However be aware
that this determinization + complementation may generate large
automata.
deterministic or if they are small enough, =ltlcross= attempts to
build their complements, $Comp(P_i)$ and $Comp(N_i)$.
Complementation is not always attempted, especially when it
requires a determinization-based construction. The conditions
specifying when the complement automata are constructed can be
modified with the =--determinize-max-states=N= and
=--determinize-max-edges=M= options, which abort the
complementation if it would produce an automaton with more than
=N= states (500 by default) or more than =M= edges (5000 by
default). Alternatively, use =--determinize= (a.k.a. =-D=) to
force the complementation of all automata.
If both complement automata could be computed, =ltlcross= ensures that
$Comp(P_i)\otimes Comp(N_i)$ is empty.
If only one automaton has been complemented, for instance $P_i$,
=ltlcross= checks that $P_j\otimes Comp(P_i)$ is empty for all $j
\ne i$; likewise if it is $N_i$ that has been complemented.
When validating a translator with =ltlcross= without using the
=--determinize= option we highly recommend including a translator
with good deterministic output to augment test coverage. Using
'=ltl2tgba -lD %f >%O=' will produce deterministic automata for
all obligation properties and many recurrence properties. Using
'=ltl2dstar --ltl2nba=spin:pathto/ltl2tgba@-Ds %L %O=' will
systematically produce a deterministic Rabin automaton (that
=ltlcross= can complement easily).
'=ltl2tgba -D %f >%O=' will produce deterministic automata for all
obligation properties and many recurrence properties. Using
'=ltl2tgba -PD %f >%O=' will systematically produce a
deterministic Parity automaton (that =ltlcross= can complement
easily).
- Cross-comparison checks: for some state-space $S$,
all $P_i\otimes S$ are either all empty, or all non-empty.
......@@ -325,7 +333,7 @@ positive and negative formulas by the ith translator).
These product tests may sometimes catch errors that were not
captured by the first two tests if one non-deterministic automaton
recognizes fewer words than it should.  If the input automata
are deterministic or the =--determinize= option is used, this test
are all deterministic or the =--determinize= option is used, this test
is redundant and can be disabled. (In fact, the =--determinize=
option implies option =--products=0= to do so.)
......@@ -1111,16 +1119,16 @@ produce transition-based generalized Büchi automata, while =ltl3ba
-H1= produces co-Büchi alternating automata.
#+BEGIN_SRC sh :prologue "export SPOT_HOA_TOLERANT=1; exec 2>&1"
ltlcross -f 'FGa' ltl2tgba 'ltl3ba -H1' --determinize --verbose
ltlcross -f 'FGa' ltl2tgba 'ltl3ba -H1' --verbose
#+END_SRC
#+RESULTS:
#+begin_example
F(G(a))
Running [P0]: ltl2tgba -H 'F(G(a))'>'lcr-o0-cNwEjy'
Running [P1]: ltl3ba -H1 -f '<>([](a))'>'lcr-o1-WQo28q'
Running [N0]: ltl2tgba -H '!(F(G(a)))'>'lcr-o0-KT96Zj'
Running [N1]: ltl3ba -H1 -f '!(<>([](a)))'>'lcr-o1-WE1RXc'
Running [P0]: ltl2tgba -H 'F(G(a))'>'lcr-o0-ltzvEc'
Running [P1]: ltl3ba -H1 -f '<>([](a))'>'lcr-o1-dqnX28'
Running [N0]: ltl2tgba -H '!(F(G(a)))'>'lcr-o0-wmIXr5'
Running [N1]: ltl3ba -H1 -f '!(<>([](a)))'>'lcr-o1-yJesT1'
info: collected automata:
info: P0 (2 st.,3 ed.,1 sets)
info: N0 (1 st.,2 ed.,1 sets) deterministic complete
......@@ -1129,14 +1137,14 @@ info: N1 (3 st.,5 ed.,1 sets) univ-edges complete
Performing sanity checks and gathering statistics...
info: getting rid of universal edges...
info: N1 (3 st.,5 ed.,1 sets) -> (2 st.,4 ed.,1 sets)
info: complementing non-deterministic automata via determinization...
info: P0 (2 st.,3 ed.,1 sets) -> (2 st.,4 ed.,2 sets) Comp(P0)
info: P1 (2 st.,3 ed.,1 sets) -> (2 st.,4 ed.,2 sets) Comp(P1)
info: complementing automata...
info: P0 (2 st.,3 ed.,1 sets) -> (2 st.,4 ed.,1 sets) Comp(P0)
info: N0 (1 st.,2 ed.,1 sets) -> (1 st.,2 ed.,1 sets) Comp(N0)
info: P1 (2 st.,3 ed.,1 sets) -> (2 st.,4 ed.,1 sets) Comp(P1)
info: N1 (2 st.,4 ed.,1 sets) -> (2 st.,4 ed.,1 sets) Comp(N1)
info: getting rid of any Fin acceptance...
info: Comp(P0) (2 st.,4 ed.,2 sets) -> (2 st.,4 ed.,1 sets)
info: Comp(N0) (1 st.,2 ed.,1 sets) -> (2 st.,3 ed.,1 sets)
info: P1 (2 st.,3 ed.,1 sets) -> (2 st.,3 ed.,1 sets)
info: Comp(P1) (2 st.,4 ed.,2 sets) -> (2 st.,4 ed.,1 sets)
info: Comp(N1) (2 st.,4 ed.,1 sets) -> (3 st.,6 ed.,1 sets)
info: check_empty P0*N0
info: check_empty Comp(N0)*Comp(P0)
......@@ -1144,6 +1152,7 @@ info: check_empty P0*N1
info: check_empty P1*N0
info: check_empty P1*N1
info: check_empty Comp(N1)*Comp(P1)
info: cross_checks and consistency_checks unnecessary
No problem detected.
#+end_example
......@@ -1162,41 +1171,35 @@ alternating automata are supported) into non-alternating automata.
Here only =N1= needs this transformation.
Then =ltlcross= computes the complement of these four
automata. Since =P0= and =P1= are nondeterministic and the
=--determinize= option was given, a first pass determinize and
complete these two automata, creating =Comp(P0)= and =Comp(P1)=.
Apparently =N0= and =N1= are already deterministic, so their
complement could be obtained by just complementing their acceptance
condition (this is not written -- we only deduce so because they do
not appear in the list of automata that had to be determinized).
automata.
Now that =ltlcross= has four complemented automata, it has to make
sure they use only =Inf= acceptance because that is what our emptiness
check procedure can handle. So there is a new pass over all automata,
rewriting them to get rid of any =Fin= acceptance.
After this preparatory work, it is time for actually comparing these
After this preparatory work, it is time to actually compare these
automata. Together, the tests =P0*N0= and =Comp(N0)*Comp(P0)= ensure
that the automaton =N0= is really the complement of =P0=. Similarly
=P1*N1= and =Comp(N1)*Comp(P1)= ensure that =N1= is the complement of
=P1=. Finally =P0*N1= and =P1*N0= ensure that =P1= is equivalent to
=P0= and =N1= is equivalent to =N0=.
Note that if we had not used the =--determinize= option, the procedure
would look slightly more complex:
Note that if we reduce =ltlcross='s ability to determinize
automata for complementation, the procedure
can look slightly more complex:
#+BEGIN_SRC sh :prologue "export SPOT_HOA_TOLERANT=1; exec 2>&1"
ltlcross -f 'FGa' ltl2tgba 'ltl3ba -H1' --verbose
ltlcross -f 'FGa' ltl2tgba 'ltl3ba -H1' --determinize-max-states=1 --verbose
#+END_SRC
#+RESULTS:
#+begin_example
F(G(a))
Running [P0]: ltl2tgba -H 'F(G(a))'>'lcr-o0-Ot1KDa'
Running [P1]: ltl3ba -H1 -f '<>([](a))'>'lcr-o1-Kvzdfm'
Running [N0]: ltl2tgba -H '!(F(G(a)))'>'lcr-o0-X2dURx'
Running [N1]: ltl3ba -H1 -f '!(<>([](a)))'>'lcr-o1-wuLpzJ'
Running [P0]: ltl2tgba -H 'F(G(a))'>'lcr-o0-HHyVWR'
Running [P1]: ltl3ba -H1 -f '<>([](a))'>'lcr-o1-scKnIH'
Running [N0]: ltl2tgba -H '!(F(G(a)))'>'lcr-o0-6Wloux'
Running [N1]: ltl3ba -H1 -f '!(<>([](a)))'>'lcr-o1-MQ7Rin'
info: collected automata:
info: P0 (2 st.,3 ed.,1 sets)
info: N0 (1 st.,2 ed.,1 sets) deterministic complete
......@@ -1205,6 +1208,11 @@ info: N1 (3 st.,5 ed.,1 sets) univ-edges complete
Performing sanity checks and gathering statistics...
info: getting rid of universal edges...
info: N1 (3 st.,5 ed.,1 sets) -> (2 st.,4 ed.,1 sets)
info: complementing automata...
info: P0 not complemented (more than 1 states required)
info: N0 (1 st.,2 ed.,1 sets) -> (1 st.,2 ed.,1 sets) Comp(N0)
info: P1 not complemented (more than 1 states required)
info: N1 (2 st.,4 ed.,1 sets) -> (2 st.,4 ed.,1 sets) Comp(N1)
info: getting rid of any Fin acceptance...
info: Comp(N0) (1 st.,2 ed.,1 sets) -> (2 st.,3 ed.,1 sets)
info: P1 (2 st.,3 ed.,1 sets) -> (2 st.,3 ed.,1 sets)
......@@ -1215,7 +1223,9 @@ info: check_empty Comp(N0)*N1
info: check_empty P1*N0
info: check_empty Comp(N1)*N0
info: check_empty P1*N1
info: building state-space #0/1 of 200 states with seed 0
info: complements were not computed for some automata
info: continuing with cross_checks and consistency_checks
info: building state-space #1/1 of 200 states with seed 0
info: state-space has 4136 edges
info: building product between state-space and P0 (2 st., 3 ed.)
info: product has 400 st., 8298 ed.
......@@ -1263,10 +1273,10 @@ ltlcross -f 'FGa' ltl2tgba --reference 'ltl3ba -H1' --verbose
#+RESULTS:
#+begin_example
F(G(a))
Running [P0]: ltl3ba -H1 -f '<>([](a))'>'lcr-o0-hsnlkV'
Running [P1]: ltl2tgba -H 'F(G(a))'>'lcr-o1-R0jOmP'
Running [N0]: ltl3ba -H1 -f '!(<>([](a)))'>'lcr-o0-7GwxvJ'
Running [N1]: ltl2tgba -H '!(F(G(a)))'>'lcr-o1-5sgPFD'
Running [P0]: ltl3ba -H1 -f '<>([](a))'>'lcr-o0-bh9PHg'
Running [P1]: ltl2tgba -H 'F(G(a))'>'lcr-o1-LvvYEm'
Running [N0]: ltl3ba -H1 -f '!(<>([](a)))'>'lcr-o0-bcUDEs'
Running [N1]: ltl2tgba -H '!(F(G(a)))'>'lcr-o1-Pw1REy'
info: collected automata:
info: P0 (2 st.,3 ed.,1 sets)
info: N0 (3 st.,5 ed.,1 sets) univ-edges complete
......@@ -1275,31 +1285,18 @@ info: N1 (1 st.,2 ed.,1 sets) deterministic complete
Performing sanity checks and gathering statistics...
info: getting rid of universal edges...
info: N0 (3 st.,5 ed.,1 sets) -> (2 st.,4 ed.,1 sets)
info: complementing automata...
info: P1 (2 st.,3 ed.,1 sets) -> (2 st.,4 ed.,1 sets) Comp(P1)
info: N1 (1 st.,2 ed.,1 sets) -> (1 st.,2 ed.,1 sets) Comp(N1)
info: getting rid of any Fin acceptance...
info: P0 (2 st.,3 ed.,1 sets) -> (2 st.,3 ed.,1 sets)
info: Comp(N1) (1 st.,2 ed.,1 sets) -> (2 st.,3 ed.,1 sets)
info: P0 and N0 assumed correct and used as references