
Home Page: https://github.heisenbug.eu/spat/

License: Do What The F*ck You Want To Public License

spark2014 ada2012 formal-verification ada spark-proof-analysis proof spark-ada ada-language gnatprove

spat's Introduction

SPARK Proof Analysis Tool

There's a chat now in case you have questions or suggestions: https://gitter.im/HeisenbugLtd/spat-discussion (this might be a bit more lightweight than opening an issue, and we can more easily discuss features there).



Introduction

The SPARK tools (i.e. GNATprove) leave behind a trove of information after a proof run. spat is intended to take this information and extract something useful from it (for example, where the provers spent their time, which prover solved a problem first, etc.). In a way, this tool serves a similar purpose as the gnatprove2xls tool, but it parses the "raw" data instead of post-processed output, hence it has more information available, and it's written in Ada. (I considered using Python, but rejected it because that would have been too easy.)

Motivation

The idea is that making use of that information will help identify and fix bottlenecks during proof. As the format of these files is virtually undocumented, a little bit of reverse engineering may be required, but on the other hand, maybe the result is actual documentation.

Compiling the tool

Requirements

You need a recent version of GNAT with the GNATCOLL library. GNAT CE 2019 or GNAT CE 2020 should suffice.

You also need the si_units library version 0.2.0 (or later) to be installed.

Step by step instructions (manual install)

Note that the instructions are currently for Linux only, but I assume that SPARK users are familiar with compiling Ada code, so installing on Windows should be similarly straightforward.

Compile and install SI_Units

  • Clone the SI_Units repository: git clone https://github.com/HeisenbugLtd/si_units

You may want to check out tag v0.2.0, but any more recent version should do.

  • Compile the SI_Units library: gprbuild -p -P si_units/si_units_lib.gpr
  • Install the SI Units library: gprinstall -f -p -P si_units/si_units_lib.gpr

Depending on how your GNAT installation is set up, the latter command may require elevated privileges to write into the installation directory, so if needed, prepend sudo </path/to/gnat/installation/>/bin/ to the gprinstall instruction above.
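For example (the GNAT installation path below is only an illustration; use whatever path applies to your setup):

sudo /opt/gnat/bin/gprinstall -f -p -P si_units/si_units_lib.gpr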

Compile and install SPAT

  • Clone the SPAT repository: git clone https://github.com/HeisenbugLtd/spat
  • Compile it: gprbuild -p -P spat/spat.gpr
  • Install it: gprinstall -f -p -P spat/spat.gpr

Depending on how your GNAT installation is set up, the latter command may require elevated privileges to write into the installation directory, so if needed, prepend sudo </path/to/gnat/installation/>/bin/ to the gprinstall instruction above.

After that, the run_spat executable should be installed in your GNAT installation and is ready to use.
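A quick sanity check that the executable is found on your path is to ask it for its version (using the documented --version switch):

run_spat --version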

Step by step instructions (Alire)

spat comes with Alire integration. All you need to do is:

alr get spat
cd spat_<directory>
alr build
gprinstall --relocate-build-tree=alire/build -f -p -P spat/spat.gpr

Depending on how your GNAT installation is set up, the latter command may require elevated privileges to write into the installation directory, so if needed, prepend sudo </path/to/gnat/installation/>/bin/ to the gprinstall instruction above.

The spat.py plug-in

I added a tiny plug-in for GNAT Studio that parses the output of spat and shows the proofs with their respective maximum times in the location window. To make use of the script, you need to link or copy it into your ~/.gnatstudio/plug-ins directory.
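For example (assuming you are in the directory where you cloned the repository and that spat.py sits at the top level of the checkout; adjust the path if it lives somewhere else):

mkdir -p ~/.gnatstudio/plug-ins
ln -s "$(pwd)/spat/spat.py" ~/.gnatstudio/plug-ins/spat.py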

The plug-in adds the new menu item SPAT into the SPARK menu in GNAT Studio with the two entries Show All and Show Unproved.

Invoking the tool

Like many other GNAT related tools, spat is designed to run against a GNAT project file (.gpr) to collect information about the source files in the project.

Command line

Quick help:

run_spat -h

will give you a quick overview of the available command line options:

usage: run_spat [--help|-h] [--project|-P PROJECT] [--summary|-s] 
               [--report-mode|-r REPORT-MODE] [--suggest|-g] [--entity|-e 
               ENTITY[ENTITY...]] [--sort-by|-c SORT-BY] [--cut-off|-p CUT-OFF] 
               [--details|-d DETAILS] [--version|-V] [--raw|-R] [--verbose|-v] 

Parses .spark files and outputs information about them.

positional arguments:
   
optional arguments:
   --help, -h            Show this help message
   --project, -P         PROJECT = GNAT project file (.gpr) (mandatory!)
   --summary, -s         List summary (per file)
   --report-mode, -r     Output reporting mode (REPORT-MODE: a = all, f = 
                         failed, u = unproved, j = unjustified [implies 
                         unproved])
   --suggest, -g         Show suggestion for an optimal prover configuration
   --entity, -e          Filter output by ENTITY (regular expression), this 
                         option can  be specified multiple times
   --sort-by, -c         Sorting criterion (SORT-BY: a = alphabetical, s = by 
                         minimum time for successful proof, t = by maximum proof
                          time, p = by minimum steps for successful proof, q = 
                         by maximum steps)
   --cut-off, -p         Cut off point, do not show entities with proof times 
                         less than that (CUT-OFF: <numeral>[s|ms])
   --details, -d         Show details for entities (report mode) (DETAILS: 
                         [1|2|f] for level 1, 2 and full details. Please note 
                         that 2 and f are currently equivalent.)
   --version, -V         Show version information and exit
   --raw, -R             Output timings in raw format (for script use)
   --verbose, -v         Verbose (tracing) output

The --project argument is the only argument that is not optional, but without a --report-mode, --summary, or --suggest argument, run_spat will not output anything. It will still try to parse the files it finds, though.
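A minimal useful invocation therefore combines the mandatory project file with at least one output option, for example (the project name is just a placeholder):

run_spat -P my_project.gpr --summary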

The --summary option

This option is intended to show a quick summary of the files analyzed.

run_spat -s -P saatana.gpr

Typical output would look like this:

saatana-crypto.spark                     => (Flow  => 9.0 ms,
                                             Proof => 80.0 ms (1 step)/80.0 ms (1 step)/6.8 s)
test_phelix.spark                        => (Flow  => 180.0 µs,
                                             Proof => 0.0 s (0 steps)/0.0 s (0 steps)/0.0 s)
saatana-crypto-phelix.spark              => (Flow  => 206.5 ms,
                                             Proof => 174.3 s (14009 steps)/206.4 s (131078 steps)/568.7 s)
saatana.spark                            => (Flow  => 464.0 µs,
                                             Proof => 0.0 s (0 steps)/0.0 s (0 steps)/0.0 s)
saatana-crypto-lemmas.spark              => (Flow  => 2.1 ms,
                                             Proof => 210.0 ms (1 step)/210.0 ms (1 step)/2.2 s)
test_phelix_api.spark                    => (Flow  => 14.4 ms,
                                             Proof => 240.0 ms (1 step)/240.0 ms (1 step)/23.1 s)
saatana-crypto-stream_tools.spark        => (Flow  => 71.0 µs,
                                             Proof => 0.0 s (0 steps)/0.0 s (0 steps)/0.0 s)
saatana-crypto-phelix-test_vectors.spark => (Flow  => 24.0 µs,
                                             Proof => 0.0 s (0 steps)/0.0 s (0 steps)/0.0 s)

You can use the --sort-by option with --summary, either for an alphabetical list (--sort-by=a), a list sorted by time (--sort-by=t, descending order, so files with the most time needed by the provers come first), or a list sorted by steps (--sort-by=p). The options to sort by maximum time for successful proof (--sort-by=s) or by maximum steps for successful proof (--sort-by=q) are also available. By default, no particular order is imposed on the output.
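For example, a summary sorted by time, with the most expensive files first:

run_spat -s -ct -P saatana.gpr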

For the meaning of the three timings after Proof =>, please see below.

Note that the --details option has no effect on the output here; this option is designed to work with the --report-mode option only.

The --report-mode option

This is the main mode the tool is designed to be run in. It outputs the list of entities (i.e. Ada language identifiers) it finds in the .spark files that match the given filter option (see below).

By default, the output has no particular order, but as mentioned in the previous chapter, with the --sort-by option you can force one.

Output can be filtered progressively by applying more restrictions. These will be explained below.

If you just want to see what the output of the tool looks like with all kinds of different options, you can take a peek at the repository's test directory, where I am storing templates for regression testing.

The --report-mode=all option

This reports all entities the tool found in the .spark files.

Run the command:

run_spat -ra -P saatana.gpr

Typical output looks like this:

Saatana.Crypto.Phelix.Encrypt_Bytes        => 174.3 s (10170 steps)/174.3 s (10170 steps)/189.0 s
Saatana.Crypto.Phelix.Setup_Key            => 640.0 ms (221 steps)/206.4 s (131078 steps)/219.2 s
Saatana.Crypto.Phelix.Ctx_AAD_Len          => 0.0 s (0 steps)/0.0 s (0 steps)/0.0 s
Saatana.Crypto.Phelix.Encrypt_Packet       => 100.0 ms (1 step)/100.0 ms (1 step)/2.0 s
Saatana.Crypto.Phelix.MAC_Size_32Predicate => 30.0 ms (1 step)/30.0 ms (1 step)/30.0 ms
Saatana.Crypto.Phelix.Ctx_Msg_Len          => 0.0 s (0 steps)/0.0 s (0 steps)/0.0 s
Saatana.Crypto.Phelix.Ctx_I                => 0.0 s (0 steps)/0.0 s (0 steps)/0.0 s
Saatana.Crypto                             => 0.0 s (0 steps)/0.0 s (0 steps)/0.0 s
Saatana.Crypto.Phelix.Setup_Key_Called     => 0.0 s (0 steps)/0.0 s (0 steps)/0.0 s
Saatana.Crypto.Phelix.Ctx_Mac_Size         => 0.0 s (0 steps)/0.0 s (0 steps)/0.0 s
Saatana.Crypto.Phelix.Exclusive_Or         => 110.0 ms (1 step)/110.0 ms (1 step)/540.0 ms
...

The first value you see after the Ada entity is the longest time (and steps) needed for a single successful proof, the second value is the maximum time (and steps) needed for a proof (successful or not), and the third value is the total sum of all proof times for this entity.

If the first and the second value vastly differ, that usually means that one of the provers involved could not prove a certain item, but another one could and was better at it. See below how to analyze this in a more detailed way.

If the first value is shown as --, that means there was at least one unsuccessful proof for this entity. An example:

...
SPARKNaCl.MAC.Onetimeauth                                 => 340.0 ms (1 step)/340.0 ms (1 step)/3.7 s
SPARKNaCl.Seminormal_GFPredicate                          => 0.0 s (0 steps)/0.0 s (0 steps)/0.0 s
SPARKNaCl.ASR_4                                           => --/1.2 s (14001 steps)/1.4 s
SPARKNaCl.Car.Nearlynormal_To_Normal                      => --/1.4 s (803 steps)/17.5 s
SPARKNaCl.ASR_8                                           => --/3.3 s (14001 steps)/3.5 s
SPARKNaCl.Sign.Unpackneg.Pow_2523                         => 0.0 s (0 steps)/0.0 s (0 steps)/0.0 s
...

The --report-mode=failed option

When --report-mode is invoked with the failed option, it will only show proof attempts where at least one prover failed to prove the verification condition. This does not necessarily mean that the proof itself failed. In particular, if a different prover could prove the condition later, this is a good indicator to check whether the call order of the provers should be changed.

Example:

run_spat -ct -rf -P saatana.gpr

Typical output:

Saatana.Crypto.Phelix.Setup_Key => 640.0 ms (221 steps)/206.4 s (131078 steps)/219.2 s

Here, we can see that there is one entity where a prover failed to prove the verification condition. As mentioned above, you can see that the time for the longest successful proof (640 ms) greatly differs from the maximum time for a single proof (206 s). This is a clear indicator that one of the provers is not well suited to prove this particular verification condition.

The --report-mode=unproved option

When --report-mode is invoked with the unproved option, the tool will only show proof attempts where all provers failed to prove the verification condition; in other words, the tool lists all unproved VCs.

Example:

run_spat -ct -ru -P sparknacl.gpr

Typical output:

SPARKNaCl.Sign.Sign                  => --/57.6 s (14007 steps)/489.2 s
SPARKNaCl.Car.Nearlynormal_To_Normal => --/1.4 s (803 steps)/17.5 s
SPARKNaCl.ASR_16                     => --/5.7 s (14001 steps)/5.9 s
SPARKNaCl.ASR_8                      => --/3.3 s (14001 steps)/3.5 s
SPARKNaCl.ASR_4                      => --/1.2 s (14001 steps)/1.4 s

Here, we can see that there are five entities with unproven verification conditions (and no reported maximum time for successful proof, of course, so it is shown as --).

The --report-mode=unjustified option

When --report-mode is invoked with the unjustified option, it will only show unproven VCs (see above) which are not manually justified (i.e. those which don't have a justification message).

Example:

run_spat -ct -rj -P sparknacl.gpr

Typical output:

SPARKNaCl.Sign.Sign                  => --/57.6 s (14007 steps)/489.2 s
SPARKNaCl.Car.Nearlynormal_To_Normal => --/1.4 s (803 steps)/17.5 s

Here, we can see that out of the five entities listed by the previous tool invocation with --report-mode=unproved only two entities are left which have unproven VCs with no justification message.

The --details option

When invoked together with one of the --report-mode options, it will show all the individual proof attempts (level 1) and paths (level 2) for an entity.

Example (with --report-mode=failed and detail level 1):

run_spat -ct -rf -d 1 -P saatana.gpr

Output:

Saatana.Crypto.Phelix.Setup_Key                   =>                                                       640.0 ms (221 steps)/206.4 s (131078 steps)/219.2 s
`-VC_RANGE_CHECK saatana-crypto-phelix.adb:466:44 => 640.0 ms (221 steps)/206.4 s (131078 steps)/207.1 s

Example (with --report-mode=failed):

run_spat -ct -rf -d -P saatana.gpr

Output:

Saatana.Crypto.Phelix.Setup_Key                   =>                                                       640.0 ms (221 steps)/206.4 s (131078 steps)/219.2 s
`-VC_RANGE_CHECK saatana-crypto-phelix.adb:466:44 => 640.0 ms (221 steps)/206.4 s (131078 steps)/207.1 s
 `-Z3: 206.4 s (131078 steps), Unknown (unknown)
  -CVC4: 640.0 ms (221 steps), Valid

Same example as the one a little above, but here you can see the individual proof results. It seems that Z3 is not well suited to prove this particular verification condition, but CVC4 can prove it quite fast. This is a good indicator that in that particular case, CVC4 should be called first to optimize proof times.

Without the other (more restrictive) --report-mode options, all proof attempts will be shown in a similar manner.

Please keep in mind that a single proof may have multiple paths leading to it, resulting in more than just one proof attempt for a single verification condition.

Another example (with --unproved):

run_spat -ct -ru -d -P sparknacl.gpr

Typical output:

SPARKNaCl.Sign.Sign                                  =>                                  --/57.6 s (14007 steps)/489.2 s
`-VC_OVERFLOW_CHECK sparknacl-sign.adb:890:36        => --/57.6 s (14007 steps)/238.7 s
 `-CVC4: 51.6 s (14001 steps), Unknown (unknown)
  -Z3: 7.9 s (14006 steps), Unknown (unknown)
 `-CVC4: 50.5 s (14001 steps), Unknown (unknown)
  -Z3: 8.8 s (14006 steps), Unknown (unknown)
 `-CVC4: 50.7 s (14001 steps), Unknown (unknown)
  -Z3: 7.2 s (14007 steps), Unknown (unknown)
SPARKNaCl.Car.Nearlynormal_To_Normal                 =>                                  --/1.4 s (803 steps)/17.5 s
`-VC_LOOP_INVARIANT_PRESERV sparknacl-car.adb:324:13 => --/1.4 s (367 steps)/1.9 s
 `-CVC4: 1.4 s (1 step), Unknown (unknown)
  -Z3: 590.0 ms (367 steps), Unknown (unknown)
`-VC_ASSERT sparknacl-car.adb:343:31                 => --/790.0 ms (635 steps)/1.2 s
 `-Z3: 790.0 ms (635 steps), Unknown (unknown)
  -CVC4: 410.0 ms (1 step), Unknown (unknown)
SPARKNaCl.ASR_16                                     =>                                  --/5.7 s (14001 steps)/5.9 s
`-VC_POSTCONDITION sparknacl.ads:355:35              => --/5.7 s (14001 steps)/5.8 s
 `-Z3: 5.7 s (14001 steps), Unknown (unknown)
  -CVC4: 80.0 ms (1 step), Unknown (unknown)
Justified with: "From definition of arithmetic shift right".
SPARKNaCl.ASR_8                                      =>                                  --/3.3 s (14001 steps)/3.5 s
`-VC_POSTCONDITION sparknacl.ads:367:35              => --/3.3 s (14001 steps)/3.4 s
 `-Z3: 3.3 s (14001 steps), Unknown (unknown)
  -CVC4: 90.0 ms (1 step), Unknown (unknown)
Justified with: "From definition of arithmetic shift right".
SPARKNaCl.ASR_4                                      =>                                  --/1.2 s (14001 steps)/1.4 s
`-VC_POSTCONDITION sparknacl.ads:379:35              => --/1.2 s (14001 steps)/1.3 s
 `-Z3: 1.2 s (14001 steps), Unknown (unknown)
  -CVC4: 80.0 ms (1 step), Unknown (unknown)
Justified with: "From definition of arithmetic shift right".

As above, but here you can see the individual proof results including any justification messages (if present).

The --entity option

Sometimes the (detailed) output is just too much, and if you only want to see the results for certain entities, the --entity option is for you. You can specify multiple --entity options. When invoked together with one of the --report-mode options, it will show only those entities that match one of the given filters.

The expression after --entity is expected to be a valid regular expression. That means that in most cases, when you do not want to specify the fully qualified name, you should start the expression with a "match anything" .*.

Example (with --unproved), show only those entities matching "ASR" (Arithmetic Shift Right):

run_spat -ct -ru -d -e ".*ASR.*" -P sparknacl.gpr

This shows all unproved entities that match the expression ".*ASR.*":

SPARKNaCl.ASR_16                        =>                               --/5.7 s (14001 steps)/5.9 s
`-VC_POSTCONDITION sparknacl.ads:355:35 => --/5.7 s (14001 steps)/5.8 s
 `-Z3: 5.7 s (14001 steps), Unknown (unknown)
  -CVC4: 80.0 ms (1 step), Unknown (unknown)
Justified with: "From definition of arithmetic shift right".
SPARKNaCl.ASR_8                         =>                               --/3.3 s (14001 steps)/3.5 s
`-VC_POSTCONDITION sparknacl.ads:367:35 => --/3.3 s (14001 steps)/3.4 s
 `-Z3: 3.3 s (14001 steps), Unknown (unknown)
  -CVC4: 90.0 ms (1 step), Unknown (unknown)
Justified with: "From definition of arithmetic shift right".
SPARKNaCl.ASR_4                         =>                               --/1.2 s (14001 steps)/1.4 s
`-VC_POSTCONDITION sparknacl.ads:379:35 => --/1.2 s (14001 steps)/1.3 s
 `-Z3: 1.2 s (14001 steps), Unknown (unknown)
  -CVC4: 80.0 ms (1 step), Unknown (unknown)
Justified with: "From definition of arithmetic shift right".

As above, but SPARKNaCl.Sign.Sign and SPARKNaCl.Car.Nearlynormal_To_Normal have been omitted as they don't match the given filter expression.
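Since --entity can be specified multiple times, several filters can be combined; an entity is then shown if it matches any of the given expressions. For example (reusing the project from above):

run_spat -ct -ru -d -e ".*ASR.*" -e ".*Sign.*" -P sparknacl.gpr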

The --cut-off option

This option allows you to prune possibly irrelevant results from the output. You can give it a time value, either in seconds (the default) or in milliseconds (indicated by appending ms to the number). Rational numbers are supported.
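For example, the following all specify the same cut-off point of half a second (the project name is just a placeholder):

run_spat -s -P my_project.gpr -p 0.5
run_spat -s -P my_project.gpr -p 0.5s
run_spat -s -P my_project.gpr -p 500ms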

If you don't specify a cut off point, the default is 0.0, in other words, no cut off.

Please note that, due to how spat works, the semantics of this cut-off point are different for the --report-mode and the --summary output.

  • For --report-mode the value applies to all entities and displayed verification conditions (for --details with at least level 1). Here the maximum proof time (i.e. the longest time for a single proof) is taken into account, not the total proof time. The rationale is that if you want to optimize proof times, you need to know which proofs take longest, not how many proofs exist for a single entity, so I am assuming you are not interested in proofs that take less than the cut-off point, even if thousands of them would add up to a total time well beyond it.

    Example:

    run_spat -ra -ct -p 400ms -P saatana.gpr

    Saatana.Crypto.Phelix.Setup_Key     => 640.0 ms (221 steps)/206.4 s (131078 steps)/219.2 s
    Saatana.Crypto.Phelix.Encrypt_Bytes => 174.3 s (10170 steps)/174.3 s (10170 steps)/189.0 s
    Saatana.Crypto.Phelix.Decrypt_Bytes => 4.0 s (3331 steps)/4.0 s (3331 steps)/18.6 s
    Saatana.Crypto.Phelix.Finalize      => 2.2 s (3029 steps)/2.2 s (3029 steps)/7.7 s
    Saatana.Crypto.Phelix.H             => 6.0 s (14009 steps)/6.0 s (14009 steps)/6.0 s

    vs.

    run_spat -ra -ct -P saatana.gpr -p 5000ms (or -p 5s, or even -p 5)

    Saatana.Crypto.Phelix.Setup_Key     => 640.0 ms (221 steps)/206.4 s (131078 steps)/219.2 s
    Saatana.Crypto.Phelix.Encrypt_Bytes => 174.3 s (10170 steps)/174.3 s (10170 steps)/189.0 s
    Saatana.Crypto.Phelix.H             => 6.0 s (14009 steps)/6.0 s (14009 steps)/6.0 s

    Notice that omitted entries disappear from the middle of the list, because the sorting criterion uses the total time spent, while the cut off point uses the maximum time.

    Note that this works the same way if the --details option is given. All verification conditions with a proof time below the cut off point will be omitted from the report.

  • For the --summary option the value applies to the total proof time reported for that file, not to individual proof times as in the --report-mode option. That is because the tool assumes that if you want to see the summary on a per-file (i.e. Ada package) basis, you are more interested in the total time spent on a file than in a single proof.

The --suggest option

First of all, this option is highly experimental. At some time in the near future I might write something more extensive about the implementation in the project's Wiki, but for now, the following must suffice.

When spat is called with this option, it will try to find a better configuration, i.e. file-specific options for gnatprove which you can add to your project file.

Assumptions

  • spat assumes that the files are fully proven to the extent of the provers' capabilities. Some undischarged VCs might remain, but in general the project should be in a good state and the provers should be able to prove what can be proved.

How Does it Work?

The only information available to spat when a prover succeeds is the time and the steps it took to succeed. Only when a prover fails and the next one in the chain is called can spat infer more information.

For each source file referenced in the parsed .spark files, spat records which provers were involved, as well as the maximum number of steps and the longest time for each discharged VC (i.e. spat assumes that undischarged VCs are there by design).

That means the output regarding steps and time (see the example below) takes the current state of the project into account.

As for the order of the provers, that is a bit trickier due to the lack of information. Because spat cannot know what it does not know, I decided to go with a rather simplistic heuristic which can be boiled down to two points:

  1. The more time a prover spends in unsuccessful proofs, the less likely it is to succeed.
  2. The more time a prover spends in successful proofs, the more likely it is to succeed.

This results in a very simple sorting order: the less "fail" time a prover has, the better it seems to be, and if that cannot be decided because the accumulated "fail" times of the provers for the source file are equal, the prover with the greater "success" time wins.

Of course, in most practical scenarios, the prover that will always be called first will likely also have the most success time.
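To illustrate with made-up numbers: if, for a given source file, Z3 accumulated 5 s of "fail" time and 2 s of "success" time, while CVC4 accumulated 1 s of "fail" time and 30 s of "success" time, then CVC4 would be listed first because it has less "fail" time. Only if both had the same "fail" time would the larger "success" time decide the order.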

Example:

run_spat -g -P sparknacl.gpr

Warning: You requested a suggested prover configuration.
Warning: This feature is highly experimental.
Warning: Please consult the documentation.

package Prove is
   for Proof_Switches ("sparknacl-car.adb") use ("--prover=Z3,CVC4", "--steps=803", "--timeout=2");
   for Proof_Switches ("sparknacl-core.adb") use ("--prover=CVC4", "--steps=1", "--timeout=1");
   for Proof_Switches ("sparknacl-cryptobox.adb") use ("--prover=CVC4", "--steps=1", "--timeout=1");
   for Proof_Switches ("sparknacl-hashing.adb") use ("--prover=CVC4", "--steps=1", "--timeout=1");
   for Proof_Switches ("sparknacl-mac.adb") use ("--prover=CVC4", "--steps=1", "--timeout=1");
   for Proof_Switches ("sparknacl-scalar.adb") use ("--prover=CVC4", "--steps=1", "--timeout=1");
   for Proof_Switches ("sparknacl-secretbox.adb") use ("--prover=CVC4", "--steps=1", "--timeout=1");
   for Proof_Switches ("sparknacl-sign.adb") use ("--prover=Z3,CVC4", "--steps=14007", "--timeout=10");
   for Proof_Switches ("sparknacl-stream.adb") use ("--prover=CVC4", "--steps=1", "--timeout=1");
   for Proof_Switches ("sparknacl-utils.adb") use ("--prover=Z3,CVC4", "--steps=1536", "--timeout=1");
   for Proof_Switches ("sparknacl.adb") use ("--prover=Z3,CVC4", "--steps=1", "--timeout=1");
end Prove;

Here you may want to add the --verbose switch to see some debug output.

As explained above, the steps and timeout output should be fairly accurate. The call order of the provers is at best an educated guess.

Also, please note that this output never lists provers that have never been called, simply because we know nothing about them.

It is also worth mentioning that the steps reported here differ from the steps reported in the --report-mode option. This is because, within the .spark files, steps are currently reported differently from the way gnatprove looks at them: each prover has its own notion of steps, but giving a --steps option to gnatprove should behave the same regardless of the prover involved, so gnatprove implements a transformation to scale the number of steps to roughly the equivalent of Alt-Ergo steps.

I decided to implement the same scaling values that gnatprove uses (which are also the steps reported in the stats object of the .spark files).
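To actually make use of the suggestion, the generated package can be pasted into the project file given with --project. A minimal sketch of what that might look like (the project name and source directory are assumptions; only two of the suggested lines from the output above are repeated):

project SPARKNaCl is
   for Source_Dirs use ("src");

   package Prove is
      --  Suggested by run_spat --suggest (see output above).
      for Proof_Switches ("sparknacl-sign.adb") use
        ("--prover=Z3,CVC4", "--steps=14007", "--timeout=10");
      for Proof_Switches ("sparknacl-utils.adb") use
        ("--prover=Z3,CVC4", "--steps=1536", "--timeout=1");
   end Prove;
end SPARKNaCl;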

The --verbose option

This option is mainly used for debugging; it enables extra output about what run_spat is doing (i.e. files found in the given project file, parse results, and some timings).

The --version option

Show version and compiler information for the executable. If this option is encountered, no other options take effect and the program exits immediately.

The --raw option

This is intended for scripts parsing the output of spat. If this switch is specified, times are shown as raw numbers instead of properly scaled, human-readable output. This should make it easier for scripts to parse the numbers.
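For example, to capture the raw-format summary in a file for later processing by a script (the project name is just a placeholder):

run_spat -s -R -P my_project.gpr > summary.txt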

Some Notes on Sorting

The sort option by successful proof time (i.e. --sort-by=s) may work in a slightly counter-intuitive way, so I will explain it a bit. The explanation applies similarly to sorting by successful proof steps (i.e. --sort-by=q).

This sorting criterion does not affect the --cut-off option, which still only takes the maximum proof time into account.

The idea behind this sorting option is that you may want to know where provers spend useless time (e.g. where a prover is called which is known to not be able to prove a certain verification condition). After all, all provers have their strengths and weaknesses.

Sorting by maximum successful proof time usually only makes sense when also invoked with the --report-mode=failed (or an even more restrictive) filter option. That is because if there are no failed attempts, it doesn't really matter what the best proof time would be; after all, the provers take all the time they need. If you want to know the total effort spent, you can just as well use --sort-by=t.

Examples (mostly to show how the difference between --sort-by=t and --sort-by=s works):

  • By time:

    run_spat -ct -rf -P sparknacl.gpr

    SPARKNaCl.Sign.Sign                   => --/57.6 s (14007 steps)/489.2 s
    SPARKNaCl.Omultiply                   => 700.0 ms (1 step)/19.1 s (14001 steps)/28.6 s
    SPARKNaCl.Car.Nearlynormal_To_Normal  => --/1.4 s (803 steps)/17.5 s
    SPARKNaCl.ASR_16                      => --/5.7 s (14001 steps)/5.9 s
    SPARKNaCl.ASR_8                       => --/3.3 s (14001 steps)/3.5 s
    SPARKNaCl.Utils.Pack_25519.Subtract_P => 150.0 ms (1 step)/180.0 ms (1 step)/1.8 s
    SPARKNaCl.ASR_4                       => --/1.2 s (14001 steps)/1.4 s
    
  • By successful proof:

    run_spat -cs -rf -P sparknacl.gpr

    SPARKNaCl.Omultiply                   => 700.0 ms (1 step)/19.1 s (14001 steps)/28.6 s
    SPARKNaCl.Utils.Pack_25519.Subtract_P => 150.0 ms (1 step)/180.0 ms (1 step)/1.8 s
    SPARKNaCl.Sign.Sign                   => --/57.6 s (14007 steps)/489.2 s
    SPARKNaCl.Car.Nearlynormal_To_Normal  => --/1.4 s (803 steps)/17.5 s
    SPARKNaCl.ASR_16                      => --/5.7 s (14001 steps)/5.9 s
    SPARKNaCl.ASR_8                       => --/3.3 s (14001 steps)/3.5 s
    SPARKNaCl.ASR_4                       => --/1.2 s (14001 steps)/1.4 s
    

Notice how the entries for SPARKNaCl.Omultiply and SPARKNaCl.Utils.Pack_25519.Subtract_P have moved up?

Please note that unproved items are still shown, but because they are unproved they have no successful proof time (although there may be partial successes for the involved VCs), so with this sorting option they will always appear at the end.

Looking at Omultiply in detail:

run_spat -d -cs -ra -P sparknacl.gpr -e .*Omultiply

SPARKNaCl.Omultiply                              =>                                                700.0 ms (1 step)/19.1 s (14001 steps)/28.6 s
`-VC_LOOP_INVARIANT_PRESERV sparknacl.adb:164:13 => 40.0 ms (1 step)/19.1 s (14001 steps)/19.1 s
 `-CVC4: 19.1 s (14001 steps), Unknown (unknown)
  -Z3: 40.0 ms (1 step), Valid
`-VC_LOOP_INVARIANT_PRESERV sparknacl.adb:80:16  => 700.0 ms (1 step)/700.0 ms (1 step)/930.0 ms
 `-CVC4: 700.0 ms (1 step), Valid
 `-CVC4: 230.0 ms (1 step), Valid
`-VC_RANGE_CHECK sparknacl.adb:75:36             => 170.0 ms (1 step)/170.0 ms (1 step)/480.0 ms
 `-CVC4: 170.0 ms (1 step), Valid
 `-CVC4: 120.0 ms (1 step), Valid
 `-CVC4: 110.0 ms (1 step), Valid
 `-CVC4: 80.0 ms (1 step), Valid
`-VC_OVERFLOW_CHECK sparknacl.adb:75:48          => 140.0 ms (1 step)/140.0 ms (1 step)/420.0 ms
 `-CVC4: 140.0 ms (1 step), Valid
 `-CVC4: 110.0 ms (1 step), Valid
 `-CVC4: 100.0 ms (1 step), Valid
 `-CVC4: 70.0 ms (1 step), Valid
`-VC_LOOP_INVARIANT_INIT sparknacl.adb:80:16     => 120.0 ms (1 step)/220.0 ms (1 step)/400.0 ms
 `-CVC4: 220.0 ms (1 step), Unknown (unknown)
  -Z3: 60.0 ms (1 step), Valid
 `-CVC4: 120.0 ms (1 step), Valid
...

You can see where the 700 ms for the longest time for a successful proof comes from (this VC would not be shown at all in --report-mode=failed).

Tool Limitations

  • spat only reports accurate timings if it is used after a pristine run of gnatprove.

    That means: say you have a failed proof, you change the code, and you run gnatprove again. All times reported for unchanged entities will be 0.0 s (see the sketch after this list for one way to force a pristine run).

    This can be used to your advantage, though: let's assume you are trying to improve the proof time for a certain proof, so you change the code (like adding a helping assertion or restructuring the logic) and then run gnatprove again. If you now run spat again, all unchanged proofs will be reported as having a time of 0.0 s, but the verification conditions that had to be re-verified will show the time spent proving them. Which, in the case of trying to optimize proof times, is exactly what you want.

  • The --suggest option is highly experimental. As the name says, it is a suggestion.
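For the first limitation above, a pristine run can be forced by letting gnatprove start from scratch. A minimal sketch (the project name is a placeholder, and gnatprove's --clean switch is assumed to remove the previously generated proof artifacts):

gnatprove -P my_project.gpr --clean
gnatprove -P my_project.gpr
run_spat -ra -ct -P my_project.gpr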

spat's People

Contributors

gitter-badger, jellix, yannickmoy


spat's Issues

`spat.py` likely to be too strict about matching filenames

Describe the bug
The regular expression for a file:line:column, "([a-z0-9-]+\.ad[bs]):(\d+):(\d+)", only works if the standard GNAT naming convention is used; as soon as there are upper case letters it won't match, so spat.py will not show anything.

Also, the original expression totally missed that we could have underscores in file names.

Sorting is not deterministic

See below for the recurrence.


The fix for issue #13 revealed that sorting is not stable.

In case of sorting by time, when two times are identical, we also need to take the entity name/source location into account, I'm afraid.

For VCs, the total_proof_time, the maximum_proof_time and then the entity location should be compared.

It's a bit unclear what to do with single proof paths, though. They all reference the same location and entity; even sorting by prover name could result in non-deterministic behaviour.

Show max/success steps also in summary and less detailed report modes

Is your feature request related to a problem? Please describe.

See latest comments on issue #30.

Describe the solution you'd like

  • Steps should also be shown in summary and report mode without details (although I suspect we should then show the scaled steps, not the prover-specific raw ones which are currently shown in the detailed modes).

  • Also, sorting by (scaled) steps should be possible.

Improve performance

Right now, when printing entities filtered by --failed or --unproved, the decision whether the entity's proof tree has failed attempts results in multiple calls to Has_Failed_Attempts and Is_Unproved, each of which causes more similar calls into all dependent objects. This should be optimized so that such properties are only calculated once.

For further discussion see this wiki-page

Show steps for successful proof

Is your feature request related to a problem? Please describe.
While reducing --timeout to improve the proof time is a valid option, it is not very reliable when GNATprove is executed on machines with very different computing power. A reproducible proof setting is especially important when GNATprove is regularly run on external machines with unknown resources (e.g., a CI provider). That is why I usually prefer to use --steps instead (or in combination with a high timeout). Unfortunately, SPAT doesn't analyze proof steps.

Describe the solution you'd like
The steps could be shown along with the time. For example:

RFLX.RFLX_Types.U64_Insert => 120.0 s/1.6 ks 87391 steps
`-VC_PRECONDITION rflx-rflx_generic_types.adb:221:39 => 120.0 s/491.0 s 46034 steps
  -altergo: 188.2 ms 46034 steps (Valid)
[...]

It should also be possible to sort the output by the number of steps, so that the most difficult units of the whole project can be easily seen.

Emit warning if proof times do not match?

While #22 is now documented under tool limitations, it could be nice to issue a warning to the user if such data is encountered (i.e. the times reported in the file and the summed proof times vastly differ and most proof times are 0.0 s).

Describe the solution you'd like
Emit a text like "Warning: Data seems to be from a non-pristine run, timing results will be inaccurate." to notify the user of the issue.

Additional context
If the tool is used in "iterative proof mode" (see README), the additional warning may be more disturbing than helpful.
Making the warning optional is pointless (as the user would actively need to enable it), but forcing the user to state on each run that they do not want such warnings could be annoying, too...

Document in README that tool must be used on pristine run of GNATprove

SPAT will only work as intended after a from-scratch run of GNATprove, not on a subsequent run. Indeed, on subsequent runs, GNATprove will regenerate .spark files but spend no time on checks that were already proved, or in some cases even no time on any checks (if the switch --output-msg-only is used). As a result, the time info for most checks is just "0.0 s".

This should be described in the README, so that users are not surprised.

The underlying reason for that behavior of GNATprove is that the Why3 session files are where all the history of prover runs and current state is stored. A reasonable solution to improve the connection between GNATprove and SPAT would be to regenerate the timing information in .spark files from the info read in the Why3 session files. Feel free to open an Issue on spark2014 repo if you'd like us to investigate it!

Improve (scale, extend) timeout values in suggested configuration

Is your feature request related to a problem? Please describe.

If you use the --suggest switch, the resulting suggested timeouts are rather optimistic and fragile. For instance, if a proof takes 1.9 seconds in the run that spat analysed, the suggested timeout configuration is --timeout=2 (essentially the ceiling value). Even if you always run the proofs on the same machine, dedicated only to proofs, times are in no way guaranteed due to different load, task scheduling, occasionally running background jobs, etc. Cutting it so close may lead to random proof failures when using that configuration.

This also warrants an explanation in the documentation, so that the user is at least made more aware of this.

Describe the solution you'd like

Suggested timeout values should have a (constant) minimum value to cater for load variations on the machine running the proofs, and should be scaled by some percentage on the upper end. Roughly like:

Timeout_Value := (Maximum_Proof_Time (VC) + Min_Timeout) * 1.1; -- force a minimum time out and add 10% slack

For special needs such parameters could maybe also be passed on the command line to override the internal defaults.

Describe alternatives you've considered

Well, you can always do rough calculations like that manually, but this is error-prone and not user-friendly. That might be fine for "toy" projects with maybe a dozen files, but it becomes infeasible for larger projects with many files.

Additional context

As a bare minimum, the proof time which was used for calculating the timeout should be added as a comment after the Proof_Switches line. That way, the user has at least an idea of how close the value actually is to the timeout. It makes no sense to increase a timeout to five seconds or so by default if the maximum proof time is in the order of a couple of milliseconds; chances are it will never come close to running for a whole second. It's a whole different case if the recorded proof time is already 980 ms; then it is almost a certainty that it will exceed a one-second limit at some point. Right now, spat handles both cases as if they were the same.

Suggested steps too low

Describe the bug
The suggested steps are in some cases too low for a successful proof. While 1672 steps are suggested, 1768 steps are required.

$ run_spat -Ptest.gpr --suggest
[...]
package Prove is
   for Proof_Switches ("rflx-arrays-tests.adb") use ("--prover=Z3", "--steps=1672", "--timeout=1");
end Prove;
$ sparkprof -Ptest -u rflx-arrays-tests
---------------------------------------------------------------------------------------
UNIT                                                  MAX STEPS    MAX TIME  TOTAL TIME
[...]
rflx-arrays-tests                                       1768 steps     0.0 s     0.0 s

Here is the relevant .spark file: rflx-arrays-tests.spark.txt

Used version: run_spat V1.1.3 (compiled by SYSTEM_NAME_GNAT Community 2020 (20200429-93))

Suggested configuration is incomplete

Describe the bug
I have run SPAT (run_spat V1.1.1 (compiled by SYSTEM_NAME_GNAT Community 2020 (20200429-93))) on RecordFlux's test suite and found the following issues:

  1. For multiple units (e.g., rflx-rflx_types) no prover configuration is generated. SPAT shows information for this unit in the summary and thus should be able to create a configuration for it.
rflx-rflx_types.spark                     => (Flow  => 37.8 ms,
                                              Proof => 29.8 s/60.0 s/714.0 s)
  2. Multiple prover configurations are generated for generics. (I suppose Proof_Switches has no effect on generic packages.)
   for Proof_Switches ("rflx-rflx_generic_types.adb") use ("--prover=CVC4,altergo", "--steps=239854", "--timeout=30");
   for Proof_Switches ("rflx-rflx_generic_types.ads") use ("--prover=CVC4,altergo", "--steps=364532", "--timeout=58");
   for Proof_Switches ("rflx-rflx_generic_types.ads") use ("--prover=CVC4,Z3", "--steps=1", "--timeout=1");
   for Proof_Switches ("rflx-rflx_generic_types.ads") use ("--prover=CVC4,Z3", "--steps=11566", "--timeout=10");

Console Output

$ run_spat -Ptest.gpr --suggest --verbose
Debug: GNAT project loaded in 36.7 ms.
Debug: Found "rflx-arrays-av_enumeration_vector.ads"...Debug: "rflx-arrays-av_enumeration_vector.spark" added to index.
Debug: Found "rflx-arrays-enumeration_vector.ads"...Debug: "rflx-arrays-enumeration_vector.spark" added to index.
Debug: Found "rflx-arrays-generic_inner_message.adb"...Debug: "rflx-arrays-generic_inner_message.spark" not found on disk, skipped.
Debug: Found "rflx-arrays-generic_inner_message.ads"...Debug: "rflx-arrays-generic_inner_message.spark" already in index.
Debug: Found "rflx-arrays-generic_message.adb"...Debug: "rflx-arrays-generic_message.spark" not found on disk, skipped.
Debug: Found "rflx-arrays-generic_message.ads"...Debug: "rflx-arrays-generic_message.spark" already in index.
Debug: Found "rflx-arrays-generic_messages_message.adb"...Debug: "rflx-arrays-generic_messages_message.spark" not found on disk, skipped.
Debug: Found "rflx-arrays-generic_messages_message.ads"...Debug: "rflx-arrays-generic_messages_message.spark" already in index.
Debug: Found "rflx-arrays-inner_message.ads"...Debug: "rflx-arrays-inner_message.spark" added to index.
Debug: Found "rflx-arrays-inner_messages.ads"...Debug: "rflx-arrays-inner_messages.spark" added to index.
Debug: Found "rflx-arrays-message.ads"...Debug: "rflx-arrays-message.spark" added to index.
Debug: Found "rflx-arrays-messages_message.ads"...Debug: "rflx-arrays-messages_message.spark" added to index.
Debug: Found "rflx-arrays-modular_vector.ads"...Debug: "rflx-arrays-modular_vector.spark" added to index.
Debug: Found "rflx-arrays-range_vector.ads"...Debug: "rflx-arrays-range_vector.spark" added to index.
Debug: Found "rflx-arrays.ads"...Debug: "rflx-arrays.spark" added to index.
Debug: Found "rflx-derivation-message.ads"...Debug: "rflx-derivation-message.spark" added to index.
Debug: Found "rflx-derivation.ads"...Debug: "rflx-derivation.spark" added to index.
Debug: Found "rflx-enumeration-generic_message.adb"...Debug: "rflx-enumeration-generic_message.spark" not found on disk, skipped.
Debug: Found "rflx-enumeration-generic_message.ads"...Debug: "rflx-enumeration-generic_message.spark" already in index.
Debug: Found "rflx-enumeration-message.ads"...Debug: "rflx-enumeration-message.spark" added to index.
Debug: Found "rflx-enumeration.ads"...Debug: "rflx-enumeration.spark" added to index.
Debug: Found "rflx-ethernet-frame.ads"...Debug: "rflx-ethernet-frame.spark" added to index.
Debug: Found "rflx-ethernet-generic_frame.adb"...Debug: "rflx-ethernet-generic_frame.spark" not found on disk, skipped.
Debug: Found "rflx-ethernet-generic_frame.ads"...Debug: "rflx-ethernet-generic_frame.spark" already in index.
Debug: Found "rflx-ethernet.ads"...Debug: "rflx-ethernet.spark" added to index.
Debug: Found "rflx-expression-generic_message.adb"...Debug: "rflx-expression-generic_message.spark" not found on disk, skipped.
Debug: Found "rflx-expression-generic_message.ads"...Debug: "rflx-expression-generic_message.spark" already in index.
Debug: Found "rflx-expression-message.ads"...Debug: "rflx-expression-message.spark" added to index.
Debug: Found "rflx-expression.ads"...Debug: "rflx-expression.spark" added to index.
Debug: Found "rflx-in_ethernet-contains.ads"...Debug: "rflx-in_ethernet-contains.spark" added to index.
Debug: Found "rflx-in_ethernet-generic_contains.adb"...Debug: "rflx-in_ethernet-generic_contains.spark" not found on disk, skipped.
Debug: Found "rflx-in_ethernet-generic_contains.ads"...Debug: "rflx-in_ethernet-generic_contains.spark" already in index.
Debug: Found "rflx-in_ethernet.ads"...Debug: "rflx-in_ethernet.spark" added to index.
Debug: Found "rflx-in_ipv4-contains.ads"...Debug: "rflx-in_ipv4-contains.spark" added to index.
Debug: Found "rflx-in_ipv4-generic_contains.adb"...Debug: "rflx-in_ipv4-generic_contains.spark" not found on disk, skipped.
Debug: Found "rflx-in_ipv4-generic_contains.ads"...Debug: "rflx-in_ipv4-generic_contains.spark" already in index.
Debug: Found "rflx-in_ipv4.ads"...Debug: "rflx-in_ipv4.spark" added to index.
Debug: Found "rflx-in_tlv-contains.ads"...Debug: "rflx-in_tlv-contains.spark" added to index.
Debug: Found "rflx-in_tlv-generic_contains.ads"...Debug: "rflx-in_tlv-generic_contains.spark" not found on disk, skipped.
Debug: Found "rflx-in_tlv.ads"...Debug: "rflx-in_tlv.spark" added to index.
Debug: Found "rflx-ipv4-generic_option.adb"...Debug: "rflx-ipv4-generic_option.spark" not found on disk, skipped.
Debug: Found "rflx-ipv4-generic_option.ads"...Debug: "rflx-ipv4-generic_option.spark" already in index.
Debug: Found "rflx-ipv4-generic_packet.adb"...Debug: "rflx-ipv4-generic_packet.spark" not found on disk, skipped.
Debug: Found "rflx-ipv4-generic_packet.ads"...Debug: "rflx-ipv4-generic_packet.spark" already in index.
Debug: Found "rflx-ipv4-option.ads"...Debug: "rflx-ipv4-option.spark" added to index.
Debug: Found "rflx-ipv4-options.ads"...Debug: "rflx-ipv4-options.spark" added to index.
Debug: Found "rflx-ipv4-packet.ads"...Debug: "rflx-ipv4-packet.spark" added to index.
Debug: Found "rflx-ipv4.ads"...Debug: "rflx-ipv4.spark" added to index.
Debug: Found "rflx-rflx_arithmetic.adb"...Debug: "rflx-rflx_arithmetic.spark" added to index.
Debug: Found "rflx-rflx_arithmetic.ads"...Debug: "rflx-rflx_arithmetic.spark" already in index.
Debug: Found "rflx-rflx_builtin_types-conversions.ads"...Debug: "rflx-rflx_builtin_types-conversions.spark" added to index.
Debug: Found "rflx-rflx_builtin_types.ads"...Debug: "rflx-rflx_builtin_types.spark" added to index.
Debug: Found "rflx-rflx_generic_types.adb"...Debug: "rflx-rflx_generic_types.spark" not found on disk, skipped.
Debug: Found "rflx-rflx_generic_types.ads"...Debug: "rflx-rflx_generic_types.spark" already in index.
Debug: Found "rflx-rflx_message_sequence.adb"...Debug: "rflx-rflx_message_sequence.spark" not found on disk, skipped.
Debug: Found "rflx-rflx_message_sequence.ads"...Debug: "rflx-rflx_message_sequence.spark" already in index.
Debug: Found "rflx-rflx_scalar_sequence.adb"...Debug: "rflx-rflx_scalar_sequence.spark" not found on disk, skipped.
Debug: Found "rflx-rflx_scalar_sequence.ads"...Debug: "rflx-rflx_scalar_sequence.spark" already in index.
Debug: Found "rflx-rflx_types.ads"...Debug: "rflx-rflx_types.spark" added to index.
Debug: Found "rflx-tlv-generic_message.adb"...Debug: "rflx-tlv-generic_message.spark" not found on disk, skipped.
Debug: Found "rflx-tlv-generic_message.ads"...Debug: "rflx-tlv-generic_message.spark" already in index.
Debug: Found "rflx-tlv-message.ads"...Debug: "rflx-tlv-message.spark" added to index.
Debug: Found "rflx-tlv.ads"...Debug: "rflx-tlv.spark" added to index.
Debug: Found "rflx-udp-datagram.ads"...Debug: "rflx-udp-datagram.spark" added to index.
Debug: Found "rflx-udp-generic_datagram.adb"...Debug: "rflx-udp-generic_datagram.spark" not found on disk, skipped.
Debug: Found "rflx-udp-generic_datagram.ads"...Debug: "rflx-udp-generic_datagram.spark" already in index.
Debug: Found "rflx-udp.ads"...Debug: "rflx-udp.spark" added to index.
Debug: Found "rflx.ads"...Debug: "rflx.spark" added to index.
Debug: Found "ethernet.rflx"...Debug: "ethernet.spark" not found on disk, skipped.
Debug: Found "icmp.rflx"...Debug: "icmp.spark" not found on disk, skipped.
Debug: Found "in_ethernet.rflx"...Debug: "in_ethernet.spark" not found on disk, skipped.
Debug: Found "in_ipv4.rflx"...Debug: "in_ipv4.spark" not found on disk, skipped.
Debug: Found "ipv4.rflx"...Debug: "ipv4.spark" not found on disk, skipped.
Debug: Found "test.rflx"...Debug: "test.spark" added to index.
Debug: Found "tls_alert.rflx"...Debug: "tls_alert.spark" not found on disk, skipped.
Debug: Found "tls_handshake.rflx"...Debug: "tls_handshake.spark" not found on disk, skipped.
Debug: Found "tls_heartbeat.rflx"...Debug: "tls_heartbeat.spark" not found on disk, skipped.
Debug: Found "tls_record.rflx"...Debug: "tls_record.spark" not found on disk, skipped.
Debug: Found "tlv.rflx"...Debug: "tlv.spark" not found on disk, skipped.
Debug: Found "udp.rflx"...Debug: "udp.spark" not found on disk, skipped.
Debug: Found "array_message.rflx"...Debug: "array_message.spark" not found on disk, skipped.
Debug: Found "array_type.rflx"...Debug: "array_type.spark" not found on disk, skipped.
Debug: Found "comment_only.rflx"...Debug: "comment_only.spark" not found on disk, skipped.
Debug: Found "context.rflx"...Debug: "context.spark" not found on disk, skipped.
Debug: Found "context_cycle.rflx"...Debug: "context_cycle.spark" not found on disk, skipped.
Debug: Found "context_cycle_1.rflx"...Debug: "context_cycle_1.spark" not found on disk, skipped.
Debug: Found "context_cycle_2.rflx"...Debug: "context_cycle_2.spark" not found on disk, skipped.
Debug: Found "context_cycle_3.rflx"...Debug: "context_cycle_3.spark" not found on disk, skipped.
Debug: Found "duplicate_type.rflx"...Debug: "duplicate_type.spark" not found on disk, skipped.
Debug: Found "empty_file.rflx"...Debug: "empty_file.spark" not found on disk, skipped.
Debug: Found "empty_package.rflx"...Debug: "empty_package.spark" not found on disk, skipped.
Debug: Found "enumeration_type.rflx"...Debug: "enumeration_type.spark" not found on disk, skipped.
Debug: Found "feature_integration.rflx"...Debug: "feature_integration.spark" not found on disk, skipped.
Debug: Found "identical_literals.rflx"...Debug: "identical_literals.spark" not found on disk, skipped.
Debug: Found "incorrect_name.rflx"...Debug: "incorrect_name.spark" not found on disk, skipped.
Debug: Found "incorrect_specification.rflx"...Debug: "incorrect_specification.spark" not found on disk, skipped.
Debug: Found "integer_type.rflx"...Debug: "integer_type.spark" not found on disk, skipped.
Debug: Found "message_in_message.rflx"...Debug: "message_in_message.spark" not found on disk, skipped.
Debug: Found "message_odd_length.rflx"...Debug: "message_odd_length.spark" not found on disk, skipped.
Debug: Found "message_type.rflx"...Debug: "message_type.spark" not found on disk, skipped.
Debug: Found "null_message.rflx"...Debug: "null_message.spark" not found on disk, skipped.
Debug: Found "rflx-arrays-tests.adb"...Debug: "rflx-arrays-tests.spark" added to index.
Debug: Found "rflx-arrays-tests.ads"...Debug: "rflx-arrays-tests.spark" already in index.
Debug: Found "rflx-builtin_types_tests.adb"...Debug: "rflx-builtin_types_tests.spark" added to index.
Debug: Found "rflx-builtin_types_tests.ads"...Debug: "rflx-builtin_types_tests.spark" already in index.
Debug: Found "rflx-custom_types_tests.adb"...Debug: "rflx-custom_types_tests.spark" added to index.
Debug: Found "rflx-custom_types_tests.ads"...Debug: "rflx-custom_types_tests.spark" already in index.
Debug: Found "rflx-derivation-tests.adb"...Debug: "rflx-derivation-tests.spark" added to index.
Debug: Found "rflx-derivation-tests.ads"...Debug: "rflx-derivation-tests.spark" already in index.
Debug: Found "rflx-enumeration-tests.adb"...Debug: "rflx-enumeration-tests.spark" added to index.
Debug: Found "rflx-enumeration-tests.ads"...Debug: "rflx-enumeration-tests.spark" already in index.
Debug: Found "rflx-ethernet-tests.adb"...Debug: "rflx-ethernet-tests.spark" added to index.
Debug: Found "rflx-ethernet-tests.ads"...Debug: "rflx-ethernet-tests.spark" already in index.
Debug: Found "rflx-expression-tests.adb"...Debug: "rflx-expression-tests.spark" added to index.
Debug: Found "rflx-expression-tests.ads"...Debug: "rflx-expression-tests.spark" already in index.
Debug: Found "rflx-in_ethernet-tests.adb"...Debug: "rflx-in_ethernet-tests.spark" added to index.
Debug: Found "rflx-in_ethernet-tests.ads"...Debug: "rflx-in_ethernet-tests.spark" already in index.
Debug: Found "rflx-in_ipv4-tests.adb"...Debug: "rflx-in_ipv4-tests.spark" added to index.
Debug: Found "rflx-in_ipv4-tests.ads"...Debug: "rflx-in_ipv4-tests.spark" already in index.
Debug: Found "rflx-in_tlv-tests.adb"...Debug: "rflx-in_tlv-tests.spark" added to index.
Debug: Found "rflx-in_tlv-tests.ads"...Debug: "rflx-in_tlv-tests.spark" already in index.
Debug: Found "rflx-ipv4-tests.adb"...Debug: "rflx-ipv4-tests.spark" added to index.
Debug: Found "rflx-ipv4-tests.ads"...Debug: "rflx-ipv4-tests.spark" already in index.
Debug: Found "rflx-tlv-tests.adb"...Debug: "rflx-tlv-tests.spark" added to index.
Debug: Found "rflx-tlv-tests.ads"...Debug: "rflx-tlv-tests.spark" already in index.
Debug: Found "spark-assertions.adb"...Debug: "spark-assertions.spark" added to index.
Debug: Found "spark-assertions.ads"...Debug: "spark-assertions.spark" already in index.
Debug: Found "spark-file_io.adb"...Debug: "spark-file_io.spark" added to index.
Debug: Found "spark-file_io.ads"...Debug: "spark-file_io.spark" already in index.
Debug: Found "spark.ads"...Debug: "spark.spark" added to index.
Debug: Found "test.adb"...Debug: "test.spark" already in index.
Debug: Found "test_suite.adb"...Debug: "test_suite.spark" added to index.
Debug: Found "test_suite.ads"...Debug: "test_suite.spark" already in index.
Debug: Found "tlv_with_checksum.rflx"...Debug: "tlv_with_checksum.spark" not found on disk, skipped.
Debug: Found "type_refinement.rflx"...Debug: "type_refinement.spark" not found on disk, skipped.
Debug: Found "ada_containers-aunit_lists.adb"...Debug: "ada_containers-aunit_lists.spark" not found on disk, skipped.
Debug: Found "ada_containers-aunit_lists.ads"...Debug: "ada_containers-aunit_lists.spark" already in index.
Debug: Found "ada_containers.ads"...Debug: "ada_containers.spark" not found on disk, skipped.
Debug: Found "aunit-assertions-assert_exception.adb"...Debug: "aunit-assertions-assert_exception.spark" not found on disk, skipped.
Debug: Found "aunit-assertions.adb"...Debug: "aunit-assertions.spark" not found on disk, skipped.
Debug: Found "aunit-assertions.ads"...Debug: "aunit-assertions.spark" already in index.
Debug: Found "aunit-memory-utils.adb"...Debug: "aunit-memory-utils.spark" not found on disk, skipped.
Debug: Found "aunit-memory-utils.ads"...Debug: "aunit-memory-utils.spark" already in index.
Debug: Found "aunit-memory.adb"...Debug: "aunit-memory.spark" not found on disk, skipped.
Debug: Found "aunit-memory.ads"...Debug: "aunit-memory.spark" already in index.
Debug: Found "aunit-options.ads"...Debug: "aunit-options.spark" not found on disk, skipped.
Debug: Found "aunit-reporter-gnattest.adb"...Debug: "aunit-reporter-gnattest.spark" not found on disk, skipped.
Debug: Found "aunit-reporter-gnattest.ads"...Debug: "aunit-reporter-gnattest.spark" already in index.
Debug: Found "aunit-reporter-text.adb"...Debug: "aunit-reporter-text.spark" not found on disk, skipped.
Debug: Found "aunit-reporter-text.ads"...Debug: "aunit-reporter-text.spark" already in index.
Debug: Found "aunit-reporter-xml.adb"...Debug: "aunit-reporter-xml.spark" not found on disk, skipped.
Debug: Found "aunit-reporter-xml.ads"...Debug: "aunit-reporter-xml.spark" already in index.
Debug: Found "aunit-reporter.ads"...Debug: "aunit-reporter.spark" not found on disk, skipped.
Debug: Found "aunit-run.adb"...Debug: "aunit-run.spark" not found on disk, skipped.
Debug: Found "aunit-run.ads"...Debug: "aunit-run.spark" already in index.
Debug: Found "aunit-simple_test_cases-run_routine.adb"...Debug: "aunit-simple_test_cases-run_routine.spark" not found on disk, skipped.
Debug: Found "aunit-simple_test_cases.adb"...Debug: "aunit-simple_test_cases.spark" not found on disk, skipped.
Debug: Found "aunit-simple_test_cases.ads"...Debug: "aunit-simple_test_cases.spark" already in index.
Debug: Found "aunit-test_caller.adb"...Debug: "aunit-test_caller.spark" not found on disk, skipped.
Debug: Found "aunit-test_caller.ads"...Debug: "aunit-test_caller.spark" already in index.
Debug: Found "aunit-test_cases-call_set_up_case.adb"...Debug: "aunit-test_cases-call_set_up_case.spark" not found on disk, skipped.
Debug: Found "aunit-test_cases-registration.adb"...Debug: "aunit-test_cases-registration.spark" not found on disk, skipped.
Debug: Found "aunit-test_cases.adb"...Debug: "aunit-test_cases.spark" not found on disk, skipped.
Debug: Found "aunit-test_cases.ads"...Debug: "aunit-test_cases.spark" already in index.
Debug: Found "aunit-test_filters.adb"...Debug: "aunit-test_filters.spark" not found on disk, skipped.
Debug: Found "aunit-test_filters.ads"...Debug: "aunit-test_filters.spark" already in index.
Debug: Found "aunit-test_fixtures.adb"...Debug: "aunit-test_fixtures.spark" not found on disk, skipped.
Debug: Found "aunit-test_fixtures.ads"...Debug: "aunit-test_fixtures.spark" already in index.
Debug: Found "aunit-test_results.adb"...Debug: "aunit-test_results.spark" not found on disk, skipped.
Debug: Found "aunit-test_results.ads"...Debug: "aunit-test_results.spark" already in index.
Debug: Found "aunit-test_suites.adb"...Debug: "aunit-test_suites.spark" not found on disk, skipped.
Debug: Found "aunit-test_suites.ads"...Debug: "aunit-test_suites.spark" already in index.
Debug: Found "aunit-tests.ads"...Debug: "aunit-tests.spark" not found on disk, skipped.
Debug: Found "aunit-time_measure.adb"...Debug: "aunit-time_measure.spark" not found on disk, skipped.
Debug: Found "aunit-time_measure.ads"...Debug: "aunit-time_measure.spark" already in index.
Debug: Found "aunit.adb"...Debug: "aunit.spark" not found on disk, skipped.
Debug: Found "aunit.ads"...Debug: "aunit.spark" already in index.
Debug: Search completed in 1.7 ms, 53 files found.
Debug: Using up to 16 parsing threads.
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/rflx-arrays-av_enumeration_vector.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/rflx-arrays-enumeration_vector.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/rflx-arrays-inner_message.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/rflx-arrays-inner_messages.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/rflx-arrays-message.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/rflx-arrays-messages_message.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/rflx-arrays-modular_vector.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/rflx-arrays-range_vector.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/rflx-arrays.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/rflx-derivation-message.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/rflx-derivation.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/rflx-enumeration-message.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/rflx-enumeration.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/rflx-ethernet-frame.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/rflx-ethernet.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/rflx-expression-message.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/rflx-expression.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/rflx-in_ethernet-contains.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/rflx-in_ethernet.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/rflx-in_ipv4-contains.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/rflx-in_ipv4.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/rflx-in_tlv-contains.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/rflx-in_tlv.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/rflx-ipv4-option.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/rflx-ipv4-options.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/rflx-ipv4-packet.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/rflx-ipv4.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/rflx-rflx_arithmetic.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/rflx-rflx_builtin_types-conversions.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/rflx-rflx_builtin_types.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/rflx-rflx_types.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/rflx-tlv-message.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/rflx-tlv.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/rflx-udp-datagram.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/rflx-udp.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/rflx.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/test.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/rflx-arrays-tests.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/rflx-builtin_types_tests.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/rflx-custom_types_tests.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/rflx-derivation-tests.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/rflx-enumeration-tests.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/rflx-ethernet-tests.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/rflx-expression-tests.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/rflx-in_ethernet-tests.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/rflx-in_ipv4-tests.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/rflx-in_tlv-tests.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/rflx-ipv4-tests.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/rflx-tlv-tests.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/spark-assertions.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/spark-file_io.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/spark.spark"...
Debug: Queuing "/home/tr/sync/RecordFlux/build/gnatprove/test_suite.spark"...
Debug: Picking up results...
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/rflx-derivation.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/rflx-expression.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/rflx-enumeration.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/rflx-in_ethernet.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/rflx-ethernet.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/rflx-in_ipv4.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/rflx-in_tlv-contains.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/rflx-in_tlv.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/rflx-arrays.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/rflx-in_ethernet-contains.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/rflx-in_ipv4-contains.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/rflx-arrays-inner_messages.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/rflx-ipv4-options.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/rflx-ipv4.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/rflx-rflx_builtin_types.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/rflx-rflx_builtin_types-conversions.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/rflx-arrays-av_enumeration_vector.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/rflx-arrays-modular_vector.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/rflx-arrays-enumeration_vector.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/rflx-arrays-range_vector.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/rflx.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/test.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/rflx-tlv.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/rflx-udp.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/rflx-rflx_arithmetic.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/rflx-expression-message.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/rflx-enumeration-message.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/rflx-enumeration-tests.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/rflx-rflx_types.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/rflx-expression-tests.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/rflx-arrays-inner_message.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/rflx-in_tlv-tests.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/rflx-arrays-messages_message.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/rflx-builtin_types_tests.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/spark-assertions.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/spark-file_io.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/spark.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/test_suite.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/rflx-in_ethernet-tests.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/rflx-in_ipv4-tests.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/rflx-derivation-tests.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/rflx-tlv-message.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/rflx-tlv-tests.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/rflx-ipv4-tests.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/rflx-derivation-message.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/rflx-udp-datagram.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/rflx-arrays-tests.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/rflx-arrays-message.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/rflx-ethernet-tests.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/rflx-custom_types_tests.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/rflx-ipv4-option.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/rflx-ethernet-frame.spark"...ok.
Debug: "/home/tr/sync/RecordFlux/build/gnatprove/rflx-ipv4-packet.spark"...ok.
Debug: Parsing completed in 362.9 ms.
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: File version: GNAT_CE_2020
Debug: Reading completed in 566.3 ms.
Debug: Cut off point set to 0.0 s.
Warning: You requested a suggested prover configuration.
Warning: This feature is highly experimental.
Warning: Please consult the documentation.
Debug: /home/tr/sync/RecordFlux/build/gnatprove/rflx-ipv4-option.spark
Debug:   CVC4
Debug:     t(Success) 2.9 s
Debug:     t(Failed)  0.0 s
Debug:     T(Success) 1.4 s
Debug:     S(Success) 640
Debug:     file     "rflx-ipv4-generic_option.ads"
Debug:   Z3
Debug:     t(Success) 40.5 s
Debug:     t(Failed)  680.0 ms
Debug:     T(Success) 790.0 ms
Debug:     S(Success) 1979
Debug:     file     "rflx-ipv4-generic_option.adb"
Debug: /home/tr/sync/RecordFlux/build/gnatprove/rflx-udp-datagram.spark
Debug:   CVC4
Debug:     t(Success) 4.6 s
Debug:     t(Failed)  0.0 s
Debug:     T(Success) 4.6 s
Debug:     S(Success) 2334
Debug:     file     "rflx-udp-generic_datagram.ads"
Debug:   Z3
Debug:     t(Success) 25.4 s
Debug:     t(Failed)  1.1 s
Debug:     T(Success) 460.0 ms
Debug:     S(Success) 1863
Debug:     file     "rflx-udp-generic_datagram.adb"
Debug: /home/tr/sync/RecordFlux/build/gnatprove/rflx-derivation-tests.spark
Debug:   Z3
Debug:     t(Success) 33.3 s
Debug:     t(Failed)  0.0 s
Debug:     T(Success) 1.1 s
Debug:     S(Success) 2215
Debug:     file     "rflx-derivation-tests.adb"
Debug: /home/tr/sync/RecordFlux/build/gnatprove/rflx-arrays.spark
Debug:   Z3
Debug:     t(Success) 150.0 ms
Debug:     t(Failed)  0.0 s
Debug:     T(Success) 20.0 ms
Debug:     S(Success) 1
Debug:     file     "rflx-arrays.ads"
Debug: /home/tr/sync/RecordFlux/build/gnatprove/rflx-ethernet-tests.spark
Debug:   Z3
Debug:     t(Success) 28.3 s
Debug:     t(Failed)  0.0 s
Debug:     T(Success) 1.0 s
Debug:     S(Success) 3359
Debug:     file     "rflx-ethernet-generic_frame.ads"
Debug: /home/tr/sync/RecordFlux/build/gnatprove/rflx-expression-tests.spark
Debug:   Z3
Debug:     t(Success) 820.0 ms
Debug:     t(Failed)  0.0 s
Debug:     T(Success) 80.0 ms
Debug:     S(Success) 1
Debug:     file     "rflx-expression-tests.adb"
Debug: /home/tr/sync/RecordFlux/build/gnatprove/rflx-tlv-tests.spark
Debug:   Z3
Debug:     t(Success) 7.7 s
Debug:     t(Failed)  0.0 s
Debug:     T(Success) 170.0 ms
Debug:     S(Success) 467
Debug:     file     "rflx-tlv-tests.adb"
Debug: /home/tr/sync/RecordFlux/build/gnatprove/rflx-ethernet.spark
Debug:   Z3
Debug:     t(Success) 90.0 ms
Debug:     t(Failed)  0.0 s
Debug:     T(Success) 20.0 ms
Debug:     S(Success) 1
Debug:     file     "rflx-ethernet.ads"
Debug: /home/tr/sync/RecordFlux/build/gnatprove/rflx-enumeration-tests.spark
Debug:   Z3
Debug:     t(Success) 1.7 s
Debug:     t(Failed)  0.0 s
Debug:     T(Success) 90.0 ms
Debug:     S(Success) 1
Debug:     file     "rflx-enumeration-tests.adb"
Debug: /home/tr/sync/RecordFlux/build/gnatprove/rflx-rflx_arithmetic.spark
Debug:   altergo
Debug:     t(Success) 4.2 s
Debug:     t(Failed)  304.1 s
Debug:     T(Success) 381.7 ms
Debug:     S(Success) 3039
Debug:     file     "rflx-rflx_arithmetic.adb"
Debug:   CVC4
Debug:     t(Success) 137.6 s
Debug:     t(Failed)  0.0 s
Debug:     T(Success) 50.3 s
Debug:     S(Success) 70979
Debug:     file     "rflx-rflx_arithmetic.adb"
Debug: /home/tr/sync/RecordFlux/build/gnatprove/rflx-tlv.spark
Debug:   Z3
Debug:     t(Success) 60.0 ms
Debug:     t(Failed)  0.0 s
Debug:     T(Success) 20.0 ms
Debug:     S(Success) 1
Debug:     file     "rflx-tlv.ads"
Debug: /home/tr/sync/RecordFlux/build/gnatprove/rflx-ethernet-frame.spark
Debug:   Z3
Debug:     t(Success) 74.7 s
Debug:     t(Failed)  0.0 s
Debug:     T(Success) 3.6 s
Debug:     S(Success) 14670
Debug:     file     "rflx-ethernet-generic_frame.adb"
Debug: /home/tr/sync/RecordFlux/build/gnatprove/rflx-tlv-message.spark
Debug:   CVC4
Debug:     t(Success) 1.7 s
Debug:     t(Failed)  0.0 s
Debug:     T(Success) 850.0 ms
Debug:     S(Success) 1
Debug:     file     "rflx-tlv-generic_message.adb"
Debug:   Z3
Debug:     t(Success) 13.7 s
Debug:     t(Failed)  450.0 ms
Debug:     T(Success) 130.0 ms
Debug:     S(Success) 109
Debug:     file     "rflx-tlv-generic_message.ads"
Debug: /home/tr/sync/RecordFlux/build/gnatprove/rflx-custom_types_tests.spark
Debug:   altergo
Debug:     t(Success) 851.6 s
Debug:     t(Failed)  1.6 ks
Debug:     T(Success) 57.9 s
Debug:     S(Success) 364532
Debug:     file     "rflx-rflx_generic_types.ads"
Debug:   CVC4
Debug:     t(Success) 268.2 s
Debug:     t(Failed)  0.0 s
Debug:     T(Success) 48.5 s
Debug:     S(Success) 50012
Debug:     file     "rflx-rflx_generic_types.adb"
Debug: /home/tr/sync/RecordFlux/build/gnatprove/rflx-arrays-enumeration_vector.spark
Debug:   Z3
Debug:     t(Success) 1.9 s
Debug:     t(Failed)  0.0 s
Debug:     T(Success) 120.0 ms
Debug:     S(Success) 247
Debug:     file     "rflx-rflx_scalar_sequence.adb"
Debug: /home/tr/sync/RecordFlux/build/gnatprove/rflx-arrays-tests.spark
Debug:   Z3
Debug:     t(Success) 44.0 s
Debug:     t(Failed)  0.0 s
Debug:     T(Success) 680.0 ms
Debug:     S(Success) 1672
Debug:     file     "rflx-arrays-tests.adb"
Debug: /home/tr/sync/RecordFlux/build/gnatprove/rflx-in_tlv-tests.spark
Debug:   Z3
Debug:     t(Success) 400.0 ms
Debug:     t(Failed)  0.0 s
Debug:     T(Success) 80.0 ms
Debug:     S(Success) 1
Debug:     file     "rflx-in_tlv-tests.adb"
Debug: /home/tr/sync/RecordFlux/build/gnatprove/rflx-arrays-messages_message.spark
Debug:   CVC4
Debug:     t(Success) 840.0 ms
Debug:     t(Failed)  0.0 s
Debug:     T(Success) 840.0 ms
Debug:     S(Success) 633
Debug:     file     "rflx-arrays-generic_messages_message.ads"
Debug:   Z3
Debug:     t(Success) 9.9 s
Debug:     t(Failed)  460.0 ms
Debug:     T(Success) 240.0 ms
Debug:     S(Success) 618
Debug:     file     "rflx-arrays-generic_messages_message.adb"
Debug:   Trivial
Debug:     t(Success) 0.0 s
Debug:     t(Failed)  0.0 s
Debug:     T(Success) 0.0 s
Debug:     S(Success) 1
Debug:     file     "rflx-arrays-generic_messages_message.ads"
Debug: /home/tr/sync/RecordFlux/build/gnatprove/rflx-ipv4-packet.spark
Debug:   CVC4
Debug:     t(Success) 38.7 s
Debug:     t(Failed)  0.0 s
Debug:     T(Success) 9.8 s
Debug:     S(Success) 11566
Debug:     file     "rflx-ipv4-generic_packet.ads"
Debug:   Z3
Debug:     t(Success) 281.9 s
Debug:     t(Failed)  19.8 s
Debug:     T(Success) 3.9 s
Debug:     S(Success) 10747
Debug:     file     "rflx-rflx_generic_types.ads"
Debug: /home/tr/sync/RecordFlux/build/gnatprove/rflx-arrays-message.spark
Debug:   CVC4
Debug:     t(Success) 3.1 s
Debug:     t(Failed)  0.0 s
Debug:     T(Success) 1.1 s
Debug:     S(Success) 183
Debug:     file     "rflx-arrays-generic_message.adb"
Debug:   Z3
Debug:     t(Success) 26.9 s
Debug:     t(Failed)  790.0 ms
Debug:     T(Success) 280.0 ms
Debug:     S(Success) 540
Debug:     file     "rflx-arrays-generic_message.adb"
Debug:   Trivial
Debug:     t(Success) 0.0 s
Debug:     t(Failed)  0.0 s
Debug:     T(Success) 0.0 s
Debug:     S(Success) 1
Debug:     file     "rflx-arrays-generic_message.ads"
Debug: /home/tr/sync/RecordFlux/build/gnatprove/rflx-arrays-inner_messages.spark
Debug:   Z3
Debug:     t(Success) 1.3 s
Debug:     t(Failed)  0.0 s
Debug:     T(Success) 60.0 ms
Debug:     S(Success) 1
Debug:     file     "rflx-rflx_message_sequence.ads"
Debug:   Trivial
Debug:     t(Success) 0.0 s
Debug:     t(Failed)  0.0 s
Debug:     T(Success) 0.0 s
Debug:     S(Success) 1
Debug:     file     "rflx-rflx_message_sequence.ads"
Debug: /home/tr/sync/RecordFlux/build/gnatprove/rflx-enumeration-message.spark
Debug:   Z3
Debug:     t(Success) 5.9 s
Debug:     t(Failed)  0.0 s
Debug:     T(Success) 150.0 ms
Debug:     S(Success) 359
Debug:     file     "rflx-enumeration-generic_message.ads"
Debug: /home/tr/sync/RecordFlux/build/gnatprove/rflx-enumeration.spark
Debug:   Z3
Debug:     t(Success) 40.0 ms
Debug:     t(Failed)  0.0 s
Debug:     T(Success) 20.0 ms
Debug:     S(Success) 1
Debug:     file     "rflx-enumeration.ads"
Debug: /home/tr/sync/RecordFlux/build/gnatprove/rflx-ipv4-tests.spark
Debug:   Z3
Debug:     t(Success) 69.6 s
Debug:     t(Failed)  0.0 s
Debug:     T(Success) 13.3 s
Debug:     S(Success) 76924
Debug:     file     "rflx-ipv4-tests.adb"
Debug: /home/tr/sync/RecordFlux/build/gnatprove/rflx-ipv4-options.spark
Debug:   Z3
Debug:     t(Success) 830.0 ms
Debug:     t(Failed)  0.0 s
Debug:     T(Success) 40.0 ms
Debug:     S(Success) 1
Debug:     file     "rflx-rflx_message_sequence.ads"
Debug:   Trivial
Debug:     t(Success) 0.0 s
Debug:     t(Failed)  0.0 s
Debug:     T(Success) 0.0 s
Debug:     S(Success) 1
Debug:     file     "rflx-rflx_message_sequence.ads"
Debug: /home/tr/sync/RecordFlux/build/gnatprove/rflx-expression-message.spark
Debug:   Z3
Debug:     t(Success) 4.5 s
Debug:     t(Failed)  0.0 s
Debug:     T(Success) 70.0 ms
Debug:     S(Success) 1
Debug:     file     "rflx-expression-generic_message.ads"
Debug: /home/tr/sync/RecordFlux/build/gnatprove/rflx-arrays-modular_vector.spark
Debug:   CVC4
Debug:     t(Success) 450.0 ms
Debug:     t(Failed)  0.0 s
Debug:     T(Success) 450.0 ms
Debug:     S(Success) 1
Debug:     file     "rflx-rflx_scalar_sequence.adb"
Debug:   Z3
Debug:     t(Success) 1.7 s
Debug:     t(Failed)  140.0 ms
Debug:     T(Success) 70.0 ms
Debug:     S(Success) 1
Debug:     file     "rflx-rflx_generic_types.ads"
Debug: /home/tr/sync/RecordFlux/build/gnatprove/rflx-derivation-message.spark
Debug:   CVC4
Debug:     t(Success) 3.2 s
Debug:     t(Failed)  0.0 s
Debug:     T(Success) 1.1 s
Debug:     S(Success) 183
Debug:     file     "rflx-arrays-generic_message.ads"
Debug:   Z3
Debug:     t(Success) 27.1 s
Debug:     t(Failed)  800.0 ms
Debug:     T(Success) 280.0 ms
Debug:     S(Success) 530
Debug:     file     "rflx-arrays-generic_message.ads"
Debug:   Trivial
Debug:     t(Success) 0.0 s
Debug:     t(Failed)  0.0 s
Debug:     T(Success) 0.0 s
Debug:     S(Success) 1
Debug:     file     "rflx-arrays-generic_message.ads"
Debug: /home/tr/sync/RecordFlux/build/gnatprove/rflx-arrays-range_vector.spark
Debug:   Z3
Debug:     t(Success) 2.0 s
Debug:     t(Failed)  0.0 s
Debug:     T(Success) 140.0 ms
Debug:     S(Success) 446
Debug:     file     "rflx-rflx_scalar_sequence.adb"
Debug: /home/tr/sync/RecordFlux/build/gnatprove/rflx-in_ipv4-tests.spark
Debug:   Z3
Debug:     t(Success) 218.1 s
Debug:     t(Failed)  0.0 s
Debug:     T(Success) 34.5 s
Debug:     S(Success) 141142
Debug:     file     "rflx-in_ipv4-tests.adb"
Debug: /home/tr/sync/RecordFlux/build/gnatprove/rflx-builtin_types_tests.spark
Debug:   Z3
Debug:     t(Success) 10.6 s
Debug:     t(Failed)  0.0 s
Debug:     T(Success) 60.0 ms
Debug:     S(Success) 1
Debug:     file     "rflx-builtin_types_tests.adb"
Debug: /home/tr/sync/RecordFlux/build/gnatprove/rflx-in_ethernet-contains.spark
Debug:   CVC4
Debug:     t(Success) 1.7 s
Debug:     t(Failed)  0.0 s
Debug:     T(Success) 1.7 s
Debug:     S(Success) 521
Debug:     file     "rflx-in_ethernet-generic_contains.ads"
Debug:   Z3
Debug:     t(Success) 1.3 s
Debug:     t(Failed)  350.0 ms
Debug:     T(Success) 340.0 ms
Debug:     S(Success) 449
Debug:     file     "rflx-in_ethernet-generic_contains.adb"
Debug: /home/tr/sync/RecordFlux/build/gnatprove/rflx-rflx_types.spark
Debug:   altergo
Debug:     t(Success) 207.9 s
Debug:     t(Failed)  316.6 s
Debug:     T(Success) 29.2 s
Debug:     S(Success) 239854
Debug:     file     "rflx-rflx_generic_types.adb"
Debug:   CVC4
Debug:     t(Success) 75.5 s
Debug:     t(Failed)  0.0 s
Debug:     T(Success) 29.8 s
Debug:     S(Success) 40212
Debug:     file     "rflx-rflx_generic_types.adb"
Debug: /home/tr/sync/RecordFlux/build/gnatprove/rflx-in_ethernet-tests.spark
Debug:   Z3
Debug:     t(Success) 134.4 s
Debug:     t(Failed)  0.0 s
Debug:     T(Success) 28.8 s
Debug:     S(Success) 108032
Debug:     file     "rflx-ipv4-generic_packet.ads"
Debug: /home/tr/sync/RecordFlux/build/gnatprove/rflx-udp.spark
Debug:   Z3
Debug:     t(Success) 70.0 ms
Debug:     t(Failed)  0.0 s
Debug:     T(Success) 30.0 ms
Debug:     S(Success) 1
Debug:     file     "rflx-udp.ads"
Debug: /home/tr/sync/RecordFlux/build/gnatprove/rflx-arrays-av_enumeration_vector.spark
Debug:   Z3
Debug:     t(Success) 1.9 s
Debug:     t(Failed)  0.0 s
Debug:     T(Success) 160.0 ms
Debug:     S(Success) 683
Debug:     file     "rflx-rflx_scalar_sequence.adb"
Debug: /home/tr/sync/RecordFlux/build/gnatprove/rflx-in_ipv4-contains.spark
Debug:   CVC4
Debug:     t(Success) 24.6 s
Debug:     t(Failed)  0.0 s
Debug:     T(Success) 24.6 s
Debug:     S(Success) 117158
Debug:     file     "rflx-in_ipv4-generic_contains.ads"
Debug:   Z3
Debug:     t(Success) 1.9 s
Debug:     t(Failed)  35.4 s
Debug:     T(Success) 500.0 ms
Debug:     S(Success) 585
Debug:     file     "rflx-in_ipv4-generic_contains.adb"
Debug: /home/tr/sync/RecordFlux/build/gnatprove/rflx-ipv4.spark
Debug:   Z3
Debug:     t(Success) 300.0 ms
Debug:     t(Failed)  0.0 s
Debug:     T(Success) 20.0 ms
Debug:     S(Success) 1
Debug:     file     "rflx-ipv4.ads"
Debug: /home/tr/sync/RecordFlux/build/gnatprove/rflx-arrays-inner_message.spark
Debug:   Z3
Debug:     t(Success) 8.9 s
Debug:     t(Failed)  0.0 s
Debug:     T(Success) 180.0 ms
Debug:     S(Success) 338
Debug:     file     "rflx-arrays-generic_inner_message.adb"

package Prove is
   for Proof_Switches ("rflx-arrays-generic_inner_message.adb") use ("--prover=Z3", "--steps=338", "--timeout=1");
   for Proof_Switches ("rflx-arrays-generic_message.ads") use ("--prover=CVC4,Z3", "--steps=530", "--timeout=2");
   for Proof_Switches ("rflx-arrays-generic_message.ads") use ("--prover=CVC4,Z3", "--steps=540", "--timeout=2");
   for Proof_Switches ("rflx-arrays-generic_messages_message.ads") use ("--prover=CVC4,Z3", "--steps=633", "--timeout=1");
   for Proof_Switches ("rflx-arrays-tests.adb") use ("--prover=Z3", "--steps=1672", "--timeout=1");
   for Proof_Switches ("rflx-arrays.ads") use ("--prover=Z3", "--steps=1", "--timeout=1");
   for Proof_Switches ("rflx-builtin_types_tests.adb") use ("--prover=Z3", "--steps=1", "--timeout=1");
   for Proof_Switches ("rflx-derivation-tests.adb") use ("--prover=Z3", "--steps=2215", "--timeout=2");
   for Proof_Switches ("rflx-enumeration-generic_message.ads") use ("--prover=Z3", "--steps=359", "--timeout=1");
   for Proof_Switches ("rflx-enumeration-tests.adb") use ("--prover=Z3", "--steps=1", "--timeout=1");
   for Proof_Switches ("rflx-enumeration.ads") use ("--prover=Z3", "--steps=1", "--timeout=1");
   for Proof_Switches ("rflx-ethernet-generic_frame.adb") use ("--prover=Z3", "--steps=14670", "--timeout=4");
   for Proof_Switches ("rflx-ethernet-generic_frame.ads") use ("--prover=Z3", "--steps=3359", "--timeout=1");
   for Proof_Switches ("rflx-ethernet.ads") use ("--prover=Z3", "--steps=1", "--timeout=1");
   for Proof_Switches ("rflx-expression-generic_message.ads") use ("--prover=Z3", "--steps=1", "--timeout=1");
   for Proof_Switches ("rflx-expression-tests.adb") use ("--prover=Z3", "--steps=1", "--timeout=1");
   for Proof_Switches ("rflx-in_ethernet-generic_contains.ads") use ("--prover=CVC4,Z3", "--steps=521", "--timeout=2");
   for Proof_Switches ("rflx-in_ipv4-generic_contains.ads") use ("--prover=CVC4,Z3", "--steps=117158", "--timeout=25");
   for Proof_Switches ("rflx-in_ipv4-tests.adb") use ("--prover=Z3", "--steps=141142", "--timeout=35");
   for Proof_Switches ("rflx-in_tlv-tests.adb") use ("--prover=Z3", "--steps=1", "--timeout=1");
   for Proof_Switches ("rflx-ipv4-generic_option.ads") use ("--prover=CVC4,Z3", "--steps=1979", "--timeout=2");
   for Proof_Switches ("rflx-ipv4-generic_packet.ads") use ("--prover=Z3", "--steps=108032", "--timeout=29");
   for Proof_Switches ("rflx-ipv4-tests.adb") use ("--prover=Z3", "--steps=76924", "--timeout=14");
   for Proof_Switches ("rflx-ipv4.ads") use ("--prover=Z3", "--steps=1", "--timeout=1");
   for Proof_Switches ("rflx-rflx_arithmetic.adb") use ("--prover=CVC4,altergo", "--steps=70979", "--timeout=51");
   for Proof_Switches ("rflx-rflx_generic_types.adb") use ("--prover=CVC4,altergo", "--steps=239854", "--timeout=30");
   for Proof_Switches ("rflx-rflx_generic_types.ads") use ("--prover=CVC4,altergo", "--steps=364532", "--timeout=58");
   for Proof_Switches ("rflx-rflx_generic_types.ads") use ("--prover=CVC4,Z3", "--steps=1", "--timeout=1");
   for Proof_Switches ("rflx-rflx_generic_types.ads") use ("--prover=CVC4,Z3", "--steps=11566", "--timeout=10");
   for Proof_Switches ("rflx-rflx_message_sequence.ads") use ("--prover=Z3", "--steps=1", "--timeout=1");
   for Proof_Switches ("rflx-rflx_message_sequence.ads") use ("--prover=Z3", "--steps=1", "--timeout=1");
   for Proof_Switches ("rflx-rflx_scalar_sequence.adb") use ("--prover=Z3", "--steps=247", "--timeout=1");
   for Proof_Switches ("rflx-rflx_scalar_sequence.adb") use ("--prover=Z3", "--steps=446", "--timeout=1");
   for Proof_Switches ("rflx-rflx_scalar_sequence.adb") use ("--prover=Z3", "--steps=683", "--timeout=1");
   for Proof_Switches ("rflx-tlv-generic_message.ads") use ("--prover=CVC4,Z3", "--steps=109", "--timeout=1");
   for Proof_Switches ("rflx-tlv-tests.adb") use ("--prover=Z3", "--steps=467", "--timeout=1");
   for Proof_Switches ("rflx-tlv.ads") use ("--prover=Z3", "--steps=1", "--timeout=1");
   for Proof_Switches ("rflx-udp-generic_datagram.ads") use ("--prover=CVC4,Z3", "--steps=2334", "--timeout=5");
   for Proof_Switches ("rflx-udp.ads") use ("--prover=Z3", "--steps=1", "--timeout=1");
end Prove;

Sorting by successful proof time should not intermix with unproved items

Describe the bug
As mentioned in the README, reports sorted by successful proof time (i.e. -cs) also show unproved items interspersed with the proved ones. I assume that a user who wants to see the times for successful proofs does not care about unproved VCs at that point, so they should be shown last.
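
For reference, such a report is produced with an invocation along these lines (the project name here is made up):

$ run_spat -cs -P my_project.gpr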

Provide --cut-off command line parameter

Is your feature request related to a problem? Please describe.

After fixing #22 I noticed that it's a bit useless to suddenly show all proofs with a time of 0.0 s (tool limitation).

Describe the solution you'd like
We should provide a command line parameter (I propose --cut-off) which suppresses output for entities whose proof times are below the given value.
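
To illustrate, an invocation with the proposed switch might look roughly like this (the exact value syntax is still open, and the project name is made up):

$ run_spat -cs --cut-off=1s -P my_project.gpr

Entities whose successful proof time falls below the given value would then be omitted from the report.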

Additional context
Also, vaguely related to #1.

Invalid output of --suggest option

Describe the bug
The output of the --suggest option contains too many quotes after the --steps option.

Console Output

Warning: You requested a suggested prover configuration.
Warning: This feature is highly experimental.
Warning: Please consult the documentation.

package Prove is
   for Prover_Switches ("sparknacl-car.adb") use ("--provers=CVC4", "--steps=5882"", "--timeout=1");

--details option sometimes seem too detailed

Is your feature request related to a problem? Please describe.
When a single entity has a lot of verification conditions and paths leading to them, showing all of these is sometimes just too much information.

Describe the solution you'd like
The --details option could be enhanced with a "level", e.g. --details=1 shows only the first level (the VC), while --details=2 would also show the proof paths.

This probably would be most useful in conjunction with #34.
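
For illustration, hypothetical invocations with the proposed levels might look like this (the project name is made up):

$ run_spat --details=1 -P my_project.gpr
$ run_spat --details=2 -P my_project.gpr

where the first one would list only the VCs and the second one would additionally show the proof paths.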

Prepare SPAT for when step scaling becomes obsolete

Describe the bug
Currently we output scaled steps (as we should) for the --suggest option, and the 1.2 branch extended that concept so that steps are already scaled when they are read. The development release of SPARK fixed the distinction between raw and scaled steps in the JSON output (i.e. check_tree vs. stats object).

Expected Behavior
To prepare for future releases of SPARK, SPAT should be able to detect whether step scaling is actually needed and skip the scaling if not. If there is at least one successful proof, we can infer that from the difference between the steps reported in the check_tree array and those in the corresponding stats object.
Alternatively, we may provide a command line switch to enable scaling or not.
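
A minimal sketch of the detection idea, assuming GNATCOLL.JSON and assuming that both the check_tree entry and the stats entry expose a "steps" field for the same successful proof (the exact field layout in the .spark files is an assumption here):

with GNATCOLL.JSON; use GNATCOLL.JSON;

--  Sketch only: returns True if the step counts reported for the same
--  successful proof differ between the check_tree entry and the stats
--  entry, which is the signal described above.  Field names are assumed.
function Steps_Differ (Check_Tree_Entry : JSON_Value;
                       Stats_Entry      : JSON_Value) return Boolean
is
   Tree_Steps  : constant Integer := Get (Check_Tree_Entry, "steps");
   Stats_Steps : constant Integer := Get (Stats_Entry, "steps");
begin
   return Tree_Steps /= Stats_Steps;
end Steps_Differ;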

Let the tool suggest an "optimal" prover configuration.

Is your feature request related to a problem? Please describe.
Quite often, optimizing proof times simply consists of shuffling around which prover is called first.
See, for example, this PR.

Describe the solution you'd like
Given the data provided, the tool should output a suggested prover/file configuration matrix.

Describe alternatives you've considered
Manually reading through the output. But well, we do have a tool, we do have the data, so why not use both?

Suggested configuration leads to failing proof

Describe the bug
The --suggest option has multiple issues:

  1. The determined steps seem to be wrong. In my example they were either too high or too low.
  2. The determined timeout is too low.
  3. Setting --steps and --timeout using Prover_Switches has no effect.

To Reproduce
So far I'm using sparkprof to determine the maximum number of steps. Here is an example (the two files used can be found here):

$ sparkprof -Ptest --level=4
Phase 1 of 2: generation of Global contracts ...
Phase 2 of 2: flow analysis and proof ...
------------------------------------------------------
UNIT                 MAX STEPS    MAX TIME  TOTAL TIME
rflx-rflx_arithmetic  90761 steps    21.0 s    54.2 s
ALL                   90761 steps    21.0 s    54.2 s

Based on that output I would add the following to the project file:

   package Prove is
      for Proof_Switches ("rflx-rflx_arithmetic.ads") use ("--steps=90761");
      for Proof_Switches ("rflx-rflx_arithmetic.adb") use ("--steps=90761");
   end Prove;

After adding these switches, the project can still be proven. If the steps value is changed to 90760, gnatprove fails as expected.

SPAT suggests quite different options:

package Prove is
   for Prover_Switches ("rflx-rflx_arithmetic.adb") use ("--provers=CVC4", "--steps=82085", "--timeout=1");
   for Prover_Switches ("rflx-rflx_arithmetic.ads") use ("--provers=CVC4", "--steps=3191607", "--timeout=21");
end Prove;

When I use the proposed switches, gnatprove fails:

$ gnatprove -Ptest               
Phase 1 of 2: generation of Global contracts ...
Phase 2 of 2: flow analysis and proof ...
rflx-rflx_arithmetic.adb:11:13: medium: divide by zero might fail [possible explanation: precondition of subprogram at rflx-rflx_arithmetic.ads:16 should mention Value]
[...]

This seems to have multiple reasons:

  • --steps and --timeout seem to be ignored in Prover_Switches. They have an effect as soon as I replace Prover_Switches by Proof_Switches (and remove --provers).
  • The timeout seems to be too low. Many failing checks vanish when --timeout is removed.
  • The suggested steps for rflx-rflx_arithmetic.adb are too low. The remaining failing check vanishes when the steps are increased to 90761.

Show unjustified proofs

Right now, unproven VCs are all shown regardless of whether they have a justification message or not. We should add a --unjustified command line parameter which, in conjunction with the --unproved parameter, would only show unproven VCs with no associated justification.
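
For illustration, a combined invocation might then look like this (--unjustified is the switch proposed here, and the project name is made up):

$ run_spat --unproved --unjustified -P my_project.gpr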

Show minimum required time for successful proof

Is your feature request related to a problem? Please describe.
I want to improve the proof time for code that is already known to be correct. One way to achieve that is to reduce the timeout, so that less time is wasted on a prover which is unable to prove a certain VC. Unfortunately, the current output of SPAT does not make the minimum required time easy to see. It can be determined manually by using the details option, but that is quite tedious.

Here is an example of the current output:

RFLX.RFLX_Types.U64_Insert => 120.0 s/1.6 ks
`-VC_PRECONDITION rflx-rflx_generic_types.adb:221:39 => 120.0 s/491.0 s
 `-CVC4: 120.0 s (Timeout)
  -Z3: 120.0 s (Timeout)
  -altergo: 188.2 ms (Valid)
 `-Z3: 120.0 s (Timeout)
  -CVC4: 5.4 s (Valid)
 `-Z3: 120.0 s (Timeout)
  -CVC4: 5.3 s (Valid)
 `-Z3: 60.0 ms (Valid)
 `-Z3: 40.0 ms (Valid)
 `-Z3: 20.0 ms (Valid)
 `-Trivial: 0.0 s (Valid)

SPAT shows a very high maximum time used by a prover (120.0 s), while it is hard to see that the required time could be reduced significantly by lowering the timeout (5.4 s plus some buffer for the VC shown).

Describe the solution you'd like
It would be nice if the output could be extended with a value for the maximum time needed by any prover for a valid proof. For example:

RFLX.RFLX_Types.U64_Insert => 5.4 s/120.0 s/1.6 ks
`-VC_PRECONDITION rflx-rflx_generic_types.adb:221:39 => 5.4 s/120.0 s/491.0 s
[...]

It would also be helpful to be able to sort the output by this value, so that the maximum value for the whole project can be seen easily.

Document compilation

Hello,

Unless I missed it, the project doesn't contain any information on how to build the executable. I tried

$ gprbuild -P spat
spat.gpr:2:06: unknown project file: "si_units_lib"
gprbuild: "spat" processing failed

I discovered the SI_Units project in your github profile, so I will go download that now ...

Fix spurious warnings

For SPARK CE 2020, we spit out warnings that an (empty) check_tree is not a JSON array, because with the 2020 file format an empty JSON object appears there instead of the expected array (i.e. {} instead of []).

We should probably relax this check, i.e. the Preconditions involved need to accept this case so that no warning is emitted.
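
One possible shape of the relaxed check, as a sketch assuming GNATCOLL.JSON (the actual fix may instead adjust the Preconditions mentioned above):

with GNATCOLL.JSON; use GNATCOLL.JSON;

--  Sketch: accept an empty JSON object where an array is expected,
--  instead of emitting a warning about the unexpected kind.
function Acceptable_Check_Tree (Val : JSON_Value) return Boolean is
begin
   return Kind (Val) = JSON_Array_Type
     or else (Kind (Val) = JSON_Object_Type and then Is_Empty (Val));
end Acceptable_Check_Tree;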

GNAT2020 compatibility

The format of the .spark files seems to have changed slightly in the GNAT2020 release. Find out what the differences are and handle them in a way that is preferably backward compatible.

Support GNAT project files

We are already using GNATCOLL and the files we parse are rather GNAT specific, so to be in line with other tools we could just as well use a .gpr file as input instead of wildly scanning directories. This is especially important because specifying the object directory to parse also catches files from the saved_runs subdirectory, causing inconsistent behaviour by including data from earlier runs. Parsing the project file and selectively picking up the associated .spark files would be more user friendly.
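
A minimal, untested sketch of how the object directory could be obtained from a project file via GNATCOLL.Projects (the real implementation would of course also have to exclude the saved_runs subdirectory; the project file name is made up):

with Ada.Text_IO;
with GNATCOLL.Projects; use GNATCOLL.Projects;
with GNATCOLL.VFS;      use GNATCOLL.VFS;

procedure Show_Object_Dir is
   Tree : Project_Tree;
begin
   --  Load the project and print its object directory; the .spark files
   --  live in the "gnatprove" subdirectory below it.
   Tree.Load (Root_Project_Path => Create (+"test.gpr"));
   Ada.Text_IO.Put_Line (Tree.Root_Project.Object_Dir.Display_Full_Name);
end Show_Object_Dir;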

Incorrect warning about missing timings

When running SPAT on a toy project, I get warnings like these:

Warning: Expected field "gnatwhy3.run_vcs" not present!
Warning: Expected field "gnatwhy3.register_vcs" not present!
Warning: Expected field "gnatwhy3.schedule_vcs" not present!

But such missing fields are to be expected; they just mean that the corresponding phase wasn't run at all. For the statistics, the tool should assume the phase took no time (0 s).
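
A small sketch of such a fallback, assuming GNATCOLL.JSON (the field names would be the ones shown in the warnings above, e.g. "gnatwhy3.run_vcs"):

with GNATCOLL.JSON; use GNATCOLL.JSON;

--  Sketch: return the timing stored in Field, or 0.0 if the field is
--  missing (i.e. the corresponding phase was never run).
function Timing_Or_Zero (Timings : JSON_Value;
                         Field   : String) return Float is
begin
   if Has_Field (Timings, Field) then
      return Get (Timings, Field);
   else
      return 0.0;
   end if;
end Timing_Or_Zero;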

GNATStudio plugin

It could be handy to have a plug-in for GNATStudio so that the user can run the tool after a proof run and see the information, possibly interspersed with the actual source code.

It is not clear yet, though, how the output should be displayed in the IDE.

Provide an entity filter parameter

When trying to analyze individual proof times, the current filters are not sufficient. An idea would be to add an entity filter option that only shows entries matching a certain entity.

Roughly like

run_spat -rf -ct -e Setup_Key -P saatana.gpr

This would have the expected effect (report failed, sort by time), but only for entities that match "Setup_Key".

Open questions: Maybe wildcard matching, or even regular expressions?
