
A modern, portable, cross-language unit testing and mocking framework for C and C++

License: ISC License



Cgreen - The Modern Unit Test and Mocking Framework for C and C++

Do you TDD? In C or C++? Maybe you want your tests to read in a fluent fashion, like this:

Ensure(Converter, converts_XIV_to_14) {
    assert_that(convert_roman_to_decimal("XIV"), is_equal_to(14));
}

And you want output like this:

roman_test.c:12: Failure: Converter -> converts_XIV_to_14
        Expected [convert_roman_to_decimal("XIV")] to [equal] [14]
                actual value:                   [0]
                expected value:                 [14]

Then Cgreen is the thing for you!

TL;DR: The full tutorial is on github.io. Or have a look at the cheat sheet.

What It Is

Cgreen is a modern unit test and mocking framework for C and C++. Here are some of Cgreen's unique selling points:

  • fast build, clean code, highly portable
  • auto-discovery of tests without the abuse of static initializers or globals
  • extensible without recompiling
  • fluent, expressive and readable API with the same modern syntax across C and C++
  • process isolation for each test preventing intermittent failures and cross-test dependencies
  • built-in mocking for C, compatible with mockitopp and other C++ mocking libraries
  • expressive and clear output using the default reporter
  • fully functional mocks: strict, loose and learning
  • mocks with side effects
  • extensive and expressive constraints for many datatypes
  • custom constraints can be constructed by the user
  • bdd-flavoured test declarations with Before and After declarations
  • extensible reporting mechanism
  • fully composable test suites
  • a single test can be run in a single process for easier debugging

Getting It

Cgreen is hosted on GitHub. As of now there are no pre-built packages to download, but Cgreen is available in Debian, Fedora and some other package repositories, although some are lagging.

There are also some other packaging scripts available, not all of them official.

You can also clone the repository or download the source zip from GitHub and build it yourself.

Building It

You need the CMake build system. Most standard C/C++ compilers should work. GCC definitely does.

Perl, diff, find and sed are required to run Cgreen's own unit tests. Most distros will have those installed already.

In the root directory run make. That will configure and build the library and the cgreen-runner, each supporting both C and C++. See also the documentation.

Using It

Tests are fairly easy to write, as shown by the examples at the beginning of this readme. You should probably read the tutorial once before writing your first test, though.

Basically you can run your tests in two ways:

  1. Compile and link all your tests with a test driver (as shown in the first chapters of the tutorial)
  2. Link your tests into separate shared libraries (.so, .dylib or similar) and run them with the cgreen-runner (described in chapter 6 of the tutorial)

Option 2 is very handy: you can run multiple libraries in the same run, but also specify single tests that you want to run. And with the completion script available for bash you get TAB-completion not only for files and options but also for the tests inside the libraries.

cgreen-debug is a small script that you invoke in the same way as the runner but runs a single, specified, test and puts you in the debugger at the start of that test. Awesome!

Using Cgreen in other CMake projects

Once Cgreen is installed you can use find_package(cgreen) in your CMake projects to get access to useful variables like ${CGREEN_LIBRARIES}, ${CGREEN_EXECUTABLE} and ${CGREEN_INCLUDE_DIRS}. A version can be specified in find_package as well; for example, to enforce a minimum version of Cgreen in your project, use find_package(cgreen 1.1.0).
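A minimal sketch of such a CMakeLists.txt, using the variables named above (the project name and first_test.c are placeholders, not part of Cgreen):

```cmake
cmake_minimum_required(VERSION 3.10)
project(my_tests C)

# Locate an installed Cgreen, requiring at least version 1.1.0
find_package(cgreen 1.1.0 REQUIRED)

include_directories(${CGREEN_INCLUDE_DIRS})

# first_test.c is a placeholder for your own test source
add_executable(first_test first_test.c)
target_link_libraries(first_test ${CGREEN_LIBRARIES})
```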

Reading Up!

You can read the extensive tutorial directly on GitHub.

There is a cheat sheet available.

You can also build the documentation yourself in HTML and PDF format. It is generated with Asciidoctor, which the CMake configuration will invoke for you:

make doc
make pdf

(Generating PDF also requires asciidoctor-pdf.)

License

Cgreen is licensed under the ISC License (http://spdx.org/licenses/ISC), sometimes known as the OpenBSD license. If there is no license agreement with this package, please download a version from the location above. You must read and accept that license to use this software. The file is titled simply LICENSE.

The Original Version

What is it? It's a framework for unit testing, written in C. A tool for C developers writing tests of their own code.

If you have used JUnit, or any of the xUnit clones, you will find the concept familiar. In particular, the tool supports a range of assertions, composable test suites and setup/teardown facilities. Because of the peculiarities of C programming, each test function is normally run in its own process.

This project is very close in scope to the "Check" unit tester and was initially influenced by it.

The main difference between this tool and other xUnit tools, such as "Check", is that test results are not stored. Instead they are streamed to a reporter pseudo-class, one that is easily overridden by the end user.

The other main extra feature is the support for writing mock callbacks. This includes generating sequences for return values or parameter expectations.

Feedback, queries and requests should be put to the cgreen developers through https://github.com/cgreen-devs/cgreen.

This tool is basically a spin-off from a research project at Wordtracker and would not have happened without the generous financial support of the Wordtracker keyword tool... http://www.wordtracker.com/

Substantial initial work by Marcus Baker [email protected]. Recent additions by Matt Hargett [email protected], Thomas Nilefalk [email protected], João Freitas [email protected] and others.

cgreen's People

Contributors

adamburgess, alvinchchen, alvinmoxa, ar-cetitec, aytchell, aytey, cmburn, crockagile, d-meiser, dankm, deciduously, derfian, fnadeau, gavin09, gladiac, joaohf, joaopapereira, lastcraft, matthargett, oniboni, partouf, ptzafrir, sargun, selavy, souryogurt, stevemadsenblippar, thoni56, tommyjc, ykaliuta


cgreen's Issues

Show result of current suite when tallying, not only total

Currently the text reporter presents its output for multiple suites like this:

Running "libreflective_tests" (2 tests)...
Completed "ReflectiveRunner": 706 passes, 2 skipped, 51 failures, 4 exceptions in 15ms.
Completed "libreflective_tests": 706 passes, 2 skipped, 51 failures, 4 exceptions in 18ms.
Running "libtext_reporter_tests" (4 tests)...
Completed "TextReporter": 713 passes, 2 skipped, 51 failures, 4 exceptions in 31ms.
Completed "libtext_reporter_tests": 713 passes, 2 skipped, 51 failures, 4 exceptions in 34ms.

Sometimes it's interesting to actually see how many tests there are in a particular suite, or which suite threw an exception, so perhaps the output for a suite could be changed to something like:

Completed "TextReporter": 7 passes (713), 0 skipped (2), 0 failures (51), 0 exceptions (4) in 31ms.

Make the reporter infrastructure take responsibility for measuring time and for future dynamic loading of reporters

In #32 an idea was introduced about changing the reporter interface to not call specific reporter functions directly, but to let the reporter (framework/class) do that delegation, so that the specific reporters are only specializations of the "printout":

@thoni56:

What I was thinking was probably bigger than this specific PR. Also factoring in dynamic loading of reporters. Giving the reporter infrastructure "class" more responsibility than being a pattern for reporters.
One way of handling this would be to modify the reporter infrastructure so that the runner would
always call the reporter "super class". Instead of calling the delegated, specific reporter functions
directly through the pointers like

(*reporter->start_suite)(reporter, ...

it would do

start_suite(reporter, ...

which would do something like

void start_suite(TestReporter *reporter, ... {
    // Do timing magic...
    (*reporter->start_suite)(reporter, ...
    ...
}

thus allowing the reporter infrastructure to intervene and do common stuff.

This semi-new reporter layer could then also be made to handle the responsibility of dynamically
loading reporters.

@matthargett then replied:

I like this idea a lot. Since it would be easy to forget/misuse, we would probably want to have
a flag in the reporter struct that gets set when the top-level reporter's start() is called so the
framework can output an error message when reporter->start() is called without having called
the top-level reporter's start_suite(). comments?

cgreen install results in cgreen_value.h not found

In file included from /usr/local/include/cgreen/internal/cpp_assertions.h:4:0,
from /usr/local/include/cgreen/internal/assertions_internal.h:7,
from /usr/local/include/cgreen/assertions.h:4,
from /usr/local/include/cgreen/cgreen.h:7,
from Tests/Tracker/Tests-Main.cpp:1:
/usr/local/include/cgreen/constraint.h:5:33: fatal error: cgreen/cgreen_value.h: No such file or directory

Error on Ubuntu 16.04 LTS.

I went through
make
make test
sudo make install

C++ to not require typed constraints

Currently, if you are using C++ you have to use the C-style constraints, mostly. This includes

  • will_return_double()
  • is_equal_to_string()

and so on.

Since C++ can use overloading it would be cool if you could just do

expect(double_out, will_return(4.23));
assert_that(3.14, is_equal_to(PI));
assert_that("hello", is_equal_to("hello"));

Mocks cannot handle floating point?

I'm having trouble getting mocks to work for functions which use floating point values.

  1. When a mocked function is expected to return a floating point value, the mock actually returns the integer version of that value.
  2. When a parameter to a mocked function has an expected value, that mock will segfault.

When running the attached code example fp_mock_test.txt:

Running "floating point mock test" (2 tests)...
fp_mock_test.c:15: Failure: foobar_returns_floating_point 
        Expected [test_func()] to [equal double] [282.4] within [8] significant figures
                actual value:   282.000000
                expected value: 282.400000

fp_mock_test.c:24: Exception: foobar_param_doesnt_segfault 
        Test terminated with signal: Segmentation fault: 11

Completed "floating point mock test": 1 failure, 1 exception in 113ms.

Am I using mocks incorrectly here, or do mocks not currently support floating point types? This is using the latest Cgreen @ bead2ca, on OS X 10.11.6, with gcc version 4.9.3 (Homebrew gcc49 4.9.3).

Rewrite the section on multiple fixtures in the documentation

While rewriting the documentation to use the BDD notation I discovered that the section on multiple suites with different setup() and teardown() needs a more radical and thought-through rewrite.

This is a reminder to do that at some point.

NOTE: I put in a cautionary note in the documentation.

`BeforeOnce()` or similar to run "module" level setup/fixture

In some cases, where creating a fixture might be costly and there is no risk of it being compromised by the tests themselves, it is often handy to do a single setup once before all tests.

One such example is the setup of database connections or complex structures. In the docs there is also a (somewhat outdated) discussion that touches on these matters.

A suggestion would then be to introduce a BeforeOnce(context) and of course corresponding AfterOnce(context) in analogy to BeforeEach() and AfterEach().

(I considered BeforeAll() but that seems too easy to misinterpret as BeforeEach(); other suggestions welcome...)

Such a function should probably be run in the parent so that environment variables can be set to affect the running of testcases. There are a couple of testcases for Cgreen itself that currently rely on a command line with the appropriate variables set, which is not as robust as I'd like. This is not strictly necessary for this feature to be valuable, but if it were done we could clean up some testcases in Cgreen.

Note that this might not be quite as straightforward as it might seem, since interference with the legacy setup() strategy has to be considered.

cgreen-runner to show start/finish for total when multiple libraries

If you run cgreen-runner with multiple shared test libraries, like

cgreen-runner tests/*.so

the output consists of a number of line pairs like

Running "<name of library>" (3 tests)...
<suites reported here>
Completed "<name of library>": 716 passes, 2 skipped, 51 failures, 4 exceptions in 1ms.

One such pair per library. I'd like it to also say

Running all tests (174 tests)...

before and

Completed all tests: 716 passes, 2 skipped, 51 failures, 4 exceptions in 1ms.

after all tests, in effect mimicking a library of all libraries.

C++: terminate called ...

When running the constraint messages test cases for C++ a couple of errors show up in the console:

terminate called after throwing an instance of 'char const*'

and

terminate called without an active exception

They emanate from FailureMessage:for_incorrect_assert_throws and FailureMessage:increments_exception_count_when_throwing in tests/constraint_message_tests.c.

As Google tells us this is because the child process has not been joined to the parent correctly. Anyone want to take a look at that?

Some compilers don't like `fortify_source` without `-O`

If possible we want the fortify_source "feature" even for debug builds. That works for many versions of gcc, but Fedora's/RedHat's GCC 6.3.1 issues a warning:

FORTIFY_SOURCE requires compilation with optimization (-O)

So this issue will be used to list which compilers do that, so that we might be able to avoid the warning.

Allow 'add_test()' to take context/SUT

Currently add_test() is used when adding a test to a non-BDD-ish suite, i.e. without a named context. If we are going for BDD-ish style as the "promoted" style, it seems backwards that the natural name for adding a test is reserved for the pure TDD style. (To add a test with a SUT/context you need to use add_test_with_context() which is clunky if you are doing this all the time, which we are promoting).

I don't know what could work, given backwards compatibility and macro snafus, but it would be very nice if the flavour of the 'add' could be controlled by the number of parameters, so that the following would work:

add_test(suite, test); // when using the default context, i.e. TDD style
add_test(suite, context, test); // when using BDD style

Text reporter bug when assert_that(<contains % character>, ...)

See the following (failing) test:

#include <stdio.h>
#include <cgreen/cgreen.h>

Ensure(my_test)
{
    assert_that(snprintf(NULL, 0, "%d", 42), is_equal_to(3));
}

int main(void)
{
    TestSuite *suite = create_test_suite();
    
    add_test(suite, my_test);

    return run_test_suite(suite, create_text_reporter());
}

The output is:

Running "main" (1 tests)...
cgreen-bug.c:6: Failure: my_test 
	Expected [snprintf(((void *)0), 0, "23", 42)] to [equal] [3]
		actual value:			[2]
		expected value:			[3]

Completed "main": 0 passes, 1 failure, 0 exceptions in 0ms.

The expected output is:

Running "main" (1 tests)...
cgreen-bug.c:6: Failure: my_test 
	Expected [snprintf(NULL, 0, "%d", 42)] to [equal] [3]
		actual value:			[2]
		expected value:			[3]

Completed "main": 0 passes, 1 failure, 0 exceptions in 0ms.

Cleanup include files wrt. constraints

constraint.h is mostly (if not only) internal and should be moved into that subdirectory and constraints_syntax_helper.h should really be called constraints.h.

get_significant_figures() and significant_figures_for_assert_double_are() should then move to the new constraints.h.

/usr/bin/ctest: Command not found

$ mkdir build ; cd build
$ cmake .. ; make
[...]
$ make test
Running tests...
make: /usr/bin/ctest: Command not found
make: *** [test] Error 127

Counterintuitive.

Document how to mock in a larger context

From a user:

My problem though was that I wanted to mock functions in one test file that I needed the actual definition for in others.

We should document, and possibly add some examples/tutorials of, the "solution" as building separate libraries/executables for each CUT with the specific mocks that are required. This is the only general solution given that you can only have one implementation of a function in each library/executable.

Add "call" constraint to allow adding side effects in expect()

I need something like expect(__wrap_rename, will_return(-1), will_set_errno(EXDEV)). I dug into Cgreen's internals and found out that a ConstraintType::CALL value exists but is currently unused.

Is it planned to add something like

Constraint *create_callback_constraint(void (*callback)(void *), void *data)

?

That would be great!

`cgreen-runner` should be able to tell which version of the cgreen library is used

I don't know what is possible, but with the dynamic loading that is going on, I sometimes get tripped up when my load paths are set wrong. Being able to do

cgreen-runner --version

and get back something like

cgreen-runner version X.Y - compiled YYYY-MM-DD
libcgreen.so version X.Y - compiled YYYY-MM-DD - loaded from /usr/local/lib32

would be a life saver sometimes.

Clear up top level directory

There are a lot of files that are not needed (remnants of an autotools attempt, I think), some that are out of date, and some that we handle differently now.

One benefit of this is that the file listing on GitHub gets shorter and you don't have to scroll down so far to see the new, shiny README ;-)

exit behavior of forked processes messes with lcov/gcov

[ mailing list issue (manually) imported from http://sourceforge.net/p/cgreen/mailman/cgreen-devel/thread/CANvsC2voB3-GpT%[email protected]/
]

It appears that cgreen's "run in a child process" model is messing up
my coverage data somehow. Or maybe that isn't the reason at all. But
it certainly seems to affect the outcome. When I google I don't see
much discussion of this as a common problem apart from some mutterings
about GTEST in this context but not enough for me to think it is a
well known problem.

When I examine the ENV variables in both cases they look the same.
When I run strace it looks like they are both trying to write gcda
files to the same place.

I can boil it down to a suite with one test. If I use
"run_test_suite" (which runs in a child process) my test run yields no
coverage output. If I use "run_single_test" (which runs in the
current process) it works as expected.

Steps to reproduce (see below source file):

## this produces 45% line coverage
lcov --base-directory . --directory . --zerocounters -q
./test.exe
lcov --base-directory . --directory . -c -o coverage.info
 genhtml -o coverage coverage.info
## this produces 100% line coverage
lcov --base-directory . --directory . --zerocounters -q
./test.exe xxx # <- don't fork child process
lcov --base-directory . --directory . -c -o coverage2.info
genhtml -o coverage coverage2.info

What am I missing?

Cheers,
Colm

// test.c
//
// compile with: -g -fprofile-arcs -ftest-coverage

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#include <cgreen/cgreen.h>

int do_something()
{
    return -2;
}

Ensure (xxx)
{
    assert_that(do_something(), is_equal_to(-2));
}

int main(int argc, char **argv) {
    TestSuite *suite = create_test_suite();

    add_test(suite, xxx);

    if (argc > 1) {
        return run_single_test(suite, argv[1], create_text_reporter());
    }

    return run_test_suite(suite, create_text_reporter());
}

Do we need 'is_equal_to_unsigned'? 'hex' or 'byte'?

I was flipping through James Grenning's TDD for Embedded C, where he mainly uses CppUTest, but initially also Unity. He basically does this:

TEST(sprintf, NoFormatOperations) {
    char output[5];
    memset(output, 0xaa, sizeof(output));
    TEST_ASSERT_BYTES_EQUAL(0xaa, output[4]);
}

Obviously, I'm no fan of the SHOUTING style ;-) but there seems to be no way to do exactly the same in Cgreen since we don't have is_equal_to_byte().

Here are some attempts:

Ensure(Char, can_compare_to_hex_without_cast) {
  char chars[4] = {0xaa, 0xaa, 0xaa, 0};
  assert_that(chars[0], is_equal_to(0xaa));
}
Ensure(Char, can_compare_to_hex_with_cast_expected_to_signed) {
  char chars[4] = {0xaa, 0xaa, 0xaa, 0};
  assert_that(chars[0], is_equal_to((signed)0xaa));
}
Ensure(Char, can_compare_to_hex_with_cast_actual_to_unsigned) {
  char chars[4] = {0xaa, 0xaa, 0xaa, 0};
  assert_that(((unsigned)chars[0]), is_equal_to(0xaa));
}

Resulting in

Running "char_tests" (4 tests)...
char_tests.c:19: Failure: Char -> can_compare_to_hex_with_cast_actual_to_unsigned
    Expected [((unsigned)chars[0])] to [equal] [0xaa]
            actual value:                   [4294967210]
            expected value:                 [170]
char_tests.c:14: Failure: Char -> can_compare_to_hex_with_cast_expected_to_signed
    Expected [chars[0]] to [equal] [(signed)0xaa]
            actual value:                   [-86]
            expected value:                 [170]
char_tests.c:9: Failure: Char -> can_compare_to_hex_without_cast
    Expected [chars[0]] to [equal] [0xaa]
            actual value:                   [-86]
            expected value:                 [170]

Which, of course, is caused by the signedness of the built-in char type and by C sign-extending actual arguments.

I've only found one way to make this test pass:

Ensure(Char, can_compare_to_hex_with_actuals_type_unsigned) {
  unsigned char chars[4] = {0xaa, 0xaa, 0xaa, 0};
  assert_that(chars[0], is_equal_to(0xaa));
}

Note the move to unsigned char. We certainly don't want to force users to change their datatypes to get Cgreen tests to work.

Alternatives are of course to go for more is_equal_to_X() or figure out the signedness of the argument type of the actual (which I'm not sure can be done even theoretically).

The fact that this does not work in a natural way would/could trip an unsuspecting TDD:er onto a long hunt for a problem that isn't there.

Suggestions?

XML reporter doesn't list failed tests

There is a problem with the XML reporter. It only generates XML output for failed tests when running a single test, and doesn't work when running a test suite.

The problem is that tests are executed in separate processes when using test suites.
The reporter's functions (xml_show_fail and xml_show_skip) are executed in the child process to accumulate data, but this data can't be returned to the parent process to put it in the XML file (xml_reporter_finish_test) because the two processes don't share the same memory space.

I don't know how to correct this, maybe using IPC.
For my project I've made a "dirty patch".
I have modified xml_reporter_start_test() so that the "testcase" tag is closed with time = 0, and I modified xml_show_fail() and xml_show_skip() so they can write to the XML file.
xml_reporter.c.diff.txt

improve failure message of not_equal_to_contents_of when contents are the same

current output in our own test suite:
Expected [fourty_five_and_up] to [not equal contents of] [another_fourty_five_and_up]
at offset: [-1]

Offset -1 is a pretty obtuse way to say the contents were actually the same. Maybe we should forego the 'at offset' part of the message, since the first line says it all?

Unify C and C++ versions to support both languages

The divide between the C and C++ supporting versions is a problem. Not so much for actual usage but for release build, packaging and distribution. We need to build, package and distribute two versions of the library.

This is also a root cause for #51, in that on most platforms the cgreen-runner built in the C++ build cannot run test built with the C version and vice versa.

The most elegant solution to this would be to unify Cgreen to support both C and C++ with the same build/library.

Add `expect_call_count()`

Moved from TODO file.

I'm suspecting this implies that expect_always() could count the number of calls. If so, I would rather have:

expect_repeated(func, <constraints>, 3);

or even

expect(func, <constraints>, times(3));

or something similar.

Same 'cgreen-runner' for C and C++

Currently, in many environments, the reflective runner from a C++ build cannot run C-built test libraries, and vice versa.

It would be great if we could use the same one for all shared test libraries. I have no idea what influences this, so there needs to be some kind of discovery or analysis effort.

It also relates somewhat to our build strategy, in that I think we should look into packaging both C and C++ libs, as well as a universal runner in the same package.

Wrong number of tests in the 'main' suite

I'm not sure if it's a problem from my side or it's related to cgreen (or in general if it's a problem at all!) but after compiling and running the tests, I'm getting this output:

Completed "reordering_fourier_input_tests": 16 passes, 0 failures, 0 exceptions in 0ms.
Completed "convert_real_delta_to_complex_tests": 32 passes, 0 failures, 0 exceptions in 1ms.
Completed "main": 32 passes, 0 failures, 0 exceptions in 1ms.

as you can see, the numbers accumulate suite by suite instead of reporting the count for each suite! I have 16 tests in each suite, and the number reported in front of convert_real_delta_to_complex_tests is the sum of the previous suite's results and those of the current suite itself. Here is my main function:

#include <cgreen/cgreen.h>

TestSuite *reordering_fourier_input_tests();
TestSuite *convert_real_delta_to_complex_tests ();

int main(int argc, char **argv) {
    TestSuite *suite = create_test_suite();

    add_suite(suite, reordering_fourier_input_tests());
    add_suite(suite, convert_real_delta_to_complex_tests());

    if (argc > 1) {
        return run_single_test(suite, argv[1], create_text_reporter());
    }

    return run_test_suite(suite, create_text_reporter());
}

Make it easier to create 32- and 64-bit libraries and make them co-exist

In environments where there is a need for both 32- and 64-bit libraries it's a bit fiddly to create both. What I've done is to first install the 32-bit libraries in /usr/local/lib32 and binaries in /usr/local/bin32, and then build 64-bit and install them in the default locations. That's on a 64-bit system where I need to compile and test both 64- and 32-bit applications. Possibly you could do the reverse on a 32-bit system.

It would be handy to have cgreen itself do this if requested.

This is of course on Linux, since on Darwin the universal binaries handle this excellently. (Although causing some other issues when running the runner under 'arch'...)

Ensure the built DLL's are used when running Cgreen's own tests

On DLL platforms, in this case Cygwin, the dynamic libraries are not loaded in the same way as on *nix. Since cygcgreen.dll is built in the src directory and the test executable is run in the tests subtree, Cgreen's own freshly built DLL is not used.

Instead you need to install before running the tests. This works but is obviously easy to forget (and has bitten me multiple times, causing long frustrating debugging sessions before realizing that I recognized the problem...)

There are indications that it would be possible to fix this using some CMake magic.

Not installed correctly on OSX.

The experience on Mac for this project is far from optimal. When following the tutorial https://cgreen-devs.github.io/#_installing_cgreen I checked out the code and ran make && make test, and that seems to be OK. Then I ran make install; nothing appears to happen, but I assume it at least installed something on my system to make testing super easy. Then I try to build first_test.c with the command gcc -v -c first_test.c and I get:

Apple LLVM version 7.0.0 (clang-700.1.76)
Target: x86_64-apple-darwin14.5.0
Thread model: posix
 "/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang" -cc1 -triple x86_64-apple-macosx10.10.0 -Wdeprecated-objc-isa-usage -Werror=deprecated-objc-isa-usage -emit-obj -mrelax-all -disable-free -disable-llvm-verifier -main-file-name first_test.c -mrelocation-model pic -pic-level 2 -mthread-model posix -mdisable-fp-elim -masm-verbose -munwind-tables -target-cpu core2 -target-linker-version 253.6 -v -dwarf-column-info -coverage-file /Users/agir/code/using-cgreen/first_test.c -resource-dir /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/7.0.0 -fdebug-compilation-dir /Users/agir/code/using-cgreen -ferror-limit 19 -fmessage-length 101 -stack-protector 1 -mstackrealign -fblocks -fobjc-runtime=macosx-10.10.0 -fencode-extended-block-signature -fmax-type-align=16 -fdiagnostics-show-option -fcolor-diagnostics -o first_test.o -x c first_test.c
clang -cc1 version 7.0.0 based upon LLVM 3.7.0svn default target x86_64-apple-darwin14.5.0
#include "..." search starts here:
#include <...> search starts here:
 /usr/local/include
 /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../lib/clang/7.0.0/include
 /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/include
 /usr/include
 /System/Library/Frameworks (framework directory)
 /Library/Frameworks (framework directory)
End of search list.
first_test.c:1:10: fatal error: 'cgreen/cgreen.h' file not found
#include <cgreen/cgreen.h>
         ^
1 error generated.

Which I can fix by adding C_INCLUDE_PATH so I run it a second time like this.

C_INCLUDE_PATH=~/code/cgreen/include gcc -v -c first_test.c

That works, so I move on to the next command, gcc first_test.o -lcgreen -o first_test, which does not work:

ld: library not found for -lcgreen
clang: error: linker command failed with exit code 1 (use -v to see invocation)

This command needs to be fixed: gcc first_test.o -L`pwd`/../cgreen/build/build-c/src/ -lcgreen -o first_test

Then I try to run the test ./first_test and I get this.

dyld: Library not loaded: @rpath/libcgreen.1.dylib
  Referenced from: /Users/agir/code/using-cgreen/./first_test
  Reason: image not found
[1]    31142 trace trap  ./first_test

And this command needs to be fixed like this: DYLD_LIBRARY_PATH=~/code/cgreen/build/build-c/src/ ./first_test

I assume this can be fixed by having make install copy Cgreen to the proper location.

Runner overwrites xml-files if two suites have the same name

If you have two suites, say main/default and runner/default, the runner in xml-mode will overwrite the xml files so that only one remains, in effect hiding the results of one of the suites.

The files should be generated with unique names.

More string constraints

I'm missing (for completeness):

  • does_not_begin_with_string()
  • ends_with_string()
  • does_not_end_with_string()

add_tests - is it working, needed? Why no context-version?

I tried to use add_tests(), note the plural. It does not seem to be working. Does anyone know if it works?

Is there a compelling reason to have this duplicated feature? (As you can add your tests one by one, I'd consider this a duplication.)

One option is to remove it. Else we need to fix it so that it works, and has a "with_context" counterpart. (But also see #12 for my view on "with_context".)

Clear up documentation about shared fixtures

When the docs didn't use BDD notation, they were very clear about setup and teardown being added to suites. This allowed building common fixtures that were set up and torn down once for a set of tests, by nesting suites and adding setup and teardown to the outer suite. The section about this needs to be rewritten to start from a BDD-notation perspective. (Suites are still the only way to manage this, but the current wording reads as if suites were the primary way to run tests.)

Here's the ending of that section:

We also have the schema fixture, the `create_schema()` and
`drop_schema()`, which is run before and after every test.  Those are
still attached to the inner `suite`.

In the real world we would probably place the connection
fixture in its own file...

[source,c]
-----------------------
static MYSQL *connection;

MYSQL *get_connection() {
    return connection;
}

static void open_connection() {
    connection = mysql_init(NULL);
    mysql_real_connect(connection, "localhost", "me", "secret", "test", 0, NULL, 0);
}

static void close_connection() {
    mysql_close(connection);
}

TestSuite *connection_fixture(TestSuite *suite) {
    TestSuite *fixture = create_named_test_suite("Mysql fixture");
    add_suite(fixture, suite);
    set_setup(fixture, open_connection);
    set_teardown(fixture, close_connection);
    return fixture;
}
-----------------------

This allows the reuse of common fixtures across projects.

mock tests fail on cygwin64 - because of typing problems

On Cygwin64 a couple of tests fail (always_expect_keeps_affirming_parameter, expect_a_sequence and learning_mocks_emit_pastable_code), all because va_arg unpacks the wrong size of data.

It seems that on Cygwin64 uintptr_t is bigger than int, so when an int argument to mock() is unpacked, garbage is often included.

(The strange thing is that those tests have code that causes multiple evaluation of constraints, and every one fails on this problem except the last one in the test, no matter how many you add...)

They pass if the argument to the mocked function is defined as uintptr_t, so this is some kind of proof of the problem.

Is there a way to force a type cast to uintptr_t on all VA_ARGS in the mock() macro?

I suppose I'm asking @matthargett primarily, but any input is appreciated.

Remove use of CGREEN_PER_TEST_TIMEOUT environment variable usage?

There is code in the runner which investigates the use of an environment variable, CGREEN_PER_TEST_TIMEOUT, to set a die_in() around every test run.

This is not documented anywhere, and it is easy to use die_in() in your tests. Is there a scenario where that is not sufficient? Do we know of anyone using this?

I'm considering removing this duplicated functionality. Or we should document it. But primarily I'd like to discuss scenarios.

Constraint to verify order of two substrings?

Yesterday I felt a need for a constraint that ensures that one substring precedes another substring in a value. So it would assert true if given "something sub1 someother sub2 end" and the constraint was "sub1 precedes sub2".

However, I can't even think of a good way to express that. I haven't seen it in other frameworks or in Hamcrest-style matchers...

Mechanism to skip tests temporarily

Although I don't like the concept of having tests that are skipped in a run, in principle, I'm also a realist and know there are times when you need this, particularly when adopting a better framework like Cgreen from your previous one. Not having this feature might make you comment tests out instead, which is even worse.

A mechanism to indicate skipped tests will allow tallying them and making them visible, e.g. in Jenkins.

So I propose that at some point this feature is included.

Updated license

The current license text is very old. It even includes the "Ty Coon" signature...

I suggest updating to LGPL 3.0. Do we really need to keep a separate license in the tutorial? I'd rather have a short reference in a section in the front matter or on page 2.

Rewrite readme.md

We need a better, more 'marketing'-style readme.md for display on the GitHub landing page. I think it should showcase the very readable style of the tests instead of wallowing in historical facts ;-)

But we'd still like to keep some of the current information somewhere.

GCC 5.2 on Cygwin x86_64 fails some tests for C++

/home/Thomas/Utveckling/Cgreen/cgreen-dev/cgreen/tests/cpp_assertion_tests.cpp:131: Exception: cpp_assertion_tests -> assert_throws_macro_passes_basic_type
        Test terminated with signal: Segmentation fault

/home/Thomas/Utveckling/Cgreen/cgreen-dev/cgreen/tests/cpp_assertion_tests.cpp:140: Exception: cpp_assertion_tests -> assert_throws_macro_passes_pointer_type
        Test terminated with signal: Segmentation fault

Caused by posix_runner_platform.run_test_in_its_own_process() receiving a signal that it didn't before.

Maybe C++ in GCC 5.2 no longer signals exceptions with SIGABRT?
