1.  What is Cgreen?

Cgreen is a unit tester for the C software developer: a test automation and software quality assurance tool for development teams. The tool is completely open source, published under the LGPL.

Unit testing is a development practice popularised by the agile development community. It is characterised by writing many small tests alongside the normal code. Often the tests are written before the code they are testing, in a tight test-code-refactor loop. Done this way, the practice is known as Test Driven Development. Cgreen was designed to support this style of development.

Unit tests are written in the same language as the code, in our case C. This avoids the mental overhead of constantly switching language, and also allows you to use any application code in your tests.

Here are some of its features:

  • Fully composable test suites

  • setup() and teardown() for tests and test suites

  • Each test runs in its own process

  • An isolated test can be run in a single process for debugging

  • Ability to mock functions

  • The reporting mechanism can be easily extended

  • Automatic discovery of tests

Cgreen was primarily developed to support C programming, but there is also support for C++.

1.1.  Cgreen - Vanilla or Chocolate?

Test driven development (TDD) caught on when the JUnit framework for Java spread to other languages, giving us a family of xUnit tools. Cgreen was born in this wave and has many similarities to the xUnit family.

But TDD evolved over time and modern thinking and practice is more along the lines of BDD, an abbreviation of Behaviour Driven Development, made popular by people like Dan North and frameworks like JBehave, RSpec, Cucumber and Jasmine.

Cgreen follows this trend and has evolved to embrace BDD-style testing. Although the fundamental mechanisms in TDD and BDD are much the same, the shift in focus by changing wording from tests to behaviour specifications is significant.

This document will present TDD-style Cgreen first and then go on to describe the minor differences that will allow you to think, drive and test in BDD fashion.

1.2.  Installing Cgreen

There are two ways to install Cgreen on your system.

Installing a package

The first way is to use the RPM or DEB package provided by the Cgreen team. You can fetch it from the Cgreen project website. Download and install it using the normal procedures for your system.

Installing from source

The second way is intended for developers and advanced users. Basically it consists of fetching the sources of the project and compiling them. To do this you need the CMake build system.

When you have the CMake tool, the steps are:

tar -zxpvf cgreen.tar.gz
mkdir cgreen-build
cd cgreen-build
cmake ../cgreen
make
make test
make install

(On Cygwin you need to install before the tests can be run.)

As you can see, we create a separate build directory before changing into it and building. This is called an out-of-source build: it compiles Cgreen from outside the source directory. This helps the overall file organization and enables multi-target builds from the same sources, since the complete source tree is left untouched.

It is also possible to use the Makefile. This file compiles and tests Cgreen without the need for the CMake tool. However, it does not contain the rules to install Cgreen on your system.

Both methods will create a library (on Unix called libcgreen.so) which can be used in conjunction with the cgreen.h header file to compile test code. The created library is installed in the system, by default in /usr/local/lib/.

We’ll first write a test to confirm everything is working. Let’s start with a simple test module with no tests, called first_test.c

#include "cgreen/cgreen.h"

int main(int argc, char **argv) {
  TestSuite *suite = create_test_suite();
  return run_test_suite(suite, create_text_reporter());
}

This is a very unexciting test. It just creates an empty test suite and runs it. It’s usually easier to proceed in small steps, though, and this is the smallest one I could think of. The only complication is the cgreen.h header file. Here I am assuming that the Cgreen headers are on the include path, so that compilation works.

Building this test is, of course, trivial…

gcc -c first_test.c
gcc first_test.o -lcgreen -o first_test
./first_test

Invoking the executable should give…

Running "main" (0 tests)...
Completed "main": 0 passes, 0 failures, 0 exceptions.

All of the above rather assumes you are working in a Unix like environment, probably with gcc. The code is pretty much standard C99, so any C compiler should work. Cgreen should compile on all systems that support the sys/msg.h messaging library. This has been tested on Linux, MacOSX and Cygwin so far, but not Windows.

So far we have tested compilation, and that the test suite actually runs. Let’s add a meaningless test or two so that you can see how it runs…

#include "cgreen/cgreen.h"

Ensure(this_test_should_pass) {
    assert_that(1 == 1, is_true);
}

Ensure(this_test_should_fail) {
    assert_that(0, is_equal_to(1));
}

int main(int argc, char **argv) {
    TestSuite *suite = create_test_suite();
    add_test(suite, this_test_should_pass);
    add_test(suite, this_test_should_fail);
    return run_test_suite(suite, create_text_reporter());
}

A test is denoted by the macro Ensure. You can think of a test as having a void (void) signature. You add the test to your suite using add_test().

On compiling and running, we now get the output…

Running "main" (2 tests)...
first_test.c:8: Test Failure: -> this_test_should_fail
    Expected [0] to [equal] [1]
Completed "main": 1 pass, 1 failure, 0 exceptions.

The TextReporter, created by the create_text_reporter() call, is the simplest way to output the test results. It just streams the failures as text.

Of course "0" would never equal "1", but this shows how Cgreen presents the expression that you want to assert.

1.3.  Five minutes doing TDD with Cgreen

For a more realistic example we need something to test. We’ll pretend that we are writing a function to split the words of a sentence in place. It does this by replacing any spaces with string terminators and returns the number of conversions plus one. Here is an example of what we have in mind…

char *sentence = strdup("Just the first test");
word_count = split_words(sentence);

sentence should now point at "Just\0the\0first\0test". Not an obviously useful function, but we’ll be using it for something more practical later.

This time around we’ll add a little more structure to our tests. Rather than having the test as a stand-alone program, we’ll separate the runner from the test cases. That way, multiple test suites of test cases can be included in the main() runner file. This makes it less work to add more tests.

Here is the, so far empty, test case in words_test.c

#include "cgreen/cgreen.h"

TestSuite *words_tests() {
  TestSuite *suite = create_test_suite();
  return suite;
}

Here is the all_tests.c test runner…

#include "cgreen/cgreen.h"

TestSuite *words_tests();

int main(int argc, char **argv) {
  TestSuite *suite = create_test_suite();
  add_suite(suite, words_tests());
  if (argc > 1) {
    return run_single_test(suite, argv[1], create_text_reporter());
  }
  return run_test_suite(suite, create_text_reporter());
}

Cgreen has two ways of running tests. The default is to run all tests in their own processes. This is what happens if you invoke run_test_suite(). This makes all the tests independent, but the constant fork()ing can make the tests difficult to debug. To make debugging simpler, Cgreen does not fork() when only a single test is run by name with run_single_test().

Building this scaffolding…

gcc -c words_test.c
gcc -c all_tests.c
gcc words_test.o all_tests.o -lcgreen -o all_tests

…and executing the result gives the familiar…

Running "main" (0 tests)...
Completed "main": 0 passes, 0 failures, 0 exceptions.

All this scaffolding is pure overhead, but from now on adding tests will be a lot easier.

Here is a first test of split_words() in words_test.c

#include "cgreen/cgreen.h"
#include "words.h"
#include <string.h>
#include <stdlib.h> /* for free() */

Ensure(word_count_returned_from_split) {
  char *sentence = strdup("Birds of a feather");
  int word_count = split_words(sentence);
  assert_that(word_count, is_equal_to(4));
  free(sentence);
}

TestSuite *words_tests() {
  TestSuite *suite = create_test_suite();
  add_test(suite, word_count_returned_from_split);
  return suite;
}

The assert_that() macro takes two parameters: the value to assert, and a constraint. Constraints come in various forms; in this case we use probably the most common one, is_equal_to(). With the default TextReporter the message is sent to STDOUT.

To get this to compile we need to create the words.h header file…

int split_words(char *sentence);

…and to get the code to link we need a stub function in words.c

int split_words(char *sentence) {
  return 0;
}

A full build later…

gcc -c all_tests.c
gcc -c words_test.c
gcc -c words.c
gcc all_tests.o words_test.o words.o -lcgreen -o all_tests
./all_tests

…and we get the more useful response…

Running "main" (1 tests)...
words_test.c:10: Failure: -> words_tests -> word_count_returned_from_split
        Expected [word_count] to [equal] [4]
                actual value:   [0]
                expected value: [4]
Completed "main": 0 passes, 1 failure, 0 exceptions.

The breadcrumb trail following the "Failure" text is the nesting of the tests. It goes from the test suites, which can be nested in each other, through the test function, and finally to the message from the assertion. In the language of Cgreen, a "failure" is a mismatched assertion, an "exception" occurs when a test fails to complete for any reason.

We could get this to pass just by returning the value 4. Doing TDD in really small steps, you would actually do this, but frankly this example is too simple. Instead we’ll go straight to the core of the implementation…

#include <string.h>

int split_words(char *sentence) {
  int i, count = 1;
  for (i = 0; i < strlen(sentence); i++) {
    if (sentence[i] == ' ') {
      count++;
    }
  }
  return count;
}

There is a hidden problem here, but our tests still passed so we’ll pretend we didn’t notice.

Running "main" (1 tests)...
Completed "main": 1 pass, 0 failures, 0 exceptions.

Time to add another test. We want to confirm that the string is broken into separate words…

#include "cgreen/cgreen.h"
#include "words.h"
#include <string.h>
#include <stdlib.h> /* for free() */

Ensure(word_count_returned_from_split) { ... }

Ensure(spaces_should_be_converted_to_zeroes) {
  char *sentence = strdup("Birds of a feather");
  split_words(sentence);
  int comparison = memcmp("Birds\0of\0a\0feather", sentence, strlen(sentence));
  assert_that(comparison, is_equal_to(0));
  free(sentence);
}

TestSuite *words_tests() {
  TestSuite *suite = create_test_suite();
  add_test(suite, word_count_returned_from_split);
  add_test(suite, spaces_should_be_converted_to_zeroes);
  return suite;
}

Sure enough, we get a failure…

Running "main" (2 tests)...
words_test.c:18: Failure: -> words_tests -> spaces_should_be_converted_to_zeroes
        Expected [comparison] to [equal] [0]
                actual value:   [-32]
                expected value: [0]
Completed "main": 1 pass, 1 failure, 0 exceptions.

Not surprising given that we haven’t written the code yet.

The fix…

int split_words(char *sentence) {
  int i, count = 1;
  for (i = 0; i < strlen(sentence); i++) {
    if (sentence[i] == ' ') {
      sentence[i] = '\0';
      count++;
    }
  }
  return count;
}

…reveals our previous hack…

Running "main" (2 tests)...
words_test.c:10: Failure: -> words_tests -> word_count_returned_from_split
        Expected [word_count] to [equal] [4]
                actual value:   [2]
                expected value: [4]
Completed "main": 1 pass, 1 failure, 0 exceptions.

Our earlier test now fails, because we have affected the strlen() call in our loop. Moving the length calculation out of the loop…

int split_words(char *sentence) {
  int i, count = 1, length = strlen(sentence);
  for (i = 0; i < length; i++) {
    ...
  }
  return count;
}

…restores order…

Running "main" (2 tests)...
Completed "main": 2 passes, 0 failures, 0 exceptions.

It’s nice to keep the code under control while we are actually writing it, rather than debugging later when things are more complicated.

That was pretty straightforward. Let’s do something more interesting.

1.4.  What are mock functions?

The next example is more realistic. Still in our words.h file, we want to write a function that invokes a callback on each word in a sentence. Something like…

void act_on_word(const char *word, void *memo) { ... }
words("This is a sentence", &act_on_word, &memo);

Here the memo pointer is just some accumulated data that the act_on_word() callback is working with. Other people will write the act_on_word() function and probably many other functions like it. The callback is actually a flex point, and not of interest right now.

The function under test is the words() function and we want to make sure it walks the sentence correctly, dispatching individual words as it goes. How to test this?

Let’s start with a one word sentence. In this case we would expect the callback to be invoked once with the only word, right? Here is the test for that…

...
#include <cgreen/mocks.h>
#include <stdlib.h>
...
void mocked_callback(const char *word, void *memo) {
  mock(word, memo);
}

Ensure(single_word_sentence_invokes_callback_once) {
  expect(mocked_callback,
    when(word, is_equal_to_string("Word")), when(memo, is_equal_to(NULL)));
  words("Word", &mocked_callback, NULL);
}

TestSuite *words_tests() {
  TestSuite *suite = create_test_suite();
  ...
  add_test(suite, single_word_sentence_invokes_callback_once);
  return suite;
}

What is the funny looking mock() function?

A mock is basically a programmable object. In C, objects are limited to functions, so this is a mock function. The mock() macro compares the incoming parameters with any expected values and dispatches messages to the test suite if there is a mismatch. It also returns any values that have been preprogrammed in the test.

The test function is single_word_sentence_invokes_callback_once(). Using the expect() macro it programs the mock function to expect a single call. That call will have parameters "Word" and NULL. If they don’t match later, we will get a test failure.

Only the test method, not the mock callback, should be added to the test suite.

For a successful compile and link, the words.h file must now look like…

int split_words(char *sentence);
void words(const char *sentence, void (*walker)(const char *, void *), void *memo);

…and the words.c file should have the stub…

void words(const char *sentence, void (*walker)(const char *, void *), void *memo) {
}

This gives us the expected failing tests…

Running "main" (3 tests)...
words_test.c:27: Test Failure: -> words_tests -> single_word_sentence_invokes_callback_once
        Expected call was not made to function [mocked_callback]
Completed "main": 2 passes, 1 failure, 0 exceptions.

Cgreen reports that the callback was never invoked. We can easily get the test to pass by filling out the implementation with…

void words(const char *sentence, void (*walker)(const char *, void *), void *memo) {
  (*walker)(sentence, memo);
}

That is, we just invoke it once with the whole string. This is a temporary measure to get us moving. Now everything should pass, although it’s not much of a test yet.

That was all pretty conventional, but let’s tackle the trickier case of actually splitting the sentence. Here is the test function we will add to words_test.c

Ensure(phrase_invokes_callback_for_each_word) {
  expect(mocked_callback, when(word, is_equal_to_string("Birds")));
  expect(mocked_callback, when(word, is_equal_to_string("of")));
  expect(mocked_callback, when(word, is_equal_to_string("a")));
  expect(mocked_callback, when(word, is_equal_to_string("feather")));
  words("Birds of a feather", &mocked_callback, NULL);
}

Each call is expected in sequence. Any mismatched parameters, left-over expectations or extra calls will give us failures. We can see all this when we run the tests…

Running "main" (4 tests)...
words_test.c:32: Test Failure: -> words_tests -> phrase_invokes_callback_for_each_word
        Expected [[word] parameter in [mocked_callback]] to [equal string] ["Birds"]
                actual value:   ["Birds of a feather"]
                expected value: ["Birds"]
words_test.c:33: Test Failure: -> words_tests -> phrase_invokes_callback_for_each_word
        Expected call was not made to function [mocked_callback]
words_test.c:34: Test Failure: -> words_tests -> phrase_invokes_callback_for_each_word
        Expected call was not made to function [mocked_callback]
words_test.c:35: Test Failure: -> words_tests -> phrase_invokes_callback_for_each_word
        Expected call was not made to function [mocked_callback]
Completed "main": 4 passes, 4 failures, 0 exceptions.

The first failure tells the story. Our little words() function called the mock callback with the entire sentence. This makes sense, because that was the hack to get to the next test.

Although not relevant to this guide, I cannot resist getting these tests to pass. Besides, we get to use the function we created earlier…

void words(const char *sentence, void (*walker)(const char *, void *), void *memo) {
  char *words = strdup(sentence);
  int word_count = split_words(words);
  char *word = words;
  while (word_count-- > 0) {
    (*walker)(word, memo);
    word = word + strlen(word) + 1;
  }
  free(words);
}

And with some work we are rewarded with…

Running "main" (4 tests)...
Completed "main": 8 passes, 0 failures, 0 exceptions.

More work than I like to admit, as it took me three goes to get this right. I first forgot the + 1 added on to strlen(), then forgot to swap sentence for word in the (*walker)() call, and was finally third time lucky. Of course, running the tests each time made these mistakes very obvious. It has taken me far longer to write these paragraphs than it did to write the code.