How does the git test infrastructure work?
Published on: 11 November 2025
I'm often curious about how C projects handle testing: C doesn't have a standard framework for it, so each project makes its own decision.
The git project decided to create its own framework, built from various shell
scripts and libraries, to perform end-to-end tests. All the tests are
collected in the t/ folder, and to run them you can simply run make
test from the root folder, or make from within the t/ folder. Of
course, git must be compiled before running the tests.
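For instance, after building git, a full run and a single-script run look like this (the -v flag, described in t/README, prints each test's output as it runs; t0005-signals.sh is one of the scripts from the prove output further below):

# Full suite, from the repository root:
make test

# A single script, from within t/:
cd t
./t0005-signals.sh -v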
The tests use a protocol called TAP (Test Anything Protocol), which I had never heard of before; it's quite an interesting protocol for separating test producers from test consumers.
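A TAP producer just prints plain text: one "ok"/"not ok" line per test, plus a plan line saying how many tests to expect; aggregating the results is the consumer's job. A minimal stream (with made-up test names) looks like this:

ok 1 - repository can be initialized
not ok 2 - commit records the author date
# diagnostics go in comment lines like this one
ok 3 - tag creation # SKIP not supported here
1..3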
Because TAP is used, you can run the tests under any TAP harness (they suggest prove) and get various reports on top. Here is an example of what that looks like.
t $ make DEFAULT_TEST_TARGET=prove GIT_PROVE_OPTS='--timer --jobs 16'
rm -f -r 'test-results'
*** prove (shell & unit tests) ***
[12:23:25] t0013-sha1dc.sh .................................... ok      512 ms ( 0.01 usr  0.00 sys +  0.09 cusr  0.10 csys =  0.20 CPU)
[12:23:25] t0005-signals.sh ................................... ok      704 ms ( 0.01 usr  0.01 sys +  0.22 cusr  0.25 csys =  0.49 CPU)
[12:23:25] t0018-advice.sh .................................... ok      882 ms ( 0.02 usr  0.01 sys +  0.38 cusr  0.45 csys =  0.86 CPU)
[12:23:25] t0004-unwritable.sh ................................ ok     1145 ms ( 0.03 usr  0.01 sys +  0.55 cusr  0.65 csys =  1.24 CPU)
[12:23:25] t0019-json-writer.sh ............................... ok     1276 ms ( 0.03 usr  0.01 sys +  0.77 cusr  0.89 csys =  1.70 CPU)
[12:23:25] t0017-env-helper.sh ................................ ok     1311 ms ( 0.03 usr  0.01 sys +  1.00 cusr  1.12 csys =  2.16 CPU)
[12:23:25] t0002-gitfile.sh ................................... ok     1473 ms ( 0.04 usr  0.02 sys +  1.26 cusr  1.42 csys =  2.74 CPU)
[12:23:26] t0022-crlf-rename.sh ............................... ok      839 ms ( 0.02 usr  0.01 sys +  1.00 cusr  1.13 csys =  2.16 CPU)
[12:23:26] t0014-alias.sh ..................................... ok     1790 ms ( 0.05 usr  0.02 sys +  1.71 cusr  1.90 csys =  3.68 CPU)
[12:23:26] t0023-crlf-am.sh ................................... ok      763 ms ( 0.02 usr  0.01 sys +  1.27 cusr  1.40 csys =  2.70 CPU)
[12:23:26] t0007-git-var.sh ................................... ok     2112 ms ( 0.06 usr  0.02 sys +  2.21 cusr  2.45 csys =  4.74 CPU)
[12:23:26] t0024-crlf-archive.sh .............................. ok      809 ms ( 0.03 usr  0.01 sys +  1.56 cusr  1.71 csys =  3.31 CPU)
[12:23:26] t0025-crlf-renormalize.sh .......................... ok      810 ms ( 0.03 usr  0.01 sys +  1.47 cusr  1.64 csys =  3.15 CPU)
[12:23:26] t0029-core-unsetenvvars.sh ......................... skipped: skipping Windows-specific tests
[12:23:27] t0026-eol-config.sh ................................ ok     1096 ms ( 0.03 usr  0.01 sys +  1.51 cusr  1.68 csys =  3.23 CPU)
...
[12:32:15] t9902-completion.sh ................................ ok    15743 ms ( 0.21 usr  0.03 sys + 52.18 cusr 59.45 csys = 111.87 CPU)
[12:32:18] t9500-gitweb-standalone-no-errors.sh ............... ok    26970 ms ( 0.35 usr  0.11 sys + 85.80 cusr 85.21 csys = 171.47 CPU)
[12:32:25] t9001-send-email.sh ................................ ok    46478 ms ( 0.61 usr  0.26 sys + 149.19 cusr 147.90 csys = 297.96 CPU)
[12:32:26] t7610-mergetool.sh ................................. ok    71846 ms ( 0.92 usr  0.36 sys + 253.50 cusr 256.27 csys = 511.05 CPU)
[12:32:26]
All tests successful.
Files=1014, Tests=31764, 542 wallclock secs ( 7.93 usr  2.55 sys + 1382.80 cusr 1650.09 csys = 3043.37 CPU)
Result: PASS
The test framework also has all the features that are common in most frameworks: running a specific test, skipping tests, running tests that match a naming pattern, parallel runs, and many more.
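A few examples, based on my reading of t/README (both the --run option and the GIT_SKIP_TESTS variable are described there):

# Run only some of the tests inside a single script:
./t0005-signals.sh --run='1-3,5'

# Skip a whole script, or one test within a script:
GIT_SKIP_TESTS='t9001 t0005.7' make test

# Run scripts in parallel under prove:
make DEFAULT_TEST_TARGET=prove GIT_PROVE_OPTS='--timer --jobs 16'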
I also found the process they use for naming tests interesting; I haven't seen many open source projects with such a clear naming convention for tests.
The test files are named as follows (a few decoded examples appear after the list):
tNNNN-commandname-details.sh
where N is a decimal digit.
First digit tells the family:
0 - the absolute basics and global stuff
1 - the basic commands concerning database
2 - the basic commands concerning the working tree
3 - the other basic commands (e.g. ls-files)
4 - the diff commands
5 - the pull and exporting commands
6 - the revision tree commands (even e.g. merge-base)
7 - the porcelainish commands concerning the working tree
8 - the porcelainish commands concerning forensics
9 - the git tools
Second digit tells the particular command we are testing.
Third digit (optionally) tells the particular switch or group of switches
we are testing.
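Decoding a few of the file names from the prove run above against this scheme:

t0005-signals.sh      -> family 0: the absolute basics and global stuff
t7610-mergetool.sh    -> family 7: porcelainish commands concerning the working tree
t9001-send-email.sh   -> family 9: the git tools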
On top of all these end-to-end tests, there is also a t/unit-tests/
folder, which includes 212 tests.
unit-tests $ prove ./bin/unit-tests
./bin/unit-tests .. ok
All tests successful.
Files=1, Tests=212,  0 wallclock secs ( 0.01 usr  0.00 sys +  0.03 cusr  0.25 csys =  0.29 CPU)
Result: PASS
According to the git history, it seems they originally created
their own framework for these, but they recently migrated everything
to clar, a minimal C testing framework that originated in the
libgit2 project. All the tests are compiled into the final
bin/unit-tests binary during the git compilation process, and then
you can simply run that binary.
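Since prove is pointed straight at the binary above, the binary itself must be emitting TAP on stdout, so running it directly should print one line per test. The suite and test names below are purely illustrative, not actual output I captured:

unit-tests $ ./bin/unit-tests
ok 1 - strvec::init
ok 2 - strvec::dynamic
...
1..212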
There is also a document Documentation/technical/unit-tests.adoc
where they say the following.
In our current testing environment, we spend a significant amount of effort crafting end-to-end tests for error conditions that could easily be captured by unit tests (or we simply forgo some hard-to-setup and rare error conditions). Unit tests additionally provide stability to the codebase and can simplify debugging through isolation. Writing unit tests in pure C, rather than with our current shell/test-tool helper setup, simplifies test setup, simplifies passing data around (no shell-isms required), and reduces testing runtime by not spawning a separate process for every test invocation. We believe that a large body of unit tests, living alongside the existing test suite, will improve code quality for the Git project.
That document was written in 2023, so it's likely that they aim to keep increasing the number of unit tests going forward.
Lastly, they also have a way to measure test coverage. This is what they say about it (I've condensed the commands into a single snippet after the quote).
You can use the coverage tests to find code paths that are not being
used or properly exercised yet.
To do that, run the coverage target at the top-level (not in the t/
directory):
make coverage
That'll compile Git with GCC's coverage arguments, and generate a test
report with gcov after the tests finish. Running the coverage tests
can take a while, since running the tests in parallel is incompatible
with GCC's coverage mode.
After the tests have run you can generate a list of untested
functions:
make coverage-untested-functions
You can also generate a detailed per-file HTML report using the
Devel::Cover module. To install it do:
# On Debian or Ubuntu:
sudo aptitude install libdevel-cover-perl
# From the CPAN with cpanminus
curl -L https://cpanmin.us/ | perl - --sudo --self-upgrade
cpanm --sudo Devel::Cover
Then, at the top-level:
make cover_db_html
That'll generate a detailed cover report in the "cover_db_html"
directory, which you can then copy to a webserver, or inspect locally
in a browser.
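Condensed, the whole coverage workflow from the top level is just three make invocations:

make coverage                     # build with gcov instrumentation, run tests serially
make coverage-untested-functions  # list functions the suite never exercises
make cover_db_html                # optional per-file HTML report (needs Devel::Cover)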
Overall, this is quite a serious testing infrastructure, which makes sense for such a large and critical project.