path: root/builtin/commit-graph.c
Age | Commit message | Author | Files | Lines
2023-10-30 | The twenty-second batch | Junio C Hamano | 1 | -0/+19
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-23 | The twenty-first batch | Junio C Hamano | 1 | -0/+17
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-23 | doc/git-bisect: clarify `git bisect run` syntax | Javier Mora | 2 | -4/+4
The description of the `git bisect run` command syntax at the beginning of the manpage is `git bisect run <cmd>...`, which isn't quite clear about what `<cmd>` is or what the `...` means; one could think that it is the whole (quoted) command line with all arguments in a single string, or that it supports multiple commands, or that it doesn't accept commands with arguments at all. Change to `git bisect run <cmd> [<arg>...]` to clarify the syntax, in both the manpage and the `git bisect -h` command output. Additionally, change `--term-{new,bad}` et al. to `--term-(new|bad)` for consistency with the synopsis syntax conventions. Signed-off-by: Javier Mora <cousteaulecommandant@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-23 | builtin/branch.c: adjust error messages to coding guidelines | Isoken June Ibizugbe | 4 | -47/+47
As per the CodingGuidelines document, it is recommended that messages given to die(), error(), and warning() start with a lowercase letter and do not end with a period. This patch also adjusts the tests to match the updated messages. Signed-off-by: Isoken June Ibizugbe <isokenjune@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-21 | merge-ort.c: fix typo 'neeed' to 'needed' | 王常新 | 1 | -1/+1
Signed-off-by: 王常新 <wchangxin824@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-20 | The twentieth batch | Junio C Hamano | 1 | -0/+12
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-20 | git-push doc: more visibility for -q option | Michal Suchanek | 1 | -1/+1
The "-v" option is shown in the SYNOPSIS section near the top, but "-q" is not shown anywhere there. List "-q" alongside "-v". Signed-off-by: Michal Suchanek <msuchanek@suse.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-20 | rebase: move parse_opt_keep_empty() down | Oswald Buddenhagen | 1 | -13/+12
This moves it right next to parse_opt_empty(), which is a much more logical place. As a side effect, this removes the need for a forward declaration of imply_merge(). Signed-off-by: Oswald Buddenhagen <oswald.buddenhagen@gmx.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-20 | rebase: handle --strategy via imply_merge() as well | Oswald Buddenhagen | 1 | -12/+1
At least after the successive trimming of enum rebase_type mentioned in the previous commit, this code did exactly what imply_merge() does, so just call it instead. Suggested-by: Junio C Hamano <gitster@pobox.com> Signed-off-by: Oswald Buddenhagen <oswald.buddenhagen@gmx.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-20 | rebase: simplify code related to imply_merge() | Oswald Buddenhagen | 1 | -5/+1
The code's evolution left in some bits surrounding enum rebase_type that don't really make sense any more. In particular, it makes no sense to invoke imply_merge() if the type is already known not to be REBASE_APPLY, and it makes no sense to assign the type after calling imply_merge(). enum rebase_type had more values until commit a74b35081c ("rebase: drop support for `--preserve-merges`") and commit 10cdb9f38a ("rebase: rename the two primary rebase backends"). The latter commit also renamed imply_interactive() to imply_merge(). Signed-off-by: Oswald Buddenhagen <oswald.buddenhagen@gmx.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-20 | send-email: handle to/cc/bcc from --compose message | Jeff King | 3 | -12/+31
If the user writes a message via --compose, send-email will pick up various headers like "From", "Subject", etc. and use them for other patches as if they were specified on the command-line. But we don't handle "To", "Cc", or "Bcc" this way; we just tell the user "those aren't interpreted yet" and ignore them. But it seems like an obvious thing to want, especially as the same feature exists when the cover letter is generated separately by format-patch. There it is gated behind the --to-cover option, but I don't think we'd need the same control here; since we generate the --compose template ourselves based on the existing input, if the user leaves the lines unchanged then the behavior remains the same. So let's fill in the implementation; like those other headers we already handle, we just need to assign to the initial_* variables. The only difference in this case is that they are arrays, so we'll feed them through parse_address_line() to split them (just like we would when reading a single string via prompting). Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-20 | Revert "send-email: extract email-parsing code into a subroutine" | Jeff King | 2 | -80/+75
This reverts commit b6049542b97e7b135e0e82bf996084d461224d32. Prior to that commit, we read the results of the user editing the "--compose" message in a loop, picking out parts we cared about, and streaming the result out to a ".final" file. That commit split the reading/interpreting into two phases; we'd now read into a hash, and then pick things out of the hash. The goal was making the code more readable. And in some ways it did, because the ugly regexes are confined to the reading phase. But it also introduced several bugs, because now the two phases need to match each other. In particular:

- we pick out headers like "Subject: foo" with a case-insensitive regex, and then use the user-provided header name as the key in a case-sensitive hash. So if the user wrote "subject: foo", we'd no longer recognize it as a subject.

- the namespace for the hash keys conflates header names with meta information like "body". If you put "body: foo" in your message, it would be misinterpreted as the actual message body (nobody is likely to do that in practice, but it seems like an unnecessary danger).

- the handling for to/cc/bcc is totally broken. The behavior before that commit was to recognize and skip those headers, with a note to the user that they are not yet handled. Not great, but OK. But after the patch, the reading side now splits the addresses into a perl array-ref. The interpreting side doesn't handle this at all, and blindly prints the stringified array-ref value. This leads to garbage like:

    (mbox) Adding to: ARRAY (0x555b4345c428) from line 'To: ARRAY(0x555b4345c428)'
    error: unable to extract a valid address from: ARRAY (0x555b4345c428)
    What to do with this address? ([q]uit|[d]rop|[e]dit):

Probably not a huge deal, since nobody should even try to use those headers in the first place (they were never implemented). But the new behavior is worse, and indicative of the sorts of problems that come from having the two layers.

The revert had a few conflicts, due to later work in this area from 15dc3b9161 (send-email: rename variable for clarity, 2018-03-04) and d11c943c78 (send-email: support separate Reply-To address, 2018-03-04). I've ported the changes from those commits over as part of the conflict resolution.

The new tests show the bugs. Note the use of GIT_SEND_EMAIL_NOTTY in the second one. Without it, the test is happy to reach outside the test harness to the developer's actual terminal (when run with the buggy state before this patch).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-20 | doc/send-email: mention handling of "reply-to" with --compose | Jeff King | 1 | -5/+5
The documentation for git-send-email lists the headers handled specially by --compose in a way that implies that this is the complete set of headers that are special. But one more was added by d11c943c78 (send-email: support separate Reply-To address, 2018-03-04) and never documented. Let's add it, and reword the documentation slightly to avoid having to specify the list of headers twice (as it is growing and will continue to do so as we add new features). If you read the code, you may notice that we also handle MIME-Version specially, in that we'll avoid overwriting user-provided MIME headers. I don't think this is worth mentioning, as it's what you'd expect to happen (as opposed to the other headers, which are picked up to be used in later emails). And certainly this feature existed when the documentation was expanded in 01d3861217 (git-send-email.txt: describe --compose better, 2009-03-16), and we chose not to mention it then. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-20 | grep: die gracefully when outside repository | Kristoffer Haugsbakk | 2 | -1/+33
Die gracefully when `git grep --no-index` is run outside of a Git repository and the path is outside the directory tree. If you are not in a Git repository and say:

    git grep --no-index search ..

you trigger a `BUG`:

    BUG: environment.c:213: git environment hasn't been setup
    Aborted (core dumped)

This happens because `..` is a valid path which is treated as a pathspec. Then `pathspec` figures out that it is not in the current directory tree. The `BUG` is triggered when `pathspec` tries to advise the user about how the path is not in the current (non-existing) repository.

Reported-by: ks1322 ks1322 <ks1322@gmail.com>
Helped-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Kristoffer Haugsbakk <code@khaugsbakk.name>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-19 | git-p4 shouldn't attempt to store symlinks in LFS | Matthew McClain | 1 | -0/+4
git-p4.py would attempt to put a symlink in LFS if its file extension matched git-p4.largeFileExtensions. Git LFS doesn't store symlinks because smudge/clean filters don't handle symlinks. They never get passed to the filter process nor the smudge/clean filters, nor could that occur without a change to the protocol or command-line interface. Unless Git learned how to send them to the filters, Git LFS would have a hard time using them in any useful way. Git LFS's goal is to move large files out of the repository history, and symlinks are functionally limited to 4 KiB or a similar size on most systems. Signed-off-by: Matthew McClain <mmcclain@noprivs.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-18 | t7601: use "test_path_is_file" etc. instead of "test -f" | Dorcas AnonoLitunya | 1 | -12/+12
Some tests in t7601 use "test -f" and "test ! -f" to see if a path exists or is missing. Use the test_path_is_file and test_path_is_missing helper functions to state the intent of these tests more clearly. This especially matters for the "missing" case because "test ! -f F" will be happy if "F" exists as a directory, but the intent of the test is that "F" should not exist, even as a directory. The updated code expresses this better. Signed-off-by: Dorcas AnonoLitunya <anonolitunya@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-18 | am: align placeholder for --whitespace option with apply | Junio C Hamano | 1 | -2/+2
`git am` passes the value given to its `--whitespace` option through to the underlying `git apply`, and the value is called <action> over there. Fix the documentation for the command that calls the value <option> to say <action> instead. Note that the option help given by `git am -h` already calls the value <action>, so there is no need to make a matching change there. Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-18 | The nineteenth batch | Junio C Hamano | 1 | -1/+9
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-17 | commit: do not use cryptic "new_index" in end-user facing messages | Junio C Hamano | 1 | -4/+4
These error messages say "new_index" as if that spelling has some significance to the end users (e.g. the file "$GIT_DIR/new_index" has some issues), but that is not the case at all. The i18n folks were made to include the word literally in the translated messages, which was not a good idea at all. Spell it "new index", as we are just telling the users that we failed to create a new index file. The term is expected to be translated to the end-users' languages, not left as if it were a literal file name. This dates all the way back to the first re-implementation of the "git commit" command in C (the scripted version did not have such wording in its error messages), in f5bbc322 (Port git commit to C., 2007-11-08). Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-17 | builtin/add.c: clean up die() messages | Naomi Ibe | 1 | -5/+5
As described in the CodingGuidelines document, a single-line message given to die() and its friends should not capitalize its first word and should not end with a full stop. Signed-off-by: Naomi Ibe <naomi.ibeh69@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-16 | doc/git-repack: don't mention nonexistent "--unpacked" option | Patrick Steinhardt | 1 | -5/+2
The documentation for geometric repacking mentions a "--unpacked" option that supposedly changes how loose objects are rolled up. This option has never existed, and the implied behaviour, namely to include all unpacked objects into the resulting packfile, is in fact the default behaviour. Correct the documentation to not mention this option. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-16 | doc/git-repack: fix syntax for `-g` shorthand option | Patrick Steinhardt | 1 | -1/+1
The `-g` switch is a shorthand for `--geometric=` and allows the user to specify the geometric factor. The documentation is wrong, though, and indicates that the syntax for the shorthand is `-g=<factor>`. In fact, the option must be specified without the equals sign, as `-g<factor>`. Fix the syntax accordingly. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-14 | t5319: make corrupted large-offset test more robust | Jeff King | 1 | -2/+4
The test t5319.88 ("reader bounds-checks large offset table") can fail intermittently. The failure mode looks like this:

1. An earlier test sets up "objects64", a directory that can be used to produce a midx with a corrupted large-offsets table. To get the large offsets, it corrupts the normal ".idx" file to have a fake large offset, and then builds a midx from that. That midx now has a large offset table, which is what we want. But we also have a .idx on disk that has a corrupted entry. We'll call the object with the corrupted large-offset "X".

2. In t5319.88, we further corrupt the midx by reducing the size of the large-offset chunk (because our goal is to make sure we do not do an out-of-bounds read on it).

3. We then enumerate all of the objects with "cat-file --batch-check --batch-all-objects", expecting to see a complaint when we try to show object X. We use --batch-all-objects because our objects64 repo doesn't actually have any refs (but if we check them all, one of them will be the failing one). The default batch-check format includes %(objecttype) and %(objectsize), both of which require us to access the actual pack data (and thus require looking at the offset).

4a. Usually, this succeeds. We try to output object X, do a lookup via the midx for the type/size lookup, and run into the corrupt large-offset table.

4b. But sometimes we hit a different error. If another object points to X as a delta base, then trying to find the type of that object requires walking the delta chain to the base entry (since only the base has the concrete type; deltas themselves are either OFS_DELTA or REF_DELTA). Normally this would not require separate offset lookups at all, as deltas are usually stored as OFS_DELTA, specifying the relative offset to the base. But the corrupt idx created in step 1 is done directly with "git pack-objects" and does not pass the --delta-base-offset option, meaning we have REF_DELTA entries! Those do have to consult an index to find the location of the base object, and they use the pack .idx to do this. The same pack .idx that we know is corrupted from step 1! Git does notice the error, but it does so by seeing the corrupt .idx file, not the corrupt midx file, and the error it reports is different, causing the test to fail.

The set of objects created in the test is deterministic. But the delta selection seems not to be (which is not too surprising, as it is multi-threaded). I have seen the failure in Windows CI but haven't reproduced it locally (not even with --stress). Re-running a failed Windows CI job tends to work. But when I download and examine the trash directory from a failed run, it shows a different set of deltas than I get locally. But the exact source of non-determinism isn't that important; our test should be robust against any order.

There are a few options to fix this:

a. It would be OK for the "objects64" setup to "unbreak" the .idx file after generating the midx. But then it would be hard for subsequent tests to reuse it, since it is the corrupted idx that forces the midx to have a large offset table.

b. The "objects64" setup could use --delta-base-offset. This would fix our problem, but earlier tests have many hard-coded offsets. Using OFS_DELTA would change the locations of objects in the pack (this might even be OK because I think most of the offsets are within the .idx file, but it seems brittle and I'm afraid to touch it).

c. Our cat-file output is in oid order by default. Since we store bases before deltas, if we went in pack order (using the "--unordered" flag), we'd always see our corrupt X before any delta which depends on it. But using "--unordered" means we skip the midx entirely. That makes sense, since it is just enumerating all of the packs, using the offsets found in their .idx files directly. So it doesn't work for our test.

d. We could ask directly about object X, rather than enumerating all of them. But that requires further hard-coding of the oid (both sha1 and sha256) of object X. I'd prefer not to introduce more brittleness.

e. We can use a --batch-check format that looks at the pack data, but doesn't have to chase deltas. The problem in this case is %(objecttype), which has to walk to the base. But %(objectsize) does not; we can get the value directly from the delta itself. Another option would be %(deltabase), where we report the REF_DELTA name but don't look at its data.

I've gone with option (e) here. It's kind of subtle, but it's simple and has no side effects.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-13 | The eighteenth batch | Junio C Hamano | 1 | -0/+21
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-13 | Prevent git from rehashing 4GiB files | Jason Hatton | 2 | -2/+34
The index stores file sizes using a uint32_t. This causes any file whose size is a multiple of 2^32 to have a cached file size of zero. Zero is a special value used by the racily-clean check. This causes git to rehash every file whose size is a multiple of 2^32 every time git status or git commit is run. This patch mitigates the problem by making all files that are a multiple of 2^32 appear to have a size of 1<<31 instead of zero. The value of 1<<31 is chosen to keep it as far away from zero as possible, to help prevent things getting mixed up with unpatched versions of git. An example would be a 2^32-sized file in the index of patched git. Patched git would save the size as 2^31 in the cache. An unpatched git would see that the file has changed in size and force it to rehash the file, which is safe. The file would have to grow or shrink by exactly 2^31 and retain all of its ctime, mtime, and other attributes for old git to not notice the change. This patch does not change the behavior of any file that is not an exact multiple of 2^32. Signed-off-by: Jason D. Hatton <jhatton@globalfinishing.com> Signed-off-by: brian m. carlson <bk2204@github.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
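A minimal sketch of the mitigation in C; the helper name is illustrative rather than a quote of the patch, and it assumes a 64-bit off_t:

    #include <stdint.h>
    #include <stdio.h>
    #include <sys/types.h>

    /*
     * The index only has 32 bits for the cached size, so a file whose
     * size is an exact multiple of 2^32 would truncate to 0, the
     * special "racily clean" value. Map such sizes to 1<<31 instead,
     * a value as far from zero as possible.
     */
    static uint32_t munge_cached_size(off_t st_size)
    {
            uint32_t sd_size = (uint32_t)st_size;

            if (!sd_size && st_size)
                    return 1U << 31; /* huge file, but do not pretend it is empty */
            return sd_size;
    }

    int main(void)
    {
            /* 4 GiB file: cached as 2147483648 instead of 0 */
            printf("%lu\n", (unsigned long)munge_cached_size((off_t)1 << 32));
            /* ordinary sizes are stored unchanged */
            printf("%lu\n", (unsigned long)munge_cached_size(12345));
            return 0;
    }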
2023-10-13 | t: add a test helper to truncate files | brian m. carlson | 4 | -0/+28
In a future commit, we're going to work with some large files which will be at least 4 GiB in size. To take advantage of the sparseness functionality on most Unix systems and avoid running the system out of disk, it would be convenient to use truncate(2) to simply create a sparse file of sufficient size. However, the GNU truncate(1) utility isn't portable, so let's write a tiny test helper that does the work for us. Signed-off-by: brian m. carlson <bk2204@github.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
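A sketch of what such a helper might look like; the real one lives under t/helper/, and the argument handling here is illustrative:

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /*
     * Create (if needed) and truncate a file to the requested size.
     * ftruncate(2) extends the file with a hole, so a multi-gigabyte
     * test file costs almost no disk on filesystems that support
     * sparse files.
     */
    int main(int argc, char **argv)
    {
            unsigned long long sz;
            int fd;

            if (argc != 3) {
                    fprintf(stderr, "usage: %s <file> <size>\n", argv[0]);
                    return 1;
            }
            sz = strtoull(argv[2], NULL, 0);
            fd = open(argv[1], O_WRONLY | O_CREAT, 0644);
            if (fd < 0 || ftruncate(fd, (off_t)sz) < 0) {
                    perror(argv[1]);
                    return 1;
            }
            close(fd);
            return 0;
    }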
2023-10-13 | attr: add attr.tree for setting the treeish to read attributes from | John Cai | 6 | -0/+95
44451a2 (attr: teach "--attr-source=<tree>" global option to "git", 2023-05-06) provided the ability to pass in a treeish as the attr source. In the context of serving Git repositories as bare repos, as we do at GitLab, however, it would be easier to point --attr-source at HEAD for all commands by setting it once. Add a new config variable attr.tree that allows this. Signed-off-by: John Cai <johncai86@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-13 | attr: read attributes from HEAD when bare repo | John Cai | 3 | -2/+23
The motivation for 44451a2e5e (attr: teach "--attr-source=<tree>" global option to "git", 2023-05-06) was to make it possible to use gitattributes with bare repositories. To make it easier to read gitattributes in bare repositories, however, let's just make HEAD:.gitattributes the default. This is in line with how mailmap works, 8c473cecfd (mailmap: default mailmap.blob in bare repositories, 2012-12-13). Signed-off-by: John Cai <johncai86@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-12 | The seventeenth batch | Junio C Hamano | 1 | -0/+4
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-12 | mailmap: change primary address for Derrick Stolee | Derrick Stolee | 1 | -3/+3
The previous primary address is no longer valid. Signed-off-by: Derrick Stolee <stolee@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-11 | stash: be careful what we store | Junio C Hamano | 2 | -0/+10
"git stash store" is meant to store what "git stash create" produces, as these two are implementation details of the end-user facing "git stash save" command. Even though it is clearly documented as such, users would try silly things like "git stash store HEAD" to render their stash unusable. Worse yet, because "git stash drop" does not allow such a stash entry to be removed, "git stash clear" would be the only way to recover from such a mishap. Reuse the logic that allows "drop" to refrain from working on such a stash entry to teach "store" to avoid storing an object that is not a stash entry in the first place. Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-11 | merge: introduce {copy|clear}_merge_options() | Junio C Hamano | 3 | -1/+22
When mostly the same set of options are to be used to perform multiple merges, one instance of the merge_options structure may want to be created and used by copying from the same template instance. We saw such a use recently in "git merge-tree". Let's make the pattern official by introducing copy_merge_options() as a supported way to make a copy of the structure, and also give clear_merge_options() to release any resources held by a copied instance. Currently we only make a shallow copy, so the former is a mere structure assignment while the latter is a no-op, but this may change in the future as the members of merge_options structure evolve. Suggested-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
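A sketch of the pattern, using a stand-in struct; the two function names come from the commit message, everything else here is illustrative:

    /* stand-in: the real merge_options has many more members */
    struct merge_options {
            const char *ancestor;
            int verbosity;
    };

    static void copy_merge_options(struct merge_options *dst,
                                   struct merge_options *src)
    {
            *dst = *src;    /* shallow copy for now */
    }

    static void clear_merge_options(struct merge_options *opt)
    {
            (void)opt;      /* nothing owned yet; placeholder for the future */
    }

    int main(void)
    {
            struct merge_options template = { "base", 2 };
            struct merge_options opts;

            copy_merge_options(&opts, &template);
            /* ... perform one merge using opts ... */
            clear_merge_options(&opts);
            return 0;
    }

Callers pair the two calls even though clearing is currently a no-op, so that deep-copied members can be added later without touching every call site.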
2023-10-10 | The sixteenth batch | Junio C Hamano | 1 | -0/+7
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-10 | doc/git-worktree: mention "refs/rewritten" as per-worktree refs | Patrick Steinhardt | 1 | -3/+4
Some references are special in the context of worktrees as they are considered to be per-worktree instead of shared across all of the worktrees. Most importantly, this includes "refs/worktree/", which has explicitly been designed so that users can create per-worktree refs. But there are also special references that have an associated meaning, like "refs/bisect/", which is used to track the state of git-bisect(1). These special per-worktree references are documented in git-worktree(1), but one instance is missing. In a9be29c9817 (sequencer: make refs generated by the `label` command worktree-local, 2018-04-25), we converted "refs/rewritten/" to be a per-worktree reference as well. These references are used by our sequencer infrastructure to generate labels for rebased commits. So in order to allow for multiple concurrent rebases to happen in different worktrees, these references need to be tracked per worktree. We forgot to update our documentation to mention these new per-worktree references, which is fixed by this patch. Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-09 | chunk-format: drop pair_chunk_unsafe() | Jeff King | 2 | -21/+0
There are no callers left, and we don't want anybody to add new ones (they should use the not-unsafe version instead). So let's drop the function. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-09 | commit-graph: detect out-of-order BIDX offsets | Jeff King | 2 | -0/+23
The BIDX chunk tells us the offsets at which each commit's Bloom filters can be found in the BDAT chunk. We compute the length of each filter by checking the offsets of neighbors and subtracting them. If the offsets are out of order, then we'll get a negative length, which we then store as a very large unsigned value. This can cause us to read out-of-bounds memory, as we access the hash data modulo "filter->len * BITS_PER_WORD". We can easily detect this case when loading the individual filters. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
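A sketch of the neighbor subtraction and the new check, with illustrative names rather than the actual git code:

    #include <stdint.h>
    #include <stdio.h>

    /*
     * Filter N's data spans [bidx[N-1], bidx[N]) in the BDAT chunk.
     * If a corrupt file has offsets out of order, the subtraction
     * would wrap to a huge unsigned length, so reject that case.
     */
    static int filter_len(const uint32_t *bidx, uint32_t pos, uint32_t *len)
    {
            uint32_t start = pos ? bidx[pos - 1] : 0;
            uint32_t end = bidx[pos];

            if (end < start)
                    return -1;      /* corrupt: offsets must not decrease */
            *len = end - start;
            return 0;
    }

    int main(void)
    {
            uint32_t bidx[] = { 8, 4 };     /* out of order: corrupt */
            uint32_t len;

            if (filter_len(bidx, 1, &len) < 0)
                    fprintf(stderr, "commit-graph: bloom filter offsets out of order\n");
            return 0;
    }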
2023-10-09 | commit-graph: check bounds when accessing BIDX chunk | Jeff King | 2 | -2/+23
We load the bloom_filter_indexes chunk using pair_chunk(), so we have no idea how big it is. This can lead to out-of-bounds reads if it is smaller than expected, since we index it based on the number of commits found elsewhere in the graph file. We can check the chunk size up front, like we do for CDAT and other chunks with one fixed-size record per commit. The test case demonstrates the problem. It actually won't segfault, because we end up reading random data from the follow-on chunk (BDAT in this case), and the bounds checks added in the previous patch complain. But this is by no means assured, and you can craft a commit-graph file with BIDX at the end (or a smaller BDAT) that does segfault. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
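The up-front check for a chunk with one fixed-size record per commit can be a single comparison; a sketch with illustrative names (the real code uses git's chunk-format and st_mult() helpers):

    #include <stdint.h>
    #include <stddef.h>

    /* record is sizeof(uint32_t) for BIDX; assumed nonzero */
    static int check_chunk_size(size_t chunk_size, uint32_t n_commits,
                                size_t record)
    {
            if (n_commits > SIZE_MAX / record)
                    return -1;      /* multiplication would overflow */
            if (chunk_size != (size_t)n_commits * record)
                    return -1;      /* truncated or oversized: corrupt */
            return 0;
    }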
2023-10-09 | commit-graph: check bounds when accessing BDAT chunk | Jeff King | 4 | -0/+63
When loading Bloom filters from a commit-graph file, we use the offset values in the BIDX chunk to index into the memory mapped for the BDAT chunk. But since we don't record how big the BDAT chunk is, we just trust that the BIDX offsets won't cause us to read outside of the chunk memory. A corrupted or malicious commit-graph file will cause us to segfault (in practice this isn't a very interesting attack, since commit-graph files are local-only, and the worst case is an out-of-bounds read).

We can't fix this by checking the chunk size during parsing, since the data in the BDAT chunk doesn't have a fixed size (that's why we need the BIDX in the first place). So we'll fix it in two parts:

1. Record the BDAT chunk size during parsing, and then later check that the BIDX offsets we look up are within bounds.

2. Because the offsets are relative to the end of the BDAT header, we must also make sure that the BDAT chunk is at least as large as the expected header size. Otherwise, we overflow when trying to move past the header, even for an offset of "0". We can check this early, during the parsing stage.

The error messages are rather verbose, but since this is not something you'd expect to see outside of severe bugs or corruption, it makes sense to err on the side of too many details. Sadly we can't mention the filename during the chunk-parsing stage, as we haven't set g->filename at this point, nor passed it down through the stack.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
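A sketch of the two-part check above, assuming a 12-byte BDAT header (three 32-bit fields); the names and the header constant are illustrative:

    #include <stddef.h>

    #define BLOOMDATA_HEADER_SIZE 12    /* version, nr hashes, bits per entry */

    static int bdat_offset_ok(size_t bdat_size, size_t offset, size_t len)
    {
            /* part 2: the chunk must at least contain the header */
            if (bdat_size < BLOOMDATA_HEADER_SIZE)
                    return 0;
            /* part 1: the filter bytes must lie inside the post-header region */
            return offset <= bdat_size - BLOOMDATA_HEADER_SIZE &&
                   len <= bdat_size - BLOOMDATA_HEADER_SIZE - offset;
    }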
2023-10-09 | commit-graph: bounds-check generation overflow chunk | Jeff King | 3 | -3/+18
If the generation entry in a commit-graph doesn't fit, we instead insert an offset into a generation overflow chunk. But since we don't record the size of the chunk, we may read outside the chunk if the offset we find on disk is malicious or corrupted. We can't check the size of the chunk up-front; it will vary based on how many entries need overflow. So instead, we'll do a bounds-check before accessing the chunk memory. Unfortunately there is no error-return from this function, so we'll just have to die(), which is what it does for other forms of corruption. As with other cases, we can drop the st_mult() call, since we know our bounds-checked value will fit within a size_t. Before this patch, the test here actually "works" because we read garbage data from the next chunk. And since that garbage data happens not to provide a generation number which changes the output, it appears to work. We could construct a case that actually segfaults or produces wrong output, but it would be a bit tricky. For our purposes it's sufficient to check that we've detected the bounds error. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-09 | commit-graph: check size of generations chunk | Jeff King | 2 | -2/+20
We neither check nor record the size of the generations chunk we parse from a commit-graph file. This should have one uint32_t for each commit in the file; if it is smaller (due to corruption, etc), we may read outside the mapped memory. The included test segfaults without this patch, as it shrinks the size considerably (and the chunk is near the end of the file, so we read off the end of the array rather than accidentally reading another chunk). We can fix this by checking the size up front (like we do for other fixed-size chunks, like CDAT). Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-09 | commit-graph: bounds-check base graphs chunk | Jeff King | 3 | -1/+22
When we are loading a commit-graph chain, we check that each slice of the chain points to the appropriate set of base graphs via its BASE chunk. But since we don't record the size of the chunk, we may access out-of-bounds memory if the file is corrupted. Since we know the number of entries we expect to find (based on the position within the commit-graph-chain file), we can just check the size up front. In theory this would also let us drop the st_mult() call a few lines later when we actually access the memory, since we know that the computed offset will fit in a size_t. But because the operands "g->hash_len" and "n" have types "unsigned char" and "int", we'd have to cast to size_t first. Leaving the st_mult() does that cast, and makes it more obvious that we don't have an overflow problem. Note that the test does not actually segfault before this patch, since it just reads garbage from the chunk after BASE (and indeed, it even rejects the file because that garbage does not have the expected hash value). You could construct a file with BASE at the end that did segfault, but corrupting the existing one is easy, and we can check stderr for the expected message. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-09 | commit-graph: detect out-of-bounds extra-edges pointers | Jeff King | 3 | -6/+23
If an entry in a commit-graph file has more than 2 parents, the fixed-size parent fields instead point to an offset within an "extra edges" chunk. We blindly follow these, assuming that the chunk is present and sufficiently large; this can lead to an out-of-bounds read for a corrupt or malicious file. We can fix this by recording the size of the chunk and adding a bounds-check in fill_commit_in_graph(). There are a few tricky bits:

1. We'll switch from working with a pointer to an offset. This makes some corner cases just fall out naturally:

   a. If we did not find an EDGE chunk at all, our size will correctly be zero (so everything is "out of bounds").

   b. Comparing "size / 4" lets us make sure we have at least 4 bytes to read, and we never compute a pointer more than one element past the end of the array (computing a larger pointer is probably OK in practice, but is technically undefined behavior).

   c. The current code casts to "uint32_t *". Replacing it with an offset avoids any comparison between different types of pointer (since the chunk is stored as "unsigned char *").

2. This is the first case in which fill_commit_in_graph() may return anything but success. We need to make sure to roll back the "parsed" flag (and any parents we might have added before running out of buffer) so that the caller can cleanly fall back to loading the commit object itself. It's a little non-trivial to do this, and we might benefit from factoring it out. But we can wait on that until we actually see a second case where we return an error.

As a bonus, this lets us drop the st_mult() call. Since we've already done a bounds check, we know there won't be any integer overflow (it would imply our buffer is larger than a size_t can hold).

The included test does not actually segfault before this patch (though you could construct a case where it does). Instead, it reads garbage from the next chunk which results in it complaining about a bogus parent id. This is sufficient for our needs, though (we care that the fallback succeeds, and that stderr mentions the out-of-bounds read).

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
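A sketch of the offset-based access from item 1b above, with illustrative names; the real code reads big-endian values via helpers, spelled out here byte by byte:

    #include <stdint.h>
    #include <stddef.h>

    /*
     * Comparing the offset against size / 4 guarantees at least 4
     * readable bytes and never computes a pointer past the end of
     * the chunk.
     */
    static int next_parent(const unsigned char *edges, size_t edges_size,
                           size_t *pos, uint32_t *parent)
    {
            if (*pos >= edges_size / 4)
                    return -1;      /* out of bounds: corrupt graph */
            *parent = (uint32_t)edges[4 * *pos] << 24 |
                      (uint32_t)edges[4 * *pos + 1] << 16 |
                      (uint32_t)edges[4 * *pos + 2] << 8 |
                      (uint32_t)edges[4 * *pos + 3];
            (*pos)++;
            return 0;
    }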
2023-10-09 | commit-graph: check size of commit data chunk | Jeff King | 2 | -1/+20
We expect a commit-graph file to have a fixed-size data record for each commit in the file (and we know the number of commits to expect from the size of the lookup table). If we encounter a file where this is too small, we'll look past the end of the chunk (and possibly even off the mapped memory). We can fix this by checking the size up front when we record the pointer. The included test doesn't segfault, since it ends up reading bytes from another chunk. But it produces nonsense results, since the values it reads are garbage. Our test notices this by comparing the output to a non-corrupted run of the same command (and of course we also check that the expected error is printed to stderr). Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-09 | midx: check size of revindex chunk | Jeff King | 4 | -2/+32
When we load a revindex from disk, we check the size of the file compared to the number of objects we expect it to have. But when we use a RIDX chunk stored directly in the midx, we just access the memory directly. This can lead to out-of-bounds memory access for a corrupted or malicious multi-pack-index file. We can catch this by recording the RIDX chunk size, and then checking it against the expected size when we "load" the revindex. Note that this check is much simpler than the one that load_revindex_from_disk() does, because we just have the data array with no header (so we do not need to account for the header size, and nor do we need to bother validating the header values). The test confirms both that we catch this case, and that we continue the process (the revindex is required to use the midx bitmaps, but we fallback to a non-bitmap traversal). Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-09 | midx: bounds-check large offset chunk | Jeff King | 3 | -3/+26
When we see a large offset bit in the regular midx offset table, we use the entry as an index into a separate large offset table (just like a pack idx does). But we don't bounds-check the access to that large offset table (nor even record its size when we parse the chunk!). The equivalent code for a regular pack idx is in check_pack_index_ptr(). But things are a bit simpler here because of the chunked format: we can just check our array index directly. As a bonus, we can get rid of the st_mult() here. If our array bounds-check is successful, then we know that the result will fit in a size_t (and the bounds check uses a division to avoid overflow entirely). Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
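The bounds check itself can be one comparison; a sketch with illustrative names, where the division is what makes the overflow-avoiding st_mult() unnecessary:

    #include <stdint.h>
    #include <stddef.h>

    /* each large-offset entry is a uint64_t */
    static int large_offset_ok(size_t chunk_size, uint32_t nth_large_offset)
    {
            return nth_large_offset < chunk_size / sizeof(uint64_t);
    }

On failure the midx reader can report a corrupt file instead of indexing past the mapped memory.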
2023-10-09 | midx: check size of object offset chunk | Jeff King | 2 | -1/+24
The object offset chunk has one fixed-size entry for each object in the midx. But since we don't check its size, we may access out-of-bounds memory if we see a corrupt or malicious midx file. Since the entries are fixed-size, the total length can be known up-front, and we can just check it while parsing the chunk (this is similar to what we do when opening pack idx files, which contain a similar offset table). Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
2023-10-09 | midx: enforce chunk alignment on reading | Jeff King | 5 | -4/+26
The midx reader assumes chunks are aligned to a 4-byte boundary: we treat the fanout chunk as an array of uint32_t, indexing it to feed the results to ntohl(). Without aligning the chunks, we may violate the CPU's alignment constraints. Though many platforms allow this, some do not. And certainly UBSan will complain, since it is undefined behavior.

Even though most chunks are naturally 4-byte-aligned (because they are storing uint32_t or larger types), PNAM is not. It stores NUL-terminated pack names, so you can have a valid chunk with any length. The writing side handles this by 4-byte-aligning the chunk, introducing a few extra NULs as necessary. But since we don't check this on the reading side, we may end up with a misaligned fanout and trigger the undefined behavior.

We have two options here:

1. Swap out ntohl(fanout[i]) for get_be32(fanout+i) everywhere. The latter handles alignment itself. It's possible that it's slightly slower (though in practice I'm not sure how true that is, especially for these code paths which then go on to do a binary search).

2. Enforce the alignment when reading the chunks. This is easy to do, since the table-of-contents reader can check it in one spot.

I went with the second option here, just because it places less burden on maintenance going forward (it is OK to continue using ntohl), and we know it can't have any performance impact on the actual reads.

The commit-graph code uses the same chunk API. It's usually also 4-byte aligned, but some chunks are not (like Bloom filter BDAT chunks). So we'll pass "1" here to allow any alignment. It doesn't suffer from the same problem as midx with its fanout because the fanout chunk is always the first (and the rest of the format dictates that the first chunk will start aligned).

The new test shows the effect on a midx with a misaligned PNAM chunk. Note that the midx-reading code treats chunk-toc errors as soft, falling back to the non-midx path rather than calling die(), as we do for other parsing errors. Arguably we should make all of these behave the same, but that's out of scope for this patch. For now the test just expects the fallback behavior.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
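A sketch of option 2 above, the per-chunk check the table-of-contents reader might perform (illustrative names):

    #include <stdint.h>

    /* expected_alignment: 4 for the midx reader, 1 for the commit-graph */
    static int chunk_offset_ok(uint64_t chunk_offset,
                               unsigned expected_alignment)
    {
            return expected_alignment &&
                   chunk_offset % expected_alignment == 0;
    }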
2023-10-09 | midx: check size of pack names chunk | Jeff King | 3 | -2/+21
We parse the pack-name chunk as a series of NUL-terminated strings. But since we don't look at the chunk size, there's nothing to guarantee that we don't parse off the end of the chunk (or even off the end of the mapped file). We can record the length, and then as we parse make sure that we never walk past it. The new test exercises the case, though note that it does not actually segfault before this patch. It hits a NUL byte somewhere in one of the other chunks, and comes up with a garbage pack name. You could construct one that reads out-of-bounds (e.g., a PNAM chunk at the end of file), but this case is simple and sufficient to check that we detect the problem. Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>
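A sketch of length-bounded PNAM parsing, with illustrative names; memchr() is limited to the remaining chunk size, so an unterminated name is detected instead of walking off the mapped file:

    #include <stddef.h>
    #include <string.h>

    static const char *next_pack_name(const char *chunk, size_t chunk_size,
                                      size_t *pos)
    {
            const char *name, *nul;

            if (*pos >= chunk_size)
                    return NULL;    /* consumed the whole chunk */
            name = chunk + *pos;
            nul = memchr(name, '\0', chunk_size - *pos);
            if (!nul)
                    return NULL;    /* unterminated name: corrupt chunk */
            *pos = (size_t)(nul - chunk) + 1;
            return name;
    }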
2023-10-09 | commit-graph: check consistency of fanout table | Jeff King | 3 | -4/+43
We use bsearch_hash() to look up items in the oid index of a commit-graph. It also has a fanout table to reduce the initial range in which we'll search. But since the fanout comes from the on-disk file, a corrupted or malicious file can cause us to look outside of the allocated index memory.

One solution here would be to pass the total table size to bsearch_hash(), which could then bounds-check the values it reads from the fanout. But there's an inexpensive up-front check we can do, and it's the same one used by the midx and pack idx code (both of which likewise have fanout tables and use bsearch_hash(), but are not affected by this bug):

1. We can check the value of the final fanout entry against the size of the table we got from the index chunk. These must always match, since the fanout is just slicing up the index. As a side note, the midx and pack idx code compute it the other way around: they use the final fanout value as the object count, and check the index size against it. Either is valid; if they disagree we cannot know which is wrong (a corrupted fanout value, or a too-small table of oids).

2. We can quickly scan the fanout table to make sure it is monotonically increasing. If it is, then we know that every value is less than or equal to the final value, and therefore less than or equal to the table size. It would also be sufficient to just check that each fanout value is smaller than the final one, but the midx and pack idx code both do a full monotonicity check. It's the same cost, and it catches some other corruptions (though not all; the checks done by "commit-graph verify" are more complete but more expensive, and our goal here is to be fast and memory-safe).

There are two new tests. One just checks the final fanout value (this is the mirror image of the "too small oid lookup" case added for the midx in the previous commit; it's flipped here because commit-graph considers the oid lookup chunk to be the source of truth). The other actually creates a fanout with many out-of-bounds entries, and prior to this patch, it does cause the segfault you'd expect. But note that the error is not "your fanout entry is out-of-bounds", but rather "fanout value out of order". That's because we leave the final fanout value in place (to get past the table size check), making the index non-monotonic (the second-to-last entry is big, but the last one must remain small to match the actual table).

We need adjustments to a few existing tests, as well:

- an earlier test in t5318 corrupts the fanout and runs "commit-graph verify". Its message is now changed, since we catch the problem earlier (during the load step, rather than the careful validation step).

- in t5324, we test that "commit-graph verify --shallow" does not do expensive verification on the base file of the chain. But the corruption it uses (munging a byte at offset 1000) happens to be in the middle of the fanout table. And now we detect that problem in the cheaper checks that are performed for every part of the graph. We'll push this back to offset 1500, which is only caught by the more expensive checksum validation.

- likewise, there's a later test in t5324 which munges an offset 100 bytes into a file (also in the fanout table) that is referenced by an alternates file. So we now find that corruption during the load step, rather than the verification step. At the very least we need to change the error message (like the case above in t5318). But it is probably good to make sure we handle all parts of the verification even for alternate graph files. So let's likewise corrupt byte 1500 and make sure we find the invalid checksum.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
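A sketch of the two checks above, with illustrative names; the real code reads big-endian fanout entries via helpers:

    #include <stdint.h>

    static int fanout_ok(const uint32_t fanout[256], uint32_t table_entries)
    {
            int i;

            /* check 1: the final entry must match the oid lookup size */
            if (fanout[255] != table_entries)
                    return 0;
            /* check 2: monotonicity bounds every entry by the final one */
            for (i = 0; i < 255; i++)
                    if (fanout[i] > fanout[i + 1])
                            return 0;
            return 1;
    }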
2023-10-09 | midx: check size of oid lookup chunk | Jeff King | 2 | -3/+25
When reading an on-disk multi-pack-index, we take the number of objects in the midx from the final value of the fanout table. But we just blindly assume that the chunk containing the actual oid entries is the correct size. This can lead to us reading out-of-bounds memory if the lookup chunk is too small (or if the fanout is corrupted; when they don't agree we cannot tell which one is wrong). Note that we bump the assignment of m->num_objects into the fanout parser callback, so that it's set when we parse the lookup table (otherwise we'd have to manually record the lookup table size and check it later). Signed-off-by: Jeff King <peff@peff.net> Signed-off-by: Junio C Hamano <gitster@pobox.com>