path: root/t/t5551-http-fetch-smart.sh
author     Derrick Stolee <derrickstolee@github.com>  2024-09-20 00:00:21 +0000
committer  Junio C Hamano <gitster@pobox.com>  2024-09-20 14:44:31 -0700
commit     719399b57b3db8471852d86f96ab5db4a40d43ba (patch)
tree       a468fb60b1d718c0e7909646ccc6c611055743f7  /t/t5551-http-fetch-smart.sh
parent     Start preparing for Git 2.46.2 (diff)
download   git-719399b57b3db8471852d86f96ab5db4a40d43ba.tar.gz
           git-719399b57b3db8471852d86f96ab5db4a40d43ba.zip
credential: add new interactive config option
When scripts or background maintenance wish to perform HTTP(S) requests, there is a risk that our stored credentials might be invalid. At the moment, this causes the credential helper to ping the user and block the process. Even if the credential helper does not ping the user, Git falls back to the 'askpass' method, which includes a direct ping to the user via the terminal.

Even setting the 'core.askPass' config to something like 'echo' will cause Git to fall back to a terminal prompt. It uses git_terminal_prompt(), which finds the terminal from the environment and ignores whether stdin has been redirected. This can also block the process awaiting input.

Create a new config option to prevent user interaction, favoring a failure over a blocked process.

The chosen name, 'credential.interactive', is taken from the config option that Git Credential Manager already uses to avoid user interactivity, so there is already one credential helper that integrates with this option. However, older versions of Git Credential Manager also accepted other string values, including 'auto', 'never', and 'always'. The modern use is a boolean value, but we should still be careful that some users could have these non-booleans. Further, we should respect 'never' the same as 'false'. This is respected by the implementation and test, but not mentioned in the documentation.

The implementation for the Git interactions takes place within credential_getpass(). The method prototype is modified to return an 'int' instead of 'void'. This allows us to detect that no attempt was made to fill the given credential, changing the single caller slightly.

Also, a new trace2 region is added around the interactive portion of the credential request. This provides a way to measure the amount of time spent in that region for commands that _are_ interactive. It also provides a convenient way to test that the config option works with 'test_region'.

Signed-off-by: Derrick Stolee <stolee@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
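For example, a script or scheduled maintenance task might combine the new option with a trace2 event log roughly like this (a sketch; the file name is arbitrary, and the grep pattern assumes the usual trace2 event format in which regions carry a category and a label):

    # Fail fast instead of prompting when stored credentials are invalid.
    GIT_TRACE2_EVENT="$PWD/fetch-trace.json"
    export GIT_TRACE2_EVENT
    git -c credential.interactive=false fetch --quiet origin ||
        echo "fetch failed; not prompting for credentials" >&2

    # 'never' is expected to behave the same as 'false'.
    git -c credential.interactive=never fetch --quiet origin || true

    # With interaction disabled, no credential/interactive region should
    # appear in the event stream.
    ! grep '"category":"credential".*"label":"interactive"' "$GIT_TRACE2_EVENT"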
Diffstat (limited to 't/t5551-http-fetch-smart.sh')
-rwxr-xr-x  t/t5551-http-fetch-smart.sh  22
1 file changed, 22 insertions, 0 deletions
diff --git a/t/t5551-http-fetch-smart.sh b/t/t5551-http-fetch-smart.sh
index 7b5ab0eae1..ceb3336a5c 100755
--- a/t/t5551-http-fetch-smart.sh
+++ b/t/t5551-http-fetch-smart.sh
@@ -186,6 +186,28 @@ test_expect_success 'clone from password-protected repository' '
test_cmp expect actual
'
+test_expect_success 'credential.interactive=false skips askpass' '
+ set_askpass bogus nonsense &&
+ (
+ GIT_TRACE2_EVENT="$(pwd)/interactive-true" &&
+ export GIT_TRACE2_EVENT &&
+ test_must_fail git clone --bare "$HTTPD_URL/auth/smart/repo.git" interactive-true-dir &&
+ test_region credential interactive interactive-true &&
+
+ GIT_TRACE2_EVENT="$(pwd)/interactive-false" &&
+ export GIT_TRACE2_EVENT &&
+ test_must_fail git -c credential.interactive=false \
+ clone --bare "$HTTPD_URL/auth/smart/repo.git" interactive-false-dir &&
+ test_region ! credential interactive interactive-false &&
+
+ GIT_TRACE2_EVENT="$(pwd)/interactive-never" &&
+ export GIT_TRACE2_EVENT &&
+ test_must_fail git -c credential.interactive=never \
+ clone --bare "$HTTPD_URL/auth/smart/repo.git" interactive-never-dir &&
+ test_region ! credential interactive interactive-never
+ )
+'
+
test_expect_success 'clone from auth-only-for-push repository' '
echo two >expect &&
set_askpass wrong &&
ment the user gets to correct their bogus URL is a quiet warning. This is inconsistent with the check we perform in fsck, where any use of such a URL as a submodule is an error.

When we see such a bogus URL, let's not try to be nice and continue without helpers. Instead, die() immediately. This is simpler and obviously safe, and there's very little chance of disrupting a normal workflow.

It's _possible_ that somebody has a legitimate URL with a raw newline in it. It already wouldn't work with credential helpers, so this patch steps that up from an inconvenience to "we will refuse to work with it at all". If such a case does exist, we should figure out a way to work with it (especially if the newline is only in the path component, which we normally don't even pass to helpers). But until we see a real report, we're better off being defensive.

Reported-by: Carlo Arenas <carenas@gmail.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>

2020-04-19  fsck: convert gitmodules url to URL passed to curl  (Jonathan Nieder; 2 files, -5/+118)

In 07259e74ec1 (fsck: detect gitmodules URLs with embedded newlines, 2020-03-11), git fsck learned to check whether URLs in .gitmodules could be understood by the credential machinery when they are handled by git-remote-curl. However, the check is overbroad: it checks all URLs instead of only URLs that would be passed to git-remote-curl. In principle a git:// or file:/// URL does not need to follow the same conventions as an http:// URL; in particular, git:// and file:// protocols are not susceptible to issues in the credential API because they do not support attaching credentials.

In the HTTP case, the URL in .gitmodules does not always match the URL that would be passed to git-remote-curl and the credential machinery: Git's URL syntax allows specifying a remote helper followed by a "::" delimiter and a URL to be passed to it, so that

    git ls-remote http::https://example.com/repo.git

invokes git-remote-http with https://example.com/repo.git as its URL argument. With today's checks, that distinction does not make a difference, but for a check we are about to introduce (for empty URL schemes) it will matter.

.gitmodules files also support relative URLs. To ensure coverage for the https-based embedded-newline attack, urldecode and check them directly for embedded newlines.

Helped-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Reviewed-by: Jeff King <peff@peff.net>

2020-04-19  credential: refuse to operate when missing host or protocol  (Jeff King; 2 files, -14/+40)

The credential helper protocol was designed to be very flexible: the fields it takes as input are treated as a pattern, and any missing fields are taken as wildcards. This allows unusual things like:

    echo protocol=https | git credential reject

to delete all stored https credentials (assuming the helpers themselves treat the input that way).

But when helpers are invoked automatically by Git, this flexibility works against us. If for whatever reason we don't have a "host" field, then we'd match _any_ host. When you're filling a credential to send to a remote server, this is almost certainly not what you want.

Prevent this at the layer that writes to the credential helper. Add a check to the credential API that the host and protocol are always passed in, and add an assertion to the credential_write function that speaks credential helper protocol, to be doubly sure.
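For illustration, the effect of requiring host and protocol can be seen with the `git credential` plumbing (a sketch; the exact error text of the rejected case is not quoted here, and with no helper configured the first command will prompt for a username and password):

    # A fully-specified description names both protocol and host:
    printf 'protocol=https\nhost=example.com\n\n' | git credential fill

    # A partial description that omits the host is no longer treated as a
    # match-anything wildcard; it is refused instead:
    printf 'protocol=https\n\n' | git credential fill    # expected to fail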
There are a few ways this can be triggered in practice:

 - the "git credential" command passes along arbitrary credential parameters it reads from stdin

 - until the previous patch, when the host field of a URL is empty, we would leave it unset (rather than setting it to the empty string)

 - a URL like "example.com/foo.git" is treated by curl as if "http://" was present, but our parser sees it as a non-URL and leaves all fields unset

 - the recent fix for URLs with embedded newlines blanks the URL but otherwise continues. Rather than having the desired effect of looking up no credential at all, many helpers will return _any_ credential.

Our earlier test for an embedded newline didn't catch this because it only checked that the credential was cleared, but didn't configure an actual helper. Configuring the "verbatim" helper in the test would show that it is invoked (it's obviously a silly helper which doesn't look at its input, but the point is that it shouldn't be run at all). Since we're switching this case to die(), we don't need to bother with a helper. We can see the new behavior just by checking that the operation fails.

We'll add new tests covering partial input as well (these can be triggered through various means with url-parsing, but it's simpler to just check them directly, as we know we are covered even if the url parser changes behavior in the future).

[jn: changed to die() instead of logging and showing a manual username/password prompt]

Reported-by: Carlo Arenas <carenas@gmail.com>
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>

2020-04-19  credential: parse URL without host as empty host, not unset  (Jeff King; 3 files, -2/+19)

We may feed a URL like "cert:///path/to/cert.pem" into the credential machinery to get the key for a client-side certificate. That credential has no hostname field, which is about to be disallowed (to avoid confusion with protocols where a helper _would_ expect a hostname). This means as of the next patch, credential helpers won't work for unlocking certs.

Let's fix that by doing two things:

 - when we parse a url with an empty host, set the host field to the empty string (asking only to match stored entries with an empty host) rather than NULL (asking to match _any_ host)

 - when we build a cert:// credential by hand, similarly assign an empty string

It's the latter that is more likely to impact real users in practice, since it's what's used for http connections. But we don't have good infrastructure to test it. The url-parsing version will help anybody using git-credential in a script, and is easy to test.

Signed-off-by: Jeff King <peff@peff.net>
Reviewed-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>

2020-04-19  t0300: use more realistic inputs  (Jeff King; 1 file, -4/+85)

Many of the tests in t0300 give partial inputs to git-credential, omitting a protocol or hostname. We're checking only high-level things like whether and how helpers are invoked at all, and we don't care about specific hosts. However, in preparation for tightening up the rules about when we're willing to run a helper, let's start using input that's a bit more realistic: pretend as if http://example.com is being examined.

This shouldn't change the point of any of the tests, but do note we have to adjust the expected output to accommodate this (filling a credential will repeat back the protocol/host fields to stdout, and the helper debug messages and askpass prompt will change on stderr).

Signed-off-by: Jeff King <peff@peff.net>
Reviewed-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
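The "verbatim" helper mentioned above (and the "dump" script referenced in the next entry) are test-only scripts, but a minimal logging helper in the same spirit can be sketched like this (an illustrative stand-in, not the helper shipped in t/lib-credential.sh; the log path is arbitrary):

    # A toy credential helper that records each operation and the
    # description it was fed, so a test can tell whether it was invoked.
    # The '!' prefix tells Git to run the value as shell code; the
    # operation (get/store/erase) is appended as an argument.
    git config credential.helper \
        '!f() { echo "invoked: $*" >>/tmp/credential-helper.log; cat >>/tmp/credential-helper.log; }; f'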
2020-04-19  t0300: make "quit" helper more realistic  (Jeff King; 1 file, -3/+13)

We test a toy credential helper that writes "quit=1" and confirms that we stop running other helpers. However, that helper is unrealistic in that it does not bother to read its stdin at all.

For now we don't send any input to it, because we feed git-credential a blank credential. But that will change in the next patch, which will cause this test to racily fail, as git-credential will get SIGPIPE writing to the helper rather than exiting because it was asked to.

Let's make this one-off helper more like our other sample helpers, and have it source the "dump" script. That will read stdin, fixing the SIGPIPE problem. But it will also write what it sees to stderr. We can make the test more robust by checking that output, which confirms that we do run the quit helper, don't run any other helpers, and exit for the reason we expected.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>

2020-03-17  Git 2.25.2 (tag: v2.25.2)  (Junio C Hamano; 3 files, -2/+62)

Signed-off-by: Junio C Hamano <gitster@pobox.com>

2020-03-17  unicode: update the width tables to Unicode 13.0  (Beat Bolli; 1 file, -16/+27)

Now that Unicode 13.0 has been announced[0], update the character width tables to the new version.

[0] https://home.unicode.org/announcing-the-unicode-standard-version-13-0/

Signed-off-by: Beat Bolli <dev+git@drbeat.li>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2020-03-17  Git 2.17.4 (tag: v2.17.4)  (Junio C Hamano; 3 files, -2/+18)

Signed-off-by: Junio C Hamano <gitster@pobox.com>

2020-03-15  prefix_path: show gitdir if worktree unavailable  (Emily Shaffer; 3 files, -4/+50)

If there is no worktree at present, we can still hint the user about Git's current directory by showing them the absolute path to the Git directory. Even though the Git directory doesn't make it as easy to locate the worktree in question, it can still help a user figure out what's going on while developing a script.

This fixes a segmentation fault introduced in e0020b2f ("prefix_path: show gitdir when arg is outside repo", 2020-02-14).

Signed-off-by: Emily Shaffer <emilyshaffer@google.com>
[jc: added minimum tests, with help from Szeder Gábor]
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2020-03-12  fsck: detect gitmodules URLs with embedded newlines  (Jeff King; 2 files, -2/+32)

The credential protocol can't handle values with newlines. We already detect and block any such URLs from being used with credential helpers, but let's also add an fsck check to detect and block gitmodules files with such URLs. That will let us notice the problem earlier when transfer.fsckObjects is turned on. And in particular it will prevent bad objects from spreading, which may protect downstream users running older versions of Git.

We'll file this under the existing gitmodulesUrl flag, which covers URLs with option injection. There's really no need to distinguish the exact flaw in the URL in this context. Likewise, I've expanded the description of t7416 to cover all types of bogus URLs.

2020-03-12  credential: detect unrepresentable values when parsing urls  (Jeff King; 3 files, -4/+60)

The credential protocol can't represent newlines in values, but URLs can embed percent-encoded newlines in various components. A previous commit taught the low-level writing routines to die() when encountering this, but we can be a little friendlier to the user by detecting them earlier and handling them gracefully.
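As a concrete illustration of the kind of input in question, the `url` attribute understood by `git credential` can carry such a percent-encoded newline (a sketch; depending on which of these patches a given Git version carries, the credential is either blanked with a warning or rejected outright):

    # %0a decodes to a newline inside the host component.
    printf 'url=https://exam%%0aple.com/repo.git\n\n' | git credential fill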
This patch teaches credential_from_url() to notice such components, issue a warning, and blank the credential (which will generally result in prompting the user for a username and password). We blank the whole credential in this case. Another option would be to blank only the invalid component. However, we're probably better off not feeding a partially-parsed URL result to a credential helper. We don't know how a given helper would handle it, so we're better off to err on the side of matching nothing rather than something unexpected.

The die() call in credential_write() is _probably_ impossible to reach after this patch. Values should end up in credential structs only by URL parsing (which is covered here), or by reading credential protocol input (which by definition cannot read a newline into a value). But we should definitely keep the low-level check, as it's our final and most accurate line of defense against protocol injection attacks. Arguably it could become a BUG(), but it probably doesn't matter much either way.

Note that the public interface of credential_from_url() grows a little more than we need here. We'll use the extra flexibility in a future patch to help fsck catch these cases.

2020-03-12  t/lib-credential: use test_i18ncmp to check stderr  (Jeff King; 1 file, -1/+1)

The credential tests have a "check" function which feeds some input to git-credential and checks the stdout and stderr. We look for exact matches in the output. For stdout, this makes sense; the output is the credential protocol. But for stderr, we may be showing various diagnostic messages, or the prompts fed to the askpass program, which could be translated. Let's mark them as such.

2020-03-12  credential: avoid writing values with newlines  (Jeff King; 2 files, -0/+8)

The credential protocol that we use to speak to helpers can't represent values with newlines in them. This was an intentional design choice to keep the protocol simple, since none of the values we pass should generally have newlines.

However, if we _do_ encounter a newline in a value, we blindly transmit it in credential_write(). Such values may break the protocol syntax, or worse, inject new valid lines into the protocol stream.

The most likely way for a newline to end up in a credential struct is by decoding a URL with a percent-encoded newline. However, since the bug occurs at the moment we write the value to the protocol, we'll catch it there. That should leave no possibility of accidentally missing a code path that can trigger the problem.

At this level of the code we have little choice but to die(). However, since we'd not ever expect to see this case outside of a malicious URL, that's an acceptable outcome.

Reported-by: Felix Wilhelm <fwilhelm@google.com>

2020-03-02  show_one_mergetag: print non-parent in hex form.  (Harald van Dijk; 2 files, -1/+21)

When a mergetag names a non-parent, which can occur after a shallow clone, its hash was previously printed as raw data. Print it in hex form instead.

Signed-off-by: Harald van Dijk <harald@gigawatt.nl>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2020-02-28  Revert "gpg-interface: prefer check_signature() for GPG verification"  (Junio C Hamano; 4 files, -72/+75)

This reverts commit 72b006f4bfd30b7c5037c163efaf279ab65bea9c, which breaks the end-user experience when merging a signed tag without having the public key. We should report "can't check because we have no public key", but the code with this change claimed that there was no signature.
2020-02-27  mingw: workaround for hangs when sending STDIN  (Alexandr Miloslavskiy; 1 file, -28/+3)

Explanation
-----------

The problem here is a flawed `poll()` implementation. When it tries to see if the pipe can be written to without blocking, it eventually calls `NtQueryInformationFile()` and tests `WriteQuotaAvailable`. However, the meaning of quota was misunderstood. The value of quota is reduced when either some data was written to a pipe, *or* there is a pending read on the pipe. Therefore, if there is a pending read of size >= the pipe's buffer size, poll() will think that the pipe is not writable and will hang forever, which usually means deadlocking both pipe users.

I have studied the problem and found that Windows pipes track two values: `QuotaUsed` and `BytesInQueue`. The code in `poll()` apparently wants to know `BytesInQueue` instead of quota. Unfortunately, `BytesInQueue` can only be requested from the read end of the pipe, while `poll()` receives the write end.

Git's implementation of `poll()` was copied from gnulib, which also contains a flawed implementation to this day. I also had a look at the implementation in cygwin, which is also broken in a subtle way. It uses this code in `pipe_data_available()`:

    fpli.WriteQuotaAvailable = (fpli.OutboundQuota - fpli.ReadDataAvailable)

However, `ReadDataAvailable` always returns 0 for the write end of the pipe, turning the code into an obfuscated version of returning the pipe's total buffer size, which I guess will in turn have `poll()` always say that the pipe is writable. The commit that introduced the code doesn't say anything about this change, so it could be some debugging code that slipped in.

These are the typical sizes used in git:

    0x2000 - default read size in `strbuf_read()`
    0x1000 - default read size in CRT, used by `strbuf_getwholeline()`
    0x2000 - pipe buffer size in compat\mingw.c

As a consequence, as soon as the child process uses `strbuf_read()`, `poll()` in the parent process will hang forever, deadlocking both processes. This results in two observable behaviors:

1) If the parent process begins sending STDIN quickly (and usually that's the case), then the first `poll()` will succeed and the first block will go through. MAX_IO_SIZE_DEFAULT is 8MB, so if STDIN exceeds 8MB, then it will deadlock.

2) If the parent process waits a little bit for any reason (including the OS scheduler) and the child is first to issue `strbuf_read()`, then it will deadlock immediately even on small STDINs.

The problem is illustrated by `git stash push`, which will currently read the entire patch into memory and then send it to `git apply` via STDIN. If the patch exceeds 8MB, git hangs on Windows.

Possible solutions
------------------

1) Somehow obtain `BytesInQueue` instead of `QuotaUsed`

   I did a pretty thorough search and didn't find any ways to obtain the value from the write end of the pipe.

2) Also give the read end of the pipe to `poll()`

   That can be done, but it will probably invite some dirty code, because `poll()`
   * can accept multiple pipes at once
   * can accept things that are not pipes
   * is expected to have a well known signature.

3) Make `poll()` always reply "writable" for the write end of the pipe

   After all, it seems that cygwin has (accidentally?) done that for years. Also, it should be noted that `pump_io_round()` writes 8MB blocks, completely ignoring the fact that the pipe's buffer size is only 8KB, which means that the pipe gets clogged many times during that single write. This may invite a deadlock, if the child's STDERR/STDOUT gets clogged while it's trying to deal with 8MB of STDIN.
   Such deadlocks could be defeated by writing less than the pipe's buffer size per round, and always reading everything from STDOUT/STDERR before starting the next round. Therefore, making `poll()` always reply "writable" shouldn't cause any new issues or block any future solutions.

4) Increase the size of the pipe's buffer

   The difference between `BytesInQueue` and `QuotaUsed` is the size of pending reads. Therefore, if the buffer is bigger than the size of the reads, `poll()` won't hang so easily. However, I found that, for example, `strbuf_read()` will get more and more hungry as it reads large inputs, eventually surpassing any reasonable pipe buffer size.

Chosen solution
---------------

Make `poll()` always reply "writable" for the write end of the pipe. Hopefully one day someone will find a way to implement it properly.

Reproduction
------------

    printf "%8388608s" X >large_file.txt
    git stash push --include-untracked -- large_file.txt

I have decided not to include this as a test to avoid slowing down the test suite. I don't expect the specific problem to come back, and chances are that `git stash push` will be reworked to avoid sending the entire patch via STDIN.

Signed-off-by: Alexandr Miloslavskiy <alexandr.miloslavskiy@syntevo.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2020-02-27  Documentation: clarify that `-h` alone stands for `help`  (Junio C Hamano; 2 files, -1/+8)

We seem to be getting new users who get confused every 20 months or so with this "-h consistently wants to give help, but the commands to which `-h` may feel like a good short-form option want it to mean something else" compromise.

Let's make sure that the readers know that `git cmd -h` (with no other arguments) is a way to get usage text, even for commands like ls-remote and grep. Also extend the description that is already in gitcli.txt, as it is clear that users still get confused with the current text.

Signed-off-by: Junio C Hamano <gitster@pobox.com>

2020-02-27  Azure Pipeline: switch to the latest agent pools  (Johannes Schindelin; 1 file, -12/+25)

It would seem that at least the `vs2015-win2012r2` pool (which we use via its old name, `Hosted`) is about to be phased out. Let's switch before that.

While at it, use the newer pool names as suggested at https://docs.microsoft.com/en-us/azure/devops/pipelines/agents/hosted?view=azure-devops#use-a-microsoft-hosted-agent

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2020-02-27  ci: prevent `perforce` from being quarantined  (Johannes Schindelin; 1 file, -2/+2)

The most recent Azure Pipelines macOS agents enable what Apple calls "System Integrity Protection". This makes `p4d -V` hang: there is some sort of GUI dialog waiting for the user to acknowledge that the copied binaries are legit and may be executed, but on build agents, there is no user who could acknowledge that.

Let's ask Homebrew specifically to _not_ quarantine the Perforce binaries.

Helped-by: Aleksandr Chebotov
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2020-02-27  t/lib-httpd: avoid using macOS' sed  (Johannes Schindelin; 8 files, -59/+66)

Among other differences relative to GNU sed, macOS' sed always ends its output with a trailing newline, even if the input did not have such a trailing newline.

Surprisingly, this makes three httpd-based tests fail on macOS: t5616, t5702 and t5703. ("Surprisingly" because those tests have been around for some time, but apparently nobody runs them on macOS with a working Apache2 setup.)
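The newline difference is easy to see in isolation (a sketch; the byte counts assume GNU sed preserves a missing trailing newline while BSD/macOS sed appends one):

    printf 'x' | sed 's/x/y/' | wc -c    # 1 with GNU sed, 2 with macOS sed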
The reason is that we use `sed` in those tests to filter the response of the web server. Apart from the fact that we use GNU constructs (such as using a space after the `c` command instead of a backslash and a newline), we have another problem: macOS' sed emits LF-only newlines, while webservers are supposed to use CR/LF ones.

Even worse, t5616 uses `sed` to replace a binary part of the response with a new binary part (kind of hoping that the replaced binary part does not contain a 0x0a byte which would be interpreted as a newline). To that end, it calls on Perl to read the binary pack file and hex-encode it, then calls on `sed` to prefix every hex digit pair with a `\x` in order to construct the text that the `c` statement of the `sed` invocation is supposed to insert. So we call Perl and sed to construct a sed statement. The final nail in the coffin is that macOS' sed does not even interpret those `\x<hex>` constructs.

Let's just replace all of that with Perl snippets. With Perl, at least, we do not have to deal with GNU vs macOS semantics, we do not have to worry about unwanted trailing newlines, and we do not have to spawn commands to construct arguments for other commands to be spawned (i.e. we can avoid a whole lot of shell scripting complexity).

The upshot is that this fixes t5616, t5702 and t5703 on macOS with Apache2.

Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2020-02-22  partial-clone: avoid fetching when looking for objects  (Derrick Stolee; 2 files, -6/+6)

When using partial clone, find_non_local_tags() in builtin/fetch.c checks each remote tag to see if its object also exists locally. There is no expectation that the object exist locally, but this function nevertheless triggers a lazy fetch if the object does not exist. This can be extremely expensive when asking for a commit, as we are completely removed from the context of the non-existent object and thus supply no "haves" in the request.

6462d5eb9a (fetch: remove fetch_if_missing=0, 2019-11-05) removed a global variable that prevented these fetches in favor of a bitflag. However, some object existence checks were not updated to use this flag.

Update find_non_local_tags() to use OBJECT_INFO_SKIP_FETCH_OBJECT in addition to OBJECT_INFO_QUICK. The _QUICK option only prevents repreparing the pack-file structures. We need to be extremely careful about supplying _SKIP_FETCH_OBJECT when we expect an object to not exist due to updated refs.

This resolves a broken test in t5616-partial-clone.sh.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2020-02-22  partial-clone: demonstrate bugs in partial fetch  (Derrick Stolee; 1 file, -0/+31)

While testing partial clone, I noticed some odd behavior. I was testing a way of running 'git init', followed by manually configuring the remote for partial clone, and then running 'git fetch'. Astonishingly, I saw the 'git fetch' process start asking the server for multiple rounds of pack-file downloads! When tweaking the situation a little more, I discovered that I could cause the remote to hang up with an error.

Add two tests that demonstrate these two issues.

In the first test, we find that when fetching with blob filters from a repository that previously did not have any tags, the 'git fetch --tags origin' command fails because the server sends "multiple filter-specs cannot be combined". This only happens when using protocol v2.
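The 'git init' plus manual remote configuration described above looks roughly like this (a sketch; the URL and filter are placeholders, and depending on the Git version the extensions.partialClone marker may also need to be set):

    git init repo &&
    cd repo &&
    git remote add origin "https://example.com/repo.git" &&
    git config remote.origin.promisor true &&
    git config remote.origin.partialCloneFilter blob:none &&
    git fetch --tags origin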
In the second test, we see that a 'git fetch origin' request with several ref updates results in multiple pack-file downloads. This must be due to Git trying to fault-in the objects pointed to by the refs. What makes this matter particularly nasty is that this goes through the do_oid_object_info_extended() method, so there are no "haves" in the negotiation. This leads the remote to send every reachable commit and tree from each new ref, providing a quadratic amount of data transfer!

This test is fixed if we revert 6462d5eb9a (fetch: remove fetch_if_missing=0, 2019-11-05), but that revert causes other test failures. The real fix will need more care.

The tests are ordered in this way because if I swap the test order the tag test will succeed instead of fail. I believe this is because somehow we need the srv.bare repo to not have any tags when we clone, but then have tags in our next fetch.

Signed-off-by: Derrick Stolee <dstolee@microsoft.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2020-02-22  run-command.h: fix mis-indented struct member  (Jeff King; 1 file, -1/+1)

An accidental conversion of a tab to 4 spaces snuck into 4c4066d95d (run-command: move doc to run-command.h, 2019-11-17), messing up the alignment when you have the project-recommended 8-width tabstops. Let's revert that line.

Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>

2020-02-19  merge-recursive: fix the refresh logic in update_file_flags  (Elijah Newren; 2 files, -3/+6)

If we need to delete a higher stage entry in the index to place the file at stage 0, then we'll lose that file's stat information. In such situations we may still be able to detect that the file on disk is the version we want (as noted by our comment in the code: /* do not overwrite file if already present */ ), but we do still need to update the mtime since we are creating a new cache_entry for that file. Update the logic used to determine whether we refresh a file's mtime.

Signed-off-by: Elijah Newren <newren@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
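Not part of the change itself, but for anyone investigating this behaviour: the stat data Git caches for an index entry, including the mtime discussed above, can be inspected with the plumbing below (the path is a placeholder).

    # Print the cached stat information (ctime, mtime, dev, ino, uid, gid,
    # size, flags) recorded for matching index entries.
    git ls-files --debug -- path/to/file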