Commit graph

298 commits

Author SHA1 Message Date
Junio C Hamano c13975e7fd Merge branch 'di/fast-import-empty-tag-note-fix'
* di/fast-import-empty-tag-note-fix:
  fast-import: don't allow to note on empty branch
  fast-import: don't allow to tag empty branch
2011-10-13 19:03:19 -07:00
Michael Haggerty 8d9c50105f Change check_ref_format() to take a flags argument
Change check_ref_format() to take a flags argument that indicates what
is acceptable in the reference name (analogous to "git
check-ref-format"'s "--allow-onelevel" and "--refspec-pattern").  This
is more convenient for callers and also fixes a failure in the test
suite (and likely elsewhere in the code) by enabling "onelevel" and
"refspec-pattern" to be allowed independently of each other.

Also rename check_ref_format() to check_refname_format() to make it
obvious that it deals with refnames rather than references themselves.

Signed-off-by: Michael Haggerty <mhagger@alum.mit.edu>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-10-05 13:45:29 -07:00
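
For a rough feel of what the two flags permit, here is the command-line
analogue mentioned in the entry above (the refnames are arbitrary examples):

	git check-ref-format refs/heads/topic                  # ok by default
	git check-ref-format FETCH_HEAD                        # rejected: one-level name
	git check-ref-format --allow-onelevel FETCH_HEAD       # ok
	git check-ref-format --refspec-pattern 'refs/heads/*'  # ok: a single '*' is allowed
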
Dmitry Ivankov 0bc69881a6 fast-import: don't allow to note on empty branch
The 'reset' command makes fast-import start a branch from scratch. The
branch's name is kept in the lookup table, but its sha1 is null_sha1 (a
special value). The 'notemodify' command can be used to add a note on a
branch head given its name. lookup_branch() is used in that case and does
not check for null_sha1, so fast-import writes a note for the null_sha1
object instead of giving an error.

Add a check to deny adding a note on an empty branch, and add a
corresponding test.

Signed-off-by: Dmitry Ivankov <divanorama@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-09-22 13:30:59 -07:00
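
To illustrate the failure mode described above (the branch name and note
content are made up), a fragment like this used to write a note for the
null_sha1 object and is now rejected:

	reset refs/heads/scratch
	commit refs/notes/commits
	committer C O Mitter <committer@example.com> 1234567890 +0000
	data 8
	add note
	N inline refs/heads/scratch
	data 5
	hello
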
Dmitry Ivankov 2c9c8ee2de fast-import: don't allow to tag empty branch
The 'reset' command makes fast-import start a branch from scratch. The
branch's name is kept in the lookup table, but its sha1 is null_sha1 (a
special value). The 'tag' command can be used to tag a branch by its
name. lookup_branch() is used in that case and does not check for
null_sha1, so fast-import writes a tag for the null_sha1 object instead
of giving an error.

Add a check to deny tagging an empty branch and add a corresponding test.

Signed-off-by: Dmitry Ivankov <divanorama@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-09-22 13:30:57 -07:00
Junio C Hamano 0dc691a4f3 Merge branch 'di/fast-import-tagging'
* di/fast-import-tagging:
  fast-import: allow to tag newly created objects
  fast-import: add tests for tagging blobs
2011-08-28 21:18:48 -07:00
Junio C Hamano 05d88e6f7e Merge branch 'di/fast-import-blob-tweak'
* di/fast-import-blob-tweak:
  fast-import: treat cat-blob as a delta base hint for next blob
  fast-import: count and report # of calls to diff_delta in stats
2011-08-28 21:18:47 -07:00
Junio C Hamano 45792b64c1 Merge branch 'di/fast-import-deltified-tree'
* di/fast-import-deltified-tree:
  fast-import: prevent producing bad delta
  fast-import: add a test for tree delta base corruption
2011-08-28 21:18:47 -07:00
Junio C Hamano 0b98954975 Merge branch 'di/fast-import-ident'
* di/fast-import-ident:
  fsck: improve committer/author check
  fsck: add a few committer name tests
  fast-import: check committer name more strictly
  fast-import: don't fail on omitted committer name
  fast-import: add input format tests
2011-08-28 21:18:47 -07:00
Dmitry Ivankov 6c447f633c fast-import: allow to tag newly created objects
fast-import allows tagging objects by sha1 and querying the sha1 of
objects being imported, so it should also allow tagging those newly
created objects; make it do so.

Signed-off-by: Dmitry Ivankov <divanorama@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-08-23 11:25:59 -07:00
Dmitry Ivankov 2efe38e7da fast-import: add tests for tagging blobs
fast-import allows creating an annotated tag that annotates a blob,
via a mark or a direct sha1 specification.

For a mark this works; for a sha1 it tries to read the object via
read_sha1_file, and then checks that the size is at least 46.

That's odd; let's simply allow an annotated tag on any object referenced
by sha1. If the object originates from our own packfile, we still fail, though.

Signed-off-by: Dmitry Ivankov <divanorama@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-08-23 11:25:56 -07:00
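
A minimal stream fragment (entirely made up) that tags a blob created
earlier in the same run by its mark; per the two entries above, a direct
sha1 reference to such an in-stream object now works as well:

	blob
	mark :1
	data 5
	hello

	tag hello-blob
	from :1
	tagger C O Mitter <committer@example.com> 1234567890 +0000
	data 4
	test
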
Dmitry Ivankov a7e9c34126 fast-import: treat cat-blob as a delta base hint for next blob
The delta base for blobs is chosen to be the previously saved blob. If
we treat the blob returned by cat-blob as the delta base for the next
blob, nothing is likely to get worse.

A fast-import stream producer like svn-fe uses cat-blob as follows:
- svn-fe reads a file delta in svn format
- to apply it, svn-fe asks for the 'svn delta base' via cat-blob
- it applies the 'svn delta' to the response
- it produces a blob command to store the result

Currently there is no way for svn-fe to give fast-import a hint about an
object's delta base, even though what is requested via cat-blob is, most
of the time, the best delta base possible. Of course, it may not be a
good delta base, but we don't know of a better one anyway.

So do treat cat-blob's result as the delta base for the next blob. The
gain is nice: a 2x to 7x reduction in pack size AND a 1.2x to 3x speedup,
because diff_delta is faster on good deltas. git gc --aggressive can
compress the result even further, by 10% to 70%, using more cpu time,
real time, and 3 cpu cores.

Tested on 213M and 2.7G fast-import streams; the resulting packs are 22M
and 113M, and import times are 7s and 60s. Both streams were produced by
svn-fe, sniffed, and then used as raw input for fast-import.

For streams produced by git fast-export there is no change, since it
does not use cat-blob and does not try to reorder blobs in some smart
way to make successive deltas small.

Signed-off-by: Dmitry Ivankov <divanorama@gmail.com>
Acked-by: David Barr <davidbarr@google.com>
Acked-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-08-22 11:57:07 -07:00
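
Schematically, the producer pattern described above looks like this in
the stream (the marks are made up; :100 is assumed to be a blob stored
earlier). After this change the blob printed by cat-blob becomes the
delta base candidate for the blob that follows:

	cat-blob :100
	blob
	mark :101
	data 8
	new text
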
Dmitry Ivankov 94c3b48247 fast-import: count and report # of calls to diff_delta in stats
How often we try to deltify each type of object, and how often we
succeed, is an interesting number, so add it to the stats.

Success doesn't necessarily mean much gain in pack size, though, as we
allow a delta to be as big as (data.len - 20), and a delta close to
data.len gains nothing over no delta at all, even after zlib compression
(the delta is pretty much the same as the data, just with a few
modifications).

We should try to make fewer attempts that result in huge deltas, as
these consume more cpu than trivially small deltas: either by choosing a
better delta base, reducing the upper bound on delta size, or making
fewer delta attempts overall.

Currently, the delta base for blobs is literally a waste: each blob's
delta base is chosen to be the previously stored blob. Disabling deltas
for blobs doesn't increase the pack size and reduces import time, or at
least doesn't increase it, for all the fast-import streams I've tried.

Signed-off-by: Dmitry Ivankov <divanorama@gmail.com>
Acked-by: David Barr <davidbarr@google.com>
Acked-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-08-22 11:57:06 -07:00
Dmitry Ivankov 8fb3ad76b1 fast-import: prevent producing bad delta
To produce deltas for tree objects, fast-import tracks two versions of
a tree's entries: the base and the current one. The base version serves
both as the delta base of this tree and as an entry inside the delta
base of the parent tree, so care must be taken to keep it in sync.

tree_content_set cuts away a whole subtree and replaces it with a new
one (or with NULL, for lazy loading of a tree with a known sha1). It
keeps a base sha1 for this subtree (needed for the parent tree). And
here is the problem: the 'subtree' tree root doesn't have the implied
base version entries.

Adjusting the subtree to include them would mean a deep rewrite of the
subtree, and invalidating the subtree's base version would mean
recursively invalidating the parents' base versions. So just mark this
tree as do-not-deltify, abusing the setuid bit for the purpose.

tree_content_replace is the same as tree_content_set except that it is
used to replace the root, so just clearing the base sha1 there (instead
of setting the bit) is fine.

[di: log message]

Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Dmitry Ivankov <divanorama@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-08-14 14:40:01 -07:00
Dmitry Ivankov 4b4963c0e1 fast-import: check committer name more strictly
The documentation declares the following identity format:

	(<name> SP)? LT <email> GT

where <name> is any string without LF and LT characters.
But fast-import simply accepts any string up to the first GT instead of
checking the whole format, and moreover writes it as-is to the commit
object.

git-fsck checks for the format [^<\n]* <[^<>\n]*>. Note that the space
is mandatory, and that the space quirk is already handled by extending
the string to the left when needed.

Modify the fast-import input identity format to a slightly stricter
one: deny LF, LT, and GT in both <name> and <email>, and check for it.

This is stricter than git-fsck, as fsck currently accepts
"Name> <email>", but the fsck check will soon be adjusted likewise.

Signed-off-by: Dmitry Ivankov <divanorama@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-08-11 12:21:03 -07:00
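
As a rough illustration of the stricter rule (the idents are made up),
the first two committer lines remain valid, while the third, with a
stray GT in the name part, is now rejected:

	committer C O Mitter <committer@example.com> 1234567890 +0000
	committer <committer@example.com> 1234567890 +0000
	committer C O Mitter> <committer@example.com> 1234567890 +0000
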
Dmitry Ivankov 17fb00721b fast-import: don't fail on omitted committer name
The fast-import format declares 'committer_name SP' to be optional in
'committer_name SP LT email GT'. But for a (commit) object the SP is
obligatory, while a zero-length committer_name is fine. git-fsck checks
that the SP is present, so fast-import must prepend it if the 'name SP'
part is omitted. It doesn't do so, and thus for an 'LT email GT' ident
it writes a bad object.

The name cannot contain LT or GT, and an ident always comes after SP in
fast-import. So if the ident starts with LT, reuse that SP as if a valid
'SP LT email GT' ident had been passed.

This fixes an ident parsing bug for well-formed fast-import input,
though the parsing is still loose and can accept ill-formed input.

Signed-off-by: Dmitry Ivankov <divanorama@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-08-11 12:20:56 -07:00
Junio C Hamano 59d9ba869e Merge branch 'sr/transport-helper-fix'
* sr/transport-helper-fix: (21 commits)
  transport-helper: die early on encountering deleted refs
  transport-helper: implement marks location as capability
  transport-helper: Use capname for refspec capability too
  transport-helper: change import semantics
  transport-helper: update ref status after push with export
  transport-helper: use the new done feature where possible
  transport-helper: check status code of finish_command
  transport-helper: factor out push_update_refs_status
  fast-export: support done feature
  fast-import: introduce 'done' command
  git-remote-testgit: fix error handling
  git-remote-testgit: only push for non-local repositories
  remote-curl: accept empty line as terminator
  remote-helpers: export GIT_DIR variable to helpers
  git_remote_helpers: push all refs during a non-local export
  transport-helper: don't feed bogus refs to export push
  git-remote-testgit: import non-HEAD refs
  t5800: document some non-functional parts of remote helpers
  t5800: use skip_all instead of prereq
  t5800: factor out some ref tests
  ...
2011-08-01 15:00:14 -07:00
Sverre Rabbelier be56862f19 fast-import: introduce 'done' command
Add a 'done' command that causes fast-import to stop reading from the
stream and exit.

If the new --done command line flag was passed on the command line
(or a "feature done" declaration was included at the start of the
stream), make the 'done' command mandatory.  That makes the input
format of "git fast-import --done" prefix-free, so truncation is easier
to detect when the upstream end of a pipe writing to fast-import
terminates early.

Another possible application of the 'done' command would be to allow a
fast-import stream that is only a small part of a larger encapsulating
stream to be easily parsed, leaving the file offset right after the
"done\n" so the other application can pick up from there.  This patch
does not teach fast-import to do that --- fast-import still uses
buffered input (stdio).

Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Sverre Rabbelier <srabbelier@gmail.com>
Acked-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-07-19 11:17:47 -07:00
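
A tiny self-contained stream (the contents are made up) showing the new
terminator; because of the "feature done" declaration, omitting the
final 'done' would now be reported as an error:

	feature done
	blob
	mark :1
	data 2
	hi
	done
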
Junio C Hamano d907bf8ef3 Merge branch 'jc/index-pack'
* jc/index-pack:
  verify-pack: use index-pack --verify
  index-pack: show histogram when emulating "verify-pack -v"
  index-pack: start learning to emulate "verify-pack -v"
  index-pack: a miniscule refactor
  index-pack --verify: read anomalous offsets from v2 idx file
  write_idx_file: need_large_offset() helper function
  index-pack: --verify
  write_idx_file: introduce a struct to hold idx customization options
  index-pack: group the delta-base array entries also by type

Conflicts:
	builtin/verify-pack.c
	cache.h
	sha1_file.c
2011-07-19 09:54:51 -07:00
Junio C Hamano ef49a7a012 zlib: zlib can only process 4GB at a time
The size of objects we read from the repository and data we try to put
into the repository are represented in "unsigned long", so that on larger
architectures we can handle objects that weigh more than 4GB.

But the interface defined in zlib.h to communicate with inflate/deflate
limits avail_in (how many bytes of input are we calling zlib with) and
avail_out (how many bytes of output from zlib are we ready to accept)
fields effectively to 4GB by defining their type to be uInt.

In many places in our code, we allocate a large buffer (e.g. mmap'ing a
large loose object file) and tell zlib its size by assigning the size to
avail_in field of the stream, but that will truncate the high octets of
the real size. The worst part of this story is that we often pass around
z_stream (the state object used by zlib) to keep track of the number of
used bytes in input/output buffer by inspecting these two fields, which
practically limits our callchain to the same 4GB limit.

Wrap z_stream in another structure git_zstream that can express avail_in
and avail_out in unsigned long. For now, just die() when the caller gives
a size that cannot be given to a single zlib call. In later patches in the
series, we would make git_inflate() and git_deflate() internally loop to
give callers an illusion that our "improved" version of zlib interface can
operate on a buffer larger than 4GB in one go.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-06-10 11:52:15 -07:00
Junio C Hamano 225a6f1068 zlib: wrap deflateBound() too
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-06-10 11:18:17 -07:00
Junio C Hamano 55bb5c9147 zlib: wrap deflate side of the API
Wrap deflateInit, deflate, and deflateEnd for everybody, and the sole use
of deflateInit2 in remote-curl.c to tell the library to use gzip header
and trailer in git_deflate_init_gzip().

There is only one caller that cares about the status from deflateEnd().
Introduce git_deflate_end_gently() to let that sole caller retrieve the
status and act on it (i.e. die) for now, but we would probably want to
make inflate_end/deflate_end die when they run out of memory and get
rid of the _gently() kind.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-06-10 11:10:29 -07:00
Sverre Rabbelier 4cce4ef2d5 fast-import: fix option parser for no-arg options
While refactoring the options parser in bc3c79a (fast-import: add
(non-)relative-marks feature, 2009-12-04), it was made too lenient
for options that take no argument; fix that.

Signed-off-by: Sverre Rabbelier <srabbelier@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-05-05 21:21:24 -07:00
Junio C Hamano f28d2e33c6 Merge branch 'jc/pack-objects-bigfile' into maint
* jc/pack-objects-bigfile:
  Teach core.bigfilethreashold to pack-objects
2011-05-04 14:57:38 -07:00
Junio C Hamano 15366280c2 Teach core.bigfilethreashold to pack-objects
The pack-objects command should take notice of the object file and
refrain from attempting to delta large ones, to be consistent with
the fast-import command.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-04-05 20:25:49 -07:00
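
To try the threshold described above (the value shown is only an
example; the configuration key itself is spelled core.bigFileThreshold):

	git config core.bigFileThreshold 512m
	git repack -adf
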
Stephen Boyd c2e86addb8 Fix sparse warnings
Fix warnings from 'make check'.

 - These files don't include 'builtin.h' causing sparse to complain that
   cmd_* isn't declared:

   builtin/clone.c:364, builtin/fetch-pack.c:797,
   builtin/fmt-merge-msg.c:34, builtin/hash-object.c:78,
   builtin/merge-index.c:69, builtin/merge-recursive.c:22
   builtin/merge-tree.c:341, builtin/mktag.c:156, builtin/notes.c:426
   builtin/notes.c:822, builtin/pack-redundant.c:596,
   builtin/pack-refs.c:10, builtin/patch-id.c:60, builtin/patch-id.c:149,
   builtin/remote.c:1512, builtin/remote-ext.c:240,
   builtin/remote-fd.c:53, builtin/reset.c:236, builtin/send-pack.c:384,
   builtin/unpack-file.c:25, builtin/var.c:75

 - These files have symbols which should be marked static since they're
   only file scope:

   submodule.c:12, diff.c:631, replace_object.c:92, submodule.c:13,
   submodule.c:14, trace.c:78, transport.c:195, transport-helper.c:79,
   unpack-trees.c:19, url.c:3, url.c:18, url.c:104, url.c:117, url.c:123,
   url.c:129, url.c:136, thread-utils.c:21, thread-utils.c:48

 - These files redeclare symbols to be different types:

   builtin/index-pack.c:210, parse-options.c:564, parse-options.c:571,
   usage.c:49, usage.c:58, usage.c:63, usage.c:72

 - These files use a literal integer 0 when they really should use a NULL
   pointer:

   daemon.c:663, fast-import.c:2942, imap-send.c:1072, notes-merge.c:362

While we're in the area, clean up some unused #includes in builtin files
(mostly exec_cmd.h).

Signed-off-by: Stephen Boyd <bebarino@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-03-22 10:16:54 -07:00
Junio C Hamano b2f6eab402 Merge branch 'maint'
* maint:
  Prepare draft release notes to 1.7.4.2
  gitweb: highlight: replace tabs with spaces
  make_absolute_path: return the input path if it points to our buffer
  valgrind: ignore SSE-based strlen invalid reads
  diff --submodule: split into bite-sized pieces
  cherry: split off function to print output lines
  branch: split off function that writes tracking info and commit subject
  standardize brace placement in struct definitions
  compat: make gcc bswap an inline function
  enums: omit trailing comma for portability

Conflicts:
	RelNotes
2011-03-16 16:59:30 -07:00
Jonathan Nieder 9cba13ca5d standardize brace placement in struct definitions
In struct definitions, unlike functions, the prevailing style is for
the opening brace to go on the same line as the struct name, like so:

 struct foo {
	int bar;
	char *baz;
 };

Indeed, grepping for 'struct [a-z_]* {$' yields about 5 times as many
matches as 'struct [a-z_]*$'.

Linus sayeth:

 Heretic people all over the world have claimed that this inconsistency
 is ...  well ...  inconsistent, but all right-thinking people know that
 (a) K&R are _right_ and (b) K&R are right.

Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-03-16 12:49:02 -07:00
Junio C Hamano 674ef90904 Merge branch 'sp/maint-fd-limit'
* sp/maint-fd-limit:
  sha1_file.c: Don't retain open fds on small packs
  mingw: add minimum getrlimit() compatibility stub
  Limit file descriptors used by packs
2011-03-15 14:22:23 -07:00
Shawn O. Pearce d131b7afea sha1_file.c: Don't retain open fds on small packs
If a pack file is small enough that its entire contents fits within
one mmap window, mmap the file and then immediately close its file
descriptor.  This reduces the number of file descriptors that are
needed to read from repositories with many tiny pack files, such
as one that has received 1000 pushes (and created 1000 small pack
files) since its last repack.

Signed-off-by: Shawn O. Pearce <spearce@spearce.org>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-03-02 11:25:30 -08:00
Jonathan Nieder 6288e3e180 fast-import: make code "-Wpointer-arith" clean
The dereference() function to peel a tree-ish and find the underlying
tree expects arithmetic on a (void *) to work on byte addresses.  We
should be reading the text of objects through a char * anyway.

Noticed-by: Junio C Hamano <gitster@pobox.com>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
2011-02-28 15:25:12 -06:00
Junio C Hamano ebcfb3791a write_idx_file: introduce a struct to hold idx customization options
Remove two globals, pack_idx_default_version and pack_idx_off32_limit,
and place them in a pack_idx_option structure.  Allow callers to pass
it to write_idx_file() as a parameter.

Adjust all callers to the API change.

Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-02-27 23:29:03 -08:00
David Barr 8dc6a373d2 fast-import: add 'ls' command
Lazy fast-import frontend authors that want to rely on the backend to
keep track of the content of the imported trees _almost_ have what
they need in the 'cat-blob' command (v1.7.4-rc0~30^2~3, 2010-11-28).
But it is not quite enough, since

 (1) cat-blob can be used to retrieve the content of files, but
     not their mode, and

 (2) using cat-blob requires the frontend to keep track of a name
     (mark number or object id) for each blob to be retrieved

Introduce an 'ls' command to complement cat-blob and take care of the
remaining needs.  The 'ls' command finds what is at a given path
within a given tree-ish (tag, commit, or tree):

	'ls' SP <dataref> SP <path> LF

or in fast-import's active commit:

	'ls' SP <path> LF

The response is a single line sent through the cat-blob channel,
imitating ls-tree output.  So for example:

	FE> ls :1 Documentation
	gfi> 040000 tree 9e6c2b599341d28a2a375f8207507e0a2a627fe9	Documentation
	FE> ls 9e6c2b599341d28a2a375f8207507e0a2a627fe9 git-fast-import.txt
	gfi> 100644 blob 4f92954396e3f0f97e75b6838a5635b583708870	git-fast-import.txt
	FE> ls :1 RelNotes
	gfi> 120000 blob b942e49944	RelNotes
	FE> cat-blob b942e49944
	gfi> b942e49944 blob 32
	gfi> Documentation/RelNotes/1.7.4.txt

The most interesting parts of the reply are the first word, which is
a 6-digit octal mode (regular file, executable, symlink, directory,
or submodule), and the part from the second space to the tab, which is
a <dataref> that can be used in later cat-blob, ls, and filemodify (M)
commands to refer to the content (blob, tree, or commit) at that path.

If there is nothing there, the response is "missing some/path".

The intent is for this command to be used to read files from the
active commit, so a frontend can apply patches to them, and to copy
files and directories from previous revisions.

For example, proposed updates to svn-fe use this command in place of
its internal representation of the repository directory structure.
This simplifies the frontend a great deal and means support for
resuming an import in a separate fast-import run (i.e., incremental
import) is basically free.

Signed-off-by: David Barr <david.barr@cordelta.com>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Improved-by: Junio C Hamano <gitster@pobox.com>
Improved-by: Sverre Rabbelier <srabbelier@gmail.com>
2011-02-26 04:57:58 -06:00
Junio C Hamano fc180d98a2 Merge branch 'rr/fi-import-marks-if-exists'
* rr/fi-import-marks-if-exists:
  fast-import: Introduce --import-marks-if-exists
2011-02-09 16:41:16 -08:00
Junio C Hamano a8e4a5943a Merge branch 'maint-1.7.0' into maint
* maint-1.7.0:
  fast-import: introduce "feature notes" command
  fast-import: clarify documentation of "feature" command

Conflicts:
	Documentation/git-fast-import.txt
2011-02-09 16:40:12 -08:00
Jonathan Nieder 547e8b9205 fast-import: introduce "feature notes" command
Here is a 'feature' command for streams to use to require support for
the notemodify (N) command.

When the 'feature' facility was introduced (v1.7.0-rc0~95^2~4,
2009-12-04), the notes import feature was old news (v1.6.6-rc0~21^2~8,
2009-10-09) and it was not obvious it deserved to be a named feature.
But now that is clear, since all major non-git fast-import backends
lack support for it.

Details: on git version with this patch applied, any "feature notes"
command in the features/options section at the beginning of a stream
will be treated as a no-op.  On fast-import implementations without
the feature (and older git versions), the command instead errors out
with a message like

	This version of fast-import does not support feature notes.

So by declaring use of notes at the beginning of a stream, frontends
can avoid wasting time and other resources when the backend does not
support notes.  (This would be especially important for backends that
do not support rewinding history after a botched import.)

Improved-by: Thomas Rast <trast@student.ethz.ch>
Improved-by: Sverre Rabbelier <srabbelier@gmail.com>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-02-09 16:06:51 -08:00
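
A sketch of the declaration in use (refs, marks, and contents are made
up). On a backend without notes support the very first line already
aborts the import; on git with this patch it is a no-op:

	feature notes
	commit refs/heads/master
	mark :1
	committer C O Mitter <committer@example.com> 1234567890 +0000
	data 5
	first
	commit refs/notes/commits
	committer C O Mitter <committer@example.com> 1234567890 +0000
	data 11
	note for :1
	N inline :1
	data 3
	hi!
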
Junio C Hamano 99e63ef24e Merge branch 'maint'
* maint:
  rebase -i: clarify in-editor documentation of "exec"
  tests: sanitize more git environment variables
  fast-import: treat filemodify with empty tree as delete
  rebase: give a better error message for bogus branch
  rebase: use explicit "--" with checkout

Conflicts:
	t/t9300-fast-import.sh
2011-01-27 10:27:49 -08:00
Junio C Hamano 5ce3258122 Merge branch 'jn/fast-import-empty-tree-removal' into maint
* jn/fast-import-empty-tree-removal:
  fast-import: treat filemodify with empty tree as delete
2011-01-27 10:23:53 -08:00
Jonathan Nieder 8fe533f686 fast-import: treat filemodify with empty tree as delete
Normal git processes do not allow one to build a tree with an empty
subtree entry without trying hard at it.  This is in keeping with the
general UI philosophy: git tracks content, not empty directories.

v1.7.3-rc0~75^2 (2010-06-30) changed that by making it easy to include
an empty subtree in fast-import's active commit:

	M 040000 4b825dc642 subdir

One can trigger this by reading an empty tree (for example, the tree
corresponding to an empty root commit) and trying to move it to a
subtree.  It is better and more closely analogous to 'git read-tree
--prefix' to treat such commands as requests to remove the subtree.

Noticed-by: David Barr <david.barr@cordelta.com>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-01-27 10:22:37 -08:00
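
In context, the line quoted above now behaves like "D subdir" inside the
active commit. A made-up fragment, using the full sha1 of the well-known
empty tree that 4b825dc642 abbreviates:

	commit refs/heads/master
	committer C O Mitter <committer@example.com> 1234567890 +0000
	data 10
	drop a dir
	M 040000 4b825dc642cb6eb9a060e54bf8d69288fbee4904 subdir
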
Junio C Hamano 267684f0b7 Merge branch 'jn/maint-fast-import-object-reuse' into maint
* jn/maint-fast-import-object-reuse:
  fast-import: insert new object entries at start of hash bucket
2011-01-19 08:25:46 -08:00
Ramkumar Ramachandra dded4f12a4 fast-import: Introduce --import-marks-if-exists
When a frontend uses a marks file to ensure its state persists between
runs, it may represent the "clean slate" used for bootstrapping as "no
marks yet". In such a case, feeding the last state with --import-marks
and saving the state after the current run with --export-marks would be
a natural thing to do.

The --import-marks option however errors out when the specified marks file
doesn't exist; this makes bootstrapping a bit difficult.  The location of
the marks file becomes backend-dependent when --relative-marks is in
effect, and the frontend cannot check for the existence of the file in
such a case.

To help these frontends, the --import-marks-if-exists option does the
same thing as --import-marks but does not flag an error if the named
file does not exist yet.

Helped-by: Junio C Hamano <gitster@pobox.com>
Helped-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Ramkumar Ramachandra <artagnon@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2011-01-18 07:07:01 -08:00
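
A bootstrap-friendly invocation along the lines described above (the
marks file and stream names are arbitrary); the first run starts from a
clean slate instead of erroring out, and later runs resume from the
saved marks:

	git fast-import --import-marks-if-exists=marks.dat \
		--export-marks=marks.dat <frontend.stream
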
Junio C Hamano 914584266c Merge branch 'jn/fast-import-blob-access'
* jn/fast-import-blob-access:
  t9300: avoid short reads from dd
  t9300: remove unnecessary use of /dev/stdin
  fast-import: Allow cat-blob requests at arbitrary points in stream
  fast-import: let importers retrieve blobs
  fast-import: clarify documentation of "feature" command
  fast-import: stricter parsing of integer options

Conflicts:
	fast-import.c
2010-12-16 12:58:38 -08:00
Junio C Hamano c835288be4 Merge branch 'jn/maint-fast-import-object-reuse'
* jn/maint-fast-import-object-reuse:
  fast-import: insert new object entries at start of hash bucket
2010-12-16 12:49:16 -08:00
Junio C Hamano f73c3e9704 Merge branch 'jn/fast-import-ondemand-checkpoint'
* jn/fast-import-ondemand-checkpoint:
  fast-import: treat SIGUSR1 as a request to access objects early
2010-12-16 12:49:11 -08:00
Junio C Hamano 5e738ae820 Merge branch 'jj/icase-directory'
* jj/icase-directory:
  Support case folding in git fast-import when core.ignorecase=true
  Support case folding for git add when core.ignorecase=true
  Add case insensitivity support when using git ls-files
  Add case insensitivity support for directories when using git status
  Case insensitivity support for .gitignore via core.ignorecase
  Add string comparison functions that respect the ignore_case variable.
  Makefile & configure: add a NO_FNMATCH_CASEFOLD flag
  Makefile & configure: add a NO_FNMATCH flag

Conflicts:
	Makefile
	config.mak.in
	configure.ac
	fast-import.c
2010-12-03 16:10:34 -08:00
Jonathan Nieder 777f80d742 fast-import: Allow cat-blob requests at arbitrary points in stream
The new rule: a "cat-blob" can be inserted wherever a comment is
allowed, which means at the start of any line except in the middle of
a "data" command.

This saves frontends from having to loop over everything they want to
put in the next commit and cat the necessary objects in advance.

Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: David Barr <david.barr@cordelta.com>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-12-01 13:28:04 -08:00
David Barr 85c62395b1 fast-import: let importers retrieve blobs
New objects written by fast-import are not available immediately.
Until a checkpoint has been started and finishes writing the pack
index, any new blobs will not be accessible using standard git tools.

So introduce a new way to access them: a "cat-blob" command in the
command stream requests for fast-import to print a blob to stdout or a
file descriptor specified by the argument to --cat-blob-fd.  The value
for cat-blob-fd cannot be specified in the stream because that would
be a layering violation: the decision of where to direct a stream has
to be made when fast-import is started anyway, so we might as well
make the stream format independent of that detail.

Output uses the same format as "git cat-file --batch".

Thanks to Sverre Rabbelier and Sam Vilain for guidance in designing
the protocol.

Based-on-patch-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: David Barr <david.barr@cordelta.com>
Acked-by: Ramkumar Ramachandra <artagnon@gmail.com>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-12-01 13:27:37 -08:00
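
A sketch of wiring up the response channel (the file names are
arbitrary); replies to cat-blob requests in the stream are written to
descriptor 3 instead of stdout:

	git fast-import --cat-blob-fd=3 3>replies <frontend.stream
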
Jonathan Nieder a9ff277e58 fast-import: stricter parsing of integer options
Check the result from strtoul to avoid accepting arguments like
--depth=-1 and --active-branches=foo,bar,baz.

Requested-by: Ramkumar Ramachandra <artagnon@gmail.com>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-12-01 13:26:52 -08:00
Junio C Hamano ed8298dc34 Merge branch 'jn/fast-import-fix'
* jn/fast-import-fix:
  fast-import: do not clear notes in do_change_note_fanout()
  t9300 (fast-import): another test for the "replace root" feature
  fast-import: tighten M 040000 syntax
  fast-import: filemodify after M 040000 <tree> "" crashes
2010-11-29 17:52:32 -08:00
Jonathan Nieder dc01f59d21 fast-import: treat SIGUSR1 as a request to access objects early
It can be tedious to wait for a multi-million-revision import.
Unfortunately it is hard to spy on the import because fast-import
works by continuously streaming out objects, without updating the pack
index or refs until a checkpoint command or the end of the stream.

So allow the impatient operator to request checkpoints by sending a
signal, like so:

	killall -USR1 git-fast-import

When receiving such a signal, fast-import would schedule a checkpoint
to take place after the current top-level command (usually a "commit"
or "blob" request) finishes.

Caveats: just like ordinary checkpoint commands, such requests slow
down the import.  Switching to a new pack at a suboptimal moment is
also likely to result in a less dense initial collection of packs.
That's the price.

Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-11-24 15:01:24 -08:00
David Barr b7c1ce4f14 fast-import: insert new object entries at start of hash bucket
More often than not, find_object is called for recently inserted objects.
Optimise for this case by inserting new entries at the start of the chain.
This doesn't affect the cost of new inserts but reduces the cost of find
and insert for existing object entries.

Signed-off-by: David Barr <david.barr@cordelta.com>
Signed-off-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2010-11-24 11:25:16 -08:00