#!/usr/bin/perl
use 5.008001;
use strict;
use warnings;
use IO::Pty;
use File::Copy;

test-terminal: drop stdin handling
Since 18d8c26930 (test_terminal: redirect child process' stdin to a pty,
2015-08-04), we set up a pty and copy stdin to the child program. But
this ends up being racy; once we send all of the bytes and close the
descriptor, the child program will no longer see a terminal! isatty()
will return 0, and trying to read may return EIO, even if we didn't yet
get all of the bytes.
This was mentioned even in the commit message of 18d8c26930, but we
hacked around it by just sending an infinite input from /dev/zero (in
the intended case, we only cared about isatty(0), not reading actual
input).
And it came up again recently in:
https://lore.kernel.org/git/d42a55b1-1ba9-4cfb-9c3d-98ea4d86da33@gmail.com/
where we tried to actually send bytes, but they don't always all come
through. So this interface is somewhat of an accident waiting to happen;
a caller might not even care about stdin being a tty, but will get bit
by the flaky behavior.
One solution would probably be to avoid closing test_terminal's end of
the pty altogether. But then the other side would never see EOF on its
stdin. That may be OK for some cases, but it's another gotcha that
might cause races or deadlocks, depending on what the child expects to
read.
Let's instead just drop test_terminal's stdin feature completely. Since
the previous commit dropped the two cases from t4153 for which the
feature was originally added, there are no callers left that need it.
Signed-off-by: Jeff King <peff@peff.net>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
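
The race described above can be sketched with IO::Pty. This is a hypothetical standalone demo, not part of test-terminal: the parent writes a few bytes to the master side and closes it, and the child reading the slave may hit EIO (or EOF) before it has consumed everything, so the count it reports is not guaranteed to be the 5 bytes sent.

```perl
#!/usr/bin/perl
# Hypothetical sketch of the stdin race (not part of test-terminal).
use strict;
use warnings;
use IO::Pty;

my $pty = IO::Pty->new;
$pty->set_raw();
$pty->slave->set_raw();

my $pid = fork;
defined $pid or die "fork failed: $!";
if ($pid == 0) {
	# Child: read the slave side, like the spawned program would.
	my $slave = $pty->slave;
	close $pty;	# drop the child's copy of the master
	my ($buf, $total) = ('', 0);
	while (defined(my $n = sysread($slave, $buf, 4096)) && $n > 0) {
		$total += $n;
	}
	# Once every master descriptor is closed, sysread returns 0 or
	# fails with EIO -- possibly before all of the bytes arrived.
	print "child read $total bytes\n";
	exit 0;
}
syswrite($pty, "hello");
close $pty;	# racy: may fire before the child has read "hello"
waitpid($pid, 0);
```

Whether the child sees 0 or 5 bytes depends on scheduling, which is exactly why the feature was dropped.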

# Run @$argv in the background with stdio redirected to $out and $err.
sub start_child {
	my ($argv, $out, $err) = @_;
	my $pid = fork;
	if (not defined $pid) {
		die "fork failed: $!";
	} elsif ($pid == 0) {
		open STDOUT, ">&", $out;
		open STDERR, ">&", $err;
		close $out;
		exec(@$argv) or die "cannot exec '$argv->[0]': $!";
	}
	return $pid;
}

# Wait for $pid to finish.
sub finish_child {
	# Simplified from wait_or_whine() in run-command.c.
	my ($pid) = @_;

	my $waiting = waitpid($pid, 0);
	if ($waiting < 0) {
		die "waitpid failed: $!";
	} elsif ($? & 127) {
		my $code = $? & 127;
		warn "died of signal $code";
run-command: encode signal death as a positive integer
When a sub-command dies due to a signal, we encode the
signal number into the numeric exit status as "signal -
128". This is easy to identify (versus a regular positive
error code), and when cast to an unsigned integer (e.g., by
feeding it to exit), matches what a POSIX shell would return
when reporting a signal death in $? or through its own exit
code.
So we have a negative value inside the code, but once it
passes across an exit() barrier, it looks positive (and any
code we receive from a sub-shell will have the positive
form). E.g., death by SIGPIPE (signal 13) will look like
-115 to us in inside git, but will end up as 141 when we
call exit() with it. And a program killed by SIGPIPE but run
via the shell will come to us with an exit code of 141.
Unfortunately, this means that when the "use_shell" option
is set, we need to be on the lookout for _both_ forms. We
might or might not have actually invoked the shell (because
we optimize out some useless shell calls). If we didn't invoke
the shell, we will see the sub-process's signal death
directly, and run-command converts it into a negative value.
But if we did invoke the shell, we will see the shell's
128+signal exit status. To be thorough, we would need to
check both, or cast the value to an unsigned char (after
checking that it is not -1, which is a magic error value).
Fortunately, most callsites do not care at all whether the
exit was from a code or from a signal; they merely check for
a non-zero status, and sometimes propagate the error via
exit(). But for the callers that do care, we can make life
slightly easier by just using the consistent positive form.
This actually fixes two minor bugs:
1. In launch_editor, we check whether the editor died from
SIGINT or SIGQUIT. But we checked only the negative
form, meaning that we would fail to notice a signal
death exit code which was propagated through the shell.
2. In handle_alias, we assume that a negative return value
from run_command means that errno tells us something
interesting (like a fork failure, or ENOENT).
Otherwise, we simply propagate the exit code. Negative
signal death codes confuse us, and we print a useless
"unable to run alias 'foo': Success" message. By
encoding signal deaths using the positive form, the
existing code just propagates it as it would a normal
non-zero exit code.
The downside is that callers of run_command can no longer
differentiate between a signal received directly by the
sub-process, and one propagated. However, no caller
currently cares, and since we already optimize out some
calls to the shell under the hood, that distinction is not
something that should be relied upon by callers.
Fix the same logic in t/test-terminal.perl for consistency [jc:
raised by Jonathan in the discussion].
Signed-off-by: Jeff King <peff@peff.net>
Acked-by: Johannes Sixt <j6t@kdbg.org>
Reviewed-by: Jonathan Nieder <jrnieder@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
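
The positive encoding is easy to check from Perl; this hypothetical standalone snippet mirrors the logic in finish_child: a child killed by SIGTERM (signal 15) yields `$? & 127 == 15`, which maps to the shell-style exit code 143.

```perl
#!/usr/bin/perl
# Hypothetical demo of the 128+signal convention used by finish_child.
use strict;
use warnings;

my $pid = fork;
defined $pid or die "fork failed: $!";
if ($pid == 0) {
	kill 'TERM', $$;	# die of SIGTERM (signal 15)
	exit 0;			# not reached: default SIGTERM action kills us
}
waitpid($pid, 0);
if ($? & 127) {
	# Signal death: report it the way a POSIX shell would in $?.
	printf "signal %d -> exit code %d\n", $? & 127, ($? & 127) + 128;
} else {
	printf "normal exit %d\n", $? >> 8;
}
```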

		return $code + 128;
	} else {
		return $? >> 8;
	}
}

sub xsendfile {
	my ($out, $in) = @_;

	# Note: the real sendfile() cannot read from a terminal.
	#
	# It is unspecified by POSIX whether reads from a disconnected
	# terminal will return EIO (as in AIX 4.x, IRIX, and Linux) or
	# end-of-file. Either is fine.
	copy($in, $out, 4096) or $!{EIO} or die "cannot copy from child: $!";
}
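
The `$!{EIO}` test above leans on the `%!` hash from the core Errno module, which has a true entry only for the errno value currently in `$!`. A hypothetical standalone check:

```perl
#!/usr/bin/perl
# Hypothetical demo of %! from Errno, as used in xsendfile.
use strict;
use warnings;
use Errno qw(EIO ENOENT);

$! = ENOENT;	# pretend a syscall just failed with ENOENT
printf "ENOENT: %s, EIO: %s\n",
	($!{ENOENT} ? "yes" : "no"),
	($!{EIO} ? "yes" : "no");
```

So after a failed copy(), `$!{EIO}` is true only when the failure was the expected disconnected-terminal case; any other error still dies.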

sub copy_stdio {
	my ($out, $err) = @_;
	my $pid = fork;
	defined $pid or die "fork failed: $!";
	if (!$pid) {
		close($out);
		xsendfile(\*STDERR, $err);
		exit 0;
	}
	close($err);
	xsendfile(\*STDOUT, $out);
	finish_child($pid) == 0
		or exit 1;
}

if ($#ARGV < 1) {
	die "usage: test-terminal program args";
}

$ENV{TERM} = 'vt100';

my $parent_out = IO::Pty->new;
my $parent_err = IO::Pty->new;
$parent_out->set_raw();
$parent_err->set_raw();
$parent_out->slave->set_raw();
$parent_err->slave->set_raw();

my $pid = start_child(\@ARGV, $parent_out->slave, $parent_err->slave);
|
2020-09-21 22:01:23 +00:00
|
|
|
close $parent_out->slave;
|
|
|
|
close $parent_err->slave;
|
|
|
|
copy_stdio($parent_out, $parent_err);
|
test_terminal: redirect child process' stdin to a pty
When resuming, git-am detects if we are trying to feed it patches or not
by checking if stdin is a TTY.
However, the test library redirects stdin to /dev/null. This makes it
difficult, for instance, to test the behavior of "git am -3" when
resuming, as git-am will think we are trying to feed it patches and
error out.
Support this use case by extending test-terminal.perl to create a
pseudo-tty for the child process' standard input as well.
Note that due to the way the code is structured, the child's stdin
pseudo-tty will be closed when we finish reading from our stdin. This
means that in the common case, where our stdin is attached to /dev/null,
the child's stdin pseudo-tty will be closed immediately. Some operations
like isatty(), which git-am uses, require the file descriptor to be
open, and hence if the success of the command depends on such functions,
test_terminal's stdin should be redirected to a source with large amount
of data to ensure that the child's stdin is not closed, e.g.
test_terminal git am --3way </dev/zero
Cc: Jonathan Nieder <jrnieder@gmail.com>
Cc: Jeff King <peff@peff.net>
Signed-off-by: Paul Tan <pyokagan@gmail.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
2015-08-04 14:08:49 +00:00

my $ret = finish_child($pid);
exit($ret);