Commit graph

4980 commits

Author SHA1 Message Date
Cuong Manh Le 778880b008 cmd/compile: fix typecheck range over negative integer
Before range over integer, types2 leaves the constant expression on the
RHS of a non-constant shift untyped, so idealType performs the validation
to ensure that the constant value is an int >= 0.

With range over int, the range expression can also be left untyped and
can be a negative integer, causing the validation to fail incorrectly.

Fix this by relaxing the validation in idealType and moving the
check to the Unified IR reader.
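
A minimal reproducer sketch (not from the CL), assuming GOEXPERIMENT=range:
whether this ends up as a zero-iteration loop or a regular type-checking
error, the point of the fix is that the compiler must not crash on it.

	package p

	func f() {
		for i := range -1 { // negative untyped constant as the range expression
			_ = i
		}
	}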

Fixes #63378

Change-Id: I43042536c09afd98d52c5981adff5dbc5e7d882a
Reviewed-on: https://go-review.googlesource.com/c/go/+/532835
Auto-Submit: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Keith Randall <khr@golang.org>
2023-10-09 18:09:34 +00:00
Keith Randall afd7c15c7f cmd/compile: use cache in front of convI2I
This is the last of the getitab users to receive a cache.
We should now no longer see getitab (and callees) in profiles.
Hopefully.
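
For context, convI2I backs interface-to-interface conversions whose itab
has to be looked up at run time, e.g. (a sketch, not code from this CL):

	package p

	import "io"

	func asReader(rc io.ReadCloser) io.Reader {
		return rc // interface -> interface conversion; the itab lookup is now cached
	}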

Change-Id: I2ed72b9943095bbe8067c805da7f08e00706c98c
Reviewed-on: https://go-review.googlesource.com/c/go/+/531055
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
2023-10-09 17:28:22 +00:00
Cuong Manh Le 2744155d36 cmd/compile: fix ICE with parenthesized builtin calls
CL 419456 starts using lookupObj to find the types2.Object associated with
builtin functions. However, the new code does not un-parenthesize the
callee expression, causing an ICE because a nil obj is returned.

Un-parenthesizing the callee expression fixes the problem.
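
A reproducer sketch of the kind of code involved (a builtin called through
a parenthesized callee; not the exact test case from the issue):

	package p

	func f(s []int) int {
		(println)("hello") // parenthesized builtin callee
		return (len)(s)    // previously tripped the nil-obj ICE
	}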

Fixes #63436

Change-Id: Iebb4fbc08575e7d0b1dbd026c98e8f949ca16460
Reviewed-on: https://go-review.googlesource.com/c/go/+/533476
Auto-Submit: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Reviewed-by: Keith Randall <khr@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
2023-10-08 23:15:25 +00:00
Mark Ryan 561bf0457f cmd/compile: optimize right shifts of uint32 on riscv
The compiler is currently zero extending 32 bit unsigned integers to
64 bits before right shifting them using a 64 bit shift instruction.
There's no need to do this as RISC-V has instructions for right
shifting 32 bit unsigned values (srlw and srliw) which zero extend
the result of the shift to 64 bits.  Change the compiler so that
it uses srlw and srliw for 32 bit unsigned shifts, reducing in most
cases the number of instructions needed to perform the shift.

Here are some examples of code sequences that are changed by this
patch:

uint32(a) >> 2

  before:

    sll     x5,x10,0x20
    srl     x10,x5,0x22

  after:

    srlw    x10,x10,0x2

uint32(a) >> int(b)

  before:

    sll     x5,x10,0x20
    srl     x5,x5,0x20
    srl     x5,x5,x11
    sltiu   x6,x11,64
    neg     x6,x6
    and     x10,x5,x6

  after:

    srlw    x5,x10,x11
    sltiu   x6,x11,32
    neg     x6,x6
    and     x10,x5,x6

bits.RotateLeft32(uint32(a), 1)

  before:

    sll     x5,x10,0x1
    sll     x6,x10,0x20
    srl     x7,x6,0x3f
    or      x5,x5,x7

  after:

    sll     x5,x10,0x1
    srlw    x6,x10,0x1f
    or      x10,x5,x6

bits.RotateLeft32(uint32(a), int(b))

  before:

    and     x6,x11,31
    sll     x7,x10,x6
    sll     x8,x10,0x20
    srl     x8,x8,0x20
    add     x6,x6,-32
    neg     x6,x6
    srl     x9,x8,x6
    sltiu   x6,x6,64
    neg     x6,x6
    and     x6,x9,x6
    or      x6,x6,x7

  after:

    and     x5,x11,31
    sll     x6,x10,x5
    add     x5,x5,-32
    neg     x5,x5
    srlw    x7,x10,x5
    sltiu   x5,x5,32
    neg     x5,x5
    and     x5,x7,x5
    or      x10,x6,x5

The one regression observed is the following case, an unbounded right
shift of a uint32 where the value we're shifting by is known to be
< 64 but > 31.  As this is an unusual case, this commit does not
optimize for it, although the existing code does.

uint32(a) >> (b & 63)

  before:

    sll     x5,x10,0x20
    srl     x5,x5,0x20
    and     x6,x11,63
    srl     x10,x5,x6

  after:

    and     x5,x11,63
    srlw    x6,x10,x5
    sltiu   x5,x5,32
    neg     x5,x5
    and     x10,x6,x5

Here we have one extra instruction.

Some benchmark highlights, generated on a VisionFive2 8GB running
Ubuntu 23.04.

pkg: math/bits
LeadingZeros32-4    18.64n ± 0%     17.32n ± 0%   -7.11% (p=0.000 n=10)
LeadingZeros64-4    15.47n ± 0%     15.51n ± 0%   +0.26% (p=0.027 n=10)
TrailingZeros16-4   18.48n ± 0%     17.68n ± 0%   -4.33% (p=0.000 n=10)
TrailingZeros32-4   16.87n ± 0%     16.07n ± 0%   -4.74% (p=0.000 n=10)
TrailingZeros64-4   15.26n ± 0%     15.27n ± 0%   +0.07% (p=0.043 n=10)
OnesCount32-4       20.08n ± 0%     19.29n ± 0%   -3.96% (p=0.000 n=10)
RotateLeft-4        8.864n ± 0%     8.838n ± 0%   -0.30% (p=0.006 n=10)
RotateLeft32-4      8.837n ± 0%     8.032n ± 0%   -9.11% (p=0.000 n=10)
Reverse32-4         29.77n ± 0%     26.52n ± 0%  -10.93% (p=0.000 n=10)
ReverseBytes32-4    9.640n ± 0%     8.838n ± 0%   -8.32% (p=0.000 n=10)
Sub32-4             8.835n ± 0%     8.035n ± 0%   -9.06% (p=0.000 n=10)
geomean             11.50n          11.33n        -1.45%

pkg: crypto/md5
Hash8Bytes-4             1.486µ ± 0%   1.426µ ± 0%  -4.04% (p=0.000 n=10)
Hash64-4                 2.079µ ± 0%   1.968µ ± 0%  -5.36% (p=0.000 n=10)
Hash128-4                2.720µ ± 0%   2.557µ ± 0%  -5.99% (p=0.000 n=10)
Hash256-4                3.996µ ± 0%   3.733µ ± 0%  -6.58% (p=0.000 n=10)
Hash512-4                6.541µ ± 0%   6.072µ ± 0%  -7.18% (p=0.000 n=10)
Hash1K-4                 11.64µ ± 0%   10.75µ ± 0%  -7.58% (p=0.000 n=10)
Hash8K-4                 82.95µ ± 0%   76.32µ ± 0%  -7.99% (p=0.000 n=10)
Hash1M-4                10.436m ± 0%   9.591m ± 0%  -8.10% (p=0.000 n=10)
Hash8M-4                 83.50m ± 0%   76.73m ± 0%  -8.10% (p=0.000 n=10)
Hash8BytesUnaligned-4    1.494µ ± 0%   1.434µ ± 0%  -4.02% (p=0.000 n=10)
Hash1KUnaligned-4        11.64µ ± 0%   10.76µ ± 0%  -7.52% (p=0.000 n=10)
Hash8KUnaligned-4        83.01µ ± 0%   76.32µ ± 0%  -8.07% (p=0.000 n=10)
geomean                  28.32µ        26.42µ       -6.72%

Change-Id: I20483a6668cca1b53fe83944bee3706aadcf8693
Reviewed-on: https://go-review.googlesource.com/c/go/+/528975
Reviewed-by: Michael Pratt <mpratt@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: Joel Sing <joel@sing.id.au>
Run-TryBot: Joel Sing <joel@sing.id.au>
TryBot-Result: Gopher Robot <gobot@golang.org>
2023-10-07 12:31:38 +00:00
mstmdev be3d5fb6e6 sync: use atomic.Uint32 in Once
Change-Id: If9089f8afd78de3e62cd575f642ff96ab69e2099
GitHub-Last-Rev: 14165018d6
GitHub-Pull-Request: golang/go#63386
Reviewed-on: https://go-review.googlesource.com/c/go/+/532895
Auto-Submit: Ian Lance Taylor <iant@google.com>
Reviewed-by: Jorropo <jorropo.pgm@gmail.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Ian Lance Taylor <iant@google.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
Auto-Submit: Michael Pratt <mpratt@google.com>
2023-10-06 21:01:50 +00:00
David Chase b72bbaebf9 cmd/compile: expand calls cleanup
Convert expand calls into a smaller number of focused
recursive rewrites, and rely on an enhanced version of
"decompose" to clean up afterwards.

Debugging information seems to emerge intact.

Change-Id: Ic46da4207e3a4da5c8e2c47b637b0e35abbe56bb
Reviewed-on: https://go-review.googlesource.com/c/go/+/507295
Run-TryBot: David Chase <drchase@google.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
2023-10-06 20:57:33 +00:00
Keith Randall cbcf8efa5f cmd/compile: use cache in front of type assert runtime call
That way we don't need to call into the runtime for every
type assertion (to an interface type).
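
For example (a sketch, not from this CL), an assertion whose target is an
interface type used to call into the runtime every time it executed; it
now consults a per-call-site cache first:

	package p

	import "fmt"

	func asStringer(x any) (fmt.Stringer, bool) {
		s, ok := x.(fmt.Stringer) // assertion to an interface type
		return s, ok
	}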

name           old time/op  new time/op  delta
TypeAssert-24  3.78ns ± 3%  1.00ns ± 1%  -73.53%  (p=0.000 n=10+8)

Change-Id: I0ba308aaf0f24a5495b4e13c814d35af0c58bfde
Reviewed-on: https://go-review.googlesource.com/c/go/+/529316
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
2023-10-06 17:02:53 +00:00
Keith Randall 39263f34a3 cmd/compile: add a cache to interface type switches
That way we don't need to call into the runtime when the type being
switched on has been seen many times before.

The cache is just a hash table of a sample of all the concrete types
that have been switched on at that source location.  We record the
matching case number and the resulting itab for each concrete input
type.
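
A sketch of the kind of switch this applies to (the cases are interface
types, so each concrete input type needs an itab lookup that the cache
can now remember):

	package p

	import (
		"fmt"
		"io"
	)

	func classify(x any) string {
		switch x.(type) {
		case io.Reader:
			return "reader"
		case fmt.Stringer:
			return "stringer"
		default:
			return "other"
		}
	}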

The caches seldom get large. The only two in a run of all.bash that
get more than 100 entries, even with the sampling rate set to 1, are

test/fixedbugs/issue29264.go, with 101
test/fixedbugs/issue29312.go, with 254

Both happen at the type switch in fmt.(*pp).handleMethods, perhaps
unsurprisingly.

name                                 old time/op  new time/op  delta
SwitchInterfaceTypePredictable-24    25.8ns ± 2%   2.5ns ± 3%  -90.43%  (p=0.000 n=10+10)
SwitchInterfaceTypeUnpredictable-24  37.5ns ± 2%  11.2ns ± 1%  -70.02%  (p=0.000 n=10+10)

Change-Id: I4961ac9547b7f15b03be6f55cdcb972d176955eb
Reviewed-on: https://go-review.googlesource.com/c/go/+/526658
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Keith Randall <khr@google.com>
2023-10-06 15:44:08 +00:00
Keith Randall 28f4ea16a2 cmd/compile: improve interface type switches
For type switches where the targets are interface types,
call into the runtime once instead of doing a sequence
of assert* calls.

name                                 old time/op  new time/op  delta
SwitchInterfaceTypePredictable-24    26.6ns ± 1%  25.8ns ± 2%  -2.86%  (p=0.000 n=10+10)
SwitchInterfaceTypeUnpredictable-24  39.3ns ± 1%  37.5ns ± 2%  -4.57%  (p=0.000 n=10+10)

Not super helpful by itself, but this code organization allows
follow-on CLs that add caching to the lookups.

Change-Id: I7967f85a99171faa6c2550690e311bea8b54b01c
Reviewed-on: https://go-review.googlesource.com/c/go/+/526657
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Reviewed-by: Keith Randall <khr@google.com>
2023-10-06 15:42:30 +00:00
Cuong Manh Le 38db0316f4 cmd/compile: do not fatal when typechecking conversion expression
The types2 typechecker already reports all invalid conversions required
by the Go language spec. However, conversions involving go pragmas are
not specified in the spec, so they are not checked by types2.

Fix this by handling the error gracefully during typecheck, just like
the old typechecker did before CL 394575.

Fixes #63333

Change-Id: I04c4121971c62d96f75ded1794ab4bdf3a6cd0ea
Reviewed-on: https://go-review.googlesource.com/c/go/+/532515
Auto-Submit: Cuong Manh Le <cuong.manhle.vn@gmail.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Keith Randall <khr@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Run-TryBot: Cuong Manh Le <cuong.manhle.vn@gmail.com>
2023-10-05 19:44:52 +00:00
Paul E. Murphy dcd018b5c5 cmd/internal/obj/ppc64: generate MOVD mask constants in register
Add a new form of RLDC which maps directly to the ISA definition
of rldc: RLDC Rs, $sh, $mb, Ra. This is used to generate mask
constants described below.

Using MOVD $-1, Rx; RLDC Rx, $sh, $mb, Rx, any mask constant
can be generated. A mask is a contiguous series of 1 bits, which
may wrap.

Change-Id: Ifcaae1114080ad58b5fdaa3e5fc9019e2051f282
Reviewed-on: https://go-review.googlesource.com/c/go/+/531120
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
Reviewed-by: Lynn Boger <laboger@linux.vnet.ibm.com>
Run-TryBot: Paul Murphy <murp@ibm.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
2023-10-05 14:03:32 +00:00
Paul E. Murphy 36ecff0893 cmd/internal/obj/ppc64: generate small, shifted constants in register
Check for shifted 16b constants, and transform them to avoid the load
penalty. This should be much faster than loading, and reduce binary
size by reducing the constant pool size.

Change-Id: I6834e08be7ca88e3b77449d226d08d199db84299
Reviewed-on: https://go-review.googlesource.com/c/go/+/531119
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Lynn Boger <laboger@linux.vnet.ibm.com>
Reviewed-by: Michael Pratt <mpratt@google.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Run-TryBot: Paul Murphy <murp@ibm.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
2023-10-04 19:10:19 +00:00
Jes Cok 81c5d92f52 all: use the indefinite article an in comments
Change-Id: I8787458f9ccd3b5cdcdda820d8a45deb4f77eade
GitHub-Last-Rev: be865d67ef
GitHub-Pull-Request: golang/go#63165
Reviewed-on: https://go-review.googlesource.com/c/go/+/530120
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Ian Lance Taylor <iant@google.com>
Auto-Submit: Ian Lance Taylor <iant@google.com>
Reviewed-by: Than McIntosh <thanm@google.com>
2023-09-25 14:29:30 +00:00
Paul E. Murphy c8caad423c cmd/compile/internal/ssa: optimize (AND (MOVDconst [-1] x)) on PPC64
This sequence can show up in the lowering pass on PPC64. If it
makes it to the latelower pass, it will cause an error because
it looks like it can be turned into RLDICL, but -1 isn't an
accepted mask.

Also, print more debug info if panic is called from
encodePPC64RotateMask.

Fixes #62698

Change-Id: I0f3322e2205357abe7fc28f96e05e3f7ad65567c
Reviewed-on: https://go-review.googlesource.com/c/go/+/529195
Reviewed-by: Lynn Boger <laboger@linux.vnet.ibm.com>
Run-TryBot: Paul Murphy <murp@ibm.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
2023-09-22 14:09:29 +00:00
eric fang ace1494d92 cmd/compile: optimize absorbing InvertFlags into Noov comparisons for arm64
Previously, (LessThanNoov (InvertFlags x)) was lowered as:
CSET
CSET
BIC
With this CL it's lowered as:
CSET
CSEL
This saves one instruction.

Similarly (GreaterEqualNoov (InvertFlags x)) is now lowered as:
CSET
CSINC

$ benchstat old.bench new.bench
goos: linux
goarch: arm64
                       │  old.bench  │             new.bench              │
                       │   sec/op    │   sec/op     vs base               │
InvertLessThanNoov-160   2.249n ± 2%   2.190n ± 1%  -2.62% (p=0.003 n=10)

Change-Id: Idd8979b7f4fe466e74b1a201c4aba7f1b0cffb0b
Reviewed-on: https://go-review.googlesource.com/c/go/+/526237
Reviewed-by: Heschi Kreinick <heschi@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Run-TryBot: Eric Fang <eric.fang@arm.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
2023-09-21 02:36:06 +00:00
Russ Cox 2fba42cb52 cmd/compile: implement range over func
Add compiler support for range over functions.
See the large comment at the top of
cmd/compile/internal/rangefunc/rewrite.go for details.

This is only reachable if GOEXPERIMENT=range is set,
because otherwise type checking will fail.
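
A sketch of the shape of code this enables (the yield-based iterator
signature follows proposal #61405; GOEXPERIMENT=range is assumed):

	package p

	func ints(n int) func(yield func(int) bool) {
		return func(yield func(int) bool) {
			for i := 0; i < n; i++ {
				if !yield(i) {
					return
				}
			}
		}
	}

	func sum(n int) (total int) {
		for v := range ints(n) { // range over a function value
			total += v
		}
		return total
	}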

For proposal #61405 (but behind a GOEXPERIMENT).
For #61717.

Change-Id: I05717f94e63089c503acc49b28b47edeb4e011b4
Reviewed-on: https://go-review.googlesource.com/c/go/+/510541
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Auto-Submit: Russ Cox <rsc@golang.org>
2023-09-20 14:52:38 +00:00
Russ Cox a94347a05c cmd/compile: implement range over integer
Add compiler implementation of range over integers.
This is only reachable if GOEXPERIMENT=range is set,
because otherwise type checking will fail.
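
For illustration (a sketch, assuming the semantics of proposal #61405,
where "range n" yields the values 0 through n-1):

	package p

	func sum() (total int) {
		for i := range 10 { // i takes the values 0 through 9
			total += i
		}
		return total
	}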

For proposal #61405 (but behind a GOEXPERIMENT).
For #61717.

Change-Id: I4e35a73c5df1ac57f61ffb54033a433967e5be51
Reviewed-on: https://go-review.googlesource.com/c/go/+/510538
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Auto-Submit: Russ Cox <rsc@golang.org>
2023-09-20 14:52:33 +00:00
Russ Cox 8b727f856e cmd/compile, go/types: typechecking of range over int, func
Add type-checking logic for range over integers and functions,
behind GOEXPERIMENT=range.

For proposal #61405 (but behind a GOEXPERIMENT).
For #61717.

Change-Id: Ibf78cf381798b450dbe05eb922df82af2b009403
Reviewed-on: https://go-review.googlesource.com/c/go/+/510537
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Auto-Submit: Russ Cox <rsc@golang.org>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Robert Findley <rfindley@google.com>
2023-09-20 14:52:29 +00:00
Than McIntosh 0b07bbd2be cmd/compile/internal/inl: inline based on scoring when GOEXPERIMENT=newinliner
This patch changes the inliner to use callsite scores when deciding to
inline as opposed to looking only at callee cost/hairyness.

For this to work, we have to relax the inline budget cutoff as part of
CanInline to allow for the possibility that a given function might
start off with a cost of N where N > 80, but then be called from a
callsite whose score is less than 80. Once a given function F in
package P has been approved by CanInline (based on the relaxed budget),
it will then be emitted as part of the export data, meaning that other
packages importing P will also need to compute callsite scores
appropriately.

For a function F that calls function G, if G is marked as potentially
inlinable then the hairyness computation for F will use G's cost for
the call to G as opposed to the default call cost; for this to work
with the new scheme (given the relaxed cost change described above) we
use G's cost only if it falls below inlineExtraCallCost, otherwise
just use inlineExtraCallCost.

Included in this patch are a bunch of skips and workarounds to
selected 'errorcheck' tests in the <GOROOT>/test directory to deal
with the additional "can inline" messages emitted when the new inliner
is turned on.

Change-Id: I9be5f8cd0cd8676beb4296faf80d2f6be7246335
Reviewed-on: https://go-review.googlesource.com/c/go/+/519197
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
2023-09-14 19:43:26 +00:00
Matthew Dempsky d18e9407b0 cmd/compile/internal/ir: add Func.DeclareParams
There's several copies of this function. We only need one.

While here, normalize so that we always declare parameters, and always
use the names ~pNN for params and ~rNN for results.

Change-Id: I49e90d3fd1820f3c07936227ed5cfefd75d49a1c
Reviewed-on: https://go-review.googlesource.com/c/go/+/528415
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Auto-Submit: Matthew Dempsky <mdempsky@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Than McIntosh <thanm@google.com>
2023-09-14 13:15:50 +00:00
Yi Yang 4ee1d542ed cmd/compile: sparse conditional constant propagation
Sparse conditional constant propagation can discover optimization
opportunities that cannot be found by applying constant folding,
constant propagation, and dead code elimination separately.
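
A textbook-style sketch (not from this CL) of what the combined analysis
can prove: x is 1 on every reachable path, but seeing that requires
ignoring the unreachable assignment while propagating constants.

	package p

	func f(n int) int {
		x := 1
		for i := 0; i < n; i++ {
			if x != 1 { // provably false once x is known to be constant
				x = 2 // unreachable, so it never invalidates x == 1
			}
		}
		return x // SCCP can reduce this to: return 1
	}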

This is a re-submit of PR #59575, which fixes a broken dominance relationship caught by ssacheck.

Updates https://github.com/golang/go/issues/59399

Change-Id: I57482dee38f8e80a610aed4f64295e60c38b7a47
GitHub-Last-Rev: 830016f24e
GitHub-Pull-Request: golang/go#60469
Reviewed-on: https://go-review.googlesource.com/c/go/+/498795
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: Heschi Kreinick <heschi@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
2023-09-12 21:01:50 +00:00
Matthew Dempsky 2e457b3868 cmd/compile/internal/noder: handle unsafe.Sizeof, etc in unified IR
Previously, the unified frontend implemented unsafe.Sizeof, etc. that
involved derived types by constructing a normal OSIZEOF, etc.
expression, including fully instantiating the argument. (When
unsafe.Sizeof is applied to a non-generic type, types2 handles
constant folding it.)

This worked, but involved unnecessary work, since all we really need
to track is the argument type (and the field selections, for
unsafe.Offsetof).
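
For illustration (a sketch, not from this CL), a "derived type" here is
one that depends on a type parameter, so types2 cannot constant-fold the
expression up front:

	package p

	import "unsafe"

	func size[T any]() uintptr {
		var v T
		return unsafe.Sizeof(v) // depends on T; resolved per instantiation
	}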

Further, the argument expression could generate temporary variables,
which would then go unused after typecheck replaced the OSIZEOF
expression with an OLITERAL. This results in compiler failures after
CL 523315, which made later passes stricter about expecting the
frontend to not construct unused temporaries.

Fixes #62515.

Change-Id: I37baed048fd2e35648c59243f66c97c24413aa94
Reviewed-on: https://go-review.googlesource.com/c/go/+/527097
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Auto-Submit: Matthew Dempsky <mdempsky@google.com>
2023-09-11 20:48:07 +00:00
Matthew Dempsky afa3f8e104 cmd/compile/internal/staticinit: make staticopy safe
Currently, cmd/compile optimizes `var a = true; var b = a` into `var a
= true; var b = true`. But this may not be safe if we need to
initialize any other global variables between `a` and `b`, and the
initialization involves calling a user-defined function that reassigns
`a`.
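
A sketch of the unsafe pattern (names invented for illustration): the
initializer for c runs between those of a and b and reassigns a, so b
must not be statically copied from a's original value.

	package p

	var a = true
	var c = reassign() // runs between a's and b's initializers
	var b = a          // must observe a == false, so no static copy of "true"

	func reassign() int {
		a = false
		return 0
	}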

This CL changes staticinit to keep track of the initialization
expressions that we've seen so far, and to stop applying the
staticcopy optimization once we've seen an initialization expression
that might have modified another global variable within this package.

To help identify affected initializers, this CL adds a -d=staticcopy
flag to warn when a staticcopy is suppressed and turned into a dynamic
copy.

Currently, `go build -gcflags=all=-d=staticcopy std` reports only four
instances:

```
encoding/xml/xml.go:1600:5: skipping static copy of HTMLEntity+0 with map[string]string{...}
encoding/xml/xml.go:1869:5: skipping static copy of HTMLAutoClose+0 with []string{...}
net/net.go:661:5: skipping static copy of .stmp_31+0 with poll.ErrNetClosing
net/http/transport.go:2566:5: skipping static copy of errRequestCanceled+0 with ~R0
```

Fixes #51913.

Change-Id: Iab41cf6f84c44f7f960e4e62c28a8aeaade4fbcf
Reviewed-on: https://go-review.googlesource.com/c/go/+/395541
Auto-Submit: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Reviewed-by: David Chase <drchase@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Heschi Kreinick <heschi@google.com>
2023-09-11 20:12:05 +00:00
Matthew Dempsky e3ce312621 cmd/compile/internal/typecheck: fix closure field naming
When creating the struct type to hold variables captured by a function
literal, we currently reuse the captured variable names as fields.

However, there's no particular reason to do this: these struct types
aren't visible to users, and it adds extra complexity in making sure
fields belong to the correct packages.

Further, it turns out we were getting that subtly wrong. If two
function literals from different packages capture variables with
identical names starting with an uppercase letter (in the same
order and with corresponding identical types), and those literals
end up in the same function (e.g., due to inlining), then we could
create closure struct types that are "different" (i.e., not
types.Identical) yet have equal LinkString representations (which
violates LinkString's contract).

The easy fix is to just always use simple, exported, generated field
names in the struct. This should allow further struct reuse across
packages too, and shrink binary sizes slightly.

Fixes #62498.

Change-Id: I9c973f5087bf228649a8f74f7dc1522d84a26b51
Reviewed-on: https://go-review.googlesource.com/c/go/+/527135
Auto-Submit: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Reviewed-by: Keith Randall <khr@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
2023-09-11 16:02:11 +00:00
Matthew Dempsky 18c6ec1e4a cmd/compile/internal/noder: stop preserving original const strings
One of the more tedious quirks of the original frontend (i.e.,
typecheck) to preserve was that it carried the original
representation of constants through to the backend. To fit into the unified
IR model, I ended up implementing a fairly heavyweight workaround:
simply record the original constant's string expression in the export
data, so that diagnostics could still report it back, and match the
old test expectations.

But now that there's just a single frontend to support, it's easy
enough to just update the test expectations and drop this support for
"raw" constant expressions.

Change-Id: I1d859c5109d679879d937a2b213e777fbddf4f2f
Reviewed-on: https://go-review.googlesource.com/c/go/+/526376
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Auto-Submit: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
2023-09-08 18:50:24 +00:00
Keith Randall fb5bdb4cc9 cmd/compile: absorb InvertFlags into Noov comparisons
Unfortunately, there isn't a single op that provides the resulting
computation.
At least, I couldn't find one.

Fixes #62469

Change-Id: I236f3965b827aaeb3d70ef9fe89be66b116494f5
Reviewed-on: https://go-review.googlesource.com/c/go/+/526276
LUCI-TryBot-Result: Go LUCI <golang-scoped@luci-project-accounts.iam.gserviceaccount.com>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: Keith Randall <khr@google.com>
2023-09-07 15:14:39 +00:00
Paul E. Murphy 5cdb132228 cmd/compile/internal/ssa: improve masking codegen on PPC64
Generate RLDIC[LR] instead of MOVD mask, Rx; AND Rx, Ry, Rz.
This helps reduce code size, and reduces the latency caused
by the constant load.

Similarly, for smaller-than-register values, truncate constants
which exceed the range of the value's type to avoid needing to
load a constant.

Change-Id: I6019684795eb8962d4fd6d9585d08b17c15e7d64
Reviewed-on: https://go-review.googlesource.com/c/go/+/515576
Reviewed-by: Lynn Boger <laboger@linux.vnet.ibm.com>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
Run-TryBot: Paul Murphy <murp@ibm.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
2023-09-06 16:34:20 +00:00
Keith Randall 0e02baa59a Revert "cmd/compile: use shorter ANDL/TESTL if upper 32 bits are known to be zero"
This reverts commit c1dfbf72e1.

Reason for revert: TESTL rule is wrong when the result is used for an ordered comparison.

Fixes #62360

Change-Id: I4d5b6aca24389b0a2bf767bfbc0a9d085359eb38
Reviewed-on: https://go-review.googlesource.com/c/go/+/524255
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: Jakub Ciolek <jakub@ciolek.dev>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Reviewed-by: Bryan Mills <bcmills@google.com>
2023-08-30 15:15:28 +00:00
Matthew Dempsky 4f97a7e6ea cmd/compile/internal/ir: set Addrtaken on Canonical ONAME too
In CL 522879, I moved the logic for setting Addrtaken from typecheck's
markAddrOf and ComputeAddrtaken directly into ir.NewAddrExpr. However,
I took the logic from markAddrOf, and failed to notice that
ComputeAddrtaken also set Addrtaken on the canonical ONAME.

The result is that if the only address-of expressions were within a
function literal, the canonical variable never got marked Addrtaken.
In turn, this could cause the consistency check in ir.Reassigned to
fail. (Yay for consistency checks turning mistakes into ICEs, rather
than miscompilation.)

Fixes #62313.

Change-Id: Ieab2854cd7fcc1b6c5d1e61de66453add9890a4f
Reviewed-on: https://go-review.googlesource.com/c/go/+/523375
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Reviewed-by: Bryan Mills <bcmills@google.com>
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
Auto-Submit: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
2023-08-28 14:32:14 +00:00
Keith Randall b303fb4855 runtime: fix maps.Clone bug when cloning a map mid-grow
Fixes #62203

Change-Id: I0459d3f481b0cd20102f6d9fd3ea84335a7739a8
Reviewed-on: https://go-review.googlesource.com/c/go/+/522317
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Reviewed-by: Keith Randall <khr@google.com>
Reviewed-by: Bryan Mills <bcmills@google.com>
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
2023-08-25 16:10:03 +00:00
Bryan C. Mills 5374c1aaf5 cmd/internal/testdir: parse past gofmt'd //go:build lines
Also gofmt a test file to make sure the parser works.

Fixes #62267.

Change-Id: I9b9f12b06bae7df626231000879b5ed7df3cd9ba
Reviewed-on: https://go-review.googlesource.com/c/go/+/522635
Reviewed-by: Ian Lance Taylor <iant@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Auto-Submit: Bryan Mills <bcmills@google.com>
Run-TryBot: Bryan Mills <bcmills@google.com>
2023-08-24 17:52:48 +00:00
Bryan C. Mills fe2c686b63 test/fixedbugs: require cgo for issue10607.go
This test passes "-linkmode=external" to 'go run' to link the binary
using the system C linker.

CGO_ENABLED=0 explicitly tells cmd/go not to use the C toolchain,
so the test should not be run in that configuration.

Updates #46330.

Change-Id: I16ac66aac91178045f9decaeb28134061e9711f7
Reviewed-on: https://go-review.googlesource.com/c/go/+/522495
Reviewed-by: Heschi Kreinick <heschi@google.com>
Reviewed-by: Than McIntosh <thanm@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Auto-Submit: Bryan Mills <bcmills@google.com>
Run-TryBot: Bryan Mills <bcmills@google.com>
2023-08-24 17:19:11 +00:00
Matthew Dempsky f92741b1d8 cmd/compile/internal/noder: elide statically known "if" statements
In go.dev/cl/517775, I moved the frontend's deadcode elimination pass
into unified IR. But I also made a small enhancement: a branch like
"if x || true" is now detected as always taken, so the else branch can
be eliminated.

However, the inliner also has an optimization for delaying the
introduction of the result temporary variables when there's a single
return statement (added in go.dev/cl/266199). Consequently, the
inliner turns "if x || true { return true }; return true" into:

	if x || true {
		~R0 := true
		goto .i0
	}
	.i0:
	// code that uses ~R0

In turn, this confuses phi insertion, because it doesn't recognize
that the "if" statement is always taken, and so ~R0 will always be
initialized.

With this CL, after inlining we instead produce:

	_ = x || true
	~R0 := true
	goto .i0
	.i0:

Fixes #62211.

Change-Id: Ic8a12c9eb85833ee4e5d114f60e6c47817fce538
Reviewed-on: https://go-review.googlesource.com/c/go/+/522096
Reviewed-by: Than McIntosh <thanm@google.com>
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Auto-Submit: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
2023-08-23 18:01:55 +00:00
Keith Randall 4711299661 cmd/compile: use jump tables for large type switches
For large interface -> concrete type switches, we can use a jump
table on some bits of the type hash instead of a binary search on
the type hash.
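
A sketch of the kind of switch affected (an interface value switched over
many concrete types; with enough cases, bits of the type hash index a
jump table rather than driving a binary search):

	package p

	func kind(x any) string {
		switch x.(type) {
		case int:
			return "int"
		case int8:
			return "int8"
		case int16:
			return "int16"
		case int32:
			return "int32"
		case int64:
			return "int64"
		case uint:
			return "uint"
		case float64:
			return "float64"
		case string:
			return "string"
		default:
			return "other"
		}
	}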

name                        old time/op  new time/op  delta
SwitchTypePredictable-24    1.99ns ± 2%  1.78ns ± 5%  -10.87%  (p=0.000 n=10+10)
SwitchTypeUnpredictable-24  11.0ns ± 1%   9.1ns ± 2%  -17.55%  (p=0.000 n=7+9)

Change-Id: Ida4768e5d62c3ce1c2701288b72664aaa9e64259
Reviewed-on: https://go-review.googlesource.com/c/go/+/521497
Reviewed-by: Keith Randall <khr@google.com>
Auto-Submit: Keith Randall <khr@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Run-TryBot: Keith Randall <khr@golang.org>
2023-08-23 00:30:54 +00:00
Keith Randall 556e9c5f3e cmd/compile: allow non-pointer writes in the middle of a write barrier
This lets us combine more write barriers, getting rid of some of the
test+branch and gcWriteBarrier* calls.
With the new write barriers, it's easy to add a few non-pointer writes
to the set of values written.

We allow up to 2 non-pointer writes between pointer writes. This is enough
for, for example, adjacent slice fields.
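
A sketch of the adjacent-slice-fields case (not from this CL): between
the two pointer stores there are only the len/cap stores of the first
slice, which can now sit inside a single combined write barrier.

	package p

	type pair struct {
		a []byte // one pointer word followed by two non-pointer words (len, cap)
		b []byte
	}

	func set(p *pair, a, b []byte) {
		p.a = a
		p.b = b
	}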

Fixes #62126

Change-Id: I872d0fa9cc4eb855e270ffc0223b39fde1723c4b
Reviewed-on: https://go-review.googlesource.com/c/go/+/521498
TryBot-Result: Gopher Robot <gobot@golang.org>
Run-TryBot: Keith Randall <khr@golang.org>
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: Keith Randall <khr@google.com>
2023-08-23 00:16:06 +00:00
Austin Clements 596120fdc6 cmd/compile: redo IsRuntimePkg/IsReflectPkg predicate
Currently, the types package has IsRuntimePkg and IsReflectPkg
predicates for testing if a Pkg is the runtime or reflect packages.
IsRuntimePkg returns "true" for any "CompilingRuntime" package, which
includes all of the packages imported by the runtime. This isn't
inherently wrong, except that all but one use of it is of the form "is
this Sym a specific runtime.X symbol?" for which we clearly only want
the package "runtime" itself. IsRuntimePkg was introduced (as
isRuntime) in CL 37538 as part of separating the real runtime package
from the compiler built-in fake runtime package. As of that CL, the
"runtime" package couldn't import any other packages, so this was
adequate at the time.

We could fix this by just changing the implementation of IsRuntimePkg,
but the meaning of this API is clearly somewhat ambiguous. Instead, we
replace it with a new RuntimeSymName function that returns the name of
a symbol if it's in package "runtime", or "" if not. This is what
every call site (except one) actually wants, which lets us simplify
the callers, and also more clearly addresses the ambiguity between
package "runtime" and the general concept of a runtime package.

IsReflectPkg doesn't have the same issue of ambiguity, but it
parallels IsRuntimePkg and is used in the same way, so we replace it
with a new ReflectSymName for consistency.

Change-Id: If3a81d7d11732a9ab2cac9488d17508415cfb597
Reviewed-on: https://go-review.googlesource.com/c/go/+/521696
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Reviewed-by: Matthew Dempsky <mdempsky@google.com>
Run-TryBot: Austin Clements <austin@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
2023-08-22 19:18:21 +00:00
Meng Zhuo 63ab68ddc5 cmd/compile: add single-precision FMA code generation for riscv64
This CL adds FMADDS,FMSUBS,FNMADDS,FNMSUBS SSA support for riscv

Change-Id: I1e7dd322b46b9e0f4923dbba256303d69ed12066
Reviewed-on: https://go-review.googlesource.com/c/go/+/506616
Reviewed-by: Joel Sing <joel@sing.id.au>
Reviewed-by: David Chase <drchase@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
Run-TryBot: M Zhuo <mzh@golangcn.org>
2023-08-22 12:05:36 +00:00
Meng Zhuo 05f9511582 cmd/compile: improve FP FMA performance on riscv64
FMADD/FMSUB/FNMSUB are efficient FP FMA instructions, which the compiler
can use to improve FP performance.

Erf               188.0n ± 2%   139.5n ± 2%  -25.82% (p=0.000 n=10)
Erfc              193.6n ± 1%   143.2n ± 1%  -26.01% (p=0.000 n=10)
Erfinv            244.4n ± 2%   172.6n ± 0%  -29.40% (p=0.000 n=10)
Erfcinv           244.7n ± 2%   173.0n ± 1%  -29.31% (p=0.000 n=10)
geomean           216.0n        156.3n       -27.65%

Ref: The RISC-V Instruction Set Manual Volume I: Unprivileged ISA
11.6 Single-Precision Floating-Point Computational Instructions

Change-Id: I89aa3a4df7576fdd47f4a6ee608ac16feafd093c
Reviewed-on: https://go-review.googlesource.com/c/go/+/506036
Reviewed-by: Joel Sing <joel@sing.id.au>
Run-TryBot: M Zhuo <mzh@golangcn.org>
Reviewed-by: David Chase <drchase@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
2023-08-22 08:38:08 +00:00
Matthew Dempsky fecf51717f cmd/compile/internal/walk: reuse runtime.scase
Shaves ~1.5kB off cmd/go binary.

Change-Id: I8ad85aa4a24bc197b009c8e1ea9201957222152a
Reviewed-on: https://go-review.googlesource.com/c/go/+/521677
Reviewed-by: Keith Randall <khr@golang.org>
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
Auto-Submit: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
2023-08-21 23:34:34 +00:00
Matthew Dempsky 42f4ccb6f9 cmd/compile/internal/reflectdata: share hmap and hiter types
There's no need for distinct hmap and hiter types for each map.

Shaves 9kB off cmd/go binary size.

Change-Id: I7bc3b2d8ec82e7fcd78c1cb17733ebd8b615990a
Reviewed-on: https://go-review.googlesource.com/c/go/+/521615
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
Reviewed-by: Keith Randall <khr@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Auto-Submit: Matthew Dempsky <mdempsky@google.com>
2023-08-21 23:29:33 +00:00
Keith Randall 0b47b94a62 cmd/compile: remove more extension ops when not needed
If we're not using the upper bits, don't bother issuing a
sign/zero extension operation.

For arm64, this follows CL 520916, which fixed a correctness bug with
extensions but as a side effect left many unnecessary ones
still in place.

Change-Id: I5f4fe4efbf2e9f80969ab5b9a6122fb812dc2ec0
Reviewed-on: https://go-review.googlesource.com/c/go/+/521496
Reviewed-by: Cherry Mui <cherryyz@google.com>
Reviewed-by: Keith Randall <khr@google.com>
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
2023-08-21 20:52:15 +00:00
Matthew Dempsky d63c88d695 cmd/compile: enable -d=zerocopy by default
Fixes #2205.

Change-Id: Ib0802fee2b274798b35f0ebbd0b736b1be5ae00a
Reviewed-on: https://go-review.googlesource.com/c/go/+/520600
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Than McIntosh <thanm@google.com>
Auto-Submit: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
2023-08-18 11:59:42 +00:00
Matthew Dempsky 925d2fb36c cmd/compile: restore zero-copy string->[]byte optimization
This CL implements the remainder of the zero-copy string->[]byte
conversion optimization initially attempted in go.dev/cl/520395, but
fixes the tracking of mutations due to ODEREF/ODOTPTR assignments, and
adds more comprehensive tests that I should have included originally.
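
A sketch of a conversion that can qualify (assuming escape analysis can
prove the []byte is never mutated and does not escape, so it may alias
the string's bytes instead of copying them):

	package p

	func equalBytes(s string, b []byte) bool {
		sb := []byte(s) // candidate for the zero-copy conversion
		if len(sb) != len(b) {
			return false
		}
		for i := range sb {
			if sb[i] != b[i] {
				return false
			}
		}
		return true
	}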

However, this CL also keeps it behind the -d=zerocopy flag. The next
CL will enable it by default (for easier rollback).

Updates #2205.

Change-Id: Ic330260099ead27fc00e2680a59c6ff23cb63c2b
Reviewed-on: https://go-review.googlesource.com/c/go/+/520599
Auto-Submit: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Than McIntosh <thanm@google.com>
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
2023-08-18 11:58:37 +00:00
Matthew Dempsky b75ac7a8d5 Revert "cmd/compile: enable zero-copy string->[]byte conversions"
This reverts CL 520395.

Reason for revert: thanm@ pointed out failure cases.

Change-Id: I3fd60b73118be3652be2c08b77ab39e793b42110
Reviewed-on: https://go-review.googlesource.com/c/go/+/520596
Reviewed-by: Dmitri Shuralyov <dmitshur@golang.org>
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Than McIntosh <thanm@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
Auto-Submit: Matthew Dempsky <mdempsky@google.com>
2023-08-17 20:02:02 +00:00
Matthew Dempsky c8adb3004f cmd/compile: enable zero-copy string->[]byte conversions
This CL enables the latent support for string->[]byte conversions
added go.dev/cl/520259.

One catch is that we need to make sure []byte("") evaluates to a
non-nil slice, even if "" is (nil, 0). This CL addresses that by
adding a "ptr != nil" check for OSTR2BYTESTMP, unless the NonNil flag
is set.
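
For example (a sketch of the invariant being preserved):

	package p

	func nonNil() bool {
		b := []byte("")
		return b != nil // must be true, even with the zero-copy conversion
	}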

The existing uses of OSTR2BYTESTMP (which aren't concerned about
[]byte("") evaluating to nil) are updated to set this flag.

Fixes #2205.

Change-Id: I35a9cb16c164cd86156b7560915aba5108d8b523
Reviewed-on: https://go-review.googlesource.com/c/go/+/520395
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Auto-Submit: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
2023-08-17 19:37:36 +00:00
Matthew Dempsky 2c51ea11b0 cmd/compile/internal/typecheck: push ONEW into go/defer wrappers
Currently, we rewrite:

	go f(new(T))

into:

	tmp := new(T)
	go func() { f(tmp) }()

However, we can both shrink the closure and improve escape analysis by
instead rewriting it into:

	go func() { f(new(T)) }()

This CL does that.

Change-Id: Iae16a476368da35123052ca9ff41c49159980458
Reviewed-on: https://go-review.googlesource.com/c/go/+/520340
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
Auto-Submit: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
2023-08-17 19:37:04 +00:00
Matthew Dempsky 7e2e648a2d cmd/compile/internal/typecheck: normalize go/defer statements earlier
Normalizing go/defer statements to always use functions with zero
parameters and zero results was added to escape analysis, because that
was the earliest point at which all three frontends converged. Now
that we only have the unified frontend, we can do it during typecheck,
which is where we perform all other desugaring and normalization
rewrites.

Change-Id: Iebf7679b117fd78b1dffee2974bbf85ebc923b23
Reviewed-on: https://go-review.googlesource.com/c/go/+/520260
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
Auto-Submit: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
2023-08-17 19:36:58 +00:00
Matthew Dempsky f4e6815652 cmd/compile/internal/noder: remove inlined closure naming hack
I previously used a clumsy hack to copy Closgen back and forth while
inlining, to handle when an inlined function contains closures, which
need to each be uniquely numbered.

The real solution was to name the closures using r.inlCaller, rather
than r.curfn. This CL adds a helper method to do exactly this.

Change-Id: I510553b5d7a8f6581ea1d21604e834fd6338cb06
Reviewed-on: https://go-review.googlesource.com/c/go/+/520339
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
Auto-Submit: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
2023-08-17 19:36:29 +00:00
Matthew Dempsky ff47dd1d66 cmd/compile/internal/escape: optimize indirect closure calls
This CL extends escape analysis in two ways.

First, we already optimize directly called closures. For example,
given:

	var x int  // already stack allocated today
	p := func() *int { return &x }()

we don't need to move x to the heap, because we can statically track
where &x flows. This CL extends the same idea to work for indirectly
called closures too, as long as we know everywhere that they're
called. For example:

	var x int  // stack allocated after this CL
	f := func() *int { return &x }
	p := f()

This will allow a subsequent CL to move the generation of go/defer
wrappers earlier.

Second, this CL adds tracking to detect when pointer values flow to
the pointee operand of an indirect assignment statement (i.e., flows
to p in "*p = x") or to builtins that modify memory (append, copy,
clear). This isn't utilized in the current CL, but a subsequent CL
will make use of it to better optimize string->[]byte conversions.

Updates #2205.

Change-Id: I610f9c531e135129c947684833e288ce64406f35
Reviewed-on: https://go-review.googlesource.com/c/go/+/520259
Run-TryBot: Matthew Dempsky <mdempsky@google.com>
TryBot-Result: Gopher Robot <gobot@golang.org>
Auto-Submit: Matthew Dempsky <mdempsky@google.com>
Reviewed-by: Cuong Manh Le <cuong.manhle.vn@gmail.com>
Reviewed-by: Dmitri Shuralyov <dmitshur@google.com>
2023-08-17 16:36:09 +00:00
David Chase e72ecc6a6b cmd/compile: in expandCalls, move all arg marshalling into call block
For aggregate-typed arguments passed to a call, expandCalls
decomposed them into parts in the same block where the value
was created.  This is not necessarily the call block, and in
the case where stores are involved, can change the memory
leaving that block, and getting that right is problematic.

Instead, do all the expanding in the same block as the call,
which avoids the problems of (1) not being able to reorder
loads/stores across a block boundary to conform to memory
order and (2) (incorrectly, not) exposing the new memory to
consumers in other blocks.  Putting it all in the same block
as the call allows reordering, and the call creates its own
new memory (which is already dealt with correctly).

Fixes #61992.

Change-Id: Icc7918f0d2dd3c480cc7f496cdcd78edeca7f297
Reviewed-on: https://go-review.googlesource.com/c/go/+/519276
Reviewed-by: Keith Randall <khr@google.com>
Run-TryBot: David Chase <drchase@google.com>
Reviewed-by: Keith Randall <khr@golang.org>
TryBot-Result: Gopher Robot <gobot@golang.org>
2023-08-16 15:40:52 +00:00