mirror of https://github.com/golang/go synced 2024-07-01 07:56:09 +00:00

[dev.typeparams] merge: merge branch 'dev.regabi' into 'dev.typeparams'

The following files had merge conflicts and were merged manually:

	src/cmd/compile/fmtmap_test.go
	src/cmd/compile/internal/gc/noder.go
	src/go/parser/error_test.go
	test/assign.go
	test/chan/perm.go
	test/fixedbugs/issue22822.go
	test/fixedbugs/issue4458.go
	test/init.go
	test/interface/explicit.go
	test/map1.go
	test/method2.go

The following files had manual changes to make tests pass:

	test/run.go
	test/used.go
	src/cmd/compile/internal/types2/stdlib_test.go

Change-Id: Ia495aaaa80ce321ee4ec2a9105780fbe913dbd4c
This commit is contained in:
Robert Griesemer 2020-12-14 11:53:55 -08:00
commit 91803a2df3
596 changed files with 15899 additions and 9730 deletions


@@ -26,12 +26,12 @@ Do not send CLs removing the interior tags from such phrases.
<h2 id="language">Changes to the language</h2>
<p>
TODO
There are no changes to the language.
</p>
<h2 id="ports">Ports</h2>
<h3 id="darwin">Darwin</h3>
<h3 id="darwin">Darwin and iOS</h3>
<p><!-- golang.org/issue/38485, golang.org/issue/41385, CL 266373, more CLs -->
Go 1.16 adds support of 64-bit ARM architecture on macOS (also known as
@@ -43,15 +43,24 @@ Do not send CLs removing the interior tags from such phrases.
</p>
<p><!-- CL 254740 -->
The iOS port, which was previously <code>darwin/arm64</code>, is now
moved to <code>ios/arm64</code>. <code>GOOS=ios</code> implies the
The iOS port, which was previously <code>darwin/arm64</code>, has
been renamed to <code>ios/arm64</code>. <code>GOOS=ios</code>
implies the
<code>darwin</code> build tag, just as <code>GOOS=android</code>
implies the <code>linux</code> build tag.
implies the <code>linux</code> build tag. This change should be
transparent to anyone using gomobile to build iOS apps.
</p>
<p><!-- golang.org/issue/42100, CL 263798 -->
The <code>ios/amd64</code> port is added, targetting the iOS simulator
running on AMD64-based macOS.
Go 1.16 adds an <code>ios/amd64</code> port, which targets the iOS
simulator running on AMD64-based macOS. Previously this was
unofficially supported through <code>darwin/amd64</code> with
the <code>ios</code> build tag set.
</p>
<p><!-- golang.org/issue/23011 -->
Go 1.16 is the last release that will run on macOS 10.12 Sierra.
Go 1.17 will require macOS 10.13 High Sierra or later.
</p>
<h3 id="netbsd">NetBSD</h3>
@@ -61,6 +70,14 @@ Do not send CLs removing the interior tags from such phrases.
<code>netbsd/arm64</code> port).
</p>
<h3 id="openbsd">OpenBSD</h3>
<p><!-- golang.org/issue/40995 -->
Go now supports the MIPS64 architecture on OpenBSD
(the <code>openbsd/mips64</code> port). This port does not yet
support cgo.
</p>
<h3 id="386">386</h3>
<p><!-- golang.org/issue/40255, golang.org/issue/41848, CL 258957, and CL 260017 -->
@@ -72,6 +89,14 @@ Do not send CLs removing the interior tags from such phrases.
with <code>GO386=softfloat</code>.
</p>
<h3 id="riscv">RISC-V</h3>
<p><!-- golang.org/issue/36641, CL 267317 -->
The <code>linux/riscv64</code> port now supports cgo and
<code>-buildmode=pie</code>. This release also includes performance
optimizations and code generation improvements for RISC-V.
</p>
<h2 id="tools">Tools</h2>
<p>
@@ -80,17 +105,16 @@ Do not send CLs removing the interior tags from such phrases.
<h3 id="go-command">Go command</h3>
<p>
TODO
<!-- CL 237697: https://golang.org/cl/237697: cmd/go: error when -c or -i are used with unknown flags -->
<!-- CL 255052: https://golang.org/cl/255052: cmd/go: default to GO111MODULE=on -->
<!-- CL 266420: https://golang.org/cl/266420: yes (mention go help vcs): cmd/go: add GOVCS setting to control version control usage -->
<!-- CL 244773: https://golang.org/cl/244773: cmd/go/internal/modload: drop requirements on excluded versions -->
</p>
<h4 id="modules">Modules</h4>
<p><!-- golang.org/issue/41330 -->
Module-aware mode is enabled by default, regardless of whether a
<code>go.mod</code> file is present in the current working directory or a
parent directory. More precisely, the <code>GO111MODULE</code> environment
variable now defaults to <code>on</code>. To switch to the previous behavior,
set <code>GO111MODULE</code> to <code>auto</code>.
</p>
<p><!-- golang.org/issue/40728 -->
Build commands like <code>go</code> <code>build</code> and <code>go</code>
<code>test</code> no longer modify <code>go.mod</code> and <code>go.sum</code>
@@ -107,9 +131,7 @@ Do not send CLs removing the interior tags from such phrases.
<code>install</code> to build and install packages in module-aware mode,
ignoring the <code>go.mod</code> file in the current directory or any parent
directory, if there is one. This is useful for installing executables without
affecting the dependencies of the main module.<br>
TODO: write and link to section in golang.org/ref/mod<br>
TODO: write and link to blog post
affecting the dependencies of the main module.
</p>
<p><!-- golang.org/issue/40276 -->
@@ -127,8 +149,6 @@ Do not send CLs removing the interior tags from such phrases.
to indicate that certain published versions of the module should not be used
by other modules. A module author may retract a version after a severe problem
is discovered or if the version was published unintentionally.<br>
TODO: write and link to section in golang.org/ref/mod<br>
TODO: write and link to tutorial or blog post
</p>
<p><!-- golang.org/issue/26603 -->
@@ -138,6 +158,25 @@ Do not send CLs removing the interior tags from such phrases.
resolving missing packages.
</p>
<p><!-- golang.org/issue/36465 -->
The <code>go</code> command now ignores requirements on module versions
excluded by <code>exclude</code> directives in the main module. Previously,
the <code>go</code> command used the next version higher than an excluded
version, but that version could change over time, resulting in
non-reproducible builds.
</p>
<h4 id="embed">Embedding Files</h4>
<p>
The <code>go</code> command now supports including
static files and file trees as part of the final executable,
using the new <code>//go:embed</code> directive.
See the documentation for the new
<a href="/pkg/embed/"><code>embed</code></a>
package for details.
</p>
<h4 id="go-test"><code>go</code> <code>test</code></h4>
<p><!-- golang.org/issue/29062 -->
@@ -150,19 +189,26 @@ Do not send CLs removing the interior tags from such phrases.
that is still considered to be a passing test.
</p>
<p><!-- golang.org/issue/39484 -->
<code>go</code> <code>test</code> reports an error when the <code>-c</code>
or <code>-i</code> flags are used together with unknown flags. Normally,
unknown flags are passed to tests, but when <code>-c</code> or <code>-i</code>
are used, tests are not run.
</p>
<h4 id="go-get"><code>go</code> <code>get</code></h4>
<p><!-- golang.org/issue/37519 -->
The <code>go</code> <code>get</code> <code>-insecure</code> flag is
deprecated and will be removed in a future version. This flag permits
fetching from repositories and resolving custom domains using insecure
schemes such as HTTP, and also bypassess module sum validation using the
schemes such as HTTP, and also bypasses module sum validation using the
checksum database. To permit the use of insecure schemes, use the
<code>GOINSECURE</code> environment variable instead. To bypass module
sum validation, use <code>GOPRIVATE</code> or <code>GONOSUMDB</code>.
See <code>go</code> <code>help</code> <code>environment</code> for details.
</p>
<h4 id="go-get"><code>go</code> <code>get</code></h4>
<p><!-- golang.org/cl/263267 -->
<code>go</code> <code>get</code> <code>example.com/mod@patch</code> now
requires that some version of <code>example.com/mod</code> already be
@@ -171,6 +217,21 @@ Do not send CLs removing the interior tags from such phrases.
to patch even newly-added dependencies.)
</p>
<h4 id="govcs"><code>GOVCS</code> environment variable</h4>
<p><!-- golang.org/cl/266420 -->
<code>GOVCS</code> is a new environment variable that limits which version
control tools the <code>go</code> command may use to download source code.
This mitigates security issues with tools that are typically used in trusted,
authenticated environments. By default, <code>git</code> and <code>hg</code>
may be used to download code from any repository. <code>svn</code>,
<code>bzr</code>, and <code>fossil</code> may only be used to download code
from repositories with module paths or package paths matching patterns in
the <code>GOPRIVATE</code> environment variable. See
<a href="/cmd/go/#hdr-Controlling_version_control_with_GOVCS"><code>go</code>
<code>help</code> <code>vcs</code></a> for details.
</p>
<h4 id="all-pattern">The <code>all</code> pattern</h4>
<p><!-- golang.org/cl/240623 -->
@@ -232,10 +293,41 @@ Do not send CLs removing the interior tags from such phrases.
<!-- CL 235677: https://golang.org/cl/235677: cmd/vet: bring in pass to catch invalid uses of testing.T in goroutines -->
</p>
<p><!-- CL 248686, CL 276372 -->
The vet tool now warns about amd64 assembly that clobbers the BP
register (the frame pointer) without saving and restoring it,
contrary to the calling convention. Code that doesn't preserve the
BP register must be modified to either not use BP at all or preserve
BP by saving and restoring it. An easy way to preserve BP is to set
the frame size to a nonzero value, which causes the generated
prologue and epilogue to preserve the BP register for you.
See <a href="https://golang.org/cl/248260">CL 248260</a> for example
fixes.
</p>
<h2 id="runtime">Runtime</h2>
<p>
TODO
The new <a href="/pkg/runtime/metrics/"><code>runtime/metrics</code></a> package
introduces a stable interface for reading
implementation-defined metrics from the Go runtime.
It supersedes existing functions like
<a href="/pkg/runtime/#ReadMemStats"><code>runtime.ReadMemStats</code></a>
and
<a href="/pkg/runtime/debug/#GCStats"><code>debug.GCStats</code></a>
and is significantly more general and efficient.
See the package documentation for more details.
</p>
<p><!-- CL 254659 -->
Setting the <code>GODEBUG</code> environment variable
to <code>inittrace=1</code> now causes the runtime to emit a single
line to standard error for each package <code>init</code>,
summarizing its execution time and memory allocation. This trace can
be used to find bottlenecks or regressions in Go startup
performance.
The <a href="/pkg/runtime/#hdr-Environment_Variables"><code>GODEBUG</code>
documentation</a> describes the format.
</p>
<p><!-- CL 267100 -->
@@ -250,10 +342,21 @@ Do not send CLs removing the interior tags from such phrases.
variable.
</p>
<p><!-- CL 220419, CL 271987 -->
Go 1.16 fixes a discrepancy between the race detector and
the <a href="/ref/mem">Go memory model</a>. The race detector now
more precisely follows the channel synchronization rules of the
memory model. As a result, the detector may now report races it
previously missed.
</p>
<h2 id="compiler">Compiler</h2>
<p>
TODO
<p><!-- CL 256459, CL 264837, CL 266203, CL 256460 -->
The compiler can now inline functions with
non-labeled <code>for</code> loops, method values, and type
switches. The inliner can also detect more indirect calls where
inlining is possible.
</p>
<h2 id="linker">Linker</h2>
@@ -272,13 +375,10 @@ Do not send CLs removing the interior tags from such phrases.
supported architecture/OS combinations (the 1.15 performance improvements
were primarily focused on <code>ELF</code>-based OSes and
<code>amd64</code> architectures). For a representative set of
large Go programs, linking is 20-35% faster than 1.15 and requires
large Go programs, linking is 20-25% faster than 1.15 and requires
5-15% less memory on average for <code>linux/amd64</code>, with larger
improvements for other architectures and OSes.
</p>
<p>
TODO: update with final numbers later in the release.
improvements for other architectures and OSes. Most binaries are
also smaller as a result of more aggressive symbol pruning.
</p>
<p><!-- CL 255259 -->
@@ -288,9 +388,54 @@ Do not send CLs removing the interior tags from such phrases.
<h2 id="library">Core library</h2>
<h3 id="library-embed">Embedded Files</h3>
<p>
TODO: mention significant additions like new packages (<code>io/fs</code>),
new proposal-scoped features (<code>//go:embed</code>), and so on
The new <a href="/pkg/embed/"><code>embed</code></a> package
provides access to files embedded in the program during compilation
using the new <a href="#embed"><code>//go:embed</code> directive</a>.
</p>
<h3 id="fs">File Systems</h3>
<p>
The new <a href="/pkg/io/fs/"><code>io/fs</code></a> package
defines an abstraction for read-only trees of files,
the <a href="/pkg/io/fs/#FS"><code>fs.FS</code></a> interface,
and the standard library packages have
been adapted to make use of the interface as appropriate.
</p>
<p>
On the producer side of the interface,
the new <a href="/pkg/embed/#FS"><code>embed.FS</code></a> type
implements <code>fs.FS</code>, as does
<a href="/pkg/archive/zip/#Reader"><code>zip.Reader</code></a>.
The new <a href="/pkg/os/#DirFS"><code>os.DirFS</code></a> function
provides an implementation of <code>fs.FS</code> backed by a tree
of operating system files.
</p>
<p>
On the consumer side,
the new <a href="/pkg/net/http/#FS"><code>http.FS</code></a>
function converts an <code>fs.FS</code> to an
<a href="/pkg/net/http/#Handler"><code>http.Handler</code></a>.
Also, the <a href="/pkg/html/template/"><code>html/template</code></a>
and <a href="/pkg/text/template/"><code>text/template</code></a>
packages' <a href="/pkg/html/template/#ParseFS"><code>ParseFS</code></a>
functions and methods read templates from an <code>fs.FS</code>.
</p>
<p>
For testing code that implements <code>fs.FS</code>,
the new <a href="/pkg/testing/fstest/"><code>testing/fstest</code></a>
package provides a <a href="/pkg/testing/fstest/#TestFS"><code>TestFS</code></a>
function that checks for and reports common mistakes.
It also provides a simple in-memory file system implementation,
<a href="/pkg/testing/fstest/#MapFS"><code>MapFS</code></a>,
which can be useful for testing code that accepts <code>fs.FS</code>
implementations.
</p>
<p>
@@ -322,9 +467,10 @@ Do not send CLs removing the interior tags from such phrases.
<dl id="crypto/hmac"><dt><a href="/pkg/crypto/hmac/">crypto/hmac</a></dt>
<dd>
<p><!-- CL 261960 -->
<a href="/pkg/crypto/hmac/#New">New</a> will now panic if separate calls to
the hash generation function fail to return new values. Previously, the
behavior was undefined and invalid outputs were sometimes generated.
<a href="/pkg/crypto/hmac/#New"><code>New</code></a> will now panic if
separate calls to the hash generation function fail to return new values.
Previously, the behavior was undefined and invalid outputs were sometimes
generated.
</p>
</dd>
</dl><!-- crypto/hmac -->
@@ -332,81 +478,83 @@ Do not send CLs removing the interior tags from such phrases.
<dl id="crypto/tls"><dt><a href="/pkg/crypto/tls/">crypto/tls</a></dt>
<dd>
<p><!-- CL 256897 -->
I/O operations on closing or closed TLS connections can now be detected using
the new <a href="/pkg/net/#ErrClosed">ErrClosed</a> error. A typical use
would be <code>errors.Is(err, net.ErrClosed)</code>. In earlier releases
the only way to reliably detect this case was to match the string returned
by the <code>Error</code> method with <code>"tls: use of closed connection"</code>.
I/O operations on closing or closed TLS connections can now be detected
using the new <a href="/pkg/net/#ErrClosed"><code>net.ErrClosed</code></a>
error. A typical use would be <code>errors.Is(err, net.ErrClosed)</code>.
</p>
<p><!-- CL 266037 -->
A default deadline is set in <a href="/pkg/crypto/tls/#Conn.Close">Close</a>
before sending the close notify alert, in order to prevent blocking
A default write deadline is now set in
<a href="/pkg/crypto/tls/#Conn.Close"><code>Conn.Close</code></a>
before sending the "close notify" alert, in order to prevent blocking
indefinitely.
</p>
<p><!-- CL 246338 -->
<a href="/pkg/crypto/tls#Conn.HandshakeContext">(*Conn).HandshakeContext</a> was added to
allow the user to control cancellation of an in-progress TLS Handshake.
The context provided is propagated into the
<a href="/pkg/crypto/tls#ClientHelloInfo">ClientHelloInfo</a>
and <a href="/pkg/crypto/tls#CertificateRequestInfo">CertificateRequestInfo</a>
structs and accessible through the new
<a href="/pkg/crypto/tls#ClientHelloInfo.Context">(*ClientHelloInfo).Context</a>
and
<a href="/pkg/crypto/tls#CertificateRequestInfo.Context">
(*CertificateRequestInfo).Context
</a> methods respectively. Canceling the context after the handshake has finished
has no effect.
The new <a href="/pkg/crypto/tls#Conn.HandshakeContext"><code>Conn.HandshakeContext</code></a>
method allows cancellation of an in-progress handshake. The provided
context is accessible through the new
<a href="/pkg/crypto/tls#ClientHelloInfo.Context"><code>ClientHelloInfo.Context</code></a>
and <a href="/pkg/crypto/tls#CertificateRequestInfo.Context">
<code>CertificateRequestInfo.Context</code></a> methods. Canceling the
context after the handshake has finished has no effect.
</p>
<p><!-- CL 239748 -->
Clients now ensure that the server selects
Clients now return a handshake error if the server selects
<a href="/pkg/crypto/tls/#ConnectionState.NegotiatedProtocol">
an ALPN protocol</a> from
an ALPN protocol</a> that was not in
<a href="/pkg/crypto/tls/#Config.NextProtos">
the list advertised by the client</a>.
</p>
<p><!-- CL 262857 -->
TLS servers will now prefer other AEAD cipher suites (such as ChaCha20Poly1305)
Servers will now prefer other available AEAD cipher suites (such as ChaCha20Poly1305)
over AES-GCM cipher suites if either the client or server doesn't have AES hardware
support, unless the application set both
<a href="/pkg/crypto/tls/#Config.PreferServerCipherSuites"><code>Config.PreferServerCipherSuites</code></a>
support, unless both <a href="/pkg/crypto/tls/#Config.PreferServerCipherSuites">
<code>Config.PreferServerCipherSuites</code></a>
and <a href="/pkg/crypto/tls/#Config.CipherSuites"><code>Config.CipherSuites</code></a>
or there are no other AEAD cipher suites supported.
The client is assumed not to have AES hardware support if it does not signal a
preference for AES-GCM cipher suites.
are set. The client is assumed not to have AES hardware support if it does
not signal a preference for AES-GCM cipher suites.
</p>
<p><!-- CL 246637 -->
TODO: <a href="https://golang.org/cl/246637">https://golang.org/cl/246637</a>: make config.Clone return nil if the source is nil
<a href="/pkg/crypto/tls/#Config.Clone"><code>Config.Clone</code></a> now
returns nil if the receiver is nil, rather than panicking.
</p>
</dd>
</dl><!-- crypto/tls -->
<dl id="crypto/x509"><dt><a href="/pkg/crypto/x509/">crypto/x509</a></dt>
<dd>
<p>
The <code>GODEBUG=x509ignoreCN=0</code> flag will be removed in Go 1.17.
It enables the legacy behavior of treating the <code>CommonName</code>
field on X.509 certificates as a host name when no Subject Alternative
Names are present.
</p>
<p><!-- CL 235078 -->
<a href="/pkg/crypto/x509/#ParseCertificate">ParseCertificate</a> and
<a href="/pkg/crypto/x509/#CreateCertificate">CreateCertificate</a> both
now enforce string encoding restrictions for the fields <code>DNSNames</code>,
<code>EmailAddresses</code>, and <code>URIs</code>. These fields can only
contain strings with characters within the ASCII range.
<a href="/pkg/crypto/x509/#ParseCertificate"><code>ParseCertificate</code></a> and
<a href="/pkg/crypto/x509/#CreateCertificate"><code>CreateCertificate</code></a>
now enforce string encoding restrictions for the <code>DNSNames</code>,
<code>EmailAddresses</code>, and <code>URIs</code> fields. These fields
can only contain strings with characters within the ASCII range.
</p>
<p><!-- CL 259697 -->
<a href="/pkg/crypto/x509/#CreateCertificate">CreateCertificate</a> now
verifies the generated certificate's signature using the signer's
public key. If the signature is invalid, an error is returned, instead
of a malformed certificate.
<a href="/pkg/crypto/x509/#CreateCertificate"><code>CreateCertificate</code></a>
now verifies the generated certificate's signature using the signer's
public key. If the signature is invalid, an error is returned, instead of
a malformed certificate.
</p>
<p><!-- CL 233163 -->
A number of additional fields have been added to the
<a href="/pkg/crypto/x509/#CertificateRequest">CertificateRequest</a> type.
These fields are now parsed in <a href="/pkg/crypto/x509/#ParseCertificateRequest">ParseCertificateRequest</a>
and marshalled in <a href="/pkg/crypto/x509/#CreateCertificateRequest">CreateCertificateRequest</a>.
<a href="/pkg/crypto/x509/#CertificateRequest"><code>CertificateRequest</code></a> type.
These fields are now parsed in <a href="/pkg/crypto/x509/#ParseCertificateRequest">
<code>ParseCertificateRequest</code></a> and marshalled in
<a href="/pkg/crypto/x509/#CreateCertificateRequest"><code>CreateCertificateRequest</code></a>.
</p>
<p><!-- CL 257939 -->
@@ -416,25 +564,39 @@ Do not send CLs removing the interior tags from such phrases.
</p>
<p><!-- CL 257257 -->
TODO: <a href="https://golang.org/cl/257257">https://golang.org/cl/257257</a>: return additional chains from Verify on Windows
On Windows, <a href="/pkg/crypto/x509/#Certificate.Verify"><code>Certificate.Verify</code></a>
will now return all certificate chains that are built by the platform
certificate verifier, instead of just the highest ranked chain.
</p>
<p><!-- CL 262343 -->
TODO: <a href="https://golang.org/cl/262343">https://golang.org/cl/262343</a>: add Unwrap to SystemRootsError
The new <a href="/pkg/crypto/x509/#SystemRootsError.Unwrap"><code>SystemRootsError.Unwrap</code></a>
method allows accessing the <a href="/pkg/crypto/x509/#SystemRootsError.Err"><code>Err</code></a>
field through the <a href="/pkg/errors"><code>errors</code></a> package functions.
</p>
</dd>
</dl><!-- crypto/x509 -->
<dl id="encoding/asn1"><dt><a href="/pkg/encoding/asn1">encoding/asn1</a></dt>
<dd>
<p><!-- CL 255881 -->
<a href="/pkg/encoding/asn1/#Unmarshal"><code>Unmarshal</code></a> and
<a href="/pkg/encoding/asn1/#UnmarshalWithParams"><code>UnmarshalWithParams</code></a>
now return an error instead of panicking when the argument is not
a pointer or is nil. This change matches the behavior of other
encoding packages such as <a href="/pkg/encoding/json"><code>encoding/json</code></a>.
</p>
</dd>
</dl>
<dl id="encoding/json"><dt><a href="/pkg/encoding/json/">encoding/json</a></dt>
<dd>
<p><!-- CL 263619 -->
The error message for
<a href="/pkg/encoding/json/#SyntaxError">SyntaxError</a>
now begins with "json: ", matching the other errors in the package.
</p>
<p><!-- CL 234818 -->
TODO: <a href="https://golang.org/cl/234818">https://golang.org/cl/234818</a>: allow semicolon in field key / struct tag
The <code>json</code> struct field tags understood by
<a href="/pkg/encoding/json/#Marshal"><code>Marshal</code></a>,
<a href="/pkg/encoding/json/#Unmarshal"><code>Unmarshal</code></a>,
and related functionality now permit semicolon characters within
a JSON object name for a Go struct field.
</p>
</dd>
</dl><!-- encoding/json -->
@@ -456,7 +618,10 @@ Do not send CLs removing the interior tags from such phrases.
<dl id="flag"><dt><a href="/pkg/flag/">flag</a></dt>
<dd>
<p><!-- CL 240014 -->
TODO: <a href="https://golang.org/cl/240014">https://golang.org/cl/240014</a>: add Func
The new <a href="/pkg/flag/#Func"><code>Func</code></a> function
allows registering a flag implemented by calling a function,
as a lighter-weight alternative to implementing the
<a href="/pkg/flag/#Value"><code>Value</code></a> interface.
</p>
</dd>
</dl><!-- flag -->
@@ -464,7 +629,8 @@ Do not send CLs removing the interior tags from such phrases.
<dl id="io"><dt><a href="/pkg/io/">io</a></dt>
<dd>
<p><!-- CL 261577 -->
TODO: <a href="https://golang.org/cl/261577">https://golang.org/cl/261577</a>: add a new ReadSeekCloser interface
The package now defines a
<a href="/pkg/io/#ReadSeekCloser"><code>ReadSeekCloser</code></a> interface.
</p>
</dd>
</dl><!-- io -->
@@ -472,7 +638,8 @@ Do not send CLs removing the interior tags from such phrases.
<dl id="log"><dt><a href="/pkg/log/">log</a></dt>
<dd>
<p><!-- CL 264460 -->
TODO: <a href="https://golang.org/cl/264460">https://golang.org/cl/264460</a>: expose std via new Default function
The new <a href="/pkg/log/#Default"><code>Default</code></a> function
provides access to the default <a href="/pkg/log/#Logger"><code>Logger</code></a>.
</p>
</dd>
</dl><!-- log -->
@@ -480,7 +647,11 @@ Do not send CLs removing the interior tags from such phrases.
<dl id="log/syslog"><dt><a href="/pkg/log/syslog/">log/syslog</a></dt>
<dd>
<p><!-- CL 264297 -->
TODO: <a href="https://golang.org/cl/264297">https://golang.org/cl/264297</a>: set local to true if network is any of &#34;unix&#34;, or &#34;unixgram&#34;
The <a href="/pkg/log/syslog/#Writer"><code>Writer</code></a>
now uses the local message format
(omitting the host name and using a shorter time stamp)
when logging to custom Unix domain sockets,
matching the format already used for the default log socket.
</p>
</dd>
</dl><!-- log/syslog -->
@@ -488,7 +659,10 @@ Do not send CLs removing the interior tags from such phrases.
<dl id="mime/multipart"><dt><a href="/pkg/mime/multipart/">mime/multipart</a></dt>
<dd>
<p><!-- CL 247477 -->
TODO: <a href="https://golang.org/cl/247477">https://golang.org/cl/247477</a>: return overflow errors in Reader.ReadForm
The <a href="/pkg/mime/multipart/#Reader"><code>Reader</code></a>'s
<a href="/pkg/mime/multipart/#Reader.ReadForm"><code>ReadForm</code></a>
method no longer rejects form data
when passed the maximum int64 value as a limit.
</p>
</dd>
</dl><!-- mime/multipart -->
@@ -512,7 +686,10 @@ Do not send CLs removing the interior tags from such phrases.
</p>
<p><!-- CL 238629 -->
TODO: <a href="https://golang.org/cl/238629">https://golang.org/cl/238629</a>: prefer /etc/hosts over DNS when no /etc/nsswitch.conf is present
On Linux, host name lookups no longer use DNS before checking
<code>/etc/hosts</code> when <code>/etc/nsswitch.conf</code>
is missing; this is common on musl-based systems and makes
Go programs match the behavior of C programs on those systems.
</p>
</dd>
</dl><!-- net -->
@@ -540,23 +717,29 @@ Do not send CLs removing the interior tags from such phrases.
</p>
<p><!-- CL 256498, golang.org/issue/36990 -->
Cookies set with <code>SameSiteDefaultMode</code> now behave according to the current
spec (no attribute is set) instead of generating a SameSite key without a value.
Cookies set with <a href="/pkg/net/http/#SameSiteDefaultMode"><code>SameSiteDefaultMode</code></a>
now behave according to the current spec (no attribute is set) instead of
generating a SameSite key without a value.
</p>
<p><!-- CL 246338 -->
The <a href="/pkg/net/http/"><code>net/http</code></a> package now uses the new
<a href="/pkg/crypto/tls#Conn.HandshakeContext"><code>(*tls.Conn).HandshakeContext</code></a>
with the <a href="/pkg/net/http/#Request"><code>Request</code></a> context
when performing TLS handshakes in the client or server.
The <a href="/pkg/net/http/"><code>net/http</code></a> package now passes the
<a href="/pkg/net/http/#Request.Context"><code>Request</code> context</a> to
<a href="/pkg/crypto/tls#Conn.HandshakeContext"><code>tls.Conn.HandshakeContext</code></a>
when performing TLS handshakes.
</p>
<p><!-- CL 250039 -->
TODO: <a href="https://golang.org/cl/250039">https://golang.org/cl/250039</a>: set Content-Length:0 for empty PATCH requests as with POST, PATCH
The <a href="/pkg/net/http/#Client">Client</a> now sends
an explicit <code>Content-Length:</code> <code>0</code>
header in <code>PATCH</code> requests with empty bodies,
matching the existing behavior of <code>POST</code> and <code>PUT</code>.
</p>
<p><!-- CL 249440 -->
TODO: <a href="https://golang.org/cl/249440">https://golang.org/cl/249440</a>: match http scheme when selecting http_proxy
The <a href="/pkg/net/http/#ProxyFromEnvironment">ProxyFromEnvironment</a> function
no longer returns the setting of the <code>HTTP_PROXY</code> environment
variable for <code>https://</code> URLs when <code>HTTPS_PROXY</code> is unset.
</p>
</dd>
</dl><!-- net/http -->
@@ -564,7 +747,9 @@ Do not send CLs removing the interior tags from such phrases.
<dl id="net/http/httputil"><dt><a href="/pkg/net/http/httputil/">net/http/httputil</a></dt>
<dd>
<p><!-- CL 260637 -->
TODO: <a href="https://golang.org/cl/260637">https://golang.org/cl/260637</a>: flush ReverseProxy immediately if Content-Length is -1
The <a href="/pkg/net/http/httputil/#ReverseProxy">ReverseProxy</a>
now flushes buffered data more aggressively when proxying
streamed responses with unknown body lengths.
</p>
</dd>
</dl><!-- net/http/httputil -->
@@ -572,7 +757,10 @@ Do not send CLs removing the interior tags from such phrases.
<dl id="net/smtp"><dt><a href="/pkg/net/smtp/">net/smtp</a></dt>
<dd>
<p><!-- CL 247257 -->
TODO: <a href="https://golang.org/cl/247257">https://golang.org/cl/247257</a>: adds support for the SMTPUTF8 extension
The <a href="/pkg/net/smtp/#Client">Client</a>'s
<a href="/pkg/net/smtp/#Client.Mail"><code>Mail</code></a>
method now sends the <code>SMTPUTF8</code> directive to
servers that support it, signaling that addresses are encoded in UTF-8.
</p>
</dd>
</dl><!-- net/smtp -->
@@ -580,7 +768,10 @@ Do not send CLs removing the interior tags from such phrases.
<dl id="os"><dt><a href="/pkg/os/">os</a></dt>
<dd>
<p><!-- CL 242998 -->
TODO: <a href="https://golang.org/cl/242998">https://golang.org/cl/242998</a>: export errFinished as ErrProcessDone
<a href="/pkg/os/#Process.Signal"><code>Process.Signal</code></a> now
returns <a href="/pkg/os/#ErrProcessDone"><code>ErrProcessDone</code></a>
instead of the unexported <code>errFinished</code> when the process has
already finished.
</p>
</dd>
</dl><!-- os -->
@@ -588,55 +779,55 @@ Do not send CLs removing the interior tags from such phrases.
<dl id="os/signal"><dt><a href="/pkg/os/signal/">os/signal</a></dt>
<dd>
<p><!-- CL 219640 -->
TODO: <a href="https://golang.org/cl/219640">https://golang.org/cl/219640</a>: add NotifyContext to cancel context using system signals
The new
<a href="/pkg/os/signal/#NotifyContext"><code>NotifyContext</code></a>
function allows creating contexts that are canceled upon arrival of
specific signals.
</p>
</dd>
</dl><!-- os/signal -->
<dl id="path"><dt><a href="/pkg/path/">path</a></dt>
<dd>
<p><!-- CL 264397 -->
TODO: <a href="https://golang.org/cl/264397">https://golang.org/cl/264397</a>: validate patterns in Match, Glob
<p><!-- CL 264397, golang.org/issues/28614 -->
The <code>Match</code> and <code>Glob</code> functions now
return an error if the unmatched part of the pattern has a
syntax error. Previously, the functions returned early on a failed
match, and thus did not report any later syntax error in the
pattern.
</p>
</dd>
</dl><!-- path -->
<dl id="path/filepath"><dt><a href="/pkg/path/filepath/">path/filepath</a></dt>
<dd>
<p><!-- CL 264397 -->
TODO: <a href="https://golang.org/cl/264397">https://golang.org/cl/264397</a>: validate patterns in Match, Glob
<p><!-- CL 264397, golang.org/issues/28614 -->
The <code>Match</code> and <code>Glob</code> functions now
return an error if the unmatched part of the pattern has a
syntax error. Previously, the functions returned early on a failed
match, and thus did not report any later syntax error in the
pattern.
</p>
</dd>
</dl><!-- path/filepath -->
<dl id="reflect"><dt><a href="/pkg/reflect/">reflect</a></dt>
<dd>
<p><!-- CL 248341 -->
TODO: <a href="https://golang.org/cl/248341">https://golang.org/cl/248341</a>: support multiple keys in struct tags
<p><!-- CL 248341, golang.org/issues/40281 -->
<code>StructTag</code> now allows multiple space-separated keys
in key:value pairs, as in <code>`json xml:"field1"`</code>
(equivalent to <code>`json:"field1" xml:"field1"`</code>).
</p>
</dd>
</dl><!-- reflect -->
<dl id="runtime"><dt><a href="/pkg/runtime/">runtime</a></dt>
<dd>
<p><!-- CL 37222 -->
      Stack traces of endless recursion now print only the top and
      bottom 50 frames, rather than the entire stack.
</p>
<p><!-- CL 242258 -->
      The runtime has a new 24-byte allocation size class, reducing
      the memory wasted by allocations slightly larger than 16 bytes.
</p>
<p><!-- CL 254659 -->
      Setting <code>GODEBUG=inittrace=1</code> now causes the runtime
      to emit a single line to standard error for each
      package <code>init</code>, summarizing its execution time and
      memory allocation.
</p>
</dd>
</dl><!-- runtime -->
<dl id="runtime/debug"><dt><a href="/pkg/runtime/debug/">runtime/debug</a></dt>
<dd>
<p><!-- CL 249677 -->
The <a href="/pkg/runtime#Error"><code>runtime.Error</code></a> values
used when <code>SetPanicOnFault</code> is enabled may now have an
<code>Addr</code> method. If that method exists, it returns the memory
address that triggered the fault.
</p>
</dd>
</dl><!-- runtime/debug -->
@ -656,24 +847,39 @@ Do not send CLs removing the interior tags from such phrases.
<dl id="syscall"><dt><a href="/pkg/syscall/">syscall</a></dt>
<dd>
<p><!-- CL 263271 -->
<a href="/pkg/syscall/?GOOS=windows#NewCallback"><code>NewCallback</code></a>
and
<a href="/pkg/syscall/?GOOS=windows#NewCallbackCDecl"><code>NewCallbackCDecl</code></a>
now correctly support callback functions with multiple
sub-<code>uintptr</code>-sized arguments in a row. This may
require changing uses of these functions to eliminate manual
padding between small arguments.
</p>
<p><!-- CL 261917 -->
<a href="/pkg/syscall/?GOOS=windows#SysProcAttr"><code>SysProcAttr</code></a> on Windows has a new NoInheritHandles field that disables inheriting handles when creating a new process.
</p>
<p><!-- CL 269761, golang.org/issue/42584 -->
<a href="/pkg/syscall/?GOOS=windows#DLLError"><code>DLLError</code></a> on Windows now has an Unwrap function for unwrapping its underlying error.
</p>
<p><!-- CL 210639 -->
On Linux,
<a href="/pkg/syscall/#Setgid"><code>Setgid</code></a>,
<a href="/pkg/syscall/#Setuid"><code>Setuid</code></a>,
and related calls are now implemented.
      Previously, they returned a <code>syscall.EOPNOTSUPP</code> error.
</p>
</dd>
</dl><!-- syscall -->
<dl id="text/template"><dt><a href="/pkg/text/template/">text/template</a></dt>
<dd>
<p><!-- CL 254257, golang.org/issue/29770 -->
      Newline characters are now allowed inside action delimiters,
permitting actions to span multiple lines.
</p>
</dd>
</dl><!-- text/template -->

View File

@ -1647,14 +1647,14 @@ c := signal.Incoming()
is
</p>
<pre>
c := make(chan os.Signal, 1)
signal.Notify(c) // ask for all signals
</pre>
<p>
but most code should list the specific signals it wants to handle instead:
</p>
<pre>
c := make(chan os.Signal, 1)
signal.Notify(c, syscall.SIGHUP, syscall.SIGQUIT)
</pre>

View File

@ -28,6 +28,7 @@ func TestMSAN(t *testing.T) {
{src: "msan4.go"},
{src: "msan5.go"},
{src: "msan6.go"},
{src: "msan7.go"},
{src: "msan_fail.go", wantErr: true},
}
for _, tc := range cases {

View File

@ -0,0 +1,38 @@
// Copyright 2020 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package main
// Test passing C struct to exported Go function.
/*
#include <stdint.h>
#include <stdlib.h>
// T is a C struct with alignment padding after b.
// The padding bytes are not considered initialized by MSAN.
// It is big enough to be passed on stack in C ABI (at least
// on AMD64).
typedef struct { char b; uintptr_t x, y; } T;
extern void F(T);
// Use weak as a hack to permit defining a function even though we use export.
void CF(int x) __attribute__ ((weak));
void CF(int x) {
T *t = malloc(sizeof(T));
t->b = (char)x;
t->x = x;
t->y = x;
F(*t);
}
*/
import "C"
//export F
func F(t C.T) { println(t.b, t.x, t.y) }
func main() {
C.CF(C.int(0))
}

View File

@ -10,7 +10,6 @@ import (
"errors"
"fmt"
"io"
"math"
"os"
"path"
@ -773,7 +772,7 @@ func TestReadTruncation(t *testing.T) {
"testdata/pax-path-hdr.tar",
"testdata/sparse-formats.tar",
} {
buf, err := os.ReadFile(p)
if err != nil {
t.Fatalf("unexpected error: %v", err)
}

View File

@ -11,7 +11,6 @@ import (
"internal/testenv"
"io"
"io/fs"
"math"
"os"
"path"
@ -263,7 +262,7 @@ func TestFileInfoHeaderDir(t *testing.T) {
func TestFileInfoHeaderSymlink(t *testing.T) {
testenv.MustHaveSymlink(t)
tmpdir, err := os.MkdirTemp("", "TestFileInfoHeaderSymlink")
if err != nil {
t.Fatal(err)
}

View File

@ -9,7 +9,6 @@ import (
"encoding/hex"
"errors"
"io"
"os"
"path"
"reflect"
@ -520,7 +519,7 @@ func TestWriter(t *testing.T) {
}
if v.file != "" {
want, err := os.ReadFile(v.file)
if err != nil {
t.Fatalf("ReadFile() = %v, want nil", err)
}

View File

@ -11,7 +11,6 @@ import (
"internal/obscuretestdata"
"io"
"io/fs"
"os"
"path/filepath"
"regexp"
@ -629,7 +628,7 @@ func readTestFile(t *testing.T, zt ZipTest, ft ZipTestFile, f *File) {
var c []byte
if ft.Content != nil {
c = ft.Content
} else if c, err = os.ReadFile("testdata/" + ft.File); err != nil {
t.Error(err)
return
}
@ -685,7 +684,7 @@ func TestInvalidFiles(t *testing.T) {
}
func messWith(fileName string, corrupter func(b []byte)) (r io.ReaderAt, size int64) {
data, err := os.ReadFile(filepath.Join("testdata", fileName))
if err != nil {
panic("Error reading " + fileName + ": " + err.Error())
}
@ -792,17 +791,17 @@ func returnRecursiveZip() (r io.ReaderAt, size int64) {
//
// func main() {
// bigZip := makeZip("big.file", io.LimitReader(zeros{}, 1<<32-1))
// if err := os.WriteFile("/tmp/big.zip", bigZip, 0666); err != nil {
// log.Fatal(err)
// }
//
// biggerZip := makeZip("big.zip", bytes.NewReader(bigZip))
// if err := os.WriteFile("/tmp/bigger.zip", biggerZip, 0666); err != nil {
// log.Fatal(err)
// }
//
// biggestZip := makeZip("bigger.zip", bytes.NewReader(biggerZip))
// if err := os.WriteFile("/tmp/biggest.zip", biggestZip, 0666); err != nil {
// log.Fatal(err)
// }
// }

View File

@ -10,8 +10,8 @@ import (
"fmt"
"io"
"io/fs"
"math/rand"
"os"
"strings"
"testing"
"time"
@ -237,7 +237,7 @@ func TestWriterTime(t *testing.T) {
t.Fatalf("unexpected Close error: %v", err)
}
want, err := os.ReadFile("testdata/time-go.zip")
if err != nil {
t.Fatalf("unexpected ReadFile error: %v", err)
}

View File

@ -146,7 +146,7 @@ func TestReader(t *testing.T) {
for i := 0; i < len(texts)-1; i++ {
texts[i] = str + "\n"
all += texts[i]
str += string(rune(i%26 + 'a'))
}
texts[len(texts)-1] = all

View File

@ -8,7 +8,6 @@ import (
"bufio"
"bytes"
"internal/testenv"
"os"
"os/exec"
"path/filepath"
@ -98,8 +97,8 @@ func testAddr2Line(t *testing.T, exepath, addr string) {
if !os.SameFile(fi1, fi2) {
t.Fatalf("addr2line_test.go and %s are not same file", srcPath)
}
if srcLineNo != "106" {
t.Fatalf("line number = %v; want 106", srcLineNo)
}
}
@ -107,7 +106,7 @@ func testAddr2Line(t *testing.T, exepath, addr string) {
func TestAddr2Line(t *testing.T) {
testenv.MustHaveGoBuild(t)
tmpDir, err := os.MkdirTemp("", "TestAddr2Line")
if err != nil {
t.Fatal("TempDir failed: ", err)
}

View File

@ -17,7 +17,6 @@ import (
"go/token"
"go/types"
"io"
"log"
"os"
"os/exec"
@ -342,7 +341,7 @@ func fileFeatures(filename string) []string {
if filename == "" {
return nil
}
bs, err := os.ReadFile(filename)
if err != nil {
log.Fatalf("Error reading file %s: %v", filename, err)
}

View File

@ -9,7 +9,6 @@ import (
"flag"
"fmt"
"go/build"
"os"
"path/filepath"
"sort"
@ -75,7 +74,7 @@ func TestGolden(t *testing.T) {
f.Close()
}
bs, err := os.ReadFile(goldenFile)
if err != nil {
t.Fatalf("opening golden.txt for package %q: %v", fi.Name(), err)
}

View File

@ -340,11 +340,11 @@ start:
// Branch pseudo-instructions
BEQZ X5, start // BEQZ X5, 2 // e38602c0
BGEZ X5, start // BGEZ X5, 2 // e3d402c0
BGT X5, X6, start // BGT X5, X6, 2 // e34253c0
BGTU X5, X6, start // BGTU X5, X6, 2 // e36053c0
BGTZ X5, start // BGTZ X5, 2 // e34e50be
BLE X5, X6, start // BLE X5, X6, 2 // e35c53be
BLEU X5, X6, start // BLEU X5, X6, 2 // e37a53be
BLEZ X5, start // BLEZ X5, 2 // e35850be
BLTZ X5, start // BLTZ X5, 2 // e3c602be
BNEZ X5, start // BNEZ X5, 2 // e39402be

View File

@ -62,7 +62,7 @@ func main() {
return
}
f, err = os.OpenFile(file, os.O_RDWR, 0)
if err != nil {
log.Fatal(err)
}

View File

@ -52,6 +52,7 @@ import (
"go/types"
"internal/testenv"
"io"
"io/fs"
"io/ioutil"
"log"
"os"
@ -89,7 +90,7 @@ func TestFormats(t *testing.T) {
testenv.MustHaveGoBuild(t) // more restrictive than necessary, but that's ok
// process all directories
filepath.WalkDir(".", func(path string, info fs.DirEntry, err error) error {
if info.IsDir() {
if info.Name() == "testdata" {
return filepath.SkipDir
@ -124,6 +125,12 @@ func TestFormats(t *testing.T) {
typ := p.types[index]
format := typ + " " + in // e.g., "*Node %n"
// Do not bother reporting basic types, nor %v, %T, %p.
// Vet handles basic types, and those three formats apply to all types.
if !strings.Contains(typ, ".") || (in == "%v" || in == "%T" || in == "%p") {
return in
}
// check if format is known
out, known := knownFormats[format]
@ -412,7 +419,17 @@ func nodeString(n ast.Node) string {
// typeString returns a string representation of n.
func typeString(typ types.Type) string {
s := filepath.ToSlash(typ.String())
// Report all the concrete IR types as Node, to shorten fmtmap.
const ir = "cmd/compile/internal/ir."
if s == "*"+ir+"Name" || s == "*"+ir+"Func" || s == "*"+ir+"Decl" ||
s == ir+"Ntype" || s == ir+"Expr" || s == ir+"Stmt" ||
strings.HasPrefix(s, "*"+ir) && (strings.HasSuffix(s, "Expr") || strings.HasSuffix(s, "Stmt")) {
return "cmd/compile/internal/ir.Node"
}
return s
}
// stringLit returns the unquoted string value and true if

View File

@ -20,229 +20,83 @@ package main_test
// An absent entry means that the format is not recognized as valid.
// An empty new format means that the format should remain unchanged.
var knownFormats = map[string]string{
"*bytes.Buffer %s": "",
"*cmd/compile/internal/ssa.Block %s": "",
"*cmd/compile/internal/ssa.Func %s": "",
"*cmd/compile/internal/ssa.Register %s": "",
"*cmd/compile/internal/ssa.Value %s": "",
"*cmd/compile/internal/syntax.CallExpr %s": "",
"*cmd/compile/internal/syntax.FuncLit %s": "",
"*cmd/compile/internal/syntax.IndexExpr %s": "",
"*cmd/compile/internal/types.Sym %+v": "",
"*cmd/compile/internal/types.Sym %S": "",
"*cmd/compile/internal/types.Type %+v": "",
"*cmd/compile/internal/types.Type %-S": "",
"*cmd/compile/internal/types.Type %L": "",
"*cmd/compile/internal/types.Type %S": "",
"*cmd/compile/internal/types.Type %s": "",
"*cmd/compile/internal/types2.Basic %s": "",
"*cmd/compile/internal/types2.Chan %s": "",
"*cmd/compile/internal/types2.Func %s": "",
"*cmd/compile/internal/types2.Initializer %s": "",
"*cmd/compile/internal/types2.Interface %s": "",
"*cmd/compile/internal/types2.MethodSet %s": "",
"*cmd/compile/internal/types2.Named %s": "",
"*cmd/compile/internal/types2.Package %s": "",
"*cmd/compile/internal/types2.Selection %s": "",
"*cmd/compile/internal/types2.Signature %s": "",
"*cmd/compile/internal/types2.TypeName %s": "",
"*cmd/compile/internal/types2.TypeParam %s": "",
"*cmd/compile/internal/types2.Var %s": "",
"*cmd/compile/internal/types2.operand %s": "",
"*cmd/compile/internal/types2.substMap %s": "",
"*math/big.Float %f": "",
"*math/big.Int %s": "",
"[]*cmd/compile/internal/types2.TypeName %s": "",
"[]cmd/compile/internal/syntax.token %s": "",
"[]cmd/compile/internal/types2.Type %s": "",
"cmd/compile/internal/arm.shift %d": "",
"cmd/compile/internal/gc.RegIndex %d": "",
"cmd/compile/internal/gc.initKind %d": "",
"cmd/compile/internal/ir.Class %d": "",
"cmd/compile/internal/ir.Node %+v": "",
"cmd/compile/internal/ir.Node %L": "",
"cmd/compile/internal/ir.Nodes %+v": "",
"cmd/compile/internal/ir.Nodes %.v": "",
"cmd/compile/internal/ir.Op %+v": "",
"cmd/compile/internal/ssa.Aux %#v": "",
"cmd/compile/internal/ssa.Aux %q": "",
"cmd/compile/internal/ssa.Aux %s": "",
"cmd/compile/internal/ssa.BranchPrediction %d": "",
"cmd/compile/internal/ssa.ID %d": "",
"cmd/compile/internal/ssa.LocalSlot %s": "",
"cmd/compile/internal/ssa.Location %s": "",
"cmd/compile/internal/ssa.Op %s": "",
"cmd/compile/internal/ssa.ValAndOff %s": "",
"cmd/compile/internal/ssa.flagConstant %s": "",
"cmd/compile/internal/ssa.rbrank %d": "",
"cmd/compile/internal/ssa.regMask %d": "",
"cmd/compile/internal/ssa.register %d": "",
"cmd/compile/internal/ssa.relation %s": "",
"cmd/compile/internal/syntax.ChanDir %d": "",
"cmd/compile/internal/syntax.Error %q": "",
"cmd/compile/internal/syntax.Expr %#v": "",
"cmd/compile/internal/syntax.Expr %s": "",
"cmd/compile/internal/syntax.LitKind %d": "",
"cmd/compile/internal/syntax.Operator %s": "",
"cmd/compile/internal/syntax.Pos %s": "",
"cmd/compile/internal/syntax.position %s": "",
"cmd/compile/internal/syntax.token %q": "",
"cmd/compile/internal/syntax.token %s": "",
"cmd/compile/internal/types.Kind %d": "",
"cmd/compile/internal/types.Kind %s": "",
"cmd/compile/internal/types2.Object %s": "",
"cmd/compile/internal/types2.Type %s": "",
"cmd/compile/internal/types2.color %s": "",
"go/constant.Value %#v": "",
"go/constant.Value %s": "",
"map[*cmd/compile/internal/types2.TypeParam]cmd/compile/internal/types2.Type %s": "",
"map[cmd/compile/internal/ir.Node]*cmd/compile/internal/ssa.Value %v": "",
"map[cmd/compile/internal/ir.Node][]cmd/compile/internal/ir.Node %v": "",
"map[cmd/compile/internal/ssa.ID]uint32 %v": "",
"map[int64]uint32 %v": "",
"math/big.Accuracy %s": "",
"reflect.Type %s": "",
"time.Duration %d": "",
}

View File

@ -546,7 +546,7 @@ func ssaGenValue(s *gc.SSAGenState, v *ssa.Value) {
case *obj.LSym:
wantreg = "SB"
gc.AddAux(&p.From, v)
case *ir.Name:
wantreg = "SP"
gc.AddAux(&p.From, v)
case nil:

View File

@ -396,7 +396,7 @@ func ssaGenValue(s *gc.SSAGenState, v *ssa.Value) {
case *obj.LSym:
wantreg = "SB"
gc.AddAux(&p.From, v)
case *ir.Name:
wantreg = "SP"
gc.AddAux(&p.From, v)
case nil:

View File

@ -0,0 +1,351 @@
// Copyright 2020 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package gc
import (
"cmd/compile/internal/types"
"cmd/internal/src"
"fmt"
"sync"
)
//......................................................................
//
// Public/exported bits of the ABI utilities.
//
// ABIParamResultInfo stores the results of processing a given
// function type to compute stack layout and register assignments. For
// each input and output parameter we capture whether the param was
// register-assigned (and to which register(s)) or the stack offset
// for the param if it is not going to be passed in registers according
// to the rules in the Go internal ABI specification (1.17).
type ABIParamResultInfo struct {
inparams []ABIParamAssignment // Includes receiver for method calls. Does NOT include hidden closure pointer.
outparams []ABIParamAssignment
intSpillSlots int
floatSpillSlots int
offsetToSpillArea int64
config ABIConfig // to enable String() method
}
// RegIndex stores the index into the set of machine registers used by
// the ABI on a specific architecture for parameter passing. RegIndex
// values 0 through N-1 (where N is the number of integer registers
// used for param passing according to the ABI rules) describe integer
// registers; values N through N+M-1 (where M is the number of floating
// point registers used) describe floating point registers. Thus if the
// ABI says there are 5 integer
// registers and 7 floating point registers, then RegIndex value of 4
// indicates the 5th integer register, and a RegIndex value of 11
// indicates the 7th floating point register.
type RegIndex uint8
// ABIParamAssignment holds information about how a specific param or
// result will be passed: in registers (in which case 'Registers' is
// populated) or on the stack (in which case 'Offset' is set to a
// non-negative stack offset). The values in 'Registers' are indices (as
// described above), not architected registers.
type ABIParamAssignment struct {
Type *types.Type
Registers []RegIndex
Offset int32
}
// RegAmounts holds a specified number of integer/float registers.
type RegAmounts struct {
intRegs int
floatRegs int
}
// ABIConfig captures the number of registers made available
// by the ABI rules for parameter passing and result returning.
type ABIConfig struct {
// Do we need anything more than this?
regAmounts RegAmounts
}
// ABIAnalyze takes a function type 't' and an ABI rules description
// 'config' and analyzes the function to determine how its parameters
// and results will be passed (in registers or on the stack), returning
// an ABIParamResultInfo object that holds the results of the analysis.
func ABIAnalyze(t *types.Type, config ABIConfig) ABIParamResultInfo {
setup()
s := assignState{
rTotal: config.regAmounts,
}
result := ABIParamResultInfo{config: config}
// Receiver
ft := t.FuncType()
if t.NumRecvs() != 0 {
rfsl := ft.Receiver.FieldSlice()
result.inparams = append(result.inparams,
s.assignParamOrReturn(rfsl[0].Type))
}
// Inputs
ifsl := ft.Params.FieldSlice()
for _, f := range ifsl {
result.inparams = append(result.inparams,
s.assignParamOrReturn(f.Type))
}
s.stackOffset = Rnd(s.stackOffset, int64(Widthreg))
// Record number of spill slots needed.
result.intSpillSlots = s.rUsed.intRegs
result.floatSpillSlots = s.rUsed.floatRegs
// Outputs
s.rUsed = RegAmounts{}
ofsl := ft.Results.FieldSlice()
for _, f := range ofsl {
result.outparams = append(result.outparams, s.assignParamOrReturn(f.Type))
}
result.offsetToSpillArea = s.stackOffset
return result
}
//......................................................................
//
// Non-public portions.
// regString produces a human-readable version of a RegIndex.
func (c *RegAmounts) regString(r RegIndex) string {
if int(r) < c.intRegs {
return fmt.Sprintf("I%d", int(r))
} else if int(r) < c.intRegs+c.floatRegs {
return fmt.Sprintf("F%d", int(r)-c.intRegs)
}
return fmt.Sprintf("<?>%d", r)
}
// toString method renders an ABIParamAssignment in human-readable
// form, suitable for debugging or unit testing.
func (ri *ABIParamAssignment) toString(config ABIConfig) string {
regs := "R{"
for _, r := range ri.Registers {
regs += " " + config.regAmounts.regString(r)
}
return fmt.Sprintf("%s } offset: %d typ: %v", regs, ri.Offset, ri.Type)
}
// String renders an ABIParamResultInfo in human-readable
// form, suitable for debugging or unit testing.
func (ri *ABIParamResultInfo) String() string {
res := ""
for k, p := range ri.inparams {
res += fmt.Sprintf("IN %d: %s\n", k, p.toString(ri.config))
}
for k, r := range ri.outparams {
res += fmt.Sprintf("OUT %d: %s\n", k, r.toString(ri.config))
}
res += fmt.Sprintf("intspill: %d floatspill: %d offsetToSpillArea: %d",
ri.intSpillSlots, ri.floatSpillSlots, ri.offsetToSpillArea)
return res
}
// assignState holds intermediate state during the register assigning process
// for a given function signature.
type assignState struct {
rTotal RegAmounts // total reg amounts from ABI rules
rUsed RegAmounts // regs used by params completely assigned so far
pUsed RegAmounts // regs used by the current param (or pieces therein)
stackOffset int64 // current stack offset
}
// stackSlot returns a stack offset for a param or result of the
// specified type.
func (state *assignState) stackSlot(t *types.Type) int64 {
if t.Align > 0 {
state.stackOffset = Rnd(state.stackOffset, int64(t.Align))
}
rv := state.stackOffset
state.stackOffset += t.Width
return rv
}
// allocateRegs returns a set of register indices for a parameter or result
// that we've just determined to be register-assignable. The number of registers
// needed is assumed to be stored in state.pUsed.
func (state *assignState) allocateRegs() []RegIndex {
regs := []RegIndex{}
// integer
for r := state.rUsed.intRegs; r < state.rUsed.intRegs+state.pUsed.intRegs; r++ {
regs = append(regs, RegIndex(r))
}
state.rUsed.intRegs += state.pUsed.intRegs
// floating
for r := state.rUsed.floatRegs; r < state.rUsed.floatRegs+state.pUsed.floatRegs; r++ {
regs = append(regs, RegIndex(r+state.rTotal.intRegs))
}
state.rUsed.floatRegs += state.pUsed.floatRegs
return regs
}
// regAllocate creates a register ABIParamAssignment object for a param
// or result with the specified type, as a final step (this assumes
// that all of the safety/suitability analysis is complete).
func (state *assignState) regAllocate(t *types.Type) ABIParamAssignment {
return ABIParamAssignment{
Type: t,
Registers: state.allocateRegs(),
Offset: -1,
}
}
// stackAllocate creates a stack memory ABIParamAssignment object for
// a param or result with the specified type, as a final step (this
// assumes that all of the safety/suitability analysis is complete).
func (state *assignState) stackAllocate(t *types.Type) ABIParamAssignment {
return ABIParamAssignment{
Type: t,
Offset: int32(state.stackSlot(t)),
}
}
// intUsed returns the number of integer registers consumed
// at a given point within an assignment stage.
func (state *assignState) intUsed() int {
return state.rUsed.intRegs + state.pUsed.intRegs
}
// floatUsed returns the number of floating point registers consumed at
// a given point within an assignment stage.
func (state *assignState) floatUsed() int {
return state.rUsed.floatRegs + state.pUsed.floatRegs
}
// regassignIntegral examines a param/result of integral type 't' to
// determine whether it can be register-assigned. Returns TRUE if we
// can register allocate, FALSE otherwise (and updates state
// accordingly).
func (state *assignState) regassignIntegral(t *types.Type) bool {
regsNeeded := int(Rnd(t.Width, int64(Widthptr)) / int64(Widthptr))
// Floating point and complex.
if t.IsFloat() || t.IsComplex() {
if regsNeeded+state.floatUsed() > state.rTotal.floatRegs {
// not enough regs
return false
}
state.pUsed.floatRegs += regsNeeded
return true
}
// Non-floating point
if regsNeeded+state.intUsed() > state.rTotal.intRegs {
// not enough regs
return false
}
state.pUsed.intRegs += regsNeeded
return true
}
// regassignArray processes an array type (or array component within some
// other enclosing type) to determine if it can be register assigned.
// Returns TRUE if we can register allocate, FALSE otherwise.
func (state *assignState) regassignArray(t *types.Type) bool {
nel := t.NumElem()
if nel == 0 {
return true
}
if nel > 1 {
// Not an array of length 1: stack assign
return false
}
// Visit element
return state.regassign(t.Elem())
}
// regassignStruct processes a struct type (or struct component within
// some other enclosing type) to determine if it can be register
// assigned. Returns TRUE if we can register allocate, FALSE otherwise.
func (state *assignState) regassignStruct(t *types.Type) bool {
for _, field := range t.FieldSlice() {
if !state.regassign(field.Type) {
return false
}
}
return true
}
// synthOnce ensures that we only create the synth* fake types once.
var synthOnce sync.Once
// synthSlice, synthString, and synthIface are synthesized struct types
// meant to capture the underlying implementations of string/slice/interface.
var synthSlice *types.Type
var synthString *types.Type
var synthIface *types.Type
// setup performs setup for the register assignment utilities, manufacturing
// a small set of synthesized types that we'll need along the way.
func setup() {
synthOnce.Do(func() {
fname := types.BuiltinPkg.Lookup
nxp := src.NoXPos
unsp := types.Types[types.TUNSAFEPTR]
ui := types.Types[types.TUINTPTR]
synthSlice = types.NewStruct(types.NoPkg, []*types.Field{
types.NewField(nxp, fname("ptr"), unsp),
types.NewField(nxp, fname("len"), ui),
types.NewField(nxp, fname("cap"), ui),
})
synthString = types.NewStruct(types.NoPkg, []*types.Field{
types.NewField(nxp, fname("data"), unsp),
types.NewField(nxp, fname("len"), ui),
})
synthIface = types.NewStruct(types.NoPkg, []*types.Field{
types.NewField(nxp, fname("f1"), unsp),
types.NewField(nxp, fname("f2"), unsp),
})
})
}
// regassign examines a given param type (or component within some
// composite) to determine if it can be register assigned. Returns
// TRUE if we can register allocate, FALSE otherwise.
func (state *assignState) regassign(pt *types.Type) bool {
typ := pt.Kind()
if pt.IsScalar() || pt.IsPtrShaped() {
return state.regassignIntegral(pt)
}
switch typ {
case types.TARRAY:
return state.regassignArray(pt)
case types.TSTRUCT:
return state.regassignStruct(pt)
case types.TSLICE:
return state.regassignStruct(synthSlice)
case types.TSTRING:
return state.regassignStruct(synthString)
case types.TINTER:
return state.regassignStruct(synthIface)
default:
panic("not expected")
}
}
// assignParamOrReturn processes a given receiver, param, or result
// of type 'pt' to determine whether it can be register assigned.
// The returned ABIParamAssignment records the register or stack
// allocation chosen for it; register usage is tracked in 'state'.
func (state *assignState) assignParamOrReturn(pt *types.Type) ABIParamAssignment {
state.pUsed = RegAmounts{}
if pt.Width == types.BADWIDTH {
panic("should never happen")
} else if pt.Width == 0 {
return state.stackAllocate(pt)
} else if state.regassign(pt) {
return state.regAllocate(pt)
} else {
return state.stackAllocate(pt)
}
}
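Because a slice parameter is analyzed as the synthesized `synthSlice` header struct above, passing a slice by value consumes three integer registers (ptr, len, cap) when enough are free, and otherwise falls back to the stack. A minimal sketch of that accounting, under the assumption of the test file's 9-integer-register amd64 config (`regCount` and `assignSliceSketch` are hypothetical names, not compiler API):

```go
package main

import "fmt"

// regCount tracks integer registers consumed so far, in the spirit of
// the assignState counters above.
type regCount struct{ intRegs int }

// assignSliceSketch models register assignment of one by-value slice:
// three pointer-sized fields, all-or-nothing.
func assignSliceSketch(c *regCount, available int) bool {
	const sliceHeaderFields = 3 // ptr, len, cap
	if c.intRegs+sliceHeaderFields > available {
		return false // not enough registers: stack-assign instead
	}
	c.intRegs += sliceHeaderFields
	return true
}

func main() {
	var c regCount
	ok := assignSliceSketch(&c, 9) // assumed amd64 config: 9 integer registers
	fmt.Println(ok, c.intRegs)
}
```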


@ -0,0 +1,270 @@
// Copyright 2020 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package gc
import (
"bufio"
"cmd/compile/internal/base"
"cmd/compile/internal/types"
"cmd/internal/obj"
"cmd/internal/obj/x86"
"cmd/internal/src"
"os"
"testing"
)
// AMD64 registers available:
// - integer: RAX, RBX, RCX, RDI, RSI, R8, R9, R10, R11
// - floating point: X0 - X14
var configAMD64 = ABIConfig{
regAmounts: RegAmounts{
intRegs: 9,
floatRegs: 15,
},
}
func TestMain(m *testing.M) {
thearch.LinkArch = &x86.Linkamd64
thearch.REGSP = x86.REGSP
thearch.MAXWIDTH = 1 << 50
base.Ctxt = obj.Linknew(thearch.LinkArch)
base.Ctxt.DiagFunc = base.Errorf
base.Ctxt.DiagFlush = base.FlushErrors
base.Ctxt.Bso = bufio.NewWriter(os.Stdout)
Widthptr = thearch.LinkArch.PtrSize
Widthreg = thearch.LinkArch.RegSize
initializeTypesPackage()
os.Exit(m.Run())
}
func TestABIUtilsBasic1(t *testing.T) {
// func(x int32) int32
i32 := types.Types[types.TINT32]
ft := mkFuncType(nil, []*types.Type{i32}, []*types.Type{i32})
// expected results
exp := makeExpectedDump(`
IN 0: R{ I0 } offset: -1 typ: int32
OUT 0: R{ I0 } offset: -1 typ: int32
intspill: 1 floatspill: 0 offsetToSpillArea: 0
`)
abitest(t, ft, exp)
}
func TestABIUtilsBasic2(t *testing.T) {
	// func with a large mix of int, float, and complex args (enough to
	// exhaust the register set) returning (int32, float64, float64)
i8 := types.Types[types.TINT8]
i16 := types.Types[types.TINT16]
i32 := types.Types[types.TINT32]
i64 := types.Types[types.TINT64]
f32 := types.Types[types.TFLOAT32]
f64 := types.Types[types.TFLOAT64]
c64 := types.Types[types.TCOMPLEX64]
c128 := types.Types[types.TCOMPLEX128]
ft := mkFuncType(nil,
[]*types.Type{
i8, i16, i32, i64,
f32, f32, f64, f64,
i8, i16, i32, i64,
f32, f32, f64, f64,
c128, c128, c128, c128, c64,
i8, i16, i32, i64,
i8, i16, i32, i64},
[]*types.Type{i32, f64, f64})
exp := makeExpectedDump(`
IN 0: R{ I0 } offset: -1 typ: int8
IN 1: R{ I1 } offset: -1 typ: int16
IN 2: R{ I2 } offset: -1 typ: int32
IN 3: R{ I3 } offset: -1 typ: int64
IN 4: R{ F0 } offset: -1 typ: float32
IN 5: R{ F1 } offset: -1 typ: float32
IN 6: R{ F2 } offset: -1 typ: float64
IN 7: R{ F3 } offset: -1 typ: float64
IN 8: R{ I4 } offset: -1 typ: int8
IN 9: R{ I5 } offset: -1 typ: int16
IN 10: R{ I6 } offset: -1 typ: int32
IN 11: R{ I7 } offset: -1 typ: int64
IN 12: R{ F4 } offset: -1 typ: float32
IN 13: R{ F5 } offset: -1 typ: float32
IN 14: R{ F6 } offset: -1 typ: float64
IN 15: R{ F7 } offset: -1 typ: float64
IN 16: R{ F8 F9 } offset: -1 typ: complex128
IN 17: R{ F10 F11 } offset: -1 typ: complex128
IN 18: R{ F12 F13 } offset: -1 typ: complex128
IN 19: R{ } offset: 0 typ: complex128
IN 20: R{ F14 } offset: -1 typ: complex64
IN 21: R{ I8 } offset: -1 typ: int8
IN 22: R{ } offset: 16 typ: int16
IN 23: R{ } offset: 20 typ: int32
IN 24: R{ } offset: 24 typ: int64
IN 25: R{ } offset: 32 typ: int8
IN 26: R{ } offset: 34 typ: int16
IN 27: R{ } offset: 36 typ: int32
IN 28: R{ } offset: 40 typ: int64
OUT 0: R{ I0 } offset: -1 typ: int32
OUT 1: R{ F0 } offset: -1 typ: float64
OUT 2: R{ F1 } offset: -1 typ: float64
intspill: 9 floatspill: 15 offsetToSpillArea: 48
`)
abitest(t, ft, exp)
}
func TestABIUtilsArrays(t *testing.T) {
i32 := types.Types[types.TINT32]
ae := types.NewArray(i32, 0)
a1 := types.NewArray(i32, 1)
a2 := types.NewArray(i32, 2)
aa1 := types.NewArray(a1, 1)
ft := mkFuncType(nil, []*types.Type{a1, ae, aa1, a2},
[]*types.Type{a2, a1, ae, aa1})
exp := makeExpectedDump(`
IN 0: R{ I0 } offset: -1 typ: [1]int32
IN 1: R{ } offset: 0 typ: [0]int32
IN 2: R{ I1 } offset: -1 typ: [1][1]int32
IN 3: R{ } offset: 0 typ: [2]int32
OUT 0: R{ } offset: 8 typ: [2]int32
OUT 1: R{ I0 } offset: -1 typ: [1]int32
OUT 2: R{ } offset: 16 typ: [0]int32
OUT 3: R{ I1 } offset: -1 typ: [1][1]int32
intspill: 2 floatspill: 0 offsetToSpillArea: 16
`)
abitest(t, ft, exp)
}
func TestABIUtilsStruct1(t *testing.T) {
i8 := types.Types[types.TINT8]
i16 := types.Types[types.TINT16]
i32 := types.Types[types.TINT32]
i64 := types.Types[types.TINT64]
s := mkstruct([]*types.Type{i8, i8, mkstruct([]*types.Type{}), i8, i16})
ft := mkFuncType(nil, []*types.Type{i8, s, i64},
[]*types.Type{s, i8, i32})
exp := makeExpectedDump(`
IN 0: R{ I0 } offset: -1 typ: int8
IN 1: R{ I1 I2 I3 I4 } offset: -1 typ: struct { int8; int8; struct {}; int8; int16 }
IN 2: R{ I5 } offset: -1 typ: int64
OUT 0: R{ I0 I1 I2 I3 } offset: -1 typ: struct { int8; int8; struct {}; int8; int16 }
OUT 1: R{ I4 } offset: -1 typ: int8
OUT 2: R{ I5 } offset: -1 typ: int32
intspill: 6 floatspill: 0 offsetToSpillArea: 0
`)
abitest(t, ft, exp)
}
func TestABIUtilsStruct2(t *testing.T) {
f64 := types.Types[types.TFLOAT64]
i64 := types.Types[types.TINT64]
s := mkstruct([]*types.Type{i64, mkstruct([]*types.Type{})})
fs := mkstruct([]*types.Type{f64, s, mkstruct([]*types.Type{})})
ft := mkFuncType(nil, []*types.Type{s, s, fs},
[]*types.Type{fs, fs})
exp := makeExpectedDump(`
IN 0: R{ I0 } offset: -1 typ: struct { int64; struct {} }
IN 1: R{ I1 } offset: -1 typ: struct { int64; struct {} }
IN 2: R{ I2 F0 } offset: -1 typ: struct { float64; struct { int64; struct {} }; struct {} }
OUT 0: R{ I0 F0 } offset: -1 typ: struct { float64; struct { int64; struct {} }; struct {} }
OUT 1: R{ I1 F1 } offset: -1 typ: struct { float64; struct { int64; struct {} }; struct {} }
intspill: 3 floatspill: 1 offsetToSpillArea: 0
`)
abitest(t, ft, exp)
}
func TestABIUtilsSliceString(t *testing.T) {
i32 := types.Types[types.TINT32]
sli32 := types.NewSlice(i32)
str := types.New(types.TSTRING)
i8 := types.Types[types.TINT8]
i64 := types.Types[types.TINT64]
ft := mkFuncType(nil, []*types.Type{sli32, i8, sli32, i8, str, i8, i64, sli32},
[]*types.Type{str, i64, str, sli32})
exp := makeExpectedDump(`
IN 0: R{ I0 I1 I2 } offset: -1 typ: []int32
IN 1: R{ I3 } offset: -1 typ: int8
IN 2: R{ I4 I5 I6 } offset: -1 typ: []int32
IN 3: R{ I7 } offset: -1 typ: int8
IN 4: R{ } offset: 0 typ: string
IN 5: R{ I8 } offset: -1 typ: int8
IN 6: R{ } offset: 16 typ: int64
IN 7: R{ } offset: 24 typ: []int32
OUT 0: R{ I0 I1 } offset: -1 typ: string
OUT 1: R{ I2 } offset: -1 typ: int64
OUT 2: R{ I3 I4 } offset: -1 typ: string
OUT 3: R{ I5 I6 I7 } offset: -1 typ: []int32
intspill: 9 floatspill: 0 offsetToSpillArea: 48
`)
abitest(t, ft, exp)
}
func TestABIUtilsMethod(t *testing.T) {
i16 := types.Types[types.TINT16]
i64 := types.Types[types.TINT64]
f64 := types.Types[types.TFLOAT64]
s1 := mkstruct([]*types.Type{i16, i16, i16})
ps1 := types.NewPtr(s1)
a7 := types.NewArray(ps1, 7)
ft := mkFuncType(s1, []*types.Type{ps1, a7, f64, i16, i16, i16},
[]*types.Type{a7, f64, i64})
exp := makeExpectedDump(`
IN 0: R{ I0 I1 I2 } offset: -1 typ: struct { int16; int16; int16 }
IN 1: R{ I3 } offset: -1 typ: *struct { int16; int16; int16 }
IN 2: R{ } offset: 0 typ: [7]*struct { int16; int16; int16 }
IN 3: R{ F0 } offset: -1 typ: float64
IN 4: R{ I4 } offset: -1 typ: int16
IN 5: R{ I5 } offset: -1 typ: int16
IN 6: R{ I6 } offset: -1 typ: int16
OUT 0: R{ } offset: 56 typ: [7]*struct { int16; int16; int16 }
OUT 1: R{ F0 } offset: -1 typ: float64
OUT 2: R{ I0 } offset: -1 typ: int64
intspill: 7 floatspill: 1 offsetToSpillArea: 112
`)
abitest(t, ft, exp)
}
func TestABIUtilsInterfaces(t *testing.T) {
ei := types.Types[types.TINTER] // interface{}
pei := types.NewPtr(ei) // *interface{}
fldt := mkFuncType(types.FakeRecvType(), []*types.Type{},
[]*types.Type{types.UntypedString})
field := types.NewField(src.NoXPos, nil, fldt)
// interface{ ...() string }
nei := types.NewInterface(types.LocalPkg, []*types.Field{field})
i16 := types.Types[types.TINT16]
tb := types.Types[types.TBOOL]
s1 := mkstruct([]*types.Type{i16, i16, tb})
ft := mkFuncType(nil, []*types.Type{s1, ei, ei, nei, pei, nei, i16},
[]*types.Type{ei, nei, pei})
exp := makeExpectedDump(`
IN 0: R{ I0 I1 I2 } offset: -1 typ: struct { int16; int16; bool }
IN 1: R{ I3 I4 } offset: -1 typ: interface {}
IN 2: R{ I5 I6 } offset: -1 typ: interface {}
IN 3: R{ I7 I8 } offset: -1 typ: interface { () untyped string }
IN 4: R{ } offset: 0 typ: *interface {}
IN 5: R{ } offset: 8 typ: interface { () untyped string }
IN 6: R{ } offset: 24 typ: int16
OUT 0: R{ I0 I1 } offset: -1 typ: interface {}
OUT 1: R{ I2 I3 } offset: -1 typ: interface { () untyped string }
OUT 2: R{ I4 } offset: -1 typ: *interface {}
intspill: 9 floatspill: 0 offsetToSpillArea: 32
`)
abitest(t, ft, exp)
}


@ -0,0 +1,157 @@
// Copyright 2020 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package gc
// This file contains utility routines and harness infrastructure used
// by the ABI tests in "abiutils_test.go".
import (
"cmd/compile/internal/ir"
"cmd/compile/internal/types"
"cmd/internal/src"
"fmt"
"strings"
"testing"
"text/scanner"
)
func mkParamResultField(t *types.Type, s *types.Sym, which ir.Class) *types.Field {
field := types.NewField(src.NoXPos, s, t)
n := NewName(s)
n.SetClass(which)
field.Nname = n
n.SetType(t)
return field
}
// mkstruct is a helper routine to create a struct type with fields
// of the types specified in 'fieldtypes'.
func mkstruct(fieldtypes []*types.Type) *types.Type {
fields := make([]*types.Field, len(fieldtypes))
for k, t := range fieldtypes {
if t == nil {
panic("bad -- field has no type")
}
f := types.NewField(src.NoXPos, nil, t)
fields[k] = f
}
s := types.NewStruct(types.LocalPkg, fields)
return s
}
func mkFuncType(rcvr *types.Type, ins []*types.Type, outs []*types.Type) *types.Type {
q := lookup("?")
inf := []*types.Field{}
for _, it := range ins {
inf = append(inf, mkParamResultField(it, q, ir.PPARAM))
}
outf := []*types.Field{}
for _, ot := range outs {
outf = append(outf, mkParamResultField(ot, q, ir.PPARAMOUT))
}
var rf *types.Field
if rcvr != nil {
rf = mkParamResultField(rcvr, q, ir.PPARAM)
}
return types.NewSignature(types.LocalPkg, rf, inf, outf)
}
type expectedDump struct {
dump string
file string
line int
}
func tokenize(src string) []string {
var s scanner.Scanner
s.Init(strings.NewReader(src))
res := []string{}
for tok := s.Scan(); tok != scanner.EOF; tok = s.Scan() {
res = append(res, s.TokenText())
}
return res
}
func verifyParamResultOffset(t *testing.T, f *types.Field, r ABIParamAssignment, which string, idx int) int {
n := ir.AsNode(f.Nname)
if n == nil {
panic("not expected")
}
if n.Offset() != int64(r.Offset) {
t.Errorf("%s %d: got offset %d wanted %d t=%v",
which, idx, r.Offset, n.Offset(), f.Type)
return 1
}
return 0
}
func makeExpectedDump(e string) expectedDump {
return expectedDump{dump: e}
}
func difftokens(atoks []string, etoks []string) string {
if len(atoks) != len(etoks) {
return fmt.Sprintf("expected %d tokens got %d",
len(etoks), len(atoks))
}
for i := 0; i < len(etoks); i++ {
if etoks[i] == atoks[i] {
continue
}
return fmt.Sprintf("diff at token %d: expected %q got %q",
i, etoks[i], atoks[i])
}
return ""
}
func abitest(t *testing.T, ft *types.Type, exp expectedDump) {
dowidth(ft)
// Analyze with full set of registers.
regRes := ABIAnalyze(ft, configAMD64)
regResString := strings.TrimSpace(regRes.String())
// Check results.
reason := difftokens(tokenize(regResString), tokenize(exp.dump))
if reason != "" {
t.Errorf("\nexpected:\n%s\ngot:\n%s\nreason: %s",
strings.TrimSpace(exp.dump), regResString, reason)
}
// Analyze again with empty register set.
empty := ABIConfig{}
emptyRes := ABIAnalyze(ft, empty)
emptyResString := emptyRes.String()
// Walk the results and make sure the offsets assigned match
// up with those assigned by dowidth. This checks to make sure that
// when we have no available registers the ABI assignment degenerates
// back to the original ABI0.
// receiver
failed := 0
rfsl := ft.Recvs().Fields().Slice()
poff := 0
if len(rfsl) != 0 {
failed |= verifyParamResultOffset(t, rfsl[0], emptyRes.inparams[0], "receiver", 0)
poff = 1
}
// params
pfsl := ft.Params().Fields().Slice()
for k, f := range pfsl {
		failed |= verifyParamResultOffset(t, f, emptyRes.inparams[k+poff], "param", k)
}
// results
ofsl := ft.Results().Fields().Slice()
for k, f := range ofsl {
failed |= verifyParamResultOffset(t, f, emptyRes.outparams[k], "result", k)
}
if failed != 0 {
t.Logf("emptyres:\n%s\n", emptyResString)
}
}
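The harness above compares the actual and expected dumps token by token, so incidental whitespace differences in the expected string literals don't cause failures. A simplified standalone version of that idea, using `strings.Fields` where the real harness uses `text/scanner` (names here are illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// sameTokens reports whether two dumps contain the same sequence of
// whitespace-separated tokens, ignoring spacing and indentation.
func sameTokens(a, b string) bool {
	at, bt := strings.Fields(a), strings.Fields(b)
	if len(at) != len(bt) {
		return false
	}
	for i := range at {
		if at[i] != bt[i] {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(sameTokens("IN 0: R{ I0 }", "  IN  0: R{ I0 }")) // spacing ignored
	fmt.Println(sameTokens("IN 0", "IN 1"))                      // token mismatch
}
```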


@ -68,7 +68,7 @@ func IncomparableField(t *types.Type) *types.Field {
// EqCanPanic reports whether == on type t could panic (has an interface somewhere).
// t must be comparable.
func EqCanPanic(t *types.Type) bool {
switch t.Etype {
switch t.Kind() {
default:
return false
case types.TINTER:
@ -120,7 +120,7 @@ func algtype1(t *types.Type) (AlgKind, *types.Type) {
return ANOEQ, t
}
switch t.Etype {
switch t.Kind() {
case types.TANY, types.TFORW:
// will be defined later.
return ANOEQ, t
@ -274,7 +274,7 @@ func genhash(t *types.Type) *obj.LSym {
// (And the closure generated by genhash will also get
// dead-code eliminated, as we call the subtype hashers
// directly.)
switch t.Etype {
switch t.Kind() {
case types.TARRAY:
genhash(t.Elem())
case types.TSTRUCT:
@ -292,18 +292,18 @@ func genhash(t *types.Type) *obj.LSym {
dclcontext = ir.PEXTERN
// func sym(p *T, h uintptr) uintptr
tfn := ir.Nod(ir.OTFUNC, nil, nil)
tfn.PtrList().Set2(
args := []*ir.Field{
namedfield("p", types.NewPtr(t)),
namedfield("h", types.Types[types.TUINTPTR]),
)
tfn.PtrRlist().Set1(anonfield(types.Types[types.TUINTPTR]))
}
results := []*ir.Field{anonfield(types.Types[types.TUINTPTR])}
tfn := ir.NewFuncType(base.Pos, nil, args, results)
fn := dclfunc(sym, tfn)
np := ir.AsNode(tfn.Type().Params().Field(0).Nname)
nh := ir.AsNode(tfn.Type().Params().Field(1).Nname)
switch t.Etype {
switch t.Kind() {
case types.TARRAY:
// An array of pure memory would be handled by the
// standard algorithm, so the element type must not be
@ -311,7 +311,7 @@ func genhash(t *types.Type) *obj.LSym {
hashel := hashfor(t.Elem())
n := ir.Nod(ir.ORANGE, nil, ir.Nod(ir.ODEREF, np, nil))
ni := NewName(lookup("i"))
ni := ir.Node(NewName(lookup("i")))
ni.SetType(types.Types[types.TINT])
n.PtrList().Set1(ni)
n.SetColas(true)
@ -382,8 +382,8 @@ func genhash(t *types.Type) *obj.LSym {
funcbody()
fn.Func().SetDupok(true)
fn = typecheck(fn, ctxStmt)
fn.SetDupok(true)
typecheckFunc(fn)
Curfn = fn
typecheckslice(fn.Body().Slice(), ctxStmt)
@ -393,7 +393,7 @@ func genhash(t *types.Type) *obj.LSym {
testdclstack()
}
fn.Func().SetNilCheckDisabled(true)
fn.SetNilCheckDisabled(true)
xtop = append(xtop, fn)
// Build closure. It doesn't close over any variables, so
@ -432,10 +432,10 @@ func hashfor(t *types.Type) ir.Node {
n := NewName(sym)
setNodeNameFunc(n)
n.SetType(functype(nil, []ir.Node{
n.SetType(functype(nil, []*ir.Field{
anonfield(types.NewPtr(t)),
anonfield(types.Types[types.TUINTPTR]),
}, []ir.Node{
}, []*ir.Field{
anonfield(types.Types[types.TUINTPTR]),
}))
return n
@ -521,12 +521,9 @@ func geneq(t *types.Type) *obj.LSym {
dclcontext = ir.PEXTERN
// func sym(p, q *T) bool
tfn := ir.Nod(ir.OTFUNC, nil, nil)
tfn.PtrList().Set2(
namedfield("p", types.NewPtr(t)),
namedfield("q", types.NewPtr(t)),
)
tfn.PtrRlist().Set1(namedfield("r", types.Types[types.TBOOL]))
tfn := ir.NewFuncType(base.Pos, nil,
[]*ir.Field{namedfield("p", types.NewPtr(t)), namedfield("q", types.NewPtr(t))},
[]*ir.Field{namedfield("r", types.Types[types.TBOOL])})
fn := dclfunc(sym, tfn)
np := ir.AsNode(tfn.Type().Params().Field(0).Nname)
@ -539,7 +536,7 @@ func geneq(t *types.Type) *obj.LSym {
// We reach here only for types that have equality but
// cannot be handled by the standard algorithms,
// so t must be either an array or a struct.
switch t.Etype {
switch t.Kind() {
default:
base.Fatalf("geneq %v", t)
@ -616,7 +613,7 @@ func geneq(t *types.Type) *obj.LSym {
}
}
switch t.Elem().Etype {
switch t.Elem().Kind() {
case types.TSTRING:
// Do two loops. First, check that all the lengths match (cheap).
// Second, check that all the contents match (expensive).
@ -761,8 +758,8 @@ func geneq(t *types.Type) *obj.LSym {
funcbody()
fn.Func().SetDupok(true)
fn = typecheck(fn, ctxStmt)
fn.SetDupok(true)
typecheckFunc(fn)
Curfn = fn
typecheckslice(fn.Body().Slice(), ctxStmt)
@ -776,7 +773,7 @@ func geneq(t *types.Type) *obj.LSym {
// We are comparing a struct or an array,
// neither of which can be nil, and our comparisons
// are shallow.
fn.Func().SetNilCheckDisabled(true)
fn.SetNilCheckDisabled(true)
xtop = append(xtop, fn)
// Generate a closure which points at the function we just generated.
@ -785,37 +782,14 @@ func geneq(t *types.Type) *obj.LSym {
return closure
}
func hasCall(n ir.Node) bool {
if n.Op() == ir.OCALL || n.Op() == ir.OCALLFUNC {
return true
}
if n.Left() != nil && hasCall(n.Left()) {
return true
}
if n.Right() != nil && hasCall(n.Right()) {
return true
}
for _, x := range n.Init().Slice() {
if hasCall(x) {
return true
func hasCall(fn *ir.Func) bool {
found := ir.Find(fn, func(n ir.Node) interface{} {
if op := n.Op(); op == ir.OCALL || op == ir.OCALLFUNC {
return n
}
}
for _, x := range n.Body().Slice() {
if hasCall(x) {
return true
}
}
for _, x := range n.List().Slice() {
if hasCall(x) {
return true
}
}
for _, x := range n.Rlist().Slice() {
if hasCall(x) {
return true
}
}
return false
return nil
})
return found != nil
}
// eqfield returns the node


@ -126,8 +126,8 @@ func widstruct(errtype *types.Type, t *types.Type, o int64, flag int) int64 {
// NOTE(rsc): This comment may be stale.
// It's possible the ordering has changed and this is
// now the common case. I'm not sure.
if n.Name().Param.Stackcopy != nil {
n.Name().Param.Stackcopy.SetOffset(o)
if n.Name().Stackcopy != nil {
n.Name().Stackcopy.SetOffset(o)
n.SetOffset(0)
} else {
n.SetOffset(o)
@ -185,11 +185,18 @@ func findTypeLoop(t *types.Type, path *[]*types.Type) bool {
// We implement a simple DFS loop-finding algorithm. This
// could be faster, but type cycles are rare.
if t.Sym != nil {
if t.Sym() != nil {
// Declared type. Check for loops and otherwise
// recurse on the type expression used in the type
// declaration.
// Type imported from package, so it can't be part of
// a type loop (otherwise that package should have
// failed to compile).
if t.Sym().Pkg != types.LocalPkg {
return false
}
for i, x := range *path {
if x == t {
*path = (*path)[i:]
@ -198,14 +205,14 @@ func findTypeLoop(t *types.Type, path *[]*types.Type) bool {
}
*path = append(*path, t)
if p := ir.AsNode(t.Nod).Name().Param; p != nil && findTypeLoop(p.Ntype.Type(), path) {
if findTypeLoop(t.Obj().(*ir.Name).Ntype.Type(), path) {
return true
}
*path = (*path)[:len(*path)-1]
} else {
// Anonymous type. Recurse on contained types.
switch t.Etype {
switch t.Kind() {
case types.TARRAY:
if findTypeLoop(t.Elem(), path) {
return true
@ -307,22 +314,22 @@ func dowidth(t *types.Type) {
defercheckwidth()
lno := base.Pos
if ir.AsNode(t.Nod) != nil {
base.Pos = ir.AsNode(t.Nod).Pos()
if pos := t.Pos(); pos.IsKnown() {
base.Pos = pos
}
t.Width = -2
t.Align = 0 // 0 means use t.Width, below
et := t.Etype
et := t.Kind()
switch et {
case types.TFUNC, types.TCHAN, types.TMAP, types.TSTRING:
break
// simtype == 0 during bootstrap
default:
if simtype[t.Etype] != 0 {
et = simtype[t.Etype]
if simtype[t.Kind()] != 0 {
et = simtype[t.Kind()]
}
}


@ -16,7 +16,7 @@ type exporter struct {
// markObject visits a reachable object.
func (p *exporter) markObject(n ir.Node) {
if n.Op() == ir.ONAME && n.Class() == ir.PFUNC {
inlFlood(n)
inlFlood(n.(*ir.Name))
}
p.markType(n.Type())
@ -35,7 +35,7 @@ func (p *exporter) markType(t *types.Type) {
// only their unexpanded method set (i.e., exclusive of
// interface embeddings), and the switch statement below
// handles their full method set.
if t.Sym != nil && t.Etype != types.TINTER {
if t.Sym() != nil && t.Kind() != types.TINTER {
for _, m := range t.Methods().Slice() {
if types.IsExported(m.Sym.Name) {
p.markObject(ir.AsNode(m.Nname))
@ -52,7 +52,7 @@ func (p *exporter) markType(t *types.Type) {
// Notably, we don't mark function parameter types, because
// the user already needs some way to construct values of
// those types.
switch t.Etype {
switch t.Kind() {
case types.TPTR, types.TARRAY, types.TSLICE:
p.markType(t.Elem())
@ -153,11 +153,11 @@ func predeclared() []*types.Type {
types.Types[types.TSTRING],
// basic type aliases
types.Bytetype,
types.Runetype,
types.ByteType,
types.RuneType,
// error
types.Errortype,
types.ErrorType,
// untyped types
types.UntypedBool,


@ -6,6 +6,7 @@ package gc
import (
"cmd/compile/internal/ir"
"cmd/compile/internal/types"
"cmd/internal/src"
)
@ -15,5 +16,5 @@ func npos(pos src.XPos, n ir.Node) ir.Node {
}
func builtinCall(op ir.Op) ir.Node {
return ir.Nod(ir.OCALL, mkname(ir.BuiltinPkg.Lookup(ir.OpNames[op])), nil)
return ir.Nod(ir.OCALL, mkname(types.BuiltinPkg.Lookup(ir.OpNames[op])), nil)
}


@ -187,16 +187,17 @@ var runtimeDecls = [...]struct {
{"racewriterange", funcTag, 121},
{"msanread", funcTag, 121},
{"msanwrite", funcTag, 121},
{"checkptrAlignment", funcTag, 122},
{"checkptrArithmetic", funcTag, 124},
{"libfuzzerTraceCmp1", funcTag, 126},
{"libfuzzerTraceCmp2", funcTag, 128},
{"libfuzzerTraceCmp4", funcTag, 129},
{"libfuzzerTraceCmp8", funcTag, 130},
{"libfuzzerTraceConstCmp1", funcTag, 126},
{"libfuzzerTraceConstCmp2", funcTag, 128},
{"libfuzzerTraceConstCmp4", funcTag, 129},
{"libfuzzerTraceConstCmp8", funcTag, 130},
{"msanmove", funcTag, 122},
{"checkptrAlignment", funcTag, 123},
{"checkptrArithmetic", funcTag, 125},
{"libfuzzerTraceCmp1", funcTag, 127},
{"libfuzzerTraceCmp2", funcTag, 129},
{"libfuzzerTraceCmp4", funcTag, 130},
{"libfuzzerTraceCmp8", funcTag, 131},
{"libfuzzerTraceConstCmp1", funcTag, 127},
{"libfuzzerTraceConstCmp2", funcTag, 129},
{"libfuzzerTraceConstCmp4", funcTag, 130},
{"libfuzzerTraceConstCmp8", funcTag, 131},
{"x86HasPOPCNT", varTag, 6},
{"x86HasSSE41", varTag, 6},
{"x86HasFMA", varTag, 6},
@ -205,137 +206,138 @@ var runtimeDecls = [...]struct {
}
func runtimeTypes() []*types.Type {
var typs [131]*types.Type
typs[0] = types.Bytetype
var typs [132]*types.Type
typs[0] = types.ByteType
typs[1] = types.NewPtr(typs[0])
typs[2] = types.Types[types.TANY]
typs[3] = types.NewPtr(typs[2])
typs[4] = functype(nil, []ir.Node{anonfield(typs[1])}, []ir.Node{anonfield(typs[3])})
typs[4] = functype(nil, []*ir.Field{anonfield(typs[1])}, []*ir.Field{anonfield(typs[3])})
typs[5] = types.Types[types.TUINTPTR]
typs[6] = types.Types[types.TBOOL]
typs[7] = types.Types[types.TUNSAFEPTR]
typs[8] = functype(nil, []ir.Node{anonfield(typs[5]), anonfield(typs[1]), anonfield(typs[6])}, []ir.Node{anonfield(typs[7])})
typs[8] = functype(nil, []*ir.Field{anonfield(typs[5]), anonfield(typs[1]), anonfield(typs[6])}, []*ir.Field{anonfield(typs[7])})
typs[9] = functype(nil, nil, nil)
typs[10] = types.Types[types.TINTER]
typs[11] = functype(nil, []ir.Node{anonfield(typs[10])}, nil)
typs[11] = functype(nil, []*ir.Field{anonfield(typs[10])}, nil)
typs[12] = types.Types[types.TINT32]
typs[13] = types.NewPtr(typs[12])
typs[14] = functype(nil, []ir.Node{anonfield(typs[13])}, []ir.Node{anonfield(typs[10])})
typs[14] = functype(nil, []*ir.Field{anonfield(typs[13])}, []*ir.Field{anonfield(typs[10])})
typs[15] = types.Types[types.TINT]
typs[16] = functype(nil, []ir.Node{anonfield(typs[15]), anonfield(typs[15])}, nil)
typs[16] = functype(nil, []*ir.Field{anonfield(typs[15]), anonfield(typs[15])}, nil)
typs[17] = types.Types[types.TUINT]
typs[18] = functype(nil, []ir.Node{anonfield(typs[17]), anonfield(typs[15])}, nil)
typs[19] = functype(nil, []ir.Node{anonfield(typs[6])}, nil)
typs[18] = functype(nil, []*ir.Field{anonfield(typs[17]), anonfield(typs[15])}, nil)
typs[19] = functype(nil, []*ir.Field{anonfield(typs[6])}, nil)
typs[20] = types.Types[types.TFLOAT64]
typs[21] = functype(nil, []ir.Node{anonfield(typs[20])}, nil)
typs[21] = functype(nil, []*ir.Field{anonfield(typs[20])}, nil)
typs[22] = types.Types[types.TINT64]
typs[23] = functype(nil, []ir.Node{anonfield(typs[22])}, nil)
typs[23] = functype(nil, []*ir.Field{anonfield(typs[22])}, nil)
typs[24] = types.Types[types.TUINT64]
typs[25] = functype(nil, []ir.Node{anonfield(typs[24])}, nil)
typs[25] = functype(nil, []*ir.Field{anonfield(typs[24])}, nil)
typs[26] = types.Types[types.TCOMPLEX128]
typs[27] = functype(nil, []ir.Node{anonfield(typs[26])}, nil)
typs[27] = functype(nil, []*ir.Field{anonfield(typs[26])}, nil)
typs[28] = types.Types[types.TSTRING]
typs[29] = functype(nil, []ir.Node{anonfield(typs[28])}, nil)
typs[30] = functype(nil, []ir.Node{anonfield(typs[2])}, nil)
typs[31] = functype(nil, []ir.Node{anonfield(typs[5])}, nil)
typs[29] = functype(nil, []*ir.Field{anonfield(typs[28])}, nil)
typs[30] = functype(nil, []*ir.Field{anonfield(typs[2])}, nil)
typs[31] = functype(nil, []*ir.Field{anonfield(typs[5])}, nil)
typs[32] = types.NewArray(typs[0], 32)
typs[33] = types.NewPtr(typs[32])
typs[34] = functype(nil, []ir.Node{anonfield(typs[33]), anonfield(typs[28]), anonfield(typs[28])}, []ir.Node{anonfield(typs[28])})
typs[35] = functype(nil, []ir.Node{anonfield(typs[33]), anonfield(typs[28]), anonfield(typs[28]), anonfield(typs[28])}, []ir.Node{anonfield(typs[28])})
typs[36] = functype(nil, []ir.Node{anonfield(typs[33]), anonfield(typs[28]), anonfield(typs[28]), anonfield(typs[28]), anonfield(typs[28])}, []ir.Node{anonfield(typs[28])})
typs[37] = functype(nil, []ir.Node{anonfield(typs[33]), anonfield(typs[28]), anonfield(typs[28]), anonfield(typs[28]), anonfield(typs[28]), anonfield(typs[28])}, []ir.Node{anonfield(typs[28])})
typs[34] = functype(nil, []*ir.Field{anonfield(typs[33]), anonfield(typs[28]), anonfield(typs[28])}, []*ir.Field{anonfield(typs[28])})
typs[35] = functype(nil, []*ir.Field{anonfield(typs[33]), anonfield(typs[28]), anonfield(typs[28]), anonfield(typs[28])}, []*ir.Field{anonfield(typs[28])})
typs[36] = functype(nil, []*ir.Field{anonfield(typs[33]), anonfield(typs[28]), anonfield(typs[28]), anonfield(typs[28]), anonfield(typs[28])}, []*ir.Field{anonfield(typs[28])})
typs[37] = functype(nil, []*ir.Field{anonfield(typs[33]), anonfield(typs[28]), anonfield(typs[28]), anonfield(typs[28]), anonfield(typs[28]), anonfield(typs[28])}, []*ir.Field{anonfield(typs[28])})
typs[38] = types.NewSlice(typs[28])
typs[39] = functype(nil, []ir.Node{anonfield(typs[33]), anonfield(typs[38])}, []ir.Node{anonfield(typs[28])})
typs[40] = functype(nil, []ir.Node{anonfield(typs[28]), anonfield(typs[28])}, []ir.Node{anonfield(typs[15])})
typs[39] = functype(nil, []*ir.Field{anonfield(typs[33]), anonfield(typs[38])}, []*ir.Field{anonfield(typs[28])})
typs[40] = functype(nil, []*ir.Field{anonfield(typs[28]), anonfield(typs[28])}, []*ir.Field{anonfield(typs[15])})
typs[41] = types.NewArray(typs[0], 4)
typs[42] = types.NewPtr(typs[41])
typs[43] = functype(nil, []ir.Node{anonfield(typs[42]), anonfield(typs[22])}, []ir.Node{anonfield(typs[28])})
typs[44] = functype(nil, []ir.Node{anonfield(typs[33]), anonfield(typs[1]), anonfield(typs[15])}, []ir.Node{anonfield(typs[28])})
typs[45] = functype(nil, []ir.Node{anonfield(typs[1]), anonfield(typs[15])}, []ir.Node{anonfield(typs[28])})
typs[46] = types.Runetype
typs[43] = functype(nil, []*ir.Field{anonfield(typs[42]), anonfield(typs[22])}, []*ir.Field{anonfield(typs[28])})
typs[44] = functype(nil, []*ir.Field{anonfield(typs[33]), anonfield(typs[1]), anonfield(typs[15])}, []*ir.Field{anonfield(typs[28])})
typs[45] = functype(nil, []*ir.Field{anonfield(typs[1]), anonfield(typs[15])}, []*ir.Field{anonfield(typs[28])})
typs[46] = types.RuneType
typs[47] = types.NewSlice(typs[46])
typs[48] = functype(nil, []ir.Node{anonfield(typs[33]), anonfield(typs[47])}, []ir.Node{anonfield(typs[28])})
typs[48] = functype(nil, []*ir.Field{anonfield(typs[33]), anonfield(typs[47])}, []*ir.Field{anonfield(typs[28])})
typs[49] = types.NewSlice(typs[0])
typs[50] = functype(nil, []ir.Node{anonfield(typs[33]), anonfield(typs[28])}, []ir.Node{anonfield(typs[49])})
typs[50] = functype(nil, []*ir.Field{anonfield(typs[33]), anonfield(typs[28])}, []*ir.Field{anonfield(typs[49])})
typs[51] = types.NewArray(typs[46], 32)
typs[52] = types.NewPtr(typs[51])
typs[53] = functype(nil, []ir.Node{anonfield(typs[52]), anonfield(typs[28])}, []ir.Node{anonfield(typs[47])})
typs[54] = functype(nil, []ir.Node{anonfield(typs[3]), anonfield(typs[15]), anonfield(typs[3]), anonfield(typs[15]), anonfield(typs[5])}, []ir.Node{anonfield(typs[15])})
typs[55] = functype(nil, []ir.Node{anonfield(typs[28]), anonfield(typs[15])}, []ir.Node{anonfield(typs[46]), anonfield(typs[15])})
typs[56] = functype(nil, []ir.Node{anonfield(typs[28])}, []ir.Node{anonfield(typs[15])})
typs[57] = functype(nil, []ir.Node{anonfield(typs[1]), anonfield(typs[2])}, []ir.Node{anonfield(typs[2])})
typs[58] = functype(nil, []ir.Node{anonfield(typs[2])}, []ir.Node{anonfield(typs[7])})
typs[59] = functype(nil, []ir.Node{anonfield(typs[1]), anonfield(typs[3])}, []ir.Node{anonfield(typs[2])})
typs[60] = functype(nil, []ir.Node{anonfield(typs[1]), anonfield(typs[2])}, []ir.Node{anonfield(typs[2]), anonfield(typs[6])})
typs[61] = functype(nil, []ir.Node{anonfield(typs[1]), anonfield(typs[1]), anonfield(typs[1])}, nil)
typs[62] = functype(nil, []ir.Node{anonfield(typs[1])}, nil)
typs[53] = functype(nil, []*ir.Field{anonfield(typs[52]), anonfield(typs[28])}, []*ir.Field{anonfield(typs[47])})
typs[54] = functype(nil, []*ir.Field{anonfield(typs[3]), anonfield(typs[15]), anonfield(typs[3]), anonfield(typs[15]), anonfield(typs[5])}, []*ir.Field{anonfield(typs[15])})
typs[55] = functype(nil, []*ir.Field{anonfield(typs[28]), anonfield(typs[15])}, []*ir.Field{anonfield(typs[46]), anonfield(typs[15])})
typs[56] = functype(nil, []*ir.Field{anonfield(typs[28])}, []*ir.Field{anonfield(typs[15])})
typs[57] = functype(nil, []*ir.Field{anonfield(typs[1]), anonfield(typs[2])}, []*ir.Field{anonfield(typs[2])})
typs[58] = functype(nil, []*ir.Field{anonfield(typs[2])}, []*ir.Field{anonfield(typs[7])})
typs[59] = functype(nil, []*ir.Field{anonfield(typs[1]), anonfield(typs[3])}, []*ir.Field{anonfield(typs[2])})
typs[60] = functype(nil, []*ir.Field{anonfield(typs[1]), anonfield(typs[2])}, []*ir.Field{anonfield(typs[2]), anonfield(typs[6])})
typs[61] = functype(nil, []*ir.Field{anonfield(typs[1]), anonfield(typs[1]), anonfield(typs[1])}, nil)
typs[62] = functype(nil, []*ir.Field{anonfield(typs[1])}, nil)
typs[63] = types.NewPtr(typs[5])
typs[64] = functype(nil, []ir.Node{anonfield(typs[63]), anonfield(typs[7]), anonfield(typs[7])}, []ir.Node{anonfield(typs[6])})
typs[64] = functype(nil, []*ir.Field{anonfield(typs[63]), anonfield(typs[7]), anonfield(typs[7])}, []*ir.Field{anonfield(typs[6])})
typs[65] = types.Types[types.TUINT32]
typs[66] = functype(nil, nil, []ir.Node{anonfield(typs[65])})
typs[66] = functype(nil, nil, []*ir.Field{anonfield(typs[65])})
typs[67] = types.NewMap(typs[2], typs[2])
typs[68] = functype(nil, []ir.Node{anonfield(typs[1]), anonfield(typs[22]), anonfield(typs[3])}, []ir.Node{anonfield(typs[67])})
typs[69] = functype(nil, []ir.Node{anonfield(typs[1]), anonfield(typs[15]), anonfield(typs[3])}, []ir.Node{anonfield(typs[67])})
typs[70] = functype(nil, nil, []ir.Node{anonfield(typs[67])})
typs[71] = functype(nil, []ir.Node{anonfield(typs[1]), anonfield(typs[67]), anonfield(typs[3])}, []ir.Node{anonfield(typs[3])})
typs[72] = functype(nil, []ir.Node{anonfield(typs[1]), anonfield(typs[67]), anonfield(typs[2])}, []ir.Node{anonfield(typs[3])})
typs[73] = functype(nil, []ir.Node{anonfield(typs[1]), anonfield(typs[67]), anonfield(typs[3]), anonfield(typs[1])}, []ir.Node{anonfield(typs[3])})
typs[74] = functype(nil, []ir.Node{anonfield(typs[1]), anonfield(typs[67]), anonfield(typs[3])}, []ir.Node{anonfield(typs[3]), anonfield(typs[6])})
typs[75] = functype(nil, []ir.Node{anonfield(typs[1]), anonfield(typs[67]), anonfield(typs[2])}, []ir.Node{anonfield(typs[3]), anonfield(typs[6])})
typs[76] = functype(nil, []ir.Node{anonfield(typs[1]), anonfield(typs[67]), anonfield(typs[3]), anonfield(typs[1])}, []ir.Node{anonfield(typs[3]), anonfield(typs[6])})
typs[77] = functype(nil, []ir.Node{anonfield(typs[1]), anonfield(typs[67]), anonfield(typs[3])}, nil)
typs[78] = functype(nil, []ir.Node{anonfield(typs[1]), anonfield(typs[67]), anonfield(typs[2])}, nil)
typs[79] = functype(nil, []ir.Node{anonfield(typs[3])}, nil)
typs[80] = functype(nil, []ir.Node{anonfield(typs[1]), anonfield(typs[67])}, nil)
typs[68] = functype(nil, []*ir.Field{anonfield(typs[1]), anonfield(typs[22]), anonfield(typs[3])}, []*ir.Field{anonfield(typs[67])})
typs[69] = functype(nil, []*ir.Field{anonfield(typs[1]), anonfield(typs[15]), anonfield(typs[3])}, []*ir.Field{anonfield(typs[67])})
typs[70] = functype(nil, nil, []*ir.Field{anonfield(typs[67])})
typs[71] = functype(nil, []*ir.Field{anonfield(typs[1]), anonfield(typs[67]), anonfield(typs[3])}, []*ir.Field{anonfield(typs[3])})
typs[72] = functype(nil, []*ir.Field{anonfield(typs[1]), anonfield(typs[67]), anonfield(typs[2])}, []*ir.Field{anonfield(typs[3])})
typs[73] = functype(nil, []*ir.Field{anonfield(typs[1]), anonfield(typs[67]), anonfield(typs[3]), anonfield(typs[1])}, []*ir.Field{anonfield(typs[3])})
typs[74] = functype(nil, []*ir.Field{anonfield(typs[1]), anonfield(typs[67]), anonfield(typs[3])}, []*ir.Field{anonfield(typs[3]), anonfield(typs[6])})
typs[75] = functype(nil, []*ir.Field{anonfield(typs[1]), anonfield(typs[67]), anonfield(typs[2])}, []*ir.Field{anonfield(typs[3]), anonfield(typs[6])})
typs[76] = functype(nil, []*ir.Field{anonfield(typs[1]), anonfield(typs[67]), anonfield(typs[3]), anonfield(typs[1])}, []*ir.Field{anonfield(typs[3]), anonfield(typs[6])})
typs[77] = functype(nil, []*ir.Field{anonfield(typs[1]), anonfield(typs[67]), anonfield(typs[3])}, nil)
typs[78] = functype(nil, []*ir.Field{anonfield(typs[1]), anonfield(typs[67]), anonfield(typs[2])}, nil)
typs[79] = functype(nil, []*ir.Field{anonfield(typs[3])}, nil)
typs[80] = functype(nil, []*ir.Field{anonfield(typs[1]), anonfield(typs[67])}, nil)
typs[81] = types.NewChan(typs[2], types.Cboth)
typs[82] = functype(nil, []ir.Node{anonfield(typs[1]), anonfield(typs[22])}, []ir.Node{anonfield(typs[81])})
typs[83] = functype(nil, []ir.Node{anonfield(typs[1]), anonfield(typs[15])}, []ir.Node{anonfield(typs[81])})
typs[82] = functype(nil, []*ir.Field{anonfield(typs[1]), anonfield(typs[22])}, []*ir.Field{anonfield(typs[81])})
typs[83] = functype(nil, []*ir.Field{anonfield(typs[1]), anonfield(typs[15])}, []*ir.Field{anonfield(typs[81])})
typs[84] = types.NewChan(typs[2], types.Crecv)
typs[85] = functype(nil, []ir.Node{anonfield(typs[84]), anonfield(typs[3])}, nil)
typs[86] = functype(nil, []ir.Node{anonfield(typs[84]), anonfield(typs[3])}, []ir.Node{anonfield(typs[6])})
typs[85] = functype(nil, []*ir.Field{anonfield(typs[84]), anonfield(typs[3])}, nil)
typs[86] = functype(nil, []*ir.Field{anonfield(typs[84]), anonfield(typs[3])}, []*ir.Field{anonfield(typs[6])})
typs[87] = types.NewChan(typs[2], types.Csend)
typs[88] = functype(nil, []ir.Node{anonfield(typs[87]), anonfield(typs[3])}, nil)
typs[88] = functype(nil, []*ir.Field{anonfield(typs[87]), anonfield(typs[3])}, nil)
typs[89] = types.NewArray(typs[0], 3)
typs[90] = tostruct([]ir.Node{namedfield("enabled", typs[6]), namedfield("pad", typs[89]), namedfield("needed", typs[6]), namedfield("cgo", typs[6]), namedfield("alignme", typs[24])})
typs[91] = functype(nil, []ir.Node{anonfield(typs[1]), anonfield(typs[3]), anonfield(typs[3])}, nil)
typs[92] = functype(nil, []ir.Node{anonfield(typs[1]), anonfield(typs[3])}, nil)
typs[93] = functype(nil, []ir.Node{anonfield(typs[1]), anonfield(typs[3]), anonfield(typs[15]), anonfield(typs[3]), anonfield(typs[15])}, []ir.Node{anonfield(typs[15])})
typs[94] = functype(nil, []ir.Node{anonfield(typs[87]), anonfield(typs[3])}, []ir.Node{anonfield(typs[6])})
typs[95] = functype(nil, []ir.Node{anonfield(typs[3]), anonfield(typs[84])}, []ir.Node{anonfield(typs[6])})
typs[90] = tostruct([]*ir.Field{namedfield("enabled", typs[6]), namedfield("pad", typs[89]), namedfield("needed", typs[6]), namedfield("cgo", typs[6]), namedfield("alignme", typs[24])})
typs[91] = functype(nil, []*ir.Field{anonfield(typs[1]), anonfield(typs[3]), anonfield(typs[3])}, nil)
typs[92] = functype(nil, []*ir.Field{anonfield(typs[1]), anonfield(typs[3])}, nil)
typs[93] = functype(nil, []*ir.Field{anonfield(typs[1]), anonfield(typs[3]), anonfield(typs[15]), anonfield(typs[3]), anonfield(typs[15])}, []*ir.Field{anonfield(typs[15])})
typs[94] = functype(nil, []*ir.Field{anonfield(typs[87]), anonfield(typs[3])}, []*ir.Field{anonfield(typs[6])})
typs[95] = functype(nil, []*ir.Field{anonfield(typs[3]), anonfield(typs[84])}, []*ir.Field{anonfield(typs[6])})
typs[96] = types.NewPtr(typs[6])
typs[97] = functype(nil, []ir.Node{anonfield(typs[3]), anonfield(typs[96]), anonfield(typs[84])}, []ir.Node{anonfield(typs[6])})
typs[98] = functype(nil, []ir.Node{anonfield(typs[63])}, nil)
typs[99] = functype(nil, []ir.Node{anonfield(typs[1]), anonfield(typs[1]), anonfield(typs[63]), anonfield(typs[15]), anonfield(typs[15]), anonfield(typs[6])}, []ir.Node{anonfield(typs[15]), anonfield(typs[6])})
typs[100] = functype(nil, []ir.Node{anonfield(typs[1]), anonfield(typs[15]), anonfield(typs[15])}, []ir.Node{anonfield(typs[7])})
typs[101] = functype(nil, []ir.Node{anonfield(typs[1]), anonfield(typs[22]), anonfield(typs[22])}, []ir.Node{anonfield(typs[7])})
typs[102] = functype(nil, []ir.Node{anonfield(typs[1]), anonfield(typs[15]), anonfield(typs[15]), anonfield(typs[7])}, []ir.Node{anonfield(typs[7])})
typs[97] = functype(nil, []*ir.Field{anonfield(typs[3]), anonfield(typs[96]), anonfield(typs[84])}, []*ir.Field{anonfield(typs[6])})
typs[98] = functype(nil, []*ir.Field{anonfield(typs[63])}, nil)
typs[99] = functype(nil, []*ir.Field{anonfield(typs[1]), anonfield(typs[1]), anonfield(typs[63]), anonfield(typs[15]), anonfield(typs[15]), anonfield(typs[6])}, []*ir.Field{anonfield(typs[15]), anonfield(typs[6])})
typs[100] = functype(nil, []*ir.Field{anonfield(typs[1]), anonfield(typs[15]), anonfield(typs[15])}, []*ir.Field{anonfield(typs[7])})
typs[101] = functype(nil, []*ir.Field{anonfield(typs[1]), anonfield(typs[22]), anonfield(typs[22])}, []*ir.Field{anonfield(typs[7])})
typs[102] = functype(nil, []*ir.Field{anonfield(typs[1]), anonfield(typs[15]), anonfield(typs[15]), anonfield(typs[7])}, []*ir.Field{anonfield(typs[7])})
typs[103] = types.NewSlice(typs[2])
typs[104] = functype(nil, []ir.Node{anonfield(typs[1]), anonfield(typs[103]), anonfield(typs[15])}, []ir.Node{anonfield(typs[103])})
typs[105] = functype(nil, []ir.Node{anonfield(typs[3]), anonfield(typs[3]), anonfield(typs[5])}, nil)
typs[106] = functype(nil, []ir.Node{anonfield(typs[7]), anonfield(typs[5])}, nil)
typs[107] = functype(nil, []ir.Node{anonfield(typs[3]), anonfield(typs[3]), anonfield(typs[5])}, []ir.Node{anonfield(typs[6])})
typs[108] = functype(nil, []ir.Node{anonfield(typs[3]), anonfield(typs[3])}, []ir.Node{anonfield(typs[6])})
typs[109] = functype(nil, []ir.Node{anonfield(typs[7]), anonfield(typs[7])}, []ir.Node{anonfield(typs[6])})
typs[110] = functype(nil, []ir.Node{anonfield(typs[7]), anonfield(typs[5]), anonfield(typs[5])}, []ir.Node{anonfield(typs[5])})
typs[111] = functype(nil, []ir.Node{anonfield(typs[7]), anonfield(typs[5])}, []ir.Node{anonfield(typs[5])})
typs[112] = functype(nil, []ir.Node{anonfield(typs[22]), anonfield(typs[22])}, []ir.Node{anonfield(typs[22])})
typs[113] = functype(nil, []ir.Node{anonfield(typs[24]), anonfield(typs[24])}, []ir.Node{anonfield(typs[24])})
typs[114] = functype(nil, []ir.Node{anonfield(typs[20])}, []ir.Node{anonfield(typs[22])})
typs[115] = functype(nil, []ir.Node{anonfield(typs[20])}, []ir.Node{anonfield(typs[24])})
typs[116] = functype(nil, []ir.Node{anonfield(typs[20])}, []ir.Node{anonfield(typs[65])})
typs[117] = functype(nil, []ir.Node{anonfield(typs[22])}, []ir.Node{anonfield(typs[20])})
typs[118] = functype(nil, []ir.Node{anonfield(typs[24])}, []ir.Node{anonfield(typs[20])})
typs[119] = functype(nil, []ir.Node{anonfield(typs[65])}, []ir.Node{anonfield(typs[20])})
typs[120] = functype(nil, []ir.Node{anonfield(typs[26]), anonfield(typs[26])}, []ir.Node{anonfield(typs[26])})
typs[121] = functype(nil, []ir.Node{anonfield(typs[5]), anonfield(typs[5])}, nil)
typs[122] = functype(nil, []ir.Node{anonfield(typs[7]), anonfield(typs[1]), anonfield(typs[5])}, nil)
typs[123] = types.NewSlice(typs[7])
typs[124] = functype(nil, []ir.Node{anonfield(typs[7]), anonfield(typs[123])}, nil)
typs[125] = types.Types[types.TUINT8]
typs[126] = functype(nil, []ir.Node{anonfield(typs[125]), anonfield(typs[125])}, nil)
typs[127] = types.Types[types.TUINT16]
typs[128] = functype(nil, []ir.Node{anonfield(typs[127]), anonfield(typs[127])}, nil)
typs[129] = functype(nil, []ir.Node{anonfield(typs[65]), anonfield(typs[65])}, nil)
typs[130] = functype(nil, []ir.Node{anonfield(typs[24]), anonfield(typs[24])}, nil)
typs[104] = functype(nil, []*ir.Field{anonfield(typs[1]), anonfield(typs[103]), anonfield(typs[15])}, []*ir.Field{anonfield(typs[103])})
typs[105] = functype(nil, []*ir.Field{anonfield(typs[3]), anonfield(typs[3]), anonfield(typs[5])}, nil)
typs[106] = functype(nil, []*ir.Field{anonfield(typs[7]), anonfield(typs[5])}, nil)
typs[107] = functype(nil, []*ir.Field{anonfield(typs[3]), anonfield(typs[3]), anonfield(typs[5])}, []*ir.Field{anonfield(typs[6])})
typs[108] = functype(nil, []*ir.Field{anonfield(typs[3]), anonfield(typs[3])}, []*ir.Field{anonfield(typs[6])})
typs[109] = functype(nil, []*ir.Field{anonfield(typs[7]), anonfield(typs[7])}, []*ir.Field{anonfield(typs[6])})
typs[110] = functype(nil, []*ir.Field{anonfield(typs[7]), anonfield(typs[5]), anonfield(typs[5])}, []*ir.Field{anonfield(typs[5])})
typs[111] = functype(nil, []*ir.Field{anonfield(typs[7]), anonfield(typs[5])}, []*ir.Field{anonfield(typs[5])})
typs[112] = functype(nil, []*ir.Field{anonfield(typs[22]), anonfield(typs[22])}, []*ir.Field{anonfield(typs[22])})
typs[113] = functype(nil, []*ir.Field{anonfield(typs[24]), anonfield(typs[24])}, []*ir.Field{anonfield(typs[24])})
typs[114] = functype(nil, []*ir.Field{anonfield(typs[20])}, []*ir.Field{anonfield(typs[22])})
typs[115] = functype(nil, []*ir.Field{anonfield(typs[20])}, []*ir.Field{anonfield(typs[24])})
typs[116] = functype(nil, []*ir.Field{anonfield(typs[20])}, []*ir.Field{anonfield(typs[65])})
typs[117] = functype(nil, []*ir.Field{anonfield(typs[22])}, []*ir.Field{anonfield(typs[20])})
typs[118] = functype(nil, []*ir.Field{anonfield(typs[24])}, []*ir.Field{anonfield(typs[20])})
typs[119] = functype(nil, []*ir.Field{anonfield(typs[65])}, []*ir.Field{anonfield(typs[20])})
typs[120] = functype(nil, []*ir.Field{anonfield(typs[26]), anonfield(typs[26])}, []*ir.Field{anonfield(typs[26])})
typs[121] = functype(nil, []*ir.Field{anonfield(typs[5]), anonfield(typs[5])}, nil)
typs[122] = functype(nil, []*ir.Field{anonfield(typs[5]), anonfield(typs[5]), anonfield(typs[5])}, nil)
typs[123] = functype(nil, []*ir.Field{anonfield(typs[7]), anonfield(typs[1]), anonfield(typs[5])}, nil)
typs[124] = types.NewSlice(typs[7])
typs[125] = functype(nil, []*ir.Field{anonfield(typs[7]), anonfield(typs[124])}, nil)
typs[126] = types.Types[types.TUINT8]
typs[127] = functype(nil, []*ir.Field{anonfield(typs[126]), anonfield(typs[126])}, nil)
typs[128] = types.Types[types.TUINT16]
typs[129] = functype(nil, []*ir.Field{anonfield(typs[128]), anonfield(typs[128])}, nil)
typs[130] = functype(nil, []*ir.Field{anonfield(typs[65]), anonfield(typs[65])}, nil)
typs[131] = functype(nil, []*ir.Field{anonfield(typs[24]), anonfield(typs[24])}, nil)
return typs[:]
}

View File

@ -237,6 +237,7 @@ func racewriterange(addr, size uintptr)
// memory sanitizer
func msanread(addr, size uintptr)
func msanwrite(addr, size uintptr)
func msanmove(dst, src, size uintptr)
func checkptrAlignment(unsafe.Pointer, *byte, uintptr)
func checkptrArithmetic(unsafe.Pointer, []unsafe.Pointer)

View File

@ -17,28 +17,26 @@ func (p *noder) funcLit(expr *syntax.FuncLit) ir.Node {
xtype := p.typeExpr(expr.Type)
ntype := p.typeExpr(expr.Type)
dcl := p.nod(expr, ir.ODCLFUNC, nil, nil)
fn := dcl.Func()
fn := ir.NewFunc(p.pos(expr))
fn.SetIsHiddenClosure(Curfn != nil)
fn.Nname = newfuncnamel(p.pos(expr), ir.BlankNode.Sym(), fn) // filled in by typecheckclosure
fn.Nname.Name().Param.Ntype = xtype
fn.Nname.Name().Defn = dcl
fn.Nname = newFuncNameAt(p.pos(expr), ir.BlankNode.Sym(), fn) // filled in by typecheckclosure
fn.Nname.Ntype = xtype
fn.Nname.Defn = fn
clo := p.nod(expr, ir.OCLOSURE, nil, nil)
clo.SetFunc(fn)
clo := ir.NewClosureExpr(p.pos(expr), fn)
fn.ClosureType = ntype
fn.OClosure = clo
p.funcBody(dcl, expr.Body)
p.funcBody(fn, expr.Body)
// closure-specific variables are hanging off the
// ordinary ones in the symbol table; see oldname.
// unhook them.
// make the list of pointers for the closure call.
for _, v := range fn.ClosureVars.Slice() {
for _, v := range fn.ClosureVars {
// Unlink from v1; see comment in syntax.go type Param for these fields.
v1 := v.Name().Defn
v1.Name().Param.Innermost = v.Name().Param.Outer
v1 := v.Defn
v1.Name().Innermost = v.Outer
// If the closure usage of v is not dense,
// we need to make it dense; now that we're out
@ -68,7 +66,7 @@ func (p *noder) funcLit(expr *syntax.FuncLit) ir.Node {
// obtains f3's v, creating it if necessary (as it is in the example).
//
// capturevars will decide whether to use v directly or &v.
v.Name().Param.Outer = oldname(v.Sym())
v.Outer = oldname(v.Sym()).(*ir.Name)
}
return clo
@ -80,30 +78,29 @@ func (p *noder) funcLit(expr *syntax.FuncLit) ir.Node {
// separate pass from type-checking.
func typecheckclosure(clo ir.Node, top int) {
fn := clo.Func()
dcl := fn.Decl
// Set current associated iota value, so iota can be used inside
// function in ConstSpec, see issue #22344
if x := getIotaValue(); x >= 0 {
dcl.SetIota(x)
fn.SetIota(x)
}
fn.ClosureType = typecheck(fn.ClosureType, ctxType)
clo.SetType(fn.ClosureType.Type())
fn.ClosureCalled = top&ctxCallee != 0
fn.SetClosureCalled(top&ctxCallee != 0)
// Do not typecheck dcl twice, otherwise, we will end up pushing
// dcl to xtop multiple times, causing initLSym to be called twice.
// Do not typecheck fn twice, otherwise, we will end up pushing
// fn to xtop multiple times, causing initLSym to be called twice.
// See #30709
if dcl.Typecheck() == 1 {
if fn.Typecheck() == 1 {
return
}
for _, ln := range fn.ClosureVars.Slice() {
n := ln.Name().Defn
for _, ln := range fn.ClosureVars {
n := ln.Defn
if !n.Name().Captured() {
n.Name().SetCaptured(true)
if n.Name().Decldepth == 0 {
base.Fatalf("typecheckclosure: var %S does not have decldepth assigned", n)
base.Fatalf("typecheckclosure: var %v does not have decldepth assigned", n)
}
// Ignore assignments to the variable in straightline code
@ -116,7 +113,7 @@ func typecheckclosure(clo ir.Node, top int) {
fn.Nname.SetSym(closurename(Curfn))
setNodeNameFunc(fn.Nname)
dcl = typecheck(dcl, ctxStmt)
typecheckFunc(fn)
// Type check the body now, but only if we're inside a function.
// At top level (in a variable initialization: curfn==nil) we're not
@ -124,29 +121,29 @@ func typecheckclosure(clo ir.Node, top int) {
// underlying closure function we create is added to xtop.
if Curfn != nil && clo.Type() != nil {
oldfn := Curfn
Curfn = dcl
Curfn = fn
olddd := decldepth
decldepth = 1
typecheckslice(dcl.Body().Slice(), ctxStmt)
typecheckslice(fn.Body().Slice(), ctxStmt)
decldepth = olddd
Curfn = oldfn
}
xtop = append(xtop, dcl)
xtop = append(xtop, fn)
}
// globClosgen is like Func.Closgen, but for the global scope.
var globClosgen int
var globClosgen int32
// closurename generates a new unique name for a closure within
// outerfunc.
func closurename(outerfunc ir.Node) *types.Sym {
func closurename(outerfunc *ir.Func) *types.Sym {
outer := "glob."
prefix := "func"
gen := &globClosgen
if outerfunc != nil {
if outerfunc.Func().OClosure != nil {
if outerfunc.OClosure != nil {
prefix = ""
}
@ -155,8 +152,8 @@ func closurename(outerfunc ir.Node) *types.Sym {
// There may be multiple functions named "_". In those
// cases, we can't use their individual Closgens as it
// would lead to name clashes.
if !ir.IsBlank(outerfunc.Func().Nname) {
gen = &outerfunc.Func().Closgen
if !ir.IsBlank(outerfunc.Nname) {
gen = &outerfunc.Closgen
}
}
@ -172,11 +169,10 @@ var capturevarscomplete bool
// by value or by reference.
// We use value capturing for values <= 128 bytes that are never reassigned
// after capturing (effectively constant).
func capturevars(dcl ir.Node) {
func capturevars(fn *ir.Func) {
lno := base.Pos
base.Pos = dcl.Pos()
fn := dcl.Func()
cvars := fn.ClosureVars.Slice()
base.Pos = fn.Pos()
cvars := fn.ClosureVars
out := cvars[:0]
for _, v := range cvars {
if v.Type() == nil {
@ -194,12 +190,13 @@ func capturevars(dcl ir.Node) {
// so that the outer frame also grabs them and knows they escape.
dowidth(v.Type())
outer := v.Name().Param.Outer
outermost := v.Name().Defn
var outer ir.Node
outer = v.Outer
outermost := v.Defn
// out parameters will be assigned to implicitly upon return.
if outermost.Class() != ir.PPARAMOUT && !outermost.Name().Addrtaken() && !outermost.Name().Assigned() && v.Type().Width <= 128 {
v.Name().SetByval(true)
v.SetByval(true)
} else {
outermost.Name().SetAddrtaken(true)
outer = ir.Nod(ir.OADDR, outer, nil)
@ -207,11 +204,11 @@ func capturevars(dcl ir.Node) {
if base.Flag.LowerM > 1 {
var name *types.Sym
if v.Name().Curfn != nil && v.Name().Curfn.Func().Nname != nil {
name = v.Name().Curfn.Func().Nname.Sym()
if v.Curfn != nil && v.Curfn.Nname != nil {
name = v.Curfn.Sym()
}
how := "ref"
if v.Name().Byval() {
if v.Byval() {
how = "value"
}
base.WarnfAt(v.Pos(), "%v capturing by %s: %v (addr=%v assign=%v width=%d)", name, how, v.Sym(), outermost.Name().Addrtaken(), outermost.Name().Assigned(), int32(v.Type().Width))
@ -221,18 +218,17 @@ func capturevars(dcl ir.Node) {
fn.ClosureEnter.Append(outer)
}
fn.ClosureVars.Set(out)
fn.ClosureVars = out
base.Pos = lno
}
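The by-value/by-reference decision in `capturevars` above is purely an optimization: a variable that is small (≤128 bytes) and never reassigned after capture may be copied into the closure, because the program cannot observe the difference. A minimal sketch of the language-level semantics the compiler must preserve (plain Go, not compiler-internal code):

```go
package main

import "fmt"

func main() {
	x := 1
	f := func() int { return x } // x is never reassigned after capture:
	// eligible for by-value capture (small, not address-taken, not assigned)

	y := 1
	g := func() int { return y }
	y = 2 // y is reassigned after capture: must behave as captured by reference

	fmt.Println(f(), g()) // prints: 1 2
}
```

Either capture strategy yields the same output here; the reassigned `y` is the case that forces reference capture.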
// transformclosure is called in a separate phase after escape analysis.
// It transforms closure bodies to properly reference captured variables.
func transformclosure(dcl ir.Node) {
func transformclosure(fn *ir.Func) {
lno := base.Pos
base.Pos = dcl.Pos()
fn := dcl.Func()
base.Pos = fn.Pos()
if fn.ClosureCalled {
if fn.ClosureCalled() {
// If the closure is directly called, we transform it to a plain function call
// with variables passed as args. This avoids allocation of a closure object.
// Here we do only a part of the transformation. Walk of OCALLFUNC(OCLOSURE)
@ -253,16 +249,16 @@ func transformclosure(dcl ir.Node) {
// We are going to insert captured variables before input args.
var params []*types.Field
var decls []ir.Node
for _, v := range fn.ClosureVars.Slice() {
if !v.Name().Byval() {
var decls []*ir.Name
for _, v := range fn.ClosureVars {
if !v.Byval() {
// If v of type T is captured by reference,
// we introduce function param &v *T
// and v remains PAUTOHEAP with &v heapaddr
// (accesses will implicitly deref &v).
addr := NewName(lookup("&" + v.Sym().Name))
addr.SetType(types.NewPtr(v.Type()))
v.Name().Param.Heapaddr = addr
v.Heapaddr = addr
v = addr
}
@ -281,42 +277,41 @@ func transformclosure(dcl ir.Node) {
}
dowidth(f.Type())
dcl.SetType(f.Type()) // update type of ODCLFUNC
fn.SetType(f.Type()) // update type of ODCLFUNC
} else {
// The closure is not called, so it is going to stay as closure.
var body []ir.Node
offset := int64(Widthptr)
for _, v := range fn.ClosureVars.Slice() {
for _, v := range fn.ClosureVars {
// cv refers to the field inside of closure OSTRUCTLIT.
cv := ir.Nod(ir.OCLOSUREVAR, nil, nil)
cv.SetType(v.Type())
if !v.Name().Byval() {
cv.SetType(types.NewPtr(v.Type()))
typ := v.Type()
if !v.Byval() {
typ = types.NewPtr(typ)
}
offset = Rnd(offset, int64(cv.Type().Align))
cv.SetOffset(offset)
offset += cv.Type().Width
offset = Rnd(offset, int64(typ.Align))
cr := ir.NewClosureRead(typ, offset)
offset += typ.Width
if v.Name().Byval() && v.Type().Width <= int64(2*Widthptr) {
if v.Byval() && v.Type().Width <= int64(2*Widthptr) {
// If it is a small variable captured by value, downgrade it to PAUTO.
v.SetClass(ir.PAUTO)
fn.Dcl = append(fn.Dcl, v)
body = append(body, ir.Nod(ir.OAS, v, cv))
body = append(body, ir.Nod(ir.OAS, v, cr))
} else {
// Declare variable holding addresses taken from closure
// and initialize in entry prologue.
addr := NewName(lookup("&" + v.Sym().Name))
addr.SetType(types.NewPtr(v.Type()))
addr.SetClass(ir.PAUTO)
addr.Name().SetUsed(true)
addr.Name().Curfn = dcl
addr.SetUsed(true)
addr.Curfn = fn
fn.Dcl = append(fn.Dcl, addr)
v.Name().Param.Heapaddr = addr
if v.Name().Byval() {
cv = ir.Nod(ir.OADDR, cv, nil)
v.Heapaddr = addr
var src ir.Node = cr
if v.Byval() {
src = ir.Nod(ir.OADDR, cr, nil)
}
body = append(body, ir.Nod(ir.OAS, addr, cv))
body = append(body, ir.Nod(ir.OAS, addr, src))
}
}
@ -333,7 +328,7 @@ func transformclosure(dcl ir.Node) {
// hasemptycvars reports whether closure clo has an
// empty list of captured vars.
func hasemptycvars(clo ir.Node) bool {
return clo.Func().ClosureVars.Len() == 0
return len(clo.Func().ClosureVars) == 0
}
// closuredebugruntimecheck applies boilerplate checks for debug flags
@ -368,12 +363,12 @@ func closureType(clo ir.Node) *types.Type {
// The information appears in the binary in the form of type descriptors;
// the struct is unnamed so that closures in multiple packages with the
// same struct type can share the descriptor.
fields := []ir.Node{
fields := []*ir.Field{
namedfield(".F", types.Types[types.TUINTPTR]),
}
for _, v := range clo.Func().ClosureVars.Slice() {
for _, v := range clo.Func().ClosureVars {
typ := v.Type()
if !v.Name().Byval() {
if !v.Byval() {
typ = types.NewPtr(typ)
}
fields = append(fields, symfield(v.Sym(), typ))
@ -397,7 +392,7 @@ func walkclosure(clo ir.Node, init *ir.Nodes) ir.Node {
typ := closureType(clo)
clos := ir.Nod(ir.OCOMPLIT, nil, typenod(typ))
clos := ir.Nod(ir.OCOMPLIT, nil, ir.TypeNode(typ))
clos.SetEsc(clo.Esc())
clos.PtrList().Set(append([]ir.Node{ir.Nod(ir.OCFUNC, fn.Nname, nil)}, fn.ClosureEnter.Slice()...))
@ -419,7 +414,7 @@ func walkclosure(clo ir.Node, init *ir.Nodes) ir.Node {
return walkexpr(clos, init)
}
func typecheckpartialcall(dot ir.Node, sym *types.Sym) {
func typecheckpartialcall(dot ir.Node, sym *types.Sym) *ir.CallPartExpr {
switch dot.Op() {
case ir.ODOTINTER, ir.ODOTMETH:
break
@ -429,23 +424,20 @@ func typecheckpartialcall(dot ir.Node, sym *types.Sym) {
}
// Create top-level function.
dcl := makepartialcall(dot, dot.Type(), sym)
dcl.Func().SetWrapper(true)
dot.SetOp(ir.OCALLPART)
dot.SetRight(NewName(sym))
dot.SetType(dcl.Type())
dot.SetFunc(dcl.Func())
dot.SetOpt(nil) // clear types.Field from ODOTMETH
fn := makepartialcall(dot, dot.Type(), sym)
fn.SetWrapper(true)
return ir.NewCallPartExpr(dot.Pos(), dot.Left(), dot.(*ir.SelectorExpr).Selection, fn)
}
// makepartialcall returns a DCLFUNC node representing the wrapper function (*-fm) needed
// for partial calls.
func makepartialcall(dot ir.Node, t0 *types.Type, meth *types.Sym) ir.Node {
func makepartialcall(dot ir.Node, t0 *types.Type, meth *types.Sym) *ir.Func {
rcvrtype := dot.Left().Type()
sym := methodSymSuffix(rcvrtype, meth, "-fm")
if sym.Uniq() {
return ir.AsNode(sym.Def)
return sym.Def.(*ir.Func)
}
sym.SetUniq(true)
@ -464,33 +456,26 @@ func makepartialcall(dot ir.Node, t0 *types.Type, meth *types.Sym) ir.Node {
// number at the use of the method expression in this
// case. See issue 29389.
tfn := ir.Nod(ir.OTFUNC, nil, nil)
tfn.PtrList().Set(structargs(t0.Params(), true))
tfn.PtrRlist().Set(structargs(t0.Results(), false))
tfn := ir.NewFuncType(base.Pos, nil,
structargs(t0.Params(), true),
structargs(t0.Results(), false))
dcl := dclfunc(sym, tfn)
fn := dcl.Func()
fn := dclfunc(sym, tfn)
fn.SetDupok(true)
fn.SetNeedctxt(true)
tfn.Type().SetPkg(t0.Pkg())
// Declare and initialize variable holding receiver.
cv := ir.Nod(ir.OCLOSUREVAR, nil, nil)
cv.SetType(rcvrtype)
cv.SetOffset(Rnd(int64(Widthptr), int64(cv.Type().Align)))
cr := ir.NewClosureRead(rcvrtype, Rnd(int64(Widthptr), int64(rcvrtype.Align)))
ptr := NewName(lookup(".this"))
declare(ptr, ir.PAUTO)
ptr.Name().SetUsed(true)
ptr.SetUsed(true)
var body []ir.Node
if rcvrtype.IsPtr() || rcvrtype.IsInterface() {
ptr.SetType(rcvrtype)
body = append(body, ir.Nod(ir.OAS, ptr, cv))
body = append(body, ir.Nod(ir.OAS, ptr, cr))
} else {
ptr.SetType(types.NewPtr(rcvrtype))
body = append(body, ir.Nod(ir.OAS, ptr, ir.Nod(ir.OADDR, cv, nil)))
body = append(body, ir.Nod(ir.OAS, ptr, ir.Nod(ir.OADDR, cr, nil)))
}
call := ir.Nod(ir.OCALL, nodSym(ir.OXDOT, ptr, meth), nil)
@ -503,27 +488,27 @@ func makepartialcall(dot ir.Node, t0 *types.Type, meth *types.Sym) ir.Node {
}
body = append(body, call)
dcl.PtrBody().Set(body)
fn.PtrBody().Set(body)
funcbody()
dcl = typecheck(dcl, ctxStmt)
typecheckFunc(fn)
// Need to typecheck the body of the just-generated wrapper.
// typecheckslice() requires that Curfn is set when processing an ORETURN.
Curfn = dcl
typecheckslice(dcl.Body().Slice(), ctxStmt)
sym.Def = dcl
xtop = append(xtop, dcl)
Curfn = fn
typecheckslice(fn.Body().Slice(), ctxStmt)
sym.Def = fn
xtop = append(xtop, fn)
Curfn = savecurfn
base.Pos = saveLineNo
return dcl
return fn
}
// partialCallType returns the struct type used to hold all the information
// needed in the closure for n (n must be a OCALLPART node).
// The address of a variable of the returned type can be cast to a func.
func partialCallType(n ir.Node) *types.Type {
t := tostruct([]ir.Node{
t := tostruct([]*ir.Field{
namedfield("F", types.Types[types.TUINTPTR]),
namedfield("R", n.Left().Type()),
})
@ -531,7 +516,7 @@ func partialCallType(n ir.Node) *types.Type {
return t
}
func walkpartialcall(n ir.Node, init *ir.Nodes) ir.Node {
func walkpartialcall(n *ir.CallPartExpr, init *ir.Nodes) ir.Node {
// Create closure in the form of a composite literal.
// For x.M with receiver (x) type T, the generated code looks like:
//
@ -555,7 +540,7 @@ func walkpartialcall(n ir.Node, init *ir.Nodes) ir.Node {
typ := partialCallType(n)
clos := ir.Nod(ir.OCOMPLIT, nil, typenod(typ))
clos := ir.Nod(ir.OCOMPLIT, nil, ir.TypeNode(typ))
clos.SetEsc(n.Esc())
clos.PtrList().Set2(ir.Nod(ir.OCFUNC, n.Func().Nname, nil), n.Left())
@ -580,16 +565,5 @@ func walkpartialcall(n ir.Node, init *ir.Nodes) ir.Node {
// callpartMethod returns the *types.Field representing the method
// referenced by method value n.
func callpartMethod(n ir.Node) *types.Field {
if n.Op() != ir.OCALLPART {
base.Fatalf("expected OCALLPART, got %v", n)
}
// TODO(mdempsky): Optimize this. If necessary,
// makepartialcall could save m for us somewhere.
var m *types.Field
if lookdot0(n.Right().Sym(), n.Left().Type(), &m, false) != 1 {
base.Fatalf("failed to find field for OCALLPART")
}
return m
return n.(*ir.CallPartExpr).Method
}
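The `makepartialcall`/`walkpartialcall` code above compiles Go method values (`x.M` used as a function value) into a generated `*-fm` wrapper that closes over the receiver. For orientation, the language-level behavior being implemented, in plain Go:

```go
package main

import "fmt"

type Counter struct{ n int }

func (c *Counter) Inc() { c.n++ }

func main() {
	c := &Counter{}
	inc := c.Inc // method value: captures the receiver c, like a closure
	inc()
	inc()
	fmt.Println(c.n) // prints: 2
}
```

The composite literal built in `walkpartialcall` holds exactly this pair: the wrapper's code pointer (`F`) and the saved receiver (`R`).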

View File

@ -115,17 +115,12 @@ func convlit1(n ir.Node, t *types.Type, explicit bool, context func() string) ir
return n
}
if n.Op() == ir.OLITERAL || n.Op() == ir.ONIL {
// Can't always set n.Type directly on OLITERAL nodes.
// See discussion on CL 20813.
n = n.RawCopy()
}
// Nil is technically not a constant, so handle it specially.
if n.Type().Etype == types.TNIL {
if n.Type().Kind() == types.TNIL {
if n.Op() != ir.ONIL {
base.Fatalf("unexpected op: %v (%v)", n, n.Op())
}
n = ir.Copy(n)
if t == nil {
base.Errorf("use of untyped nil")
n.SetDiag(true)
@ -142,7 +137,7 @@ func convlit1(n ir.Node, t *types.Type, explicit bool, context func() string) ir
return n
}
if t == nil || !ir.OKForConst[t.Etype] {
if t == nil || !ir.OKForConst[t.Kind()] {
t = defaultType(n.Type())
}
@ -153,10 +148,11 @@ func convlit1(n ir.Node, t *types.Type, explicit bool, context func() string) ir
case ir.OLITERAL:
v := convertVal(n.Val(), t, explicit)
if v.Kind() == constant.Unknown {
n = ir.NewConstExpr(n.Val(), n)
break
}
n = ir.NewConstExpr(v, n)
n.SetType(t)
n.SetVal(v)
return n
case ir.OPLUS, ir.ONEG, ir.OBITNOT, ir.ONOT, ir.OREAL, ir.OIMAG:
@ -240,7 +236,7 @@ func operandType(op ir.Op, t *types.Type) *types.Type {
return complexForFloat(t)
}
default:
if okfor[op][t.Etype] {
if okfor[op][t.Kind()] {
return t
}
}
@ -388,7 +384,7 @@ func overflow(v constant.Value, t *types.Type) bool {
return true
}
if doesoverflow(v, t) {
base.Errorf("constant %v overflows %v", ir.FmtConst(v, 0), t)
base.Errorf("constant %v overflows %v", types.FmtConst(v, false), t)
return true
}
return false
@ -494,12 +490,12 @@ func evalConst(n ir.Node) ir.Node {
}
case ir.OCONV, ir.ORUNESTR:
if ir.OKForConst[n.Type().Etype] && nl.Op() == ir.OLITERAL {
if ir.OKForConst[n.Type().Kind()] && nl.Op() == ir.OLITERAL {
return origConst(n, convertVal(nl.Val(), n.Type(), true))
}
case ir.OCONVNOP:
if ir.OKForConst[n.Type().Etype] && nl.Op() == ir.OLITERAL {
if ir.OKForConst[n.Type().Kind()] && nl.Op() == ir.OLITERAL {
// set so n.Orig gets OCONV instead of OCONVNOP
n.SetOp(ir.OCONV)
return origConst(n, nl.Val())
@ -521,7 +517,7 @@ func evalConst(n ir.Node) ir.Node {
if need == 1 {
var strs []string
for _, c := range s {
strs = append(strs, c.StringVal())
strs = append(strs, ir.StringVal(c))
}
return origConst(n, constant.MakeString(strings.Join(strs, "")))
}
@ -532,12 +528,13 @@ func evalConst(n ir.Node) ir.Node {
var strs []string
i2 := i
for i2 < len(s) && ir.IsConst(s[i2], constant.String) {
strs = append(strs, s[i2].StringVal())
strs = append(strs, ir.StringVal(s[i2]))
i2++
}
nl := origConst(s[i], constant.MakeString(strings.Join(strs, "")))
nl.SetOrig(nl) // it's bigger than just s[i]
nl := ir.Copy(n)
nl.PtrList().Set(s[i:i2])
nl = origConst(nl, constant.MakeString(strings.Join(strs, "")))
newList = append(newList, nl)
i = i2 - 1
} else {
@ -550,13 +547,13 @@ func evalConst(n ir.Node) ir.Node {
return n
case ir.OCAP, ir.OLEN:
switch nl.Type().Etype {
switch nl.Type().Kind() {
case types.TSTRING:
if ir.IsConst(nl, constant.String) {
return origIntConst(n, int64(len(nl.StringVal())))
return origIntConst(n, int64(len(ir.StringVal(nl))))
}
case types.TARRAY:
if !hascallchan(nl) {
if !hasCallOrChan(nl) {
return origIntConst(n, nl.Type().NumElem())
}
}
@ -640,12 +637,7 @@ func origConst(n ir.Node, v constant.Value) ir.Node {
return n
}
orig := n
n = ir.NodAt(orig.Pos(), ir.OLITERAL, nil, nil)
n.SetOrig(orig)
n.SetType(orig.Type())
n.SetVal(v)
return n
return ir.NewConstExpr(v, n)
}
func origBoolConst(n ir.Node, v bool) ir.Node {
@@ -724,7 +716,7 @@ func mixUntyped(t1, t2 *types.Type) *types.Type {
}
func defaultType(t *types.Type) *types.Type {
if !t.IsUntyped() || t.Etype == types.TNIL {
if !t.IsUntyped() || t.Kind() == types.TNIL {
return t
}
@@ -736,7 +728,7 @@ func defaultType(t *types.Type) *types.Type {
case types.UntypedInt:
return types.Types[types.TINT]
case types.UntypedRune:
return types.Runetype
return types.RuneType
case types.UntypedFloat:
return types.Types[types.TFLOAT64]
case types.UntypedComplex:
@@ -764,7 +756,7 @@ func indexconst(n ir.Node) int64 {
if n.Op() != ir.OLITERAL {
return -1
}
if !n.Type().IsInteger() && n.Type().Etype != types.TIDEAL {
if !n.Type().IsInteger() && n.Type().Kind() != types.TIDEAL {
return -1
}
@@ -775,7 +767,7 @@ func indexconst(n ir.Node) int64 {
if doesoverflow(v, types.Types[types.TINT]) {
return -2
}
return ir.Int64Val(types.Types[types.TINT], v)
return ir.IntVal(types.Types[types.TINT], v)
}
// isGoConst reports whether n is a Go language constant (as opposed to a
@@ -787,49 +779,35 @@ func isGoConst(n ir.Node) bool {
return n.Op() == ir.OLITERAL
}
func hascallchan(n ir.Node) bool {
if n == nil {
return false
}
switch n.Op() {
case ir.OAPPEND,
ir.OCALL,
ir.OCALLFUNC,
ir.OCALLINTER,
ir.OCALLMETH,
ir.OCAP,
ir.OCLOSE,
ir.OCOMPLEX,
ir.OCOPY,
ir.ODELETE,
ir.OIMAG,
ir.OLEN,
ir.OMAKE,
ir.ONEW,
ir.OPANIC,
ir.OPRINT,
ir.OPRINTN,
ir.OREAL,
ir.ORECOVER,
ir.ORECV:
return true
}
if hascallchan(n.Left()) || hascallchan(n.Right()) {
return true
}
for _, n1 := range n.List().Slice() {
if hascallchan(n1) {
return true
// hasCallOrChan reports whether n contains any calls or channel operations.
func hasCallOrChan(n ir.Node) bool {
found := ir.Find(n, func(n ir.Node) interface{} {
switch n.Op() {
case ir.OAPPEND,
ir.OCALL,
ir.OCALLFUNC,
ir.OCALLINTER,
ir.OCALLMETH,
ir.OCAP,
ir.OCLOSE,
ir.OCOMPLEX,
ir.OCOPY,
ir.ODELETE,
ir.OIMAG,
ir.OLEN,
ir.OMAKE,
ir.ONEW,
ir.OPANIC,
ir.OPRINT,
ir.OPRINTN,
ir.OREAL,
ir.ORECOVER,
ir.ORECV:
return n
}
}
for _, n2 := range n.Rlist().Slice() {
if hascallchan(n2) {
return true
}
}
return false
return nil
})
return found != nil
}
// A constSet represents a set of Go constant expressions.
@@ -880,9 +858,9 @@ func (s *constSet) add(pos src.XPos, n ir.Node, what, where string) {
typ := n.Type()
switch typ {
case types.Bytetype:
case types.ByteType:
typ = types.Types[types.TUINT8]
case types.Runetype:
case types.RuneType:
typ = types.Types[types.TINT32]
}
k := constSetKey{typ, ir.ConstValue(n)}
@@ -909,7 +887,7 @@ func (s *constSet) add(pos src.XPos, n ir.Node, what, where string) {
//
// TODO(mdempsky): This could probably be a fmt.go flag.
func nodeAndVal(n ir.Node) string {
show := n.String()
show := fmt.Sprint(n)
val := ir.ConstValue(n)
if s := fmt.Sprintf("%#v", val); show != s {
show += " (value " + s + ")"
@@ -12,7 +12,6 @@ import (
"cmd/internal/obj"
"cmd/internal/src"
"fmt"
"go/constant"
"strings"
)
@@ -59,20 +58,15 @@ var declare_typegen int
// declare records that Node n declares symbol n.Sym in the specified
// declaration context.
func declare(n ir.Node, ctxt ir.Class) {
func declare(n *ir.Name, ctxt ir.Class) {
if ir.IsBlank(n) {
return
}
if n.Name() == nil {
// named OLITERAL needs Name; most OLITERALs don't.
n.SetName(new(ir.Name))
}
s := n.Sym()
// kludgy: typecheckok means we're past parsing. Eg genwrapper may declare out of package names later.
if !inimport && !typecheckok && s.Pkg != ir.LocalPkg {
if !inimport && !typecheckok && s.Pkg != types.LocalPkg {
base.ErrorfAt(n.Pos(), "cannot declare name %v", s)
}
@@ -90,8 +84,8 @@ func declare(n ir.Node, ctxt ir.Class) {
base.Pos = n.Pos()
base.Fatalf("automatic outside function")
}
if Curfn != nil && ctxt != ir.PFUNC {
Curfn.Func().Dcl = append(Curfn.Func().Dcl, n)
if Curfn != nil && ctxt != ir.PFUNC && n.Op() == ir.ONAME {
Curfn.Dcl = append(Curfn.Dcl, n)
}
if n.Op() == ir.OTYPE {
declare_typegen++
@@ -101,7 +95,7 @@ func declare(n ir.Node, ctxt ir.Class) {
gen = vargen
}
types.Pushdcl(s)
n.Name().Curfn = Curfn
n.Curfn = Curfn
}
if ctxt == ir.PAUTO {
@@ -119,7 +113,7 @@ func declare(n ir.Node, ctxt ir.Class) {
s.Block = types.Block
s.Lastlineno = base.Pos
s.Def = n
n.Name().Vargen = int32(gen)
n.Vargen = int32(gen)
n.SetClass(ctxt)
if ctxt == ir.PFUNC {
n.Sym().SetFunc(true)
@@ -128,19 +122,9 @@ func declare(n ir.Node, ctxt ir.Class) {
autoexport(n, ctxt)
}
func addvar(n ir.Node, t *types.Type, ctxt ir.Class) {
if n == nil || n.Sym() == nil || (n.Op() != ir.ONAME && n.Op() != ir.ONONAME) || t == nil {
base.Fatalf("addvar: n=%v t=%v nil", n, t)
}
n.SetOp(ir.ONAME)
declare(n, ctxt)
n.SetType(t)
}
// declare variables from grammar
// new_name_list (type | [type] = expr_list)
func variter(vl []ir.Node, t ir.Node, el []ir.Node) []ir.Node {
func variter(vl []ir.Node, t ir.Ntype, el []ir.Node) []ir.Node {
var init []ir.Node
doexpr := len(el) > 0
@@ -150,10 +134,11 @@ func variter(vl []ir.Node, t ir.Node, el []ir.Node) []ir.Node {
as2.PtrList().Set(vl)
as2.PtrRlist().Set1(e)
for _, v := range vl {
v := v.(*ir.Name)
v.SetOp(ir.ONAME)
declare(v, dclcontext)
v.Name().Param.Ntype = t
v.Name().Defn = as2
v.Ntype = t
v.Defn = as2
if Curfn != nil {
init = append(init, ir.Nod(ir.ODCL, v, nil))
}
@@ -164,6 +149,7 @@ func variter(vl []ir.Node, t ir.Node, el []ir.Node) []ir.Node {
nel := len(el)
for _, v := range vl {
v := v.(*ir.Name)
var e ir.Node
if doexpr {
if len(el) == 0 {
@@ -176,7 +162,7 @@ func variter(vl []ir.Node, t ir.Node, el []ir.Node) []ir.Node {
v.SetOp(ir.ONAME)
declare(v, dclcontext)
v.Name().Param.Ntype = t
v.Ntype = t
if e != nil || Curfn != nil || ir.IsBlank(v) {
if Curfn != nil {
@ -185,7 +171,7 @@ func variter(vl []ir.Node, t ir.Node, el []ir.Node) []ir.Node {
e = ir.Nod(ir.OAS, v, e)
init = append(init, e)
if e.Right() != nil {
v.Name().Defn = e
v.Defn = e
}
}
}
@@ -196,21 +182,10 @@ func variter(vl []ir.Node, t ir.Node, el []ir.Node) []ir.Node {
return init
}
// newnoname returns a new ONONAME Node associated with symbol s.
func newnoname(s *types.Sym) ir.Node {
if s == nil {
base.Fatalf("newnoname nil")
}
n := ir.Nod(ir.ONONAME, nil, nil)
n.SetSym(s)
n.SetOffset(0)
return n
}
// newfuncnamel generates a new name node for a function or method.
func newfuncnamel(pos src.XPos, s *types.Sym, fn *ir.Func) ir.Node {
// newFuncNameAt generates a new name node for a function or method.
func newFuncNameAt(pos src.XPos, s *types.Sym, fn *ir.Func) *ir.Name {
if fn.Nname != nil {
base.Fatalf("newfuncnamel - already have name")
base.Fatalf("newFuncName - already have name")
}
n := ir.NewNameAt(pos, s)
n.SetFunc(fn)
@@ -218,43 +193,16 @@ func newfuncnamel(pos src.XPos, s *types.Sym, fn *ir.Func) ir.Node {
return n
}
// this generates a new name node for a name
// being declared.
func dclname(s *types.Sym) ir.Node {
n := NewName(s)
n.SetOp(ir.ONONAME) // caller will correct it
return n
}
func typenod(t *types.Type) ir.Node {
return typenodl(src.NoXPos, t)
}
func typenodl(pos src.XPos, t *types.Type) ir.Node {
// if we copied another type with *t = *u
// then t->nod might be out of date, so
// check t->nod->type too
if ir.AsNode(t.Nod) == nil || ir.AsNode(t.Nod).Type() != t {
t.Nod = ir.NodAt(pos, ir.OTYPE, nil, nil)
ir.AsNode(t.Nod).SetType(t)
ir.AsNode(t.Nod).SetSym(t.Sym)
}
return ir.AsNode(t.Nod)
}
func anonfield(typ *types.Type) ir.Node {
func anonfield(typ *types.Type) *ir.Field {
return symfield(nil, typ)
}
func namedfield(s string, typ *types.Type) ir.Node {
func namedfield(s string, typ *types.Type) *ir.Field {
return symfield(lookup(s), typ)
}
func symfield(s *types.Sym, typ *types.Type) ir.Node {
n := nodSym(ir.ODCLFIELD, nil, s)
n.SetType(typ)
return n
func symfield(s *types.Sym, typ *types.Type) *ir.Field {
return ir.NewField(base.Pos, s, nil, typ)
}
// oldname returns the Node that declares symbol s in the current scope.
@@ -267,7 +215,7 @@ func oldname(s *types.Sym) ir.Node {
// Maybe a top-level declaration will come along later to
// define s. resolve will check s.Def again once all input
// source has been processed.
return newnoname(s)
return ir.NewIdent(base.Pos, s)
}
if Curfn != nil && n.Op() == ir.ONAME && n.Name().Curfn != nil && n.Name().Curfn != Curfn {
@@ -277,21 +225,21 @@ func oldname(s *types.Sym) ir.Node {
// are parsing x := 5 inside the closure, until we get to
// the := it looks like a reference to the outer x so we'll
// make x a closure variable unnecessarily.
c := n.Name().Param.Innermost
if c == nil || c.Name().Curfn != Curfn {
c := n.Name().Innermost
if c == nil || c.Curfn != Curfn {
// Do not have a closure var for the active closure yet; make one.
c = NewName(s)
c.SetClass(ir.PAUTOHEAP)
c.Name().SetIsClosureVar(true)
c.SetIsClosureVar(true)
c.SetIsDDD(n.IsDDD())
c.Name().Defn = n
c.Defn = n
// Link into list of active closure variables.
// Popped from list in func funcLit.
c.Name().Param.Outer = n.Name().Param.Innermost
n.Name().Param.Innermost = c
c.Outer = n.Name().Innermost
n.Name().Innermost = c
Curfn.Func().ClosureVars.Append(c)
Curfn.ClosureVars = append(Curfn.ClosureVars, c)
}
// return ref to closure var, not original
@@ -301,10 +249,11 @@ func oldname(s *types.Sym) ir.Node {
return n
}
// importName is like oldname, but it reports an error if sym is from another package and not exported.
// importName is like oldname,
// but it reports an error if sym is from another package and not exported.
func importName(sym *types.Sym) ir.Node {
n := oldname(sym)
if !types.IsExported(sym.Name) && sym.Pkg != ir.LocalPkg {
if !types.IsExported(sym.Name) && sym.Pkg != types.LocalPkg {
n.SetDiag(true)
base.Errorf("cannot refer to unexported name %s.%s", sym.Pkg.Name, sym.Name)
}
@@ -356,9 +305,9 @@ func colasdefn(left []ir.Node, defn ir.Node) {
}
nnew++
n = NewName(n.Sym())
n := NewName(n.Sym())
declare(n, dclcontext)
n.Name().Defn = defn
n.Defn = defn
defn.PtrInit().Append(ir.Nod(ir.ODCL, n, nil))
left[i] = n
}
@@ -368,38 +317,26 @@ func colasdefn(left []ir.Node, defn ir.Node) {
}
}
// declare the arguments in an
// interface field declaration.
func ifacedcl(n ir.Node) {
if n.Op() != ir.ODCLFIELD || n.Left() == nil {
base.Fatalf("ifacedcl")
}
if n.Sym().IsBlank() {
base.Errorf("methods must have a unique non-blank name")
}
}
// declare the function proper
// and declare the arguments.
// called in extern-declaration context
// returns in auto-declaration context.
func funchdr(n ir.Node) {
func funchdr(fn *ir.Func) {
// change the declaration context from extern to auto
funcStack = append(funcStack, funcStackEnt{Curfn, dclcontext})
Curfn = n
Curfn = fn
dclcontext = ir.PAUTO
types.Markdcl()
if n.Func().Nname != nil && n.Func().Nname.Name().Param.Ntype != nil {
funcargs(n.Func().Nname.Name().Param.Ntype)
if fn.Nname.Ntype != nil {
funcargs(fn.Nname.Ntype.(*ir.FuncType))
} else {
funcargs2(n.Type())
funcargs2(fn.Type())
}
}
func funcargs(nt ir.Node) {
func funcargs(nt *ir.FuncType) {
if nt.Op() != ir.OTFUNC {
base.Fatalf("funcargs %v", nt.Op())
}
@@ -411,13 +348,13 @@ func funcargs(nt ir.Node) {
// TODO(mdempsky): This is ugly, and only necessary because
// esc.go uses Vargen to figure out result parameters' index
// within the result tuple.
vargen = nt.Rlist().Len()
vargen = len(nt.Results)
// declare the receiver and in arguments.
if nt.Left() != nil {
funcarg(nt.Left(), ir.PPARAM)
if nt.Recv != nil {
funcarg(nt.Recv, ir.PPARAM)
}
for _, n := range nt.List().Slice() {
for _, n := range nt.Params {
funcarg(n, ir.PPARAM)
}
@@ -425,21 +362,21 @@ func funcargs(nt ir.Node) {
vargen = 0
// declare the out arguments.
gen := nt.List().Len()
for _, n := range nt.Rlist().Slice() {
if n.Sym() == nil {
gen := len(nt.Params)
for _, n := range nt.Results {
if n.Sym == nil {
// Name so that escape analysis can track it. ~r stands for 'result'.
n.SetSym(lookupN("~r", gen))
n.Sym = lookupN("~r", gen)
gen++
}
if n.Sym().IsBlank() {
if n.Sym.IsBlank() {
// Give it a name so we can assign to it during return. ~b stands for 'blank'.
// The name must be different from ~r above because if you have
// func f() (_ int)
// func g() int
// f is allowed to use a plain 'return' with no arguments, while g is not.
// So the two cases must be distinguished.
n.SetSym(lookupN("~b", gen))
n.Sym = lookupN("~b", gen)
gen++
}
@@ -449,28 +386,26 @@ func funcargs(nt ir.Node) {
vargen = oldvargen
}
func funcarg(n ir.Node, ctxt ir.Class) {
if n.Op() != ir.ODCLFIELD {
base.Fatalf("funcarg %v", n.Op())
}
if n.Sym() == nil {
func funcarg(n *ir.Field, ctxt ir.Class) {
if n.Sym == nil {
return
}
n.SetRight(ir.NewNameAt(n.Pos(), n.Sym()))
n.Right().Name().Param.Ntype = n.Left()
n.Right().SetIsDDD(n.IsDDD())
declare(n.Right(), ctxt)
name := ir.NewNameAt(n.Pos, n.Sym)
n.Decl = name
name.Ntype = n.Ntype
name.SetIsDDD(n.IsDDD)
declare(name, ctxt)
vargen++
n.Right().Name().Vargen = int32(vargen)
n.Decl.Vargen = int32(vargen)
}
// Same as funcargs, except run over an already constructed TFUNC.
// This happens during import, where the hidden_fndcl rule has
// used functype directly to parse the function's type.
func funcargs2(t *types.Type) {
if t.Etype != types.TFUNC {
if t.Kind() != types.TFUNC {
base.Fatalf("funcargs2 %v", t)
}
@@ -499,7 +434,7 @@ func funcarg2(f *types.Field, ctxt ir.Class) {
var funcStack []funcStackEnt // stack of previous values of Curfn/dclcontext
type funcStackEnt struct {
curfn ir.Node
curfn *ir.Func
dclcontext ir.Class
}
@@ -521,7 +456,7 @@ func checkembeddedtype(t *types.Type) {
return
}
if t.Sym == nil && t.IsPtr() {
if t.Sym() == nil && t.IsPtr() {
t = t.Elem()
if t.IsInterface() {
base.Errorf("embedded type cannot be a pointer to interface")
@@ -530,38 +465,11 @@ func checkembeddedtype(t *types.Type) {
if t.IsPtr() || t.IsUnsafePtr() {
base.Errorf("embedded type cannot be a pointer")
} else if t.Etype == types.TFORW && !t.ForwardType().Embedlineno.IsKnown() {
} else if t.Kind() == types.TFORW && !t.ForwardType().Embedlineno.IsKnown() {
t.ForwardType().Embedlineno = base.Pos
}
}
func structfield(n ir.Node) *types.Field {
lno := base.Pos
base.Pos = n.Pos()
if n.Op() != ir.ODCLFIELD {
base.Fatalf("structfield: oops %v\n", n)
}
if n.Left() != nil {
n.SetLeft(typecheck(n.Left(), ctxType))
n.SetType(n.Left().Type())
n.SetLeft(nil)
}
f := types.NewField(n.Pos(), n.Sym(), n.Type())
if n.Embedded() {
checkembeddedtype(n.Type())
f.Embedded = 1
}
if n.HasVal() {
f.Note = constant.StringVal(n.Val())
}
base.Pos = lno
return f
}
// checkdupfields emits errors for duplicately named fields or methods in
// a list of struct or interface types.
func checkdupfields(what string, fss ...[]*types.Field) {
@@ -582,103 +490,53 @@ func checkdupfields(what string, fss ...[]*types.Field) {
// convert a parsed id/type list into
// a type for struct/interface/arglist
func tostruct(l []ir.Node) *types.Type {
t := types.New(types.TSTRUCT)
fields := make([]*types.Field, len(l))
for i, n := range l {
f := structfield(n)
if f.Broke() {
t.SetBroke(true)
}
fields[i] = f
}
t.SetFields(fields)
checkdupfields("field", t.FieldSlice())
if !t.Broke() {
checkwidth(t)
}
return t
}
func tofunargs(l []ir.Node, funarg types.Funarg) *types.Type {
t := types.New(types.TSTRUCT)
t.StructType().Funarg = funarg
fields := make([]*types.Field, len(l))
for i, n := range l {
f := structfield(n)
f.SetIsDDD(n.IsDDD())
if n.Right() != nil {
n.Right().SetType(f.Type)
f.Nname = n.Right()
}
if f.Broke() {
t.SetBroke(true)
}
fields[i] = f
}
t.SetFields(fields)
return t
}
func tofunargsfield(fields []*types.Field, funarg types.Funarg) *types.Type {
t := types.New(types.TSTRUCT)
t.StructType().Funarg = funarg
t.SetFields(fields)
return t
}
func interfacefield(n ir.Node) *types.Field {
func tostruct(l []*ir.Field) *types.Type {
lno := base.Pos
base.Pos = n.Pos()
if n.Op() != ir.ODCLFIELD {
base.Fatalf("interfacefield: oops %v\n", n)
fields := make([]*types.Field, len(l))
for i, n := range l {
base.Pos = n.Pos
if n.Ntype != nil {
n.Type = typecheckNtype(n.Ntype).Type()
n.Ntype = nil
}
f := types.NewField(n.Pos, n.Sym, n.Type)
if n.Embedded {
checkembeddedtype(n.Type)
f.Embedded = 1
}
f.Note = n.Note
fields[i] = f
}
if n.HasVal() {
base.Errorf("interface method cannot have annotation")
}
// MethodSpec = MethodName Signature | InterfaceTypeName .
//
// If Sym != nil, then Sym is MethodName and Left is Signature.
// Otherwise, Left is InterfaceTypeName.
if n.Left() != nil {
n.SetLeft(typecheck(n.Left(), ctxType))
n.SetType(n.Left().Type())
n.SetLeft(nil)
}
f := types.NewField(n.Pos(), n.Sym(), n.Type())
checkdupfields("field", fields)
base.Pos = lno
return f
return types.NewStruct(types.LocalPkg, fields)
}
func tointerface(l []ir.Node) *types.Type {
if len(l) == 0 {
func tointerface(nmethods []*ir.Field) *types.Type {
if len(nmethods) == 0 {
return types.Types[types.TINTER]
}
t := types.New(types.TINTER)
var fields []*types.Field
for _, n := range l {
f := interfacefield(n)
if f.Broke() {
t.SetBroke(true)
lno := base.Pos
methods := make([]*types.Field, len(nmethods))
for i, n := range nmethods {
base.Pos = n.Pos
if n.Ntype != nil {
n.Type = typecheckNtype(n.Ntype).Type()
n.Ntype = nil
}
fields = append(fields, f)
methods[i] = types.NewField(n.Pos, n.Sym, n.Type)
}
t.SetInterface(fields)
return t
base.Pos = lno
return types.NewInterface(types.LocalPkg, methods)
}
func fakeRecv() ir.Node {
func fakeRecv() *ir.Field {
return anonfield(types.FakeRecvType())
}
@@ -694,42 +552,47 @@ func isifacemethod(f *types.Type) bool {
}
// turn a parsed function declaration into a type
func functype(this ir.Node, in, out []ir.Node) *types.Type {
t := types.New(types.TFUNC)
func functype(nrecv *ir.Field, nparams, nresults []*ir.Field) *types.Type {
funarg := func(n *ir.Field) *types.Field {
lno := base.Pos
base.Pos = n.Pos
var rcvr []ir.Node
if this != nil {
rcvr = []ir.Node{this}
if n.Ntype != nil {
n.Type = typecheckNtype(n.Ntype).Type()
n.Ntype = nil
}
f := types.NewField(n.Pos, n.Sym, n.Type)
f.SetIsDDD(n.IsDDD)
if n.Decl != nil {
n.Decl.SetType(f.Type)
f.Nname = n.Decl
}
base.Pos = lno
return f
}
funargs := func(nn []*ir.Field) []*types.Field {
res := make([]*types.Field, len(nn))
for i, n := range nn {
res[i] = funarg(n)
}
return res
}
t.FuncType().Receiver = tofunargs(rcvr, types.FunargRcvr)
t.FuncType().Params = tofunargs(in, types.FunargParams)
t.FuncType().Results = tofunargs(out, types.FunargResults)
var recv *types.Field
if nrecv != nil {
recv = funarg(nrecv)
}
t := types.NewSignature(types.LocalPkg, recv, funargs(nparams), funargs(nresults))
checkdupfields("argument", t.Recvs().FieldSlice(), t.Params().FieldSlice(), t.Results().FieldSlice())
if t.Recvs().Broke() || t.Results().Broke() || t.Params().Broke() {
t.SetBroke(true)
}
t.FuncType().Outnamed = t.NumResults() > 0 && ir.OrigSym(t.Results().Field(0).Sym) != nil
return t
}
func functypefield(this *types.Field, in, out []*types.Field) *types.Type {
t := types.New(types.TFUNC)
var rcvr []*types.Field
if this != nil {
rcvr = []*types.Field{this}
}
t.FuncType().Receiver = tofunargsfield(rcvr, types.FunargRcvr)
t.FuncType().Params = tofunargsfield(in, types.FunargParams)
t.FuncType().Results = tofunargsfield(out, types.FunargResults)
t.FuncType().Outnamed = t.NumResults() > 0 && ir.OrigSym(t.Results().Field(0).Sym) != nil
return t
func hasNamedResults(fn *ir.Func) bool {
typ := fn.Type()
return typ.NumResults() > 0 && types.OrigSym(typ.Results().Field(0).Sym) != nil
}
// methodSym returns the method symbol representing a method name
@@ -754,12 +617,12 @@ func methodSymSuffix(recv *types.Type, msym *types.Sym, suffix string) *types.Sy
base.Fatalf("blank method name")
}
rsym := recv.Sym
rsym := recv.Sym()
if recv.IsPtr() {
if rsym != nil {
base.Fatalf("declared pointer receiver type: %v", recv)
}
rsym = recv.Elem().Sym
rsym = recv.Elem().Sym()
}
// Find the package the receiver type appeared in. For
@@ -799,7 +662,7 @@ func methodSymSuffix(recv *types.Type, msym *types.Sym, suffix string) *types.Sy
// - msym is the method symbol
// - t is function type (with receiver)
// Returns a pointer to the existing or added Field; or nil if there's an error.
func addmethod(n ir.Node, msym *types.Sym, t *types.Type, local, nointerface bool) *types.Field {
func addmethod(n *ir.Func, msym *types.Sym, t *types.Type, local, nointerface bool) *types.Field {
if msym == nil {
base.Fatalf("no method symbol")
}
@@ -812,11 +675,11 @@ func addmethod(n ir.Node, msym *types.Sym, t *types.Type, local, nointerface boo
}
mt := methtype(rf.Type)
if mt == nil || mt.Sym == nil {
if mt == nil || mt.Sym() == nil {
pa := rf.Type
t := pa
if t != nil && t.IsPtr() {
if t.Sym != nil {
if t.Sym() != nil {
base.Errorf("invalid receiver type %v (%v is a pointer type)", pa, t)
return nil
}
@@ -826,7 +689,7 @@ func addmethod(n ir.Node, msym *types.Sym, t *types.Type, local, nointerface boo
switch {
case t == nil || t.Broke():
// rely on typecheck having complained before
case t.Sym == nil:
case t.Sym() == nil:
base.Errorf("invalid receiver type %v (%v is not a defined type)", pa, t)
case t.IsPtr():
base.Errorf("invalid receiver type %v (%v is a pointer type)", pa, t)
@@ -840,7 +703,7 @@ func addmethod(n ir.Node, msym *types.Sym, t *types.Type, local, nointerface boo
return nil
}
if local && mt.Sym.Pkg != ir.LocalPkg {
if local && mt.Sym().Pkg != types.LocalPkg {
base.Errorf("cannot define new methods on non-local type %v", mt)
return nil
}
@@ -872,7 +735,7 @@ func addmethod(n ir.Node, msym *types.Sym, t *types.Type, local, nointerface boo
}
f := types.NewField(base.Pos, msym, t)
f.Nname = n.Func().Nname
f.Nname = n.Nname
f.SetNointerface(nointerface)
mt.Methods().Append(f)
@@ -944,18 +807,18 @@ func setNodeNameFunc(n ir.Node) {
n.Sym().SetFunc(true)
}
func dclfunc(sym *types.Sym, tfn ir.Node) ir.Node {
func dclfunc(sym *types.Sym, tfn ir.Ntype) *ir.Func {
if tfn.Op() != ir.OTFUNC {
base.Fatalf("expected OTFUNC node, got %v", tfn)
}
fn := ir.Nod(ir.ODCLFUNC, nil, nil)
fn.Func().Nname = newfuncnamel(base.Pos, sym, fn.Func())
fn.Func().Nname.Name().Defn = fn
fn.Func().Nname.Name().Param.Ntype = tfn
setNodeNameFunc(fn.Func().Nname)
fn := ir.NewFunc(base.Pos)
fn.Nname = newFuncNameAt(base.Pos, sym, fn)
fn.Nname.Defn = fn
fn.Nname.Ntype = tfn
setNodeNameFunc(fn.Nname)
funchdr(fn)
fn.Func().Nname.Name().Param.Ntype = typecheck(fn.Func().Nname.Name().Param.Ntype, ctxType)
fn.Nname.Ntype = typecheckNtype(fn.Nname.Ntype)
return fn
}
@@ -963,14 +826,14 @@ type nowritebarrierrecChecker struct {
// extraCalls contains extra function calls that may not be
// visible during later analysis. It maps from the ODCLFUNC of
// the caller to a list of callees.
extraCalls map[ir.Node][]nowritebarrierrecCall
extraCalls map[*ir.Func][]nowritebarrierrecCall
// curfn is the current function during AST walks.
curfn ir.Node
curfn *ir.Func
}
type nowritebarrierrecCall struct {
target ir.Node // ODCLFUNC of caller or callee
target *ir.Func // caller or callee
lineno src.XPos // line of call
}
@@ -978,7 +841,7 @@ type nowritebarrierrecCall struct {
// must be called before transformclosure and walk.
func newNowritebarrierrecChecker() *nowritebarrierrecChecker {
c := &nowritebarrierrecChecker{
extraCalls: make(map[ir.Node][]nowritebarrierrecCall),
extraCalls: make(map[*ir.Func][]nowritebarrierrecCall),
}
// Find all systemstack calls and record their targets. In
@@ -990,7 +853,7 @@ func newNowritebarrierrecChecker() *nowritebarrierrecChecker {
if n.Op() != ir.ODCLFUNC {
continue
}
c.curfn = n
c.curfn = n.(*ir.Func)
ir.Inspect(n, c.findExtraCalls)
}
c.curfn = nil
@@ -1009,13 +872,13 @@ func (c *nowritebarrierrecChecker) findExtraCalls(n ir.Node) bool {
return true
}
var callee ir.Node
var callee *ir.Func
arg := n.List().First()
switch arg.Op() {
case ir.ONAME:
callee = arg.Name().Defn
callee = arg.Name().Defn.(*ir.Func)
case ir.OCLOSURE:
callee = arg.Func().Decl
callee = arg.Func()
default:
base.Fatalf("expected ONAME or OCLOSURE node, got %+v", arg)
}
@@ -1034,13 +897,8 @@ func (c *nowritebarrierrecChecker) findExtraCalls(n ir.Node) bool {
// because that's all we know after we start SSA.
//
// This can be called concurrently for different from Nodes.
func (c *nowritebarrierrecChecker) recordCall(from ir.Node, to *obj.LSym, pos src.XPos) {
if from.Op() != ir.ODCLFUNC {
base.Fatalf("expected ODCLFUNC, got %v", from)
}
// We record this information on the *Func so this is
// concurrent-safe.
fn := from.Func()
func (c *nowritebarrierrecChecker) recordCall(fn *ir.Func, to *obj.LSym, pos src.XPos) {
// We record this information on the *Func so this is concurrent-safe.
if fn.NWBRCalls == nil {
fn.NWBRCalls = new([]ir.SymAndPos)
}
@@ -1052,39 +910,40 @@ func (c *nowritebarrierrecChecker) check() {
// capture all calls created by lowering, but this means we
// only get to see the obj.LSyms of calls. symToFunc lets us
// get back to the ODCLFUNCs.
symToFunc := make(map[*obj.LSym]ir.Node)
symToFunc := make(map[*obj.LSym]*ir.Func)
// funcs records the back-edges of the BFS call graph walk. It
// maps from the ODCLFUNC of each function that must not have
// write barriers to the call that inhibits them. Functions
// that are directly marked go:nowritebarrierrec are in this
// map with a zero-valued nowritebarrierrecCall. This also
// acts as the set of marks for the BFS of the call graph.
funcs := make(map[ir.Node]nowritebarrierrecCall)
funcs := make(map[*ir.Func]nowritebarrierrecCall)
// q is the queue of ODCLFUNC Nodes to visit in BFS order.
var q ir.NodeQueue
var q ir.NameQueue
for _, n := range xtop {
if n.Op() != ir.ODCLFUNC {
continue
}
fn := n.(*ir.Func)
symToFunc[n.Func().LSym] = n
symToFunc[fn.LSym] = fn
// Make nowritebarrierrec functions BFS roots.
if n.Func().Pragma&ir.Nowritebarrierrec != 0 {
funcs[n] = nowritebarrierrecCall{}
q.PushRight(n)
if fn.Pragma&ir.Nowritebarrierrec != 0 {
funcs[fn] = nowritebarrierrecCall{}
q.PushRight(fn.Nname)
}
// Check go:nowritebarrier functions.
if n.Func().Pragma&ir.Nowritebarrier != 0 && n.Func().WBPos.IsKnown() {
base.ErrorfAt(n.Func().WBPos, "write barrier prohibited")
if fn.Pragma&ir.Nowritebarrier != 0 && fn.WBPos.IsKnown() {
base.ErrorfAt(fn.WBPos, "write barrier prohibited")
}
}
// Perform a BFS of the call graph from all
// go:nowritebarrierrec functions.
enqueue := func(src, target ir.Node, pos src.XPos) {
if target.Func().Pragma&ir.Yeswritebarrierrec != 0 {
enqueue := func(src, target *ir.Func, pos src.XPos) {
if target.Pragma&ir.Yeswritebarrierrec != 0 {
// Don't flow into this function.
return
}
@@ -1095,20 +954,20 @@ func (c *nowritebarrierrecChecker) check() {
// Record the path.
funcs[target] = nowritebarrierrecCall{target: src, lineno: pos}
q.PushRight(target)
q.PushRight(target.Nname)
}
for !q.Empty() {
fn := q.PopLeft()
fn := q.PopLeft().Func()
// Check fn.
if fn.Func().WBPos.IsKnown() {
if fn.WBPos.IsKnown() {
var err bytes.Buffer
call := funcs[fn]
for call.target != nil {
fmt.Fprintf(&err, "\n\t%v: called by %v", base.FmtPos(call.lineno), call.target.Func().Nname)
fmt.Fprintf(&err, "\n\t%v: called by %v", base.FmtPos(call.lineno), call.target.Nname)
call = funcs[call.target]
}
base.ErrorfAt(fn.Func().WBPos, "write barrier prohibited by caller; %v%s", fn.Func().Nname, err.String())
base.ErrorfAt(fn.WBPos, "write barrier prohibited by caller; %v%s", fn.Nname, err.String())
continue
}
@@ -1116,10 +975,10 @@ func (c *nowritebarrierrecChecker) check() {
for _, callee := range c.extraCalls[fn] {
enqueue(fn, callee.target, callee.lineno)
}
if fn.Func().NWBRCalls == nil {
if fn.NWBRCalls == nil {
continue
}
for _, callee := range *fn.Func().NWBRCalls {
for _, callee := range *fn.NWBRCalls {
target := symToFunc[callee.Sym]
if target != nil {
enqueue(fn, target, callee.Pos)
@@ -6,6 +6,7 @@ package gc
import (
"cmd/compile/internal/base"
"cmd/compile/internal/ir"
"cmd/internal/dwarf"
"cmd/internal/obj"
"cmd/internal/src"
@@ -211,6 +212,7 @@ func genAbstractFunc(fn *obj.LSym) {
base.Ctxt.Diag("failed to locate precursor fn for %v", fn)
return
}
_ = ifn.(*ir.Func)
if base.Debug.DwarfInl != 0 {
base.Ctxt.Logf("DwarfAbstractFunc(%v)\n", fn.Name)

View File

@@ -28,7 +28,7 @@ const (
var numLocalEmbed int
func varEmbed(p *noder, names []ir.Node, typ ir.Node, exprs []ir.Node, embeds []PragmaEmbed) (newExprs []ir.Node) {
func varEmbed(p *noder, names []ir.Node, typ ir.Ntype, exprs []ir.Node, embeds []PragmaEmbed) (newExprs []ir.Node) {
haveEmbed := false
for _, decl := range p.file.DeclList {
imp, ok := decl.(*syntax.ImportDecl)
@@ -115,13 +115,13 @@ func varEmbed(p *noder, names []ir.Node, typ ir.Node, exprs []ir.Node, embeds []
numLocalEmbed++
v = ir.NewNameAt(v.Pos(), lookupN("embed.", numLocalEmbed))
v.Sym().Def = v
v.Name().Param.Ntype = typ
v.Name().Ntype = typ
v.SetClass(ir.PEXTERN)
externdcl = append(externdcl, v)
exprs = []ir.Node{v}
}
v.Name().Param.SetEmbedFiles(list)
v.Name().SetEmbedFiles(list)
embedlist = append(embedlist, v)
return exprs
}
@@ -131,31 +133,33 @@ func varEmbed(p *noder, names []ir.Node, typ ir.Node, exprs []ir.Node, embeds []
// can't tell whether "string" and "byte" really mean "string" and "byte".
// The result must be confirmed later, after type checking, using embedKind.
func embedKindApprox(typ ir.Node) int {
if typ.Sym() != nil && typ.Sym().Name == "FS" && (typ.Sym().Pkg.Path == "embed" || (typ.Sym().Pkg == ir.LocalPkg && base.Ctxt.Pkgpath == "embed")) {
if typ.Sym() != nil && typ.Sym().Name == "FS" && (typ.Sym().Pkg.Path == "embed" || (typ.Sym().Pkg == types.LocalPkg && base.Ctxt.Pkgpath == "embed")) {
return embedFiles
}
// These are not guaranteed to match only string and []byte -
// maybe the local package has redefined one of those words.
// But it's the best we can do now during the noder.
// The stricter check happens later, in initEmbed calling embedKind.
if typ.Sym() != nil && typ.Sym().Name == "string" && typ.Sym().Pkg == ir.LocalPkg {
if typ.Sym() != nil && typ.Sym().Name == "string" && typ.Sym().Pkg == types.LocalPkg {
return embedString
}
if typ.Op() == ir.OTARRAY && typ.Left() == nil && typ.Right().Sym() != nil && typ.Right().Sym().Name == "byte" && typ.Right().Sym().Pkg == ir.LocalPkg {
return embedBytes
if typ, ok := typ.(*ir.SliceType); ok {
if sym := typ.Elem.Sym(); sym != nil && sym.Name == "byte" && sym.Pkg == types.LocalPkg {
return embedBytes
}
}
return embedUnknown
}
// embedKind determines the kind of embedding variable.
func embedKind(typ *types.Type) int {
if typ.Sym != nil && typ.Sym.Name == "FS" && (typ.Sym.Pkg.Path == "embed" || (typ.Sym.Pkg == ir.LocalPkg && base.Ctxt.Pkgpath == "embed")) {
if typ.Sym() != nil && typ.Sym().Name == "FS" && (typ.Sym().Pkg.Path == "embed" || (typ.Sym().Pkg == types.LocalPkg && base.Ctxt.Pkgpath == "embed")) {
return embedFiles
}
if typ == types.Types[types.TSTRING] {
return embedString
}
if typ.Sym == nil && typ.IsSlice() && typ.Elem() == types.Bytetype {
if typ.Sym() == nil && typ.IsSlice() && typ.Elem() == types.ByteType {
return embedBytes
}
return embedUnknown
@@ -193,7 +195,7 @@ func dumpembeds() {
// initEmbed emits the init data for a //go:embed variable,
// which is either a string, a []byte, or an embed.FS.
func initEmbed(v ir.Node) {
files := v.Name().Param.EmbedFiles()
files := v.Name().EmbedFiles()
switch kind := embedKind(v.Type()); kind {
case embedUnknown:

View File

@@ -85,8 +85,9 @@ import (
type Escape struct {
allLocs []*EscLocation
labels map[*types.Sym]labelState // known labels
curfn ir.Node
curfn *ir.Func
// loopDepth counts the current loop nesting depth within
// curfn. It increments within each "for" loop and at each
@@ -102,7 +103,7 @@ type Escape struct {
// variable.
type EscLocation struct {
n ir.Node // represented variable or expression, if any
curfn ir.Node // enclosing function
curfn *ir.Func // enclosing function
edges []EscEdge // incoming edges
loopDepth int // loopDepth at declaration
@@ -147,7 +148,7 @@ func init() {
}
// escFmt is called from node printing to print information about escape analysis results.
func escFmt(n ir.Node, short bool) string {
func escFmt(n ir.Node) string {
text := ""
switch n.Esc() {
case EscUnknown:
@ -160,9 +161,7 @@ func escFmt(n ir.Node, short bool) string {
text = "esc(no)"
case EscNever:
if !short {
text = "esc(N)"
}
text = "esc(N)"
default:
text = fmt.Sprintf("esc(%d)", n.Esc())
@ -179,7 +178,7 @@ func escFmt(n ir.Node, short bool) string {
// escapeFuncs performs escape analysis on a minimal batch of
// functions.
func escapeFuncs(fns []ir.Node, recursive bool) {
func escapeFuncs(fns []*ir.Func, recursive bool) {
for _, fn := range fns {
if fn.Op() != ir.ODCLFUNC {
base.Fatalf("unexpected node: %v", fn)
@ -202,8 +201,8 @@ func escapeFuncs(fns []ir.Node, recursive bool) {
e.finish(fns)
}
func (e *Escape) initFunc(fn ir.Node) {
if fn.Op() != ir.ODCLFUNC || fn.Esc() != EscFuncUnknown {
func (e *Escape) initFunc(fn *ir.Func) {
if fn.Esc() != EscFuncUnknown {
base.Fatalf("unexpected node: %v", fn)
}
fn.SetEsc(EscFuncPlanned)
@ -215,27 +214,30 @@ func (e *Escape) initFunc(fn ir.Node) {
e.loopDepth = 1
// Allocate locations for local variables.
for _, dcl := range fn.Func().Dcl {
for _, dcl := range fn.Dcl {
if dcl.Op() == ir.ONAME {
e.newLoc(dcl, false)
}
}
}
func (e *Escape) walkFunc(fn ir.Node) {
func (e *Escape) walkFunc(fn *ir.Func) {
fn.SetEsc(EscFuncStarted)
// Identify labels that mark the head of an unstructured loop.
ir.InspectList(fn.Body(), func(n ir.Node) bool {
switch n.Op() {
case ir.OLABEL:
n.Sym().Label = nonlooping
if e.labels == nil {
e.labels = make(map[*types.Sym]labelState)
}
e.labels[n.Sym()] = nonlooping
case ir.OGOTO:
// If we visited the label before the goto,
// then this is a looping label.
if n.Sym().Label == nonlooping {
n.Sym().Label = looping
if e.labels[n.Sym()] == nonlooping {
e.labels[n.Sym()] = looping
}
}
@ -245,6 +247,10 @@ func (e *Escape) walkFunc(fn ir.Node) {
e.curfn = fn
e.loopDepth = 1
e.block(fn.Body())
if len(e.labels) != 0 {
base.FatalfAt(fn.Pos(), "leftover labels after walkFunc")
}
}
// Below we implement the methods for walking the AST and recording
@ -294,7 +300,7 @@ func (e *Escape) stmt(n ir.Node) {
default:
base.Fatalf("unexpected stmt: %v", n)
case ir.ODCLCONST, ir.ODCLTYPE, ir.OEMPTY, ir.OFALL, ir.OINLMARK:
case ir.ODCLCONST, ir.ODCLTYPE, ir.OFALL, ir.OINLMARK:
// nop
case ir.OBREAK, ir.OCONTINUE, ir.OGOTO:
@ -310,7 +316,7 @@ func (e *Escape) stmt(n ir.Node) {
}
case ir.OLABEL:
switch ir.AsNode(n.Sym().Label) {
switch e.labels[n.Sym()] {
case nonlooping:
if base.Flag.LowerM > 2 {
fmt.Printf("%v:%v non-looping label\n", base.FmtPos(base.Pos), n)
@ -323,7 +329,7 @@ func (e *Escape) stmt(n ir.Node) {
default:
base.Fatalf("label missing tag")
}
n.Sym().Label = nil
delete(e.labels, n.Sym())
case ir.OIF:
e.discard(n.Left())
@ -386,8 +392,8 @@ func (e *Escape) stmt(n ir.Node) {
case ir.OSELRECV:
e.assign(n.Left(), n.Right(), "selrecv", n)
case ir.OSELRECV2:
e.assign(n.Left(), n.Right(), "selrecv", n)
e.assign(n.List().First(), nil, "selrecv", n)
e.assign(n.List().First(), n.Rlist().First(), "selrecv", n)
e.assign(n.List().Second(), nil, "selrecv", n)
case ir.ORECV:
// TODO(mdempsky): Consider e.discard(n.Left).
e.exprSkipInit(e.discardHole(), n) // already visited n.Ninit
@ -404,18 +410,18 @@ func (e *Escape) stmt(n ir.Node) {
}
case ir.OAS2DOTTYPE: // v, ok = x.(type)
e.assign(n.List().First(), n.Right(), "assign-pair-dot-type", n)
e.assign(n.List().First(), n.Rlist().First(), "assign-pair-dot-type", n)
e.assign(n.List().Second(), nil, "assign-pair-dot-type", n)
case ir.OAS2MAPR: // v, ok = m[k]
e.assign(n.List().First(), n.Right(), "assign-pair-mapr", n)
e.assign(n.List().First(), n.Rlist().First(), "assign-pair-mapr", n)
e.assign(n.List().Second(), nil, "assign-pair-mapr", n)
case ir.OAS2RECV: // v, ok = <-ch
e.assign(n.List().First(), n.Right(), "assign-pair-receive", n)
e.assign(n.List().First(), n.Rlist().First(), "assign-pair-receive", n)
e.assign(n.List().Second(), nil, "assign-pair-receive", n)
case ir.OAS2FUNC:
e.stmts(n.Right().Init())
e.call(e.addrs(n.List()), n.Right(), nil)
e.stmts(n.Rlist().First().Init())
e.call(e.addrs(n.List()), n.Rlist().First(), nil)
case ir.ORETURN:
results := e.curfn.Type().Results().FieldSlice()
for i, v := range n.List().Slice() {
@ -478,7 +484,7 @@ func (e *Escape) exprSkipInit(k EscHole, n ir.Node) {
default:
base.Fatalf("unexpected expr: %v", n)
case ir.OLITERAL, ir.ONIL, ir.OGETG, ir.OCLOSUREVAR, ir.OTYPE, ir.OMETHEXPR:
case ir.OLITERAL, ir.ONIL, ir.OGETG, ir.OCLOSUREREAD, ir.OTYPE, ir.OMETHEXPR:
// nop
case ir.ONAME:
@ -581,7 +587,8 @@ func (e *Escape) exprSkipInit(k EscHole, n ir.Node) {
for i := m.Type.NumResults(); i > 0; i-- {
ks = append(ks, e.heapHole())
}
paramK := e.tagHole(ks, ir.AsNode(m.Nname), m.Type.Recv())
name, _ := m.Nname.(*ir.Name)
paramK := e.tagHole(ks, name, m.Type.Recv())
e.expr(e.teeHole(paramK, closureK), n.Left())
@ -625,17 +632,13 @@ func (e *Escape) exprSkipInit(k EscHole, n ir.Node) {
k = e.spill(k, n)
// Link addresses of captured variables to closure.
for _, v := range n.Func().ClosureVars.Slice() {
if v.Op() == ir.OXXX { // unnamed out argument; see dcl.go:/^funcargs
continue
}
for _, v := range n.Func().ClosureVars {
k := k
if !v.Name().Byval() {
if !v.Byval() {
k = k.addr(v, "reference")
}
e.expr(k.note(n, "captured by a closure"), v.Name().Defn)
e.expr(k.note(n, "captured by a closure"), v.Defn)
}
case ir.ORUNES2STR, ir.OBYTES2STR, ir.OSTR2RUNES, ir.OSTR2BYTES, ir.ORUNESTR:
@ -654,7 +657,7 @@ func (e *Escape) exprSkipInit(k EscHole, n ir.Node) {
// unsafeValue evaluates a uintptr-typed arithmetic expression looking
// for conversions from an unsafe.Pointer.
func (e *Escape) unsafeValue(k EscHole, n ir.Node) {
if n.Type().Etype != types.TUINTPTR {
if n.Type().Kind() != types.TUINTPTR {
base.Fatalf("unexpected type %v for %v", n.Type(), n)
}
@ -704,8 +707,7 @@ func (e *Escape) discards(l ir.Nodes) {
// that represents storing into the represented location.
func (e *Escape) addr(n ir.Node) EscHole {
if n == nil || ir.IsBlank(n) {
// Can happen at least in OSELRECV.
// TODO(mdempsky): Anywhere else?
// Can happen in select case, range, maybe others.
return e.discardHole()
}
@ -755,7 +757,7 @@ func (e *Escape) assign(dst, src ir.Node, why string, where ir.Node) {
// Filter out some no-op assignments for escape analysis.
ignore := dst != nil && src != nil && isSelfAssign(dst, src)
if ignore && base.Flag.LowerM != 0 {
base.WarnfAt(where.Pos(), "%v ignoring self-assignment in %S", funcSym(e.curfn), where)
base.WarnfAt(where.Pos(), "%v ignoring self-assignment in %v", funcSym(e.curfn), where)
}
k := e.addr(dst)
@ -799,18 +801,19 @@ func (e *Escape) call(ks []EscHole, call, where ir.Node) {
switch call.Op() {
default:
ir.Dump("esc", call)
base.Fatalf("unexpected call op: %v", call.Op())
case ir.OCALLFUNC, ir.OCALLMETH, ir.OCALLINTER:
fixVariadicCall(call)
// Pick out the function callee, if statically known.
var fn ir.Node
var fn *ir.Name
switch call.Op() {
case ir.OCALLFUNC:
switch v := staticValue(call.Left()); {
case v.Op() == ir.ONAME && v.Class() == ir.PFUNC:
fn = v
fn = v.(*ir.Name)
case v.Op() == ir.OCLOSURE:
fn = v.Func().Nname
}
@ -894,7 +897,7 @@ func (e *Escape) call(ks []EscHole, call, where ir.Node) {
// ks should contain the holes representing where the function
// callee's results flows. fn is the statically-known callee function,
// if any.
func (e *Escape) tagHole(ks []EscHole, fn ir.Node, param *types.Field) EscHole {
func (e *Escape) tagHole(ks []EscHole, fn *ir.Name, param *types.Field) EscHole {
// If this is a dynamic call, we can't rely on param.Note.
if fn == nil {
return e.heapHole()
@ -935,9 +938,9 @@ func (e *Escape) tagHole(ks []EscHole, fn ir.Node, param *types.Field) EscHole {
// fn has not yet been analyzed, so its parameters and results
// should be incorporated directly into the flow graph instead of
// relying on its escape analysis tagging.
func (e *Escape) inMutualBatch(fn ir.Node) bool {
if fn.Name().Defn != nil && fn.Name().Defn.Esc() < EscFuncTagged {
if fn.Name().Defn.Esc() == EscFuncUnknown {
func (e *Escape) inMutualBatch(fn *ir.Name) bool {
if fn.Defn != nil && fn.Defn.Esc() < EscFuncTagged {
if fn.Defn.Esc() == EscFuncUnknown {
base.Fatalf("graph inconsistency")
}
return true
@ -1084,7 +1087,7 @@ func (e *Escape) newLoc(n ir.Node, transient bool) *EscLocation {
base.Fatalf("curfn mismatch: %v != %v", n.Name().Curfn, e.curfn)
}
if n.HasOpt() {
if n.Opt() != nil {
base.Fatalf("%v already has a location", n)
}
n.SetOpt(loc)
@ -1360,7 +1363,7 @@ func (e *Escape) outlives(l, other *EscLocation) bool {
//
// var u int // okay to stack allocate
// *(func() *int { return &u }()) = 42
if containsClosure(other.curfn, l.curfn) && l.curfn.Func().ClosureCalled {
if containsClosure(other.curfn, l.curfn) && l.curfn.ClosureCalled() {
return false
}
@ -1394,11 +1397,7 @@ func (e *Escape) outlives(l, other *EscLocation) bool {
}
// containsClosure reports whether c is a closure contained within f.
func containsClosure(f, c ir.Node) bool {
if f.Op() != ir.ODCLFUNC || c.Op() != ir.ODCLFUNC {
base.Fatalf("bad containsClosure: %v, %v", f, c)
}
func containsClosure(f, c *ir.Func) bool {
// Common case.
if f == c {
return false
@ -1406,8 +1405,8 @@ func containsClosure(f, c ir.Node) bool {
// Closures within function Foo are named like "Foo.funcN..."
// TODO(mdempsky): Better way to recognize this.
fn := f.Func().Nname.Sym().Name
cn := c.Func().Nname.Sym().Name
fn := f.Sym().Name
cn := c.Sym().Name
return len(cn) > len(fn) && cn[:len(fn)] == fn && cn[len(fn)] == '.'
}
@ -1429,7 +1428,7 @@ func (l *EscLocation) leakTo(sink *EscLocation, derefs int) {
l.paramEsc.AddHeap(derefs)
}
func (e *Escape) finish(fns []ir.Node) {
func (e *Escape) finish(fns []*ir.Func) {
// Record parameter tags for package export data.
for _, fn := range fns {
fn.SetEsc(EscFuncTagged)
@ -1455,7 +1454,7 @@ func (e *Escape) finish(fns []ir.Node) {
if loc.escapes {
if n.Op() != ir.ONAME {
if base.Flag.LowerM != 0 {
base.WarnfAt(n.Pos(), "%S escapes to heap", n)
base.WarnfAt(n.Pos(), "%v escapes to heap", n)
}
if logopt.Enabled() {
logopt.LogOpt(n.Pos(), "escape", "escape", ir.FuncName(e.curfn))
@ -1465,7 +1464,7 @@ func (e *Escape) finish(fns []ir.Node) {
addrescapes(n)
} else {
if base.Flag.LowerM != 0 && n.Op() != ir.ONAME {
base.WarnfAt(n.Pos(), "%S does not escape", n)
base.WarnfAt(n.Pos(), "%v does not escape", n)
}
n.SetEsc(EscNone)
if loc.transient {
@ -1606,20 +1605,20 @@ const (
EscNever // By construction will not escape.
)
// funcSym returns fn.Func.Nname.Sym if no nils are encountered along the way.
func funcSym(fn ir.Node) *types.Sym {
if fn == nil || fn.Func().Nname == nil {
// funcSym returns fn.Nname.Sym if no nils are encountered along the way.
func funcSym(fn *ir.Func) *types.Sym {
if fn == nil || fn.Nname == nil {
return nil
}
return fn.Func().Nname.Sym()
return fn.Sym()
}
// Mark labels that have no backjumps to them as not increasing e.loopdepth.
// Walk hasn't generated (goto|label).Left.Sym.Label yet, so we'll cheat
// and set it to one of the following two. Then in esc we'll clear it again.
var (
looping = ir.Nod(ir.OXXX, nil, nil)
nonlooping = ir.Nod(ir.OXXX, nil, nil)
type labelState int
const (
looping labelState = 1 + iota
nonlooping
)
func isSliceSelfAssign(dst, src ir.Node) bool {
@ -1717,7 +1716,7 @@ func mayAffectMemory(n ir.Node) bool {
// We're ignoring things like division by zero, index out of range,
// and nil pointer dereference here.
switch n.Op() {
case ir.ONAME, ir.OCLOSUREVAR, ir.OLITERAL, ir.ONIL:
case ir.ONAME, ir.OCLOSUREREAD, ir.OLITERAL, ir.ONIL:
return false
// Left+Right group.
@ -1769,7 +1768,7 @@ func heapAllocReason(n ir.Node) string {
if !smallintconst(r) {
return "non-constant size"
}
if t := n.Type(); t.Elem().Width != 0 && r.Int64Val() >= maxImplicitStackVarSize/t.Elem().Width {
if t := n.Type(); t.Elem().Width != 0 && ir.Int64Val(r) >= maxImplicitStackVarSize/t.Elem().Width {
return "too large for stack"
}
}
@ -1790,6 +1789,7 @@ func addrescapes(n ir.Node) {
// Nothing to do.
case ir.ONAME:
n := n.(*ir.Name)
if n == nodfp {
break
}
@ -1801,8 +1801,8 @@ func addrescapes(n ir.Node) {
}
// If a closure reference escapes, mark the outer variable as escaping.
if n.Name().IsClosureVar() {
addrescapes(n.Name().Defn)
if n.IsClosureVar() {
addrescapes(n.Defn)
break
}
@ -1823,11 +1823,7 @@ func addrescapes(n ir.Node) {
// then we're analyzing the inner closure but we need to move x to the
// heap in f, not in the inner closure. Flip over to f before calling moveToHeap.
oldfn := Curfn
Curfn = n.Name().Curfn
if Curfn.Op() == ir.OCLOSURE {
Curfn = Curfn.Func().Decl
panic("can't happen")
}
Curfn = n.Curfn
ln := base.Pos
base.Pos = Curfn.Pos()
moveToHeap(n)
@ -1847,7 +1843,7 @@ func addrescapes(n ir.Node) {
}
// moveToHeap records the parameter or local variable n as moved to the heap.
func moveToHeap(n ir.Node) {
func moveToHeap(n *ir.Name) {
if base.Flag.LowerR != 0 {
ir.Dump("MOVE", n)
}
@ -1863,13 +1859,13 @@ func moveToHeap(n ir.Node) {
// temp will add it to the function declaration list automatically.
heapaddr := temp(types.NewPtr(n.Type()))
heapaddr.SetSym(lookup("&" + n.Sym().Name))
heapaddr.Orig().SetSym(heapaddr.Sym())
ir.Orig(heapaddr).SetSym(heapaddr.Sym())
heapaddr.SetPos(n.Pos())
// Unset AutoTemp to persist the &foo variable name through SSA to
// liveness analysis.
// TODO(mdempsky/drchase): Cleaner solution?
heapaddr.Name().SetAutoTemp(false)
heapaddr.SetAutoTemp(false)
// Parameters have a local stack copy used at function start/end
// in addition to the copy in the heap that may live longer than
@ -1887,24 +1883,24 @@ func moveToHeap(n ir.Node) {
stackcopy.SetType(n.Type())
stackcopy.SetOffset(n.Offset())
stackcopy.SetClass(n.Class())
stackcopy.Name().Param.Heapaddr = heapaddr
stackcopy.Heapaddr = heapaddr
if n.Class() == ir.PPARAMOUT {
// Make sure the pointer to the heap copy is kept live throughout the function.
// The function could panic at any point, and then a defer could recover.
// Thus, we need the pointer to the heap copy always available so the
// post-deferreturn code can copy the return value back to the stack.
// See issue 16095.
heapaddr.Name().SetIsOutputParamHeapAddr(true)
heapaddr.SetIsOutputParamHeapAddr(true)
}
n.Name().Param.Stackcopy = stackcopy
n.Stackcopy = stackcopy
// Substitute the stackcopy into the function variable list so that
// liveness and other analyses use the underlying stack slot
// and not the now-pseudo-variable n.
found := false
for i, d := range Curfn.Func().Dcl {
for i, d := range Curfn.Dcl {
if d == n {
Curfn.Func().Dcl[i] = stackcopy
Curfn.Dcl[i] = stackcopy
found = true
break
}
@ -1917,13 +1913,13 @@ func moveToHeap(n ir.Node) {
if !found {
base.Fatalf("cannot find %v in local variable list", n)
}
Curfn.Func().Dcl = append(Curfn.Func().Dcl, n)
Curfn.Dcl = append(Curfn.Dcl, n)
}
// Modify n in place so that uses of n now mean indirection of the heapaddr.
n.SetClass(ir.PAUTOHEAP)
n.SetOffset(0)
n.Name().Param.Heapaddr = heapaddr
n.Heapaddr = heapaddr
n.SetEsc(EscHeap)
if base.Flag.LowerM != 0 {
base.WarnfAt(n.Pos(), "moved to heap: %v", n)


@ -24,7 +24,7 @@ func exportf(bout *bio.Writer, format string, args ...interface{}) {
var asmlist []ir.Node
// exportsym marks n for export (or reexport).
func exportsym(n ir.Node) {
func exportsym(n *ir.Name) {
if n.Sym().OnExportList() {
return
}
@ -41,8 +41,8 @@ func initname(s string) bool {
return s == "init"
}
func autoexport(n ir.Node, ctxt ir.Class) {
if n.Sym().Pkg != ir.LocalPkg {
func autoexport(n *ir.Name, ctxt ir.Class) {
if n.Sym().Pkg != types.LocalPkg {
return
}
if (ctxt != ir.PEXTERN && ctxt != ir.PFUNC) || dclcontext != ir.PEXTERN {
@ -85,7 +85,7 @@ func importsym(ipkg *types.Pkg, s *types.Sym, op ir.Op) ir.Node {
base.Fatalf("missing ONONAME for %v\n", s)
}
n = dclname(s)
n = ir.NewDeclNameAt(src.NoXPos, s)
s.SetPkgDef(n)
s.Importdef = ipkg
}
@ -101,9 +101,7 @@ func importsym(ipkg *types.Pkg, s *types.Sym, op ir.Op) ir.Node {
func importtype(ipkg *types.Pkg, pos src.XPos, s *types.Sym) *types.Type {
n := importsym(ipkg, s, ir.OTYPE)
if n.Op() != ir.OTYPE {
t := types.New(types.TFORW)
t.Sym = s
t.Nod = n
t := types.NewNamed(n)
n.SetOp(ir.OTYPE)
n.SetPos(pos)
@ -161,8 +159,12 @@ func importfunc(ipkg *types.Pkg, pos src.XPos, s *types.Sym, t *types.Type) {
if n == nil {
return
}
name := n.(*ir.Name)
n.SetFunc(new(ir.Func))
fn := ir.NewFunc(pos)
fn.SetType(t)
name.SetFunc(fn)
fn.Nname = name
if base.Flag.E != 0 {
fmt.Printf("import func %v%S\n", s, t)
@ -200,7 +202,7 @@ func dumpasmhdr() {
if err != nil {
base.Fatalf("%v", err)
}
fmt.Fprintf(b, "// generated by compile -asmhdr from package %s\n\n", ir.LocalPkg.Name)
fmt.Fprintf(b, "// generated by compile -asmhdr from package %s\n\n", types.LocalPkg.Name)
for _, n := range asmlist {
if n.Sym().IsBlank() {
continue


@ -31,13 +31,13 @@ func sysvar(name string) *obj.LSym {
// isParamStackCopy reports whether this is the on-stack copy of a
// function parameter that moved to the heap.
func isParamStackCopy(n ir.Node) bool {
return n.Op() == ir.ONAME && (n.Class() == ir.PPARAM || n.Class() == ir.PPARAMOUT) && n.Name().Param.Heapaddr != nil
return n.Op() == ir.ONAME && (n.Class() == ir.PPARAM || n.Class() == ir.PPARAMOUT) && n.Name().Heapaddr != nil
}
// isParamHeapCopy reports whether this is the on-heap copy of
// a function parameter that moved to the heap.
func isParamHeapCopy(n ir.Node) bool {
return n.Op() == ir.ONAME && n.Class() == ir.PAUTOHEAP && n.Name().Param.Stackcopy != nil
return n.Op() == ir.ONAME && n.Class() == ir.PAUTOHEAP && n.Name().Stackcopy != nil
}
// autotmpname returns the name for an autotmp variable numbered n.
@ -52,7 +52,7 @@ func autotmpname(n int) string {
}
// make a new Node off the books
func tempAt(pos src.XPos, curfn ir.Node, t *types.Type) ir.Node {
func tempAt(pos src.XPos, curfn *ir.Func, t *types.Type) *ir.Name {
if curfn == nil {
base.Fatalf("no curfn for tempAt")
}
@ -65,24 +65,24 @@ func tempAt(pos src.XPos, curfn ir.Node, t *types.Type) ir.Node {
}
s := &types.Sym{
Name: autotmpname(len(curfn.Func().Dcl)),
Pkg: ir.LocalPkg,
Name: autotmpname(len(curfn.Dcl)),
Pkg: types.LocalPkg,
}
n := ir.NewNameAt(pos, s)
s.Def = n
n.SetType(t)
n.SetClass(ir.PAUTO)
n.SetEsc(EscNever)
n.Name().Curfn = curfn
n.Name().SetUsed(true)
n.Name().SetAutoTemp(true)
curfn.Func().Dcl = append(curfn.Func().Dcl, n)
n.Curfn = curfn
n.SetUsed(true)
n.SetAutoTemp(true)
curfn.Dcl = append(curfn.Dcl, n)
dowidth(t)
return n.Orig()
return n
}
func temp(t *types.Type) ir.Node {
func temp(t *types.Type) *ir.Name {
return tempAt(base.Pos, Curfn, t)
}


@ -37,7 +37,7 @@ var (
// isRuntimePkg reports whether p is package runtime.
func isRuntimePkg(p *types.Pkg) bool {
if base.Flag.CompilingRuntime && p == ir.LocalPkg {
if base.Flag.CompilingRuntime && p == types.LocalPkg {
return true
}
return p.Path == "runtime"
@ -45,7 +45,7 @@ func isRuntimePkg(p *types.Pkg) bool {
// isReflectPkg reports whether p is package reflect.
func isReflectPkg(p *types.Pkg) bool {
if p == ir.LocalPkg {
if p == types.LocalPkg {
return base.Ctxt.Pkgpath == "reflect"
}
return p.Path == "reflect"
@ -104,7 +104,7 @@ var gopkg *types.Pkg // pseudo-package for method symbols on anonymous receiver
var zerosize int64
var simtype [types.NTYPE]types.EType
var simtype [types.NTYPE]types.Kind
var (
isInt [types.NTYPE]bool
@ -132,9 +132,9 @@ var (
var xtop []ir.Node
var exportlist []ir.Node
var exportlist []*ir.Name
var importlist []ir.Node // imported functions and methods with inlinable bodies
var importlist []*ir.Func // imported functions and methods with inlinable bodies
var (
funcsymsmu sync.Mutex // protects funcsyms and associated package lookups (see func funcsym)
@ -143,7 +143,7 @@ var (
var dclcontext ir.Class // PEXTERN/PAUTO
var Curfn ir.Node
var Curfn *ir.Func
var Widthptr int
@ -158,7 +158,7 @@ var instrumenting bool
// Whether we are tracking lexical scopes for DWARF.
var trackScopes bool
var nodfp ir.Node
var nodfp *ir.Name
var autogeneratedPos src.XPos
@ -211,6 +211,7 @@ var (
growslice,
msanread,
msanwrite,
msanmove,
newobject,
newproc,
panicdivide,


@ -47,7 +47,7 @@ type Progs struct {
next *obj.Prog // next Prog
pc int64 // virtual PC; count of Progs
pos src.XPos // position to use for new Progs
curfn ir.Node // fn these Progs are for
curfn *ir.Func // fn these Progs are for
progcache []obj.Prog // local progcache
cacheidx int // first free element of progcache
@ -57,7 +57,7 @@ type Progs struct {
// newProgs returns a new Progs for fn.
// worker indicates which of the backend workers will use the Progs.
func newProgs(fn ir.Node, worker int) *Progs {
func newProgs(fn *ir.Func, worker int) *Progs {
pp := new(Progs)
if base.Ctxt.CanReuseProgs() {
sz := len(sharedProgArray) / base.Flag.LowerC
@ -174,17 +174,17 @@ func (pp *Progs) Appendpp(p *obj.Prog, as obj.As, ftype obj.AddrType, freg int16
return q
}
func (pp *Progs) settext(fn ir.Node) {
func (pp *Progs) settext(fn *ir.Func) {
if pp.Text != nil {
base.Fatalf("Progs.settext called twice")
}
ptxt := pp.Prog(obj.ATEXT)
pp.Text = ptxt
fn.Func().LSym.Func().Text = ptxt
fn.LSym.Func().Text = ptxt
ptxt.From.Type = obj.TYPE_MEM
ptxt.From.Name = obj.NAME_EXTERN
ptxt.From.Sym = fn.Func().LSym
ptxt.From.Sym = fn.LSym
}
// initLSym defines f's obj.LSym and initializes it based on the
@ -281,7 +281,7 @@ func initLSym(f *ir.Func, hasBody bool) {
// See test/recover.go for test cases and src/reflect/value.go
// for the actual functions being considered.
if base.Ctxt.Pkgpath == "reflect" {
switch f.Nname.Sym().Name {
switch f.Sym().Name {
case "callReflect", "callMethod":
flag |= obj.WRAPPER
}


@ -259,8 +259,8 @@ func iexport(out *bufio.Writer) {
p := iexporter{
allPkgs: map[*types.Pkg]bool{},
stringIndex: map[string]uint64{},
declIndex: map[ir.Node]uint64{},
inlineIndex: map[ir.Node]uint64{},
declIndex: map[*types.Sym]uint64{},
inlineIndex: map[*types.Sym]uint64{},
typIndex: map[*types.Type]uint64{},
}
@ -290,6 +290,10 @@ func iexport(out *bufio.Writer) {
w.writeIndex(p.inlineIndex, false)
w.flush()
if *base.Flag.LowerV {
fmt.Printf("export: hdr strings %v, data %v, index %v\n", p.strings.Len(), dataLen, p.data0.Len())
}
// Assemble header.
var hdr intWriter
hdr.WriteByte('i')
@ -310,37 +314,34 @@ func iexport(out *bufio.Writer) {
out.Write(base.Ctxt.Fingerprint[:])
}
// writeIndex writes out an object index. mainIndex indicates whether
// writeIndex writes out a symbol index. mainIndex indicates whether
// we're writing out the main index, which is also read by
// non-compiler tools and includes a complete package description
// (i.e., name and height).
func (w *exportWriter) writeIndex(index map[ir.Node]uint64, mainIndex bool) {
// Build a map from packages to objects from that package.
pkgObjs := map[*types.Pkg][]ir.Node{}
func (w *exportWriter) writeIndex(index map[*types.Sym]uint64, mainIndex bool) {
// Build a map from packages to symbols from that package.
pkgSyms := map[*types.Pkg][]*types.Sym{}
// For the main index, make sure to include every package that
// we reference, even if we're not exporting (or reexporting)
// any symbols from it.
if mainIndex {
pkgObjs[ir.LocalPkg] = nil
pkgSyms[types.LocalPkg] = nil
for pkg := range w.p.allPkgs {
pkgObjs[pkg] = nil
pkgSyms[pkg] = nil
}
}
for n := range index {
pkgObjs[n.Sym().Pkg] = append(pkgObjs[n.Sym().Pkg], n)
// Group symbols by package.
for sym := range index {
pkgSyms[sym.Pkg] = append(pkgSyms[sym.Pkg], sym)
}
// Sort packages by path.
var pkgs []*types.Pkg
for pkg, objs := range pkgObjs {
for pkg := range pkgSyms {
pkgs = append(pkgs, pkg)
sort.Slice(objs, func(i, j int) bool {
return objs[i].Sym().Name < objs[j].Sym().Name
})
}
sort.Slice(pkgs, func(i, j int) bool {
return pkgs[i].Path < pkgs[j].Path
})
@ -353,11 +354,16 @@ func (w *exportWriter) writeIndex(index map[ir.Node]uint64, mainIndex bool) {
w.uint64(uint64(pkg.Height))
}
objs := pkgObjs[pkg]
w.uint64(uint64(len(objs)))
for _, n := range objs {
w.string(n.Sym().Name)
w.uint64(index[n])
// Sort symbols within a package by name.
syms := pkgSyms[pkg]
sort.Slice(syms, func(i, j int) bool {
return syms[i].Name < syms[j].Name
})
w.uint64(uint64(len(syms)))
for _, sym := range syms {
w.string(sym.Name)
w.uint64(index[sym])
}
}
}
@ -368,14 +374,14 @@ type iexporter struct {
// main index.
allPkgs map[*types.Pkg]bool
declTodo ir.NodeQueue
declTodo ir.NameQueue
strings intWriter
stringIndex map[string]uint64
data0 intWriter
declIndex map[ir.Node]uint64
inlineIndex map[ir.Node]uint64
declIndex map[*types.Sym]uint64
inlineIndex map[*types.Sym]uint64
typIndex map[*types.Type]uint64
}
@ -387,6 +393,10 @@ func (p *iexporter) stringOff(s string) uint64 {
off = uint64(p.strings.Len())
p.stringIndex[s] = off
if *base.Flag.LowerV {
fmt.Printf("export: str %v %.40q\n", off, s)
}
p.strings.uint64(uint64(len(s)))
p.strings.WriteString(s)
}
@ -394,21 +404,21 @@ func (p *iexporter) stringOff(s string) uint64 {
}
// pushDecl adds n to the declaration work queue, if not already present.
func (p *iexporter) pushDecl(n ir.Node) {
if n.Sym() == nil || ir.AsNode(n.Sym().Def) != n && n.Op() != ir.OTYPE {
func (p *iexporter) pushDecl(n *ir.Name) {
if n.Sym() == nil || n.Sym().Def != n && n.Op() != ir.OTYPE {
base.Fatalf("weird Sym: %v, %v", n, n.Sym())
}
// Don't export predeclared declarations.
if n.Sym().Pkg == ir.BuiltinPkg || n.Sym().Pkg == unsafepkg {
if n.Sym().Pkg == types.BuiltinPkg || n.Sym().Pkg == unsafepkg {
return
}
if _, ok := p.declIndex[n]; ok {
if _, ok := p.declIndex[n.Sym()]; ok {
return
}
p.declIndex[n] = ^uint64(0) // mark n present in work queue
p.declIndex[n.Sym()] = ^uint64(0) // mark n present in work queue
p.declTodo.PushRight(n)
}
@ -423,7 +433,7 @@ type exportWriter struct {
prevColumn int64
}
func (p *iexporter) doDecl(n ir.Node) {
func (p *iexporter) doDecl(n *ir.Name) {
w := p.newWriter()
w.setPkg(n.Sym().Pkg, false)
@ -454,7 +464,8 @@ func (p *iexporter) doDecl(n ir.Node) {
case ir.OLITERAL:
// Constant.
n = typecheck(n, ctxExpr)
// TODO(mdempsky): Do we still need this typecheck? If so, why?
n = typecheck(n, ctxExpr).(*ir.Name)
w.tag('C')
w.pos(n.Pos())
w.value(n.Type(), n.Val())
@ -472,15 +483,15 @@ func (p *iexporter) doDecl(n ir.Node) {
w.tag('T')
w.pos(n.Pos())
underlying := n.Type().Orig
if underlying == types.Errortype.Orig {
underlying := n.Type().Underlying()
if underlying == types.ErrorType.Underlying() {
// For "type T error", use error as the
// underlying type instead of error's own
// underlying anonymous interface. This
// ensures consistency with how importers may
// declare error (e.g., go/types uses nil Pkg
// for predeclared objects).
underlying = types.Errortype
underlying = types.ErrorType
}
w.typ(underlying)
@ -508,20 +519,28 @@ func (p *iexporter) doDecl(n ir.Node) {
base.Fatalf("unexpected node: %v", n)
}
p.declIndex[n] = w.flush()
w.finish("dcl", p.declIndex, n.Sym())
}
func (w *exportWriter) tag(tag byte) {
w.data.WriteByte(tag)
}
func (p *iexporter) doInline(f ir.Node) {
func (w *exportWriter) finish(what string, index map[*types.Sym]uint64, sym *types.Sym) {
off := w.flush()
if *base.Flag.LowerV {
fmt.Printf("export: %v %v %v\n", what, off, sym)
}
index[sym] = off
}
func (p *iexporter) doInline(f *ir.Name) {
w := p.newWriter()
w.setPkg(fnpkg(f), false)
w.stmtList(ir.AsNodes(f.Func().Inl.Body))
p.inlineIndex[f] = w.flush()
w.finish("inl", p.inlineIndex, f.Sym())
}
func (w *exportWriter) pos(pos src.XPos) {
@ -572,7 +591,7 @@ func (w *exportWriter) pkg(pkg *types.Pkg) {
func (w *exportWriter) qualifiedIdent(n ir.Node) {
// Ensure any referenced declarations are written out too.
w.p.pushDecl(n)
w.p.pushDecl(n.Name())
s := n.Sym()
w.string(s.Name)
@ -593,7 +612,7 @@ func (w *exportWriter) selector(s *types.Sym) {
} else {
pkg := w.currPkg
if types.IsExported(name) {
pkg = ir.LocalPkg
pkg = types.LocalPkg
}
if s.Pkg != pkg {
base.Fatalf("package mismatch in selector: %v in package %q, but want %q", s, s.Pkg.Path, pkg.Path)
@ -622,7 +641,11 @@ func (p *iexporter) typOff(t *types.Type) uint64 {
if !ok {
w := p.newWriter()
w.doTyp(t)
off = predeclReserved + w.flush()
rawOff := w.flush()
if *base.Flag.LowerV {
fmt.Printf("export: typ %v %v\n", rawOff, t)
}
off = predeclReserved + rawOff
p.typIndex[t] = off
}
return off
@ -633,17 +656,17 @@ func (w *exportWriter) startType(k itag) {
}
func (w *exportWriter) doTyp(t *types.Type) {
if t.Sym != nil {
if t.Sym.Pkg == ir.BuiltinPkg || t.Sym.Pkg == unsafepkg {
if t.Sym() != nil {
if t.Sym().Pkg == types.BuiltinPkg || t.Sym().Pkg == unsafepkg {
base.Fatalf("builtin type missing from typIndex: %v", t)
}
w.startType(definedType)
w.qualifiedIdent(typenod(t))
w.qualifiedIdent(t.Obj().(*ir.Name))
return
}
switch t.Etype {
switch t.Kind() {
case types.TPTR:
w.startType(pointerType)
w.typ(t.Elem())
@ -717,10 +740,8 @@ func (w *exportWriter) doTyp(t *types.Type) {
}
func (w *exportWriter) setPkg(pkg *types.Pkg, write bool) {
if pkg == nil {
// TODO(mdempsky): Proactively set Pkg for types and
// remove this fallback logic.
pkg = ir.LocalPkg
if pkg == types.NoPkg {
base.Fatalf("missing pkg")
}
if write {
@ -747,7 +768,7 @@ func (w *exportWriter) paramList(fs []*types.Field) {
func (w *exportWriter) param(f *types.Field) {
w.pos(f.Pos)
w.localIdent(ir.OrigSym(f.Sym), 0)
w.localIdent(types.OrigSym(f.Sym), 0)
w.typ(f.Type)
}
@ -761,7 +782,7 @@ func constTypeOf(typ *types.Type) constant.Kind {
return constant.Complex
}
switch typ.Etype {
switch typ.Kind() {
case types.TBOOL:
return constant.Bool
case types.TSTRING:
@ -808,7 +829,7 @@ func intSize(typ *types.Type) (signed bool, maxBytes uint) {
return true, Mpprec / 8
}
switch typ.Etype {
switch typ.Kind() {
case types.TFLOAT32, types.TCOMPLEX64:
return true, 3
case types.TFLOAT64, types.TCOMPLEX128:
@ -820,7 +841,7 @@ func intSize(typ *types.Type) (signed bool, maxBytes uint) {
// The go/types API doesn't expose sizes to importers, so they
// don't know how big these types are.
switch typ.Etype {
switch typ.Kind() {
case types.TINT, types.TUINT, types.TUINTPTR:
maxBytes = 8
}
@ -960,7 +981,7 @@ func (w *exportWriter) varExt(n ir.Node) {
w.symIdx(n.Sym())
}
func (w *exportWriter) funcExt(n ir.Node) {
func (w *exportWriter) funcExt(n *ir.Name) {
w.linkname(n.Sym())
w.symIdx(n.Sym())
@ -979,14 +1000,7 @@ func (w *exportWriter) funcExt(n ir.Node) {
}
// Endlineno for inlined function.
if n.Name().Defn != nil {
w.pos(n.Name().Defn.Func().Endlineno)
} else {
// When the exported node was defined externally,
// e.g. io exports atomic.(*Value).Load or bytes exports errors.New.
// Keep it as we don't distinguish this case in iimport.go.
w.pos(n.Func().Endlineno)
}
w.pos(n.Func().Endlineno)
} else {
w.uint64(0)
}
@ -994,7 +1008,7 @@ func (w *exportWriter) funcExt(n ir.Node) {
func (w *exportWriter) methExt(m *types.Field) {
w.bool(m.Nointerface())
w.funcExt(ir.AsNode(m.Nname))
w.funcExt(m.Nname.(*ir.Name))
}
func (w *exportWriter) linkname(s *types.Sym) {
@ -1056,15 +1070,23 @@ func (w *exportWriter) stmt(n ir.Node) {
}
switch op := n.Op(); op {
case ir.OBLOCK:
// No OBLOCK in export data.
// Inline content into this statement list,
// like the init list above.
// (At the moment neither the parser nor the typechecker
// generate OBLOCK nodes except to denote an empty
// function body, although that may change.)
for _, n := range n.List().Slice() {
w.stmt(n)
}
case ir.ODCL:
w.op(ir.ODCL)
w.pos(n.Left().Pos())
w.localName(n.Left())
w.typ(n.Left().Type())
// case ODCLFIELD:
// unimplemented - handled by default case
case ir.OAS:
// Don't export "v = <N>" initializing statements, hope they're always
// preceded by the DCL which will be re-parsed and typecheck to reproduce
@ -1085,18 +1107,12 @@ func (w *exportWriter) stmt(n ir.Node) {
w.expr(n.Right())
}
case ir.OAS2:
case ir.OAS2, ir.OAS2DOTTYPE, ir.OAS2FUNC, ir.OAS2MAPR, ir.OAS2RECV:
w.op(ir.OAS2)
w.pos(n.Pos())
w.exprList(n.List())
w.exprList(n.Rlist())
case ir.OAS2DOTTYPE, ir.OAS2FUNC, ir.OAS2MAPR, ir.OAS2RECV:
w.op(ir.OAS2)
w.pos(n.Pos())
w.exprList(n.List())
w.exprList(ir.AsNodes([]ir.Node{n.Right()}))
case ir.ORETURN:
w.op(ir.ORETURN)
w.pos(n.Pos())
@ -1146,18 +1162,14 @@ func (w *exportWriter) stmt(n ir.Node) {
w.op(ir.OFALL)
w.pos(n.Pos())
case ir.OBREAK, ir.OCONTINUE:
case ir.OBREAK, ir.OCONTINUE, ir.OGOTO, ir.OLABEL:
w.op(op)
w.pos(n.Pos())
w.exprsOrNil(n.Left(), nil)
case ir.OEMPTY:
// nothing to emit
case ir.OGOTO, ir.OLABEL:
w.op(op)
w.pos(n.Pos())
w.string(n.Sym().Name)
label := ""
if sym := n.Sym(); sym != nil {
label = sym.Name
}
w.string(label)
default:
base.Fatalf("exporter: CANNOT EXPORT: %v\nPlease notify gri@\n", n.Op())
@ -1211,11 +1223,7 @@ func (w *exportWriter) expr(n ir.Node) {
if !n.Type().HasNil() {
base.Fatalf("unexpected type for nil: %v", n.Type())
}
if n.Orig() != nil && n.Orig() != n {
w.expr(n.Orig())
break
}
w.op(ir.OLITERAL)
w.op(ir.ONIL)
w.pos(n.Pos())
w.typ(n.Type())
@ -1304,8 +1312,7 @@ func (w *exportWriter) expr(n ir.Node) {
w.op(ir.OXDOT)
w.pos(n.Pos())
w.expr(n.Left())
// Right node should be ONAME
w.selector(n.Right().Sym())
w.selector(n.Sym())
case ir.OXDOT, ir.ODOT, ir.ODOTPTR, ir.ODOTINTER, ir.ODOTMETH:
w.op(ir.OXDOT)
@ -1464,7 +1471,7 @@ func (w *exportWriter) localName(n ir.Node) {
// PPARAM/PPARAMOUT, because we only want to include vargen in
// non-param names.
var v int32
if n.Class() == ir.PAUTO || (n.Class() == ir.PAUTOHEAP && n.Name().Param.Stackcopy == nil) {
if n.Class() == ir.PAUTO || (n.Class() == ir.PAUTOHEAP && n.Name().Stackcopy == nil) {
v = n.Name().Vargen
}


@ -41,7 +41,7 @@ var (
inlineImporter = map[*types.Sym]iimporterAndOffset{}
)
func expandDecl(n ir.Node) {
func expandDecl(n *ir.Name) {
if n.Op() != ir.ONONAME {
return
}
@ -55,12 +55,12 @@ func expandDecl(n ir.Node) {
r.doDecl(n)
}
func expandInline(fn ir.Node) {
if fn.Func().Inl.Body != nil {
func expandInline(fn *ir.Func) {
if fn.Inl.Body != nil {
return
}
r := importReaderFor(fn, inlineImporter)
r := importReaderFor(fn.Nname, inlineImporter)
if r == nil {
base.Fatalf("missing import reader for %v", fn)
}
@ -68,7 +68,7 @@ func expandInline(fn ir.Node) {
r.doInline(fn)
}
func importReaderFor(n ir.Node, importers map[*types.Sym]iimporterAndOffset) *importReader {
func importReaderFor(n *ir.Name, importers map[*types.Sym]iimporterAndOffset) *importReader {
x, ok := importers[n.Sym()]
if !ok {
return nil
@ -148,7 +148,7 @@ func iimport(pkg *types.Pkg, in *bio.Reader) (fingerprint goobj.FingerprintType)
if pkg.Name == "" {
pkg.Name = pkgName
pkg.Height = pkgHeight
ir.NumImport[pkgName]++
types.NumImport[pkgName]++
// TODO(mdempsky): This belongs somewhere else.
pkg.Lookup("_").Def = ir.BlankNode
@ -175,7 +175,7 @@ func iimport(pkg *types.Pkg, in *bio.Reader) (fingerprint goobj.FingerprintType)
if s.Def != nil {
base.Fatalf("unexpected definition for %v: %v", s, ir.AsNode(s.Def))
}
s.Def = npos(src.NoXPos, dclname(s))
s.Def = ir.NewDeclNameAt(src.NoXPos, s)
}
}
@ -316,7 +316,7 @@ func (r *importReader) doDecl(n ir.Node) {
// after the underlying type has been assigned.
defercheckwidth()
underlying := r.typ()
setUnderlying(t, underlying)
t.SetUnderlying(underlying)
resumecheckwidth()
if underlying.IsInterface() {
@ -331,7 +331,9 @@ func (r *importReader) doDecl(n ir.Node) {
recv := r.param()
mtyp := r.signature(recv)
m := newfuncnamel(mpos, methodSym(recv.Type, msym), new(ir.Func))
fn := ir.NewFunc(mpos)
fn.SetType(mtyp)
m := newFuncNameAt(mpos, methodSym(recv.Type, msym), fn)
m.SetType(mtyp)
m.SetClass(ir.PFUNC)
// methodSym already marked m.Sym as a function.
@ -435,7 +437,7 @@ func (r *importReader) ident() *types.Sym {
}
pkg := r.currPkg
if types.IsExported(name) {
pkg = ir.LocalPkg
pkg = types.LocalPkg
}
return pkg.Lookup(name)
}
@ -501,7 +503,7 @@ func (r *importReader) typ1() *types.Type {
// type.
n := ir.AsNode(r.qualifiedIdent().PkgDef())
if n.Op() == ir.ONONAME {
expandDecl(n)
expandDecl(n.(*ir.Name))
}
if n.Op() != ir.OTYPE {
base.Fatalf("expected OTYPE, got %v: %v, %v", n.Op(), n.Sym(), n)
@ -543,10 +545,7 @@ func (r *importReader) typ1() *types.Type {
fs[i] = f
}
t := types.New(types.TSTRUCT)
t.SetPkg(r.currPkg)
t.SetFields(fs)
return t
return types.NewStruct(r.currPkg, fs)
case interfaceType:
r.setPkg()
@ -568,9 +567,7 @@ func (r *importReader) typ1() *types.Type {
methods[i] = types.NewField(pos, sym, typ)
}
t := types.New(types.TINTER)
t.SetPkg(r.currPkg)
t.SetInterface(append(embeddeds, methods...))
t := types.NewInterface(r.currPkg, append(embeddeds, methods...))
// Ensure we expand the interface in the frontend (#25055).
checkwidth(t)
@ -588,9 +585,7 @@ func (r *importReader) signature(recv *types.Field) *types.Type {
if n := len(params); n > 0 {
params[n-1].SetIsDDD(r.bool())
}
t := functypefield(recv, params, results)
t.SetPkg(r.currPkg)
return t
return types.NewSignature(r.currPkg, recv, params, results)
}
func (r *importReader) paramList() []*types.Field {
@ -695,12 +690,12 @@ func (r *importReader) typeExt(t *types.Type) {
// so we can use index to reference the symbol.
var typeSymIdx = make(map[*types.Type][2]int64)
func (r *importReader) doInline(n ir.Node) {
if len(n.Func().Inl.Body) != 0 {
base.Fatalf("%v already has inline body", n)
func (r *importReader) doInline(fn *ir.Func) {
if len(fn.Inl.Body) != 0 {
base.Fatalf("%v already has inline body", fn)
}
funchdr(n)
funchdr(fn)
body := r.stmtList()
funcbody()
if body == nil {
@ -712,15 +707,15 @@ func (r *importReader) doInline(n ir.Node) {
// functions).
body = []ir.Node{}
}
n.Func().Inl.Body = body
fn.Inl.Body = body
importlist = append(importlist, n)
importlist = append(importlist, fn)
if base.Flag.E > 0 && base.Flag.LowerM > 2 {
if base.Flag.LowerM > 3 {
fmt.Printf("inl body for %v %#v: %+v\n", n, n.Type(), ir.AsNodes(n.Func().Inl.Body))
fmt.Printf("inl body for %v %v: %+v\n", fn, fn.Type(), ir.AsNodes(fn.Inl.Body))
} else {
fmt.Printf("inl body for %v %#v: %v\n", n, n.Type(), ir.AsNodes(n.Func().Inl.Body))
fmt.Printf("inl body for %v %v: %v\n", fn, fn.Type(), ir.AsNodes(fn.Inl.Body))
}
}
}
@ -747,7 +742,9 @@ func (r *importReader) stmtList() []ir.Node {
if n == nil {
break
}
// OBLOCK nodes may be created when importing ODCL nodes - unpack them
// OBLOCK nodes are not written to the import data directly,
// but the handling of ODCL calls liststmt, which creates one.
// Inline them into the statement list.
if n.Op() == ir.OBLOCK {
list = append(list, n.List().Slice()...)
} else {
@ -772,7 +769,7 @@ func (r *importReader) caseList(sw ir.Node) []ir.Node {
caseVar := ir.NewNameAt(cas.Pos(), r.ident())
declare(caseVar, dclcontext)
cas.PtrRlist().Set1(caseVar)
caseVar.Name().Defn = sw.Left()
caseVar.Defn = sw.Left()
}
cas.PtrBody().Set(r.stmtList())
cases[i] = cas
@ -807,20 +804,19 @@ func (r *importReader) node() ir.Node {
// case OPAREN:
// unreachable - unpacked by exporter
// case ONIL:
// unreachable - mapped to OLITERAL
case ir.ONIL:
pos := r.pos()
typ := r.typ()
n := npos(pos, nodnil())
n.SetType(typ)
return n
case ir.OLITERAL:
pos := r.pos()
typ := r.typ()
var n ir.Node
if typ.HasNil() {
n = nodnil()
} else {
n = ir.NewLiteral(r.value(typ))
}
n = npos(pos, n)
n := npos(pos, ir.NewLiteral(r.value(typ)))
n.SetType(typ)
return n
@ -834,16 +830,16 @@ func (r *importReader) node() ir.Node {
// unreachable - should have been resolved by typechecking
case ir.OTYPE:
return typenod(r.typ())
return ir.TypeNode(r.typ())
case ir.OTYPESW:
n := ir.NodAt(r.pos(), ir.OTYPESW, nil, nil)
pos := r.pos()
var tag *ir.Ident
if s := r.ident(); s != nil {
n.SetLeft(npos(n.Pos(), newnoname(s)))
tag = ir.NewIdent(pos, s)
}
right, _ := r.exprsOrNil()
n.SetRight(right)
return n
expr, _ := r.exprsOrNil()
return ir.NewTypeSwitchGuard(pos, tag, expr)
// case OTARRAY, OTMAP, OTCHAN, OTSTRUCT, OTINTER, OTFUNC:
// unreachable - should have been resolved by typechecking
@ -858,7 +854,7 @@ func (r *importReader) node() ir.Node {
// TODO(mdempsky): Export position information for OSTRUCTKEY nodes.
savedlineno := base.Pos
base.Pos = r.pos()
n := ir.NodAt(base.Pos, ir.OCOMPLIT, nil, typenod(r.typ()))
n := ir.NodAt(base.Pos, ir.OCOMPLIT, nil, ir.TypeNode(r.typ()))
n.PtrList().Set(r.elemList()) // special handling of field names
base.Pos = savedlineno
return n
@ -867,7 +863,7 @@ func (r *importReader) node() ir.Node {
// unreachable - mapped to case OCOMPLIT below by exporter
case ir.OCOMPLIT:
n := ir.NodAt(r.pos(), ir.OCOMPLIT, nil, typenod(r.typ()))
n := ir.NodAt(r.pos(), ir.OCOMPLIT, nil, ir.TypeNode(r.typ()))
n.PtrList().Set(r.exprList())
return n
@ -942,7 +938,7 @@ func (r *importReader) node() ir.Node {
case ir.OMAKEMAP, ir.OMAKECHAN, ir.OMAKESLICE:
n := npos(r.pos(), builtinCall(ir.OMAKE))
n.PtrList().Append(typenod(r.typ()))
n.PtrList().Append(ir.TypeNode(r.typ()))
n.PtrList().Append(r.exprList()...)
return n
@ -968,13 +964,10 @@ func (r *importReader) node() ir.Node {
// statements
case ir.ODCL:
pos := r.pos()
lhs := npos(pos, dclname(r.ident()))
typ := typenod(r.typ())
lhs := ir.NewDeclNameAt(pos, r.ident())
typ := ir.TypeNode(r.typ())
return npos(pos, liststmt(variter([]ir.Node{lhs}, typ, nil))) // TODO(gri) avoid list creation
// case ODCLFIELD:
// unimplemented
// case OAS, OASWB:
// unreachable - mapped to OAS case below by exporter
@ -1052,20 +1045,14 @@ func (r *importReader) node() ir.Node {
n := ir.NodAt(r.pos(), ir.OFALL, nil, nil)
return n
case ir.OBREAK, ir.OCONTINUE:
pos := r.pos()
left, _ := r.exprsOrNil()
if left != nil {
left = NewName(left.Sym())
}
return ir.NodAt(pos, op, left, nil)
// case OEMPTY:
// unreachable - not emitted by exporter
case ir.OGOTO, ir.OLABEL:
case ir.OBREAK, ir.OCONTINUE, ir.OGOTO, ir.OLABEL:
n := ir.NodAt(r.pos(), op, nil, nil)
n.SetSym(lookup(r.string()))
if label := r.string(); label != "" {
n.SetSym(lookup(label))
}
return n
case ir.OEND:


@ -19,7 +19,7 @@ var renameinitgen int
// Function collecting autotmps generated during typechecking,
// to be included in the package-level init function.
var initTodo = ir.Nod(ir.ODCLFUNC, nil, nil)
var initTodo = ir.NewFunc(base.Pos)
func renameinit() *types.Sym {
s := lookupN("init.", renameinitgen)
@ -27,6 +27,9 @@ func renameinit() *types.Sym {
return s
}
// List of imported packages, in source code order. See #31636.
var sourceOrderImports []*types.Pkg
// fninit makes an initialization record for the package.
// See runtime/proc.go:initTask for its layout.
// The 3 tasks for initialization are:
@ -40,32 +43,39 @@ func fninit(n []ir.Node) {
var fns []*obj.LSym // functions to call for package initialization
// Find imported packages with init tasks.
for _, s := range types.InitSyms {
deps = append(deps, s.Linksym())
for _, pkg := range sourceOrderImports {
n := resolve(ir.AsNode(pkg.Lookup(".inittask").Def))
if n == nil {
continue
}
if n.Op() != ir.ONAME || n.Class() != ir.PEXTERN {
base.Fatalf("bad inittask: %v", n)
}
deps = append(deps, n.Sym().Linksym())
}
// Make a function that contains all the initialization statements.
if len(nf) > 0 {
base.Pos = nf[0].Pos() // prolog/epilog gets line number of first init stmt
initializers := lookup("init")
fn := dclfunc(initializers, ir.Nod(ir.OTFUNC, nil, nil))
for _, dcl := range initTodo.Func().Dcl {
dcl.Name().Curfn = fn
fn := dclfunc(initializers, ir.NewFuncType(base.Pos, nil, nil, nil))
for _, dcl := range initTodo.Dcl {
dcl.Curfn = fn
}
fn.Func().Dcl = append(fn.Func().Dcl, initTodo.Func().Dcl...)
initTodo.Func().Dcl = nil
fn.Dcl = append(fn.Dcl, initTodo.Dcl...)
initTodo.Dcl = nil
fn.PtrBody().Set(nf)
funcbody()
fn = typecheck(fn, ctxStmt)
typecheckFunc(fn)
Curfn = fn
typecheckslice(nf, ctxStmt)
Curfn = nil
xtop = append(xtop, fn)
fns = append(fns, initializers.Linksym())
}
if initTodo.Func().Dcl != nil {
if initTodo.Dcl != nil {
// We only generate temps using initTodo if there
// are package-scope initialization statements, so
// something's weird if we get here.
@ -78,13 +88,15 @@ func fninit(n []ir.Node) {
s := lookupN("init.", i)
fn := ir.AsNode(s.Def).Name().Defn
// Skip init functions with empty bodies.
if fn.Body().Len() == 1 && fn.Body().First().Op() == ir.OEMPTY {
continue
if fn.Body().Len() == 1 {
if stmt := fn.Body().First(); stmt.Op() == ir.OBLOCK && stmt.List().Len() == 0 {
continue
}
}
fns = append(fns, s.Linksym())
}
if len(deps) == 0 && len(fns) == 0 && ir.LocalPkg.Name != "main" && ir.LocalPkg.Name != "runtime" {
if len(deps) == 0 && len(fns) == 0 && types.LocalPkg.Name != "main" && types.LocalPkg.Name != "runtime" {
return // nothing to initialize
}


@ -110,7 +110,7 @@ func initOrder(l []ir.Node) []ir.Node {
// first.
base.ExitIfErrors()
findInitLoopAndExit(firstLHS(n), new([]ir.Node))
findInitLoopAndExit(firstLHS(n), new([]*ir.Name))
base.Fatalf("initialization unfinished, but failed to identify loop")
}
}
@ -136,7 +136,7 @@ func (o *InitOrder) processAssign(n ir.Node) {
// Compute number of variable dependencies and build the
// inverse dependency ("blocking") graph.
for dep := range collectDeps(n, true) {
defn := dep.Name().Defn
defn := dep.Defn
// Skip dependencies on functions (PFUNC) and
// variables already initialized (InitDone).
if dep.Class() != ir.PEXTERN || defn.Initorder() == InitDone {
@ -183,7 +183,7 @@ func (o *InitOrder) flushReady(initialize func(ir.Node)) {
// path points to a slice used for tracking the sequence of
// variables/functions visited. Using a pointer to a slice allows the
// slice capacity to grow and limit reallocations.
func findInitLoopAndExit(n ir.Node, path *[]ir.Node) {
func findInitLoopAndExit(n *ir.Name, path *[]*ir.Name) {
// We implement a simple DFS loop-finding algorithm. This
// could be faster, but initialization cycles are rare.
@ -196,14 +196,14 @@ func findInitLoopAndExit(n ir.Node, path *[]ir.Node) {
// There might be multiple loops involving n; by sorting
// references, we deterministically pick the one reported.
refers := collectDeps(n.Name().Defn, false).Sorted(func(ni, nj ir.Node) bool {
refers := collectDeps(n.Name().Defn, false).Sorted(func(ni, nj *ir.Name) bool {
return ni.Pos().Before(nj.Pos())
})
*path = append(*path, n)
for _, ref := range refers {
// Short-circuit variables that were initialized.
if ref.Class() == ir.PEXTERN && ref.Name().Defn.Initorder() == InitDone {
if ref.Class() == ir.PEXTERN && ref.Defn.Initorder() == InitDone {
continue
}
@ -215,7 +215,7 @@ func findInitLoopAndExit(n ir.Node, path *[]ir.Node) {
// reportInitLoopAndExit reports an initialization loop as an error
// and exits. However, if l is not actually an initialization loop, it
// simply returns instead.
func reportInitLoopAndExit(l []ir.Node) {
func reportInitLoopAndExit(l []*ir.Name) {
// Rotate loop so that the earliest variable declaration is at
// the start.
i := -1
@ -250,13 +250,13 @@ func reportInitLoopAndExit(l []ir.Node) {
// variables that declaration n depends on. If transitive is true,
// then it also includes the transitive dependencies of any depended
// upon functions (but not variables).
func collectDeps(n ir.Node, transitive bool) ir.NodeSet {
func collectDeps(n ir.Node, transitive bool) ir.NameSet {
d := initDeps{transitive: transitive}
switch n.Op() {
case ir.OAS:
d.inspect(n.Right())
case ir.OAS2DOTTYPE, ir.OAS2FUNC, ir.OAS2MAPR, ir.OAS2RECV:
d.inspect(n.Right())
d.inspect(n.Rlist().First())
case ir.ODCLFUNC:
d.inspectList(n.Body())
default:
@ -267,7 +267,7 @@ func collectDeps(n ir.Node, transitive bool) ir.NodeSet {
type initDeps struct {
transitive bool
seen ir.NodeSet
seen ir.NameSet
}
func (d *initDeps) inspect(n ir.Node) { ir.Inspect(n, d.visit) }
@ -284,11 +284,11 @@ func (d *initDeps) visit(n ir.Node) bool {
case ir.ONAME:
switch n.Class() {
case ir.PEXTERN, ir.PFUNC:
d.foundDep(n)
d.foundDep(n.(*ir.Name))
}
case ir.OCLOSURE:
d.inspectList(n.Func().Decl.Body())
d.inspectList(n.Func().Body())
case ir.ODOTMETH, ir.OCALLPART:
d.foundDep(methodExprName(n))
@ -299,7 +299,7 @@ func (d *initDeps) visit(n ir.Node) bool {
// foundDep records that we've found a dependency on n by adding it to
// seen.
func (d *initDeps) foundDep(n ir.Node) {
func (d *initDeps) foundDep(n *ir.Name) {
// Can happen with method expressions involving interface
// types; e.g., fixedbugs/issue4495.go.
if n == nil {
@ -308,7 +308,7 @@ func (d *initDeps) foundDep(n ir.Node) {
// Names without definitions aren't interesting as far as
// initialization ordering goes.
if n.Name().Defn == nil {
if n.Defn == nil {
return
}
@ -317,7 +317,7 @@ func (d *initDeps) foundDep(n ir.Node) {
}
d.seen.Add(n)
if d.transitive && n.Class() == ir.PFUNC {
d.inspectList(n.Name().Defn.Body())
d.inspectList(n.Defn.Body())
}
}
@ -345,12 +345,12 @@ func (s *declOrder) Pop() interface{} {
// firstLHS returns the first expression on the left-hand side of
// assignment n.
func firstLHS(n ir.Node) ir.Node {
func firstLHS(n ir.Node) *ir.Name {
switch n.Op() {
case ir.OAS:
return n.Left()
return n.Left().Name()
case ir.OAS2DOTTYPE, ir.OAS2FUNC, ir.OAS2RECV, ir.OAS2MAPR:
return n.List().First()
return n.List().First().Name()
}
base.Fatalf("unexpected Op: %v", n.Op())

File diff suppressed because it is too large


@ -43,6 +43,9 @@ func hidePanic() {
// about a panic too; let the user clean up
// the code and try again.
if err := recover(); err != nil {
if err == "-h" {
panic(err)
}
base.ErrorExit()
}
}
@ -74,17 +77,17 @@ func Main(archInit func(*Arch)) {
// See bugs 31188 and 21945 (CLs 170638, 98075, 72371).
base.Ctxt.UseBASEntries = base.Ctxt.Headtype != objabi.Hdarwin
ir.LocalPkg = types.NewPkg("", "")
ir.LocalPkg.Prefix = "\"\""
types.LocalPkg = types.NewPkg("", "")
types.LocalPkg.Prefix = "\"\""
// We won't know localpkg's height until after import
// processing. In the mean time, set to MaxPkgHeight to ensure
// height comparisons at least work until then.
ir.LocalPkg.Height = types.MaxPkgHeight
types.LocalPkg.Height = types.MaxPkgHeight
// pseudo-package, for scoping
ir.BuiltinPkg = types.NewPkg("go.builtin", "") // TODO(gri) name this package go.builtin?
ir.BuiltinPkg.Prefix = "go.builtin" // not go%2ebuiltin
types.BuiltinPkg = types.NewPkg("go.builtin", "") // TODO(gri) name this package go.builtin?
types.BuiltinPkg.Prefix = "go.builtin" // not go%2ebuiltin
// pseudo-package, accessed by import "unsafe"
unsafepkg = types.NewPkg("unsafe", "unsafe")
@ -207,19 +210,7 @@ func Main(archInit func(*Arch)) {
// initialize types package
// (we need to do this to break dependencies that otherwise
// would lead to import cycles)
types.Widthptr = Widthptr
types.Dowidth = dowidth
types.Fatalf = base.Fatalf
ir.InstallTypeFormats()
types.TypeLinkSym = func(t *types.Type) *obj.LSym {
return typenamesym(t).Linksym()
}
types.FmtLeft = int(ir.FmtLeft)
types.FmtUnsigned = int(ir.FmtUnsigned)
types.FErr = int(ir.FErr)
types.Ctxt = base.Ctxt
initUniverse()
initializeTypesPackage()
dclcontext = ir.PEXTERN
@ -258,7 +249,7 @@ func Main(archInit func(*Arch)) {
timings.Start("fe", "typecheck", "top1")
for i := 0; i < len(xtop); i++ {
n := xtop[i]
if op := n.Op(); op != ir.ODCL && op != ir.OAS && op != ir.OAS2 && (op != ir.ODCLTYPE || !n.Left().Name().Param.Alias()) {
if op := n.Op(); op != ir.ODCL && op != ir.OAS && op != ir.OAS2 && (op != ir.ODCLTYPE || !n.Left().Name().Alias()) {
xtop[i] = typecheck(n, ctxStmt)
}
}
@ -270,7 +261,7 @@ func Main(archInit func(*Arch)) {
timings.Start("fe", "typecheck", "top2")
for i := 0; i < len(xtop); i++ {
n := xtop[i]
if op := n.Op(); op == ir.ODCL || op == ir.OAS || op == ir.OAS2 || op == ir.ODCLTYPE && n.Left().Name().Param.Alias() {
if op := n.Op(); op == ir.ODCL || op == ir.OAS || op == ir.OAS2 || op == ir.ODCLTYPE && n.Left().Name().Alias() {
xtop[i] = typecheck(n, ctxStmt)
}
}
@ -282,7 +273,7 @@ func Main(archInit func(*Arch)) {
for i := 0; i < len(xtop); i++ {
n := xtop[i]
if n.Op() == ir.ODCLFUNC {
Curfn = n
Curfn = n.(*ir.Func)
decldepth = 1
errorsBefore := base.Errors()
typecheckslice(Curfn.Body().Slice(), ctxStmt)
@ -312,8 +303,8 @@ func Main(archInit func(*Arch)) {
timings.Start("fe", "capturevars")
for _, n := range xtop {
if n.Op() == ir.ODCLFUNC && n.Func().OClosure != nil {
Curfn = n
capturevars(n)
Curfn = n.(*ir.Func)
capturevars(Curfn)
}
}
capturevarscomplete = true
@ -326,7 +317,7 @@ func Main(archInit func(*Arch)) {
// Typecheck imported function bodies if Debug.l > 1,
// otherwise lazily when used or re-exported.
for _, n := range importlist {
if n.Func().Inl != nil {
if n.Inl != nil {
typecheckinl(n)
}
}
@ -335,7 +326,7 @@ func Main(archInit func(*Arch)) {
if base.Flag.LowerL != 0 {
// Find functions that can be inlined and clone them before walk expands them.
visitBottomUp(xtop, func(list []ir.Node, recursive bool) {
visitBottomUp(xtop, func(list []*ir.Func, recursive bool) {
numfns := numNonClosures(list)
for _, n := range list {
if !recursive || numfns > 1 {
@ -345,7 +336,7 @@ func Main(archInit func(*Arch)) {
caninl(n)
} else {
if base.Flag.LowerM > 1 {
fmt.Printf("%v: cannot inline %v: recursive\n", ir.Line(n), n.Func().Nname)
fmt.Printf("%v: cannot inline %v: recursive\n", ir.Line(n), n.Nname)
}
}
inlcalls(n)
@ -355,7 +346,7 @@ func Main(archInit func(*Arch)) {
for _, n := range xtop {
if n.Op() == ir.ODCLFUNC {
devirtualize(n)
devirtualize(n.(*ir.Func))
}
}
Curfn = nil
@ -385,8 +376,8 @@ func Main(archInit func(*Arch)) {
timings.Start("fe", "xclosures")
for _, n := range xtop {
if n.Op() == ir.ODCLFUNC && n.Func().OClosure != nil {
Curfn = n
transformclosure(n)
Curfn = n.(*ir.Func)
transformclosure(Curfn)
}
}
@ -408,7 +399,7 @@ func Main(archInit func(*Arch)) {
for i := 0; i < len(xtop); i++ {
n := xtop[i]
if n.Op() == ir.ODCLFUNC {
funccompile(n)
funccompile(n.(*ir.Func))
fcount++
}
}
@ -486,10 +477,10 @@ func Main(archInit func(*Arch)) {
}
// numNonClosures returns the number of functions in list which are not closures.
func numNonClosures(list []ir.Node) int {
func numNonClosures(list []*ir.Func) int {
count := 0
for _, n := range list {
if n.Func().OClosure == nil {
for _, fn := range list {
if fn.OClosure == nil {
count++
}
}
@ -929,14 +920,14 @@ func pkgnotused(lineno src.XPos, path string, name string) {
}
func mkpackage(pkgname string) {
if ir.LocalPkg.Name == "" {
if types.LocalPkg.Name == "" {
if pkgname == "_" {
base.Errorf("invalid package name _")
}
ir.LocalPkg.Name = pkgname
types.LocalPkg.Name = pkgname
} else {
if pkgname != ir.LocalPkg.Name {
base.Errorf("package %s; expected %s", pkgname, ir.LocalPkg.Name)
if pkgname != types.LocalPkg.Name {
base.Errorf("package %s; expected %s", pkgname, types.LocalPkg.Name)
}
}
}
@ -949,7 +940,7 @@ func clearImports() {
}
var unused []importedPkg
for _, s := range ir.LocalPkg.Syms {
for _, s := range types.LocalPkg.Syms {
n := ir.AsNode(s.Def)
if n == nil {
continue
@ -960,8 +951,9 @@ func clearImports() {
// leave s->block set to cause redeclaration
// errors if a conflicting top-level name is
// introduced by a different file.
if !n.Name().Used() && base.SyntaxErrors() == 0 {
unused = append(unused, importedPkg{n.Pos(), n.Name().Pkg.Path, s.Name})
p := n.(*ir.PkgName)
if !p.Used && base.SyntaxErrors() == 0 {
unused = append(unused, importedPkg{p.Pos(), p.Pkg.Path, s.Name})
}
s.Def = nil
continue
@ -969,9 +961,9 @@ func clearImports() {
if IsAlias(s) {
// throw away top-level name left over
// from previous import . "x"
if n.Name() != nil && n.Name().Pack != nil && !n.Name().Pack.Name().Used() && base.SyntaxErrors() == 0 {
unused = append(unused, importedPkg{n.Name().Pack.Pos(), n.Name().Pack.Name().Pkg.Path, ""})
n.Name().Pack.Name().SetUsed(true)
if name := n.Name(); name != nil && name.PkgName != nil && !name.PkgName.Used && base.SyntaxErrors() == 0 {
unused = append(unused, importedPkg{name.PkgName.Pos(), name.PkgName.Pkg.Path, ""})
name.PkgName.Used = true
}
s.Def = nil
continue
@ -985,7 +977,7 @@ func clearImports() {
}
func IsAlias(sym *types.Sym) bool {
return sym.Def != nil && ir.AsNode(sym.Def).Sym() != sym
return sym.Def != nil && sym.Def.Sym() != sym
}
// recordFlags records the specified command-line flags to be placed
@ -1052,7 +1044,7 @@ func recordPackageName() {
// together two package main archives. So allow dups.
s.Set(obj.AttrDuplicateOK, true)
base.Ctxt.Data = append(base.Ctxt.Data, s)
s.P = []byte(ir.LocalPkg.Name)
s.P = []byte(types.LocalPkg.Name)
}
// currentLang returns the current language version.
@ -1079,9 +1071,9 @@ var langWant lang
func langSupported(major, minor int, pkg *types.Pkg) bool {
if pkg == nil {
// TODO(mdempsky): Set Pkg for local types earlier.
pkg = ir.LocalPkg
pkg = types.LocalPkg
}
if pkg != ir.LocalPkg {
if pkg != types.LocalPkg {
// Assume imported packages passed type-checking.
return true
}
@ -1132,3 +1124,13 @@ func parseLang(s string) (lang, error) {
}
return lang{major: major, minor: minor}, nil
}
func initializeTypesPackage() {
types.Widthptr = Widthptr
types.Dowidth = dowidth
types.TypeLinkSym = func(t *types.Type) *obj.LSym {
return typenamesym(t).Linksym()
}
initUniverse()
}


@ -143,9 +143,9 @@ func (i *typeInterner) mktype(t ast.Expr) string {
case *ast.Ident:
switch t.Name {
case "byte":
return "types.Bytetype"
return "types.ByteType"
case "rune":
return "types.Runetype"
return "types.RuneType"
}
return fmt.Sprintf("types.Types[types.T%s]", strings.ToUpper(t.Name))
case *ast.SelectorExpr:
@ -207,7 +207,7 @@ func (i *typeInterner) fields(fl *ast.FieldList, keepNames bool) string {
}
}
}
return fmt.Sprintf("[]ir.Node{%s}", strings.Join(res, ", "))
return fmt.Sprintf("[]*ir.Field{%s}", strings.Join(res, ", "))
}
func intconst(e ast.Expr) int64 {


@ -139,7 +139,7 @@ func parseFiles(filenames []string) (lines uint) {
testdclstack()
}
ir.LocalPkg.Height = myheight
types.LocalPkg.Height = myheight
return
}
@ -160,7 +160,11 @@ func parseFiles(filenames []string) (lines uint) {
testdclstack()
}
ir.LocalPkg.Height = myheight
for _, p := range noders {
p.processPragmas()
}
types.LocalPkg.Height = myheight
return
}
@ -279,7 +283,7 @@ func (p *noder) use(x *syntax.Name) types2.Object {
return p.typeInfo.Uses[x]
}
func (p *noder) funcBody(fn ir.Node, block *syntax.BlockStmt) {
func (p *noder) funcBody(fn *ir.Func, block *syntax.BlockStmt) {
oldScope := p.scope
p.scope = 0
funchdr(fn)
@ -287,12 +291,12 @@ func (p *noder) funcBody(fn ir.Node, block *syntax.BlockStmt) {
if block != nil {
body := p.stmts(block.List)
if body == nil {
body = []ir.Node{ir.Nod(ir.OEMPTY, nil, nil)}
body = []ir.Node{ir.Nod(ir.OBLOCK, nil, nil)}
}
fn.PtrBody().Set(body)
base.Pos = p.makeXPos(block.Rbrace)
fn.Func().Endlineno = base.Pos
fn.Endlineno = base.Pos
}
funcbody()
@ -303,9 +307,9 @@ func (p *noder) openScope(pos syntax.Pos) {
types.Markdcl()
if trackScopes {
Curfn.Func().Parents = append(Curfn.Func().Parents, p.scope)
p.scopeVars = append(p.scopeVars, len(Curfn.Func().Dcl))
p.scope = ir.ScopeID(len(Curfn.Func().Parents))
Curfn.Parents = append(Curfn.Parents, p.scope)
p.scopeVars = append(p.scopeVars, len(Curfn.Dcl))
p.scope = ir.ScopeID(len(Curfn.Parents))
p.markScope(pos)
}
@ -318,29 +322,29 @@ func (p *noder) closeScope(pos syntax.Pos) {
if trackScopes {
scopeVars := p.scopeVars[len(p.scopeVars)-1]
p.scopeVars = p.scopeVars[:len(p.scopeVars)-1]
if scopeVars == len(Curfn.Func().Dcl) {
if scopeVars == len(Curfn.Dcl) {
// no variables were declared in this scope, so we can retract it.
if int(p.scope) != len(Curfn.Func().Parents) {
if int(p.scope) != len(Curfn.Parents) {
base.Fatalf("scope tracking inconsistency, no variables declared but scopes were not retracted")
}
p.scope = Curfn.Func().Parents[p.scope-1]
Curfn.Func().Parents = Curfn.Func().Parents[:len(Curfn.Func().Parents)-1]
p.scope = Curfn.Parents[p.scope-1]
Curfn.Parents = Curfn.Parents[:len(Curfn.Parents)-1]
nmarks := len(Curfn.Func().Marks)
Curfn.Func().Marks[nmarks-1].Scope = p.scope
nmarks := len(Curfn.Marks)
Curfn.Marks[nmarks-1].Scope = p.scope
prevScope := ir.ScopeID(0)
if nmarks >= 2 {
prevScope = Curfn.Func().Marks[nmarks-2].Scope
prevScope = Curfn.Marks[nmarks-2].Scope
}
if Curfn.Func().Marks[nmarks-1].Scope == prevScope {
Curfn.Func().Marks = Curfn.Func().Marks[:nmarks-1]
if Curfn.Marks[nmarks-1].Scope == prevScope {
Curfn.Marks = Curfn.Marks[:nmarks-1]
}
return
}
p.scope = Curfn.Func().Parents[p.scope-1]
p.scope = Curfn.Parents[p.scope-1]
p.markScope(pos)
}
@ -348,10 +352,10 @@ func (p *noder) closeScope(pos syntax.Pos) {
func (p *noder) markScope(pos syntax.Pos) {
xpos := p.makeXPos(pos)
if i := len(Curfn.Func().Marks); i > 0 && Curfn.Func().Marks[i-1].Pos == xpos {
Curfn.Func().Marks[i-1].Scope = p.scope
if i := len(Curfn.Marks); i > 0 && Curfn.Marks[i-1].Pos == xpos {
Curfn.Marks[i-1].Scope = p.scope
} else {
Curfn.Func().Marks = append(Curfn.Func().Marks, ir.Mark{Pos: xpos, Scope: p.scope})
Curfn.Marks = append(Curfn.Marks, ir.Mark{Pos: xpos, Scope: p.scope})
}
}
@ -385,23 +389,27 @@ func (p *noder) node() {
xtop = append(xtop, p.decls(p.file.DeclList)...)
for _, n := range p.linknames {
base.Pos = src.NoXPos
clearImports()
}
func (p *noder) processPragmas() {
for _, l := range p.linknames {
if !p.importedUnsafe {
p.errorAt(n.pos, "//go:linkname only allowed in Go files that import \"unsafe\"")
p.errorAt(l.pos, "//go:linkname only allowed in Go files that import \"unsafe\"")
continue
}
s := lookup(n.local)
if n.remote != "" {
s.Linkname = n.remote
} else {
// Use the default object symbol name if the
// user didn't provide one.
if base.Ctxt.Pkgpath == "" {
p.errorAt(n.pos, "//go:linkname requires linkname argument or -p compiler flag")
} else {
s.Linkname = objabi.PathToPrefix(base.Ctxt.Pkgpath) + "." + n.local
}
n := ir.AsNode(lookup(l.local).Def)
if n == nil || n.Op() != ir.ONAME {
// TODO(mdempsky): Change to p.errorAt before Go 1.17 release.
// base.WarnfAt(p.makeXPos(l.pos), "//go:linkname must refer to declared function or variable (will be an error in Go 1.17)")
continue
}
if n.Sym().Linkname != "" {
p.errorAt(l.pos, "duplicate //go:linkname for %s", l.local)
continue
}
n.Sym().Linkname = l.remote
}
// The linker expects an ABI0 wrapper for all cgo-exported
@ -417,8 +425,6 @@ func (p *noder) node() {
}
pragcgobuf = append(pragcgobuf, p.pragcgobuf...)
base.Pos = src.NoXPos
clearImports()
}
func (p *noder) decls(decls []syntax.Decl) (l []ir.Node) {
@ -474,6 +480,9 @@ func (p *noder) importDecl(imp *syntax.ImportDecl) {
p.importedEmbed = true
}
if !ipkg.Direct {
sourceOrderImports = append(sourceOrderImports, ipkg)
}
ipkg.Direct = true
var my *types.Sym
@ -483,9 +492,7 @@ func (p *noder) importDecl(imp *syntax.ImportDecl) {
my = lookup(ipkg.Name)
}
pack := p.nod(imp, ir.OPACK, nil, nil)
pack.SetSym(my)
pack.Name().Pkg = ipkg
pack := ir.NewPkgName(p.pos(imp), my, ipkg)
switch my.Name {
case ".":
@ -541,7 +548,7 @@ func (p *noder) varDecl(decl *syntax.VarDecl) []ir.Node {
// constant declarations are handled correctly (e.g., issue 15550).
type constState struct {
group *syntax.Group
typ ir.Node
typ ir.Ntype
values []ir.Node
iota int64
}
@ -573,20 +580,21 @@ func (p *noder) constDecl(decl *syntax.ConstDecl, cs *constState) []ir.Node {
nn := make([]ir.Node, 0, len(names))
for i, n := range names {
n := n.(*ir.Name)
if i >= len(values) {
base.Errorf("missing value in const declaration")
break
}
v := values[i]
if decl.Values == nil {
v = treecopy(v, n.Pos())
v = ir.DeepCopy(n.Pos(), v)
}
n.SetOp(ir.OLITERAL)
declare(n, dclcontext)
n.Name().Param.Ntype = typ
n.Name().Defn = v
n.Ntype = typ
n.Defn = v
n.SetIota(cs.iota)
nn = append(nn, p.nod(decl, ir.ODCLCONST, n, nil))
@ -609,19 +617,18 @@ func (p *noder) typeDecl(decl *syntax.TypeDecl) ir.Node {
// decl.Type may be nil but in that case we got a syntax error during parsing
typ := p.typeExprOrNil(decl.Type)
param := n.Name().Param
param.Ntype = typ
param.SetAlias(decl.Alias)
n.Ntype = typ
n.SetAlias(decl.Alias)
if pragma, ok := decl.Pragma.(*Pragma); ok {
if !decl.Alias {
param.SetPragma(pragma.Flag & TypePragmas)
n.SetPragma(pragma.Flag & TypePragmas)
pragma.Flag &^= TypePragmas
}
p.checkUnused(pragma)
}
nod := p.nod(decl, ir.ODCLTYPE, n, nil)
if param.Alias() && !langSupported(1, 9, ir.LocalPkg) {
if n.Alias() && !langSupported(1, 9, types.LocalPkg) {
base.ErrorfAt(nod.Pos(), "type aliases only supported as of -lang=go1.9")
}
return nod
@ -635,16 +642,14 @@ func (p *noder) declNames(names []*syntax.Name) []ir.Node {
return nodes
}
func (p *noder) declName(name *syntax.Name) ir.Node {
n := dclname(p.name(name))
n.SetPos(p.pos(name))
return n
func (p *noder) declName(name *syntax.Name) *ir.Name {
return ir.NewDeclNameAt(p.pos(name), p.name(name))
}
func (p *noder) funcDecl(fun *syntax.FuncDecl) ir.Node {
name := p.name(fun.Name)
t := p.signature(fun.Recv, fun.Type)
f := p.nod(fun, ir.ODCLFUNC, nil, nil)
f := ir.NewFunc(p.pos(fun))
if fun.Recv == nil {
if name.Name == "init" {
@@ -654,22 +659,22 @@ func (p *noder) funcDecl(fun *syntax.FuncDecl) ir.Node {
}
}
if ir.LocalPkg.Name == "main" && name.Name == "main" {
if types.LocalPkg.Name == "main" && name.Name == "main" {
if t.List().Len() > 0 || t.Rlist().Len() > 0 {
base.ErrorfAt(f.Pos(), "func main must have no arguments and no return values")
}
}
} else {
f.Func().Shortname = name
f.Shortname = name
name = ir.BlankNode.Sym() // filled in by typecheckfunc
}
f.Func().Nname = newfuncnamel(p.pos(fun.Name), name, f.Func())
f.Func().Nname.Name().Defn = f
f.Func().Nname.Name().Param.Ntype = t
f.Nname = newFuncNameAt(p.pos(fun.Name), name, f)
f.Nname.Defn = f
f.Nname.Ntype = t
if pragma, ok := fun.Pragma.(*Pragma); ok {
f.Func().Pragma = pragma.Flag & FuncPragmas
f.Pragma = pragma.Flag & FuncPragmas
if pragma.Flag&ir.Systemstack != 0 && pragma.Flag&ir.Nosplit != 0 {
base.ErrorfAt(f.Pos(), "go:nosplit and go:systemstack cannot be combined")
}
@@ -678,13 +683,13 @@ func (p *noder) funcDecl(fun *syntax.FuncDecl) ir.Node {
}
if fun.Recv == nil {
declare(f.Func().Nname, ir.PFUNC)
declare(f.Nname, ir.PFUNC)
}
p.funcBody(f, fun.Body)
if fun.Body != nil {
if f.Func().Pragma&ir.Noescape != 0 {
if f.Pragma&ir.Noescape != 0 {
base.ErrorfAt(f.Pos(), "can only use //go:noescape with external func implementations")
}
} else {
@@ -707,18 +712,18 @@ func (p *noder) funcDecl(fun *syntax.FuncDecl) ir.Node {
return f
}
func (p *noder) signature(recv *syntax.Field, typ *syntax.FuncType) ir.Node {
n := p.nod(typ, ir.OTFUNC, nil, nil)
func (p *noder) signature(recv *syntax.Field, typ *syntax.FuncType) *ir.FuncType {
var rcvr *ir.Field
if recv != nil {
n.SetLeft(p.param(recv, false, false))
rcvr = p.param(recv, false, false)
}
n.PtrList().Set(p.params(typ.ParamList, true))
n.PtrRlist().Set(p.params(typ.ResultList, false))
return n
return ir.NewFuncType(p.pos(typ), rcvr,
p.params(typ.ParamList, true),
p.params(typ.ResultList, false))
}
func (p *noder) params(params []*syntax.Field, dddOk bool) []ir.Node {
nodes := make([]ir.Node, 0, len(params))
func (p *noder) params(params []*syntax.Field, dddOk bool) []*ir.Field {
nodes := make([]*ir.Field, 0, len(params))
for i, param := range params {
p.setlineno(param)
nodes = append(nodes, p.param(param, dddOk, i+1 == len(params)))
@@ -726,17 +731,17 @@ func (p *noder) params(params []*syntax.Field, dddOk bool) []ir.Node {
return nodes
}
func (p *noder) param(param *syntax.Field, dddOk, final bool) ir.Node {
func (p *noder) param(param *syntax.Field, dddOk, final bool) *ir.Field {
var name *types.Sym
if param.Name != nil {
name = p.name(param.Name)
}
typ := p.typeExpr(param.Type)
n := p.nodSym(param, ir.ODCLFIELD, typ, name)
n := ir.NewField(p.pos(param), name, typ, nil)
// rewrite ...T parameter
if typ.Op() == ir.ODDD {
if typ, ok := typ.(*ir.SliceType); ok && typ.DDD {
if !dddOk {
// We mark these as syntax errors to get automatic elimination
// of multiple such errors per line (see ErrorfAt in subr.go).
@@ -748,13 +753,8 @@ func (p *noder) param(param *syntax.Field, dddOk, final bool) ir.Node {
p.errorAt(param.Name.Pos(), "syntax error: cannot use ... with non-final parameter %s", param.Name.Value)
}
}
typ.SetOp(ir.OTARRAY)
typ.SetRight(typ.Left())
typ.SetLeft(nil)
n.SetIsDDD(true)
if n.Left() != nil {
n.Left().SetIsDDD(true)
}
typ.DDD = false
n.IsDDD = true
}
return n
@@ -812,8 +812,9 @@ func (p *noder) expr(expr syntax.Expr) ir.Node {
// parser.new_dotname
obj := p.expr(expr.X)
if obj.Op() == ir.OPACK {
obj.Name().SetUsed(true)
return importName(obj.Name().Pkg.Lookup(expr.Sel.Value))
pack := obj.(*ir.PkgName)
pack.Used = true
return importName(pack.Pkg.Lookup(expr.Sel.Value))
}
n := nodSym(ir.OXDOT, obj, p.name(expr.Sel))
n.SetPos(p.pos(expr)) // lineno may have been changed by p.expr(expr.X)
@@ -855,14 +856,14 @@ func (p *noder) expr(expr syntax.Expr) ir.Node {
var len ir.Node
if expr.Len != nil {
len = p.expr(expr.Len)
} else {
len = p.nod(expr, ir.ODDD, nil, nil)
}
return p.nod(expr, ir.OTARRAY, len, p.typeExpr(expr.Elem))
return ir.NewArrayType(p.pos(expr), len, p.typeExpr(expr.Elem))
case *syntax.SliceType:
return p.nod(expr, ir.OTARRAY, nil, p.typeExpr(expr.Elem))
return ir.NewSliceType(p.pos(expr), p.typeExpr(expr.Elem))
case *syntax.DotsType:
return p.nod(expr, ir.ODDD, p.typeExpr(expr.Elem), nil)
t := ir.NewSliceType(p.pos(expr), p.typeExpr(expr.Elem))
t.DDD = true
return t
case *syntax.StructType:
return p.structType(expr)
case *syntax.InterfaceType:
@@ -870,21 +871,21 @@ func (p *noder) expr(expr syntax.Expr) ir.Node {
case *syntax.FuncType:
return p.signature(nil, expr)
case *syntax.MapType:
return p.nod(expr, ir.OTMAP, p.typeExpr(expr.Key), p.typeExpr(expr.Value))
return ir.NewMapType(p.pos(expr),
p.typeExpr(expr.Key), p.typeExpr(expr.Value))
case *syntax.ChanType:
n := p.nod(expr, ir.OTCHAN, p.typeExpr(expr.Elem), nil)
n.SetTChanDir(p.chanDir(expr.Dir))
return n
return ir.NewChanType(p.pos(expr),
p.typeExpr(expr.Elem), p.chanDir(expr.Dir))
case *syntax.TypeSwitchGuard:
n := p.nod(expr, ir.OTYPESW, nil, p.expr(expr.X))
var tag *ir.Ident
if expr.Lhs != nil {
n.SetLeft(p.declName(expr.Lhs))
if ir.IsBlank(n.Left()) {
base.Errorf("invalid variable name %v in type switch", n.Left())
tag = ir.NewIdent(p.pos(expr.Lhs), p.name(expr.Lhs))
if ir.IsBlank(tag) {
base.Errorf("invalid variable name %v in type switch", tag)
}
}
return n
return ir.NewTypeSwitchGuard(p.pos(expr), tag, p.expr(expr.X))
}
panic("unhandled Expr")
}
@@ -933,7 +934,7 @@ func (p *noder) sum(x syntax.Expr) ir.Node {
n := p.expr(x)
if ir.IsConst(n, constant.String) && n.Sym() == nil {
nstr = n
chunks = append(chunks, nstr.StringVal())
chunks = append(chunks, ir.StringVal(nstr))
}
for i := len(adds) - 1; i >= 0; i-- {
@@ -943,12 +944,12 @@ func (p *noder) sum(x syntax.Expr) ir.Node {
if ir.IsConst(r, constant.String) && r.Sym() == nil {
if nstr != nil {
// Collapse r into nstr instead of adding to n.
chunks = append(chunks, r.StringVal())
chunks = append(chunks, ir.StringVal(r))
continue
}
nstr = r
chunks = append(chunks, nstr.StringVal())
chunks = append(chunks, ir.StringVal(nstr))
} else {
if len(chunks) > 1 {
nstr.SetVal(constant.MakeString(strings.Join(chunks, "")))
@@ -965,14 +966,21 @@ func (p *noder) sum(x syntax.Expr) ir.Node {
return n
}
func (p *noder) typeExpr(typ syntax.Expr) ir.Node {
func (p *noder) typeExpr(typ syntax.Expr) ir.Ntype {
// TODO(mdempsky): Be stricter? typecheck should handle errors anyway.
return p.expr(typ)
n := p.expr(typ)
if n == nil {
return nil
}
if _, ok := n.(ir.Ntype); !ok {
ir.Dump("NOT NTYPE", n)
}
return n.(ir.Ntype)
}
func (p *noder) typeExprOrNil(typ syntax.Expr) ir.Node {
func (p *noder) typeExprOrNil(typ syntax.Expr) ir.Ntype {
if typ != nil {
return p.expr(typ)
return p.typeExpr(typ)
}
return nil
}
@@ -990,55 +998,54 @@ func (p *noder) chanDir(dir syntax.ChanDir) types.ChanDir {
}
func (p *noder) structType(expr *syntax.StructType) ir.Node {
l := make([]ir.Node, 0, len(expr.FieldList))
l := make([]*ir.Field, 0, len(expr.FieldList))
for i, field := range expr.FieldList {
p.setlineno(field)
var n ir.Node
var n *ir.Field
if field.Name == nil {
n = p.embedded(field.Type)
} else {
n = p.nodSym(field, ir.ODCLFIELD, p.typeExpr(field.Type), p.name(field.Name))
n = ir.NewField(p.pos(field), p.name(field.Name), p.typeExpr(field.Type), nil)
}
if i < len(expr.TagList) && expr.TagList[i] != nil {
n.SetVal(p.basicLit(expr.TagList[i]))
n.Note = constant.StringVal(p.basicLit(expr.TagList[i]))
}
l = append(l, n)
}
p.setlineno(expr)
n := p.nod(expr, ir.OTSTRUCT, nil, nil)
n.PtrList().Set(l)
return n
return ir.NewStructType(p.pos(expr), l)
}
func (p *noder) interfaceType(expr *syntax.InterfaceType) ir.Node {
l := make([]ir.Node, 0, len(expr.MethodList))
l := make([]*ir.Field, 0, len(expr.MethodList))
for _, method := range expr.MethodList {
p.setlineno(method)
var n ir.Node
var n *ir.Field
if method.Name == nil {
n = p.nodSym(method, ir.ODCLFIELD, importName(p.packname(method.Type)), nil)
n = ir.NewField(p.pos(method), nil, importName(p.packname(method.Type)).(ir.Ntype), nil)
} else {
mname := p.name(method.Name)
sig := p.typeExpr(method.Type)
sig.SetLeft(fakeRecv())
n = p.nodSym(method, ir.ODCLFIELD, sig, mname)
ifacedcl(n)
if mname.IsBlank() {
base.Errorf("methods must have a unique non-blank name")
continue
}
sig := p.typeExpr(method.Type).(*ir.FuncType)
sig.Recv = fakeRecv()
n = ir.NewField(p.pos(method), mname, sig, nil)
}
l = append(l, n)
}
n := p.nod(expr, ir.OTINTER, nil, nil)
n.PtrList().Set(l)
return n
return ir.NewInterfaceType(p.pos(expr), l)
}
func (p *noder) packname(expr syntax.Expr) *types.Sym {
switch expr := expr.(type) {
case *syntax.Name:
name := p.name(expr)
if n := oldname(name); n.Name() != nil && n.Name().Pack != nil {
n.Name().Pack.Name().SetUsed(true)
if n := oldname(name); n.Name() != nil && n.Name().PkgName != nil {
n.Name().PkgName.Used = true
}
return name
case *syntax.SelectorExpr:
@@ -1051,17 +1058,18 @@ func (p *noder) packname(expr syntax.Expr) *types.Sym {
var pkg *types.Pkg
if def.Op() != ir.OPACK {
base.Errorf("%v is not a package", name)
pkg = ir.LocalPkg
pkg = types.LocalPkg
} else {
def.Name().SetUsed(true)
pkg = def.Name().Pkg
def := def.(*ir.PkgName)
def.Used = true
pkg = def.Pkg
}
return pkg.Lookup(expr.Sel.Value)
}
panic(fmt.Sprintf("unexpected packname: %#v", expr))
}
func (p *noder) embedded(typ syntax.Expr) ir.Node {
func (p *noder) embedded(typ syntax.Expr) *ir.Field {
op, isStar := typ.(*syntax.Operation)
if isStar {
if op.Op != syntax.Mul || op.Y != nil {
@@ -1071,11 +1079,11 @@ func (p *noder) embedded(typ syntax.Expr) ir.Node {
}
sym := p.packname(typ)
n := p.nodSym(typ, ir.ODCLFIELD, importName(sym), lookup(sym.Name))
n.SetEmbedded(true)
n := ir.NewField(p.pos(typ), lookup(sym.Name), importName(sym).(ir.Ntype), nil)
n.Embedded = true
if isStar {
n.SetLeft(p.nod(op, ir.ODEREF, n.Left(), nil))
n.Ntype = ir.NewStarExpr(p.pos(op), n.Ntype)
}
return n
}
@@ -1089,7 +1097,9 @@ func (p *noder) stmtsFall(stmts []syntax.Stmt, fallOK bool) []ir.Node {
for i, stmt := range stmts {
s := p.stmtFall(stmt, fallOK && i+1 == len(stmts))
if s == nil {
} else if s.Op() == ir.OBLOCK && s.Init().Len() == 0 {
} else if s.Op() == ir.OBLOCK && s.List().Len() > 0 {
// Inline non-empty block.
// Empty blocks must be preserved for checkreturn.
nodes = append(nodes, s.List().Slice()...)
} else {
nodes = append(nodes, s)
@@ -1113,7 +1123,7 @@ func (p *noder) stmtFall(stmt syntax.Stmt, fallOK bool) ir.Node {
l := p.blockStmt(stmt)
if len(l) == 0 {
// TODO(mdempsky): Line number?
return ir.Nod(ir.OEMPTY, nil, nil)
return ir.Nod(ir.OBLOCK, nil, nil)
}
return liststmt(l)
case *syntax.ExprStmt:
@@ -1130,20 +1140,17 @@ func (p *noder) stmtFall(stmt syntax.Stmt, fallOK bool) ir.Node {
return n
}
n := p.nod(stmt, ir.OAS, nil, nil) // assume common case
rhs := p.exprList(stmt.Rhs)
lhs := p.assignList(stmt.Lhs, n, stmt.Op == syntax.Def)
if len(lhs) == 1 && len(rhs) == 1 {
// common case
n.SetLeft(lhs[0])
n.SetRight(rhs[0])
} else {
n.SetOp(ir.OAS2)
n.PtrList().Set(lhs)
if list, ok := stmt.Lhs.(*syntax.ListExpr); ok && len(list.ElemList) != 1 || len(rhs) != 1 {
n := p.nod(stmt, ir.OAS2, nil, nil)
n.PtrList().Set(p.assignList(stmt.Lhs, n, stmt.Op == syntax.Def))
n.PtrRlist().Set(rhs)
return n
}
n := p.nod(stmt, ir.OAS, nil, nil)
n.SetLeft(p.assignList(stmt.Lhs, n, stmt.Op == syntax.Def)[0])
n.SetRight(rhs[0])
return n
case *syntax.BranchStmt:
@@ -1187,14 +1194,14 @@ func (p *noder) stmtFall(stmt syntax.Stmt, fallOK bool) ir.Node {
n := p.nod(stmt, ir.ORETURN, nil, nil)
n.PtrList().Set(results)
if n.List().Len() == 0 && Curfn != nil {
for _, ln := range Curfn.Func().Dcl {
for _, ln := range Curfn.Dcl {
if ln.Class() == ir.PPARAM {
continue
}
if ln.Class() != ir.PPARAMOUT {
break
}
if ir.AsNode(ln.Sym().Def) != ln {
if ln.Sym().Def != ln {
base.Errorf("%s is shadowed during return", ln.Sym().Name)
}
}
@@ -1261,7 +1268,7 @@ func (p *noder) assignList(expr syntax.Expr, defn ir.Node, colas bool) []ir.Node
newOrErr = true
n := NewName(sym)
declare(n, dclcontext)
n.Name().Defn = defn
n.Defn = defn
defn.PtrInit().Append(ir.Nod(ir.ODCL, n, nil))
res[i] = n
}
@@ -1291,7 +1298,7 @@ func (p *noder) ifStmt(stmt *syntax.IfStmt) ir.Node {
n.PtrBody().Set(p.blockStmt(stmt.Then))
if stmt.Else != nil {
e := p.stmt(stmt.Else)
if e.Op() == ir.OBLOCK && e.Init().Len() == 0 {
if e.Op() == ir.OBLOCK {
n.PtrRlist().Set(e.List().Slice())
} else {
n.PtrRlist().Set1(e)
@@ -1368,7 +1375,7 @@ func (p *noder) caseClauses(clauses []*syntax.CaseClause, tswitch ir.Node, rbrac
declare(nn, dclcontext)
n.PtrRlist().Set1(nn)
// keep track of the instances for reporting unused
nn.Name().Defn = tswitch
nn.Defn = tswitch
}
// Trim trailing empty statements. We omit them from
@@ -1429,17 +1436,22 @@ func (p *noder) commClauses(clauses []*syntax.CommClause, rbrace syntax.Pos) []i
}
func (p *noder) labeledStmt(label *syntax.LabeledStmt, fallOK bool) ir.Node {
lhs := p.nodSym(label, ir.OLABEL, nil, p.name(label.Label))
sym := p.name(label.Label)
lhs := p.nodSym(label, ir.OLABEL, nil, sym)
var ls ir.Node
if label.Stmt != nil { // TODO(mdempsky): Should always be present.
ls = p.stmtFall(label.Stmt, fallOK)
switch label.Stmt.(type) {
case *syntax.ForStmt, *syntax.SwitchStmt, *syntax.SelectStmt:
// Attach label directly to control statement too.
ls.SetSym(sym)
}
}
lhs.Name().Defn = ls
l := []ir.Node{lhs}
if ls != nil {
if ls.Op() == ir.OBLOCK && ls.Init().Len() == 0 {
if ls.Op() == ir.OBLOCK {
l = append(l, ls.List().Slice()...)
} else {
l = append(l, ls)
@@ -1502,7 +1514,7 @@ func (p *noder) binOp(op syntax.Operator) ir.Op {
// literal is not compatible with the current language version.
func checkLangCompat(lit *syntax.BasicLit) {
s := lit.Value
if len(s) <= 2 || langSupported(1, 13, ir.LocalPkg) {
if len(s) <= 2 || langSupported(1, 13, types.LocalPkg) {
return
}
// len(s) > 2
@@ -1716,6 +1728,13 @@ func (p *noder) pragma(pos syntax.Pos, blankLine bool, text string, old syntax.P
var target string
if len(f) == 3 {
target = f[2]
} else if base.Ctxt.Pkgpath != "" {
// Use the default object symbol name if the
// user didn't provide one.
target = objabi.PathToPrefix(base.Ctxt.Pkgpath) + "." + f[1]
} else {
p.error(syntax.Error{Pos: pos, Msg: "//go:linkname requires linkname argument or -p compiler flag"})
break
}
p.linknames = append(p.linknames, linkname{pos, f[1], target})
@@ -1797,8 +1816,8 @@ func safeArg(name string) bool {
func mkname(sym *types.Sym) ir.Node {
n := oldname(sym)
if n.Name() != nil && n.Name().Pack != nil {
n.Name().Pack.Name().SetUsed(true)
if n.Name() != nil && n.Name().PkgName != nil {
n.Name().PkgName.Used = true
}
return n
}


@@ -84,7 +84,7 @@ func printObjHeader(bout *bio.Writer) {
if base.Flag.BuildID != "" {
fmt.Fprintf(bout, "build id %q\n", base.Flag.BuildID)
}
if ir.LocalPkg.Name == "main" {
if types.LocalPkg.Name == "main" {
fmt.Fprintf(bout, "main\n")
}
fmt.Fprintf(bout, "\n") // header ends with blank line
@@ -143,7 +143,7 @@ func dumpdata() {
for i := xtops; i < len(xtop); i++ {
n := xtop[i]
if n.Op() == ir.ODCLFUNC {
funccompile(n)
funccompile(n.(*ir.Func))
}
}
xtops = len(xtop)
@@ -200,7 +200,7 @@ func dumpLinkerObj(bout *bio.Writer) {
}
func addptabs() {
if !base.Ctxt.Flag_dynlink || ir.LocalPkg.Name != "main" {
if !base.Ctxt.Flag_dynlink || types.LocalPkg.Name != "main" {
return
}
for _, exportn := range exportlist {
@@ -218,12 +218,12 @@ func addptabs() {
if s.Pkg.Name != "main" {
continue
}
if n.Type().Etype == types.TFUNC && n.Class() == ir.PFUNC {
if n.Type().Kind() == types.TFUNC && n.Class() == ir.PFUNC {
// function
ptabs = append(ptabs, ptabEntry{s: s, t: ir.AsNode(s.Def).Type()})
ptabs = append(ptabs, ptabEntry{s: s, t: s.Def.Type()})
} else {
// variable
ptabs = append(ptabs, ptabEntry{s: s, t: types.NewPtr(ir.AsNode(s.Def).Type())})
ptabs = append(ptabs, ptabEntry{s: s, t: types.NewPtr(s.Def.Type())})
}
}
}
@@ -235,7 +235,7 @@ func dumpGlobal(n ir.Node) {
if n.Class() == ir.PFUNC {
return
}
if n.Sym().Pkg != ir.LocalPkg {
if n.Sym().Pkg != types.LocalPkg {
return
}
dowidth(n.Type())
@@ -248,7 +248,7 @@ func dumpGlobalConst(n ir.Node) {
if t == nil {
return
}
if n.Sym().Pkg != ir.LocalPkg {
if n.Sym().Pkg != types.LocalPkg {
return
}
// only export integer constants for now
@@ -263,7 +263,7 @@ func dumpGlobalConst(n ir.Node) {
return
}
}
base.Ctxt.DwarfIntConst(base.Ctxt.Pkgpath, n.Sym().Name, typesymname(t), ir.Int64Val(t, v))
base.Ctxt.DwarfIntConst(base.Ctxt.Pkgpath, n.Sym().Name, typesymname(t), ir.IntVal(t, v))
}
func dumpglobls() {
@@ -478,7 +478,7 @@ var slicedataGen int
func slicedata(pos src.XPos, s string) ir.Node {
slicedataGen++
symname := fmt.Sprintf(".gobytes.%d", slicedataGen)
sym := ir.LocalPkg.Lookup(symname)
sym := types.LocalPkg.Lookup(symname)
symnode := NewName(sym)
sym.Def = symnode
@@ -598,11 +598,11 @@ func litsym(n, c ir.Node, wid int) {
s.WriteInt(base.Ctxt, n.Offset(), wid, i)
case constant.Int:
s.WriteInt(base.Ctxt, n.Offset(), wid, ir.Int64Val(n.Type(), u))
s.WriteInt(base.Ctxt, n.Offset(), wid, ir.IntVal(n.Type(), u))
case constant.Float:
f, _ := constant.Float64Val(u)
switch n.Type().Etype {
switch n.Type().Kind() {
case types.TFLOAT32:
s.WriteFloat32(base.Ctxt, n.Offset(), float32(f))
case types.TFLOAT64:
@@ -612,7 +612,7 @@ func litsym(n, c ir.Node, wid int) {
case constant.Complex:
re, _ := constant.Float64Val(constant.Real(u))
im, _ := constant.Float64Val(constant.Imag(u))
switch n.Type().Etype {
switch n.Type().Kind() {
case types.TCOMPLEX64:
s.WriteFloat32(base.Ctxt, n.Offset(), float32(re))
s.WriteFloat32(base.Ctxt, n.Offset()+4, float32(im))


@@ -44,27 +44,27 @@ import (
// Order holds state during the ordering process.
type Order struct {
out []ir.Node // list of generated statements
temp []ir.Node // stack of temporary variables
free map[string][]ir.Node // free list of unused temporaries, by type.LongString().
out []ir.Node // list of generated statements
temp []*ir.Name // stack of temporary variables
free map[string][]*ir.Name // free list of unused temporaries, by type.LongString().
}
// Order rewrites fn.Nbody to apply the ordering constraints
// described in the comment at the top of the file.
func order(fn ir.Node) {
func order(fn *ir.Func) {
if base.Flag.W > 1 {
s := fmt.Sprintf("\nbefore order %v", fn.Func().Nname.Sym())
s := fmt.Sprintf("\nbefore order %v", fn.Sym())
ir.DumpList(s, fn.Body())
}
orderBlock(fn.PtrBody(), map[string][]ir.Node{})
orderBlock(fn.PtrBody(), map[string][]*ir.Name{})
}
// newTemp allocates a new temporary with the given type,
// pushes it onto the temp stack, and returns it.
// If clear is true, newTemp emits code to zero the temporary.
func (o *Order) newTemp(t *types.Type, clear bool) ir.Node {
var v ir.Node
func (o *Order) newTemp(t *types.Type, clear bool) *ir.Name {
var v *ir.Name
// Note: LongString is close to the type equality we want,
// but not exactly. We still need to double-check with types.Identical.
key := t.LongString()
@@ -93,17 +93,26 @@ func (o *Order) newTemp(t *types.Type, clear bool) ir.Node {
// copyExpr behaves like newTemp but also emits
// code to initialize the temporary to the value n.
//
// The clear argument is provided for use when the evaluation
// of tmp = n turns into a function call that is passed a pointer
// to the temporary as the output space. If the call blocks before
// tmp has been written, the garbage collector will still treat the
// temporary as live, so we must zero it before entering that call.
func (o *Order) copyExpr(n ir.Node) ir.Node {
return o.copyExpr1(n, false)
}
// copyExprClear is like copyExpr but clears the temp before assignment.
// It is provided for use when the evaluation of tmp = n turns into
// a function call that is passed a pointer to the temporary as the output space.
// If the call blocks before tmp has been written,
// the garbage collector will still treat the temporary as live,
// so we must zero it before entering that call.
// Today, this only happens for channel receive operations.
// (The other candidate would be map access, but map access
// returns a pointer to the result data instead of taking a pointer
// to be filled in.)
func (o *Order) copyExpr(n ir.Node, t *types.Type, clear bool) ir.Node {
func (o *Order) copyExprClear(n ir.Node) *ir.Name {
return o.copyExpr1(n, true)
}
func (o *Order) copyExpr1(n ir.Node, clear bool) *ir.Name {
t := n.Type()
v := o.newTemp(t, clear)
a := ir.Nod(ir.OAS, v, n)
a = typecheck(a, ctxStmt)
@@ -133,7 +142,7 @@ func (o *Order) cheapExpr(n ir.Node) ir.Node {
return typecheck(a, ctxExpr)
}
return o.copyExpr(n, n.Type(), false)
return o.copyExpr(n)
}
// safeExpr returns a safe version of n.
@@ -220,7 +229,7 @@ func (o *Order) addrTemp(n ir.Node) ir.Node {
if isaddrokay(n) {
return n
}
return o.copyExpr(n, n.Type(), false)
return o.copyExpr(n)
}
// mapKeyTemp prepares n to be a key in a map runtime call and returns n.
@@ -406,7 +415,7 @@ func (o *Order) edge() {
// orderBlock orders the block of statements in n into a new slice,
// and then replaces the old slice in n with the new slice.
// free is a map that can be used to obtain temporary variables by type.
func orderBlock(n *ir.Nodes, free map[string][]ir.Node) {
func orderBlock(n *ir.Nodes, free map[string][]*ir.Name) {
var order Order
order.free = free
mark := order.markTemp()
@@ -424,7 +433,7 @@ func (o *Order) exprInPlace(n ir.Node) ir.Node {
var order Order
order.free = o.free
n = order.expr(n, nil)
n = addinit(n, order.out)
n = initExpr(order.out, n)
// insert new temporaries from order
// at head of outer list.
@@ -437,7 +446,7 @@ func (o *Order) exprInPlace(n ir.Node) ir.Node {
// The result of orderStmtInPlace MUST be assigned back to n, e.g.
// n.Left = orderStmtInPlace(n.Left)
// free is a map that can be used to obtain temporary variables by type.
func orderStmtInPlace(n ir.Node, free map[string][]ir.Node) ir.Node {
func orderStmtInPlace(n ir.Node, free map[string][]*ir.Name) ir.Node {
var order Order
order.free = free
mark := order.markTemp()
@@ -489,7 +498,7 @@ func (o *Order) call(n ir.Node) {
// by copying it into a temp and marking that temp
// still alive when we pop the temp stack.
if arg.Op() == ir.OCONVNOP && arg.Left().Type().IsUnsafePtr() {
x := o.copyExpr(arg.Left(), arg.Left().Type(), false)
x := o.copyExpr(arg.Left())
arg.SetLeft(x)
x.Name().SetAddrtaken(true) // ensure SSA keeps the x variable
n.PtrBody().Append(typecheck(ir.Nod(ir.OVARLIVE, x, nil), ctxStmt))
@@ -551,10 +560,10 @@ func (o *Order) mapAssign(n ir.Node) {
switch {
case m.Op() == ir.OINDEXMAP:
if !ir.IsAutoTmp(m.Left()) {
m.SetLeft(o.copyExpr(m.Left(), m.Left().Type(), false))
m.SetLeft(o.copyExpr(m.Left()))
}
if !ir.IsAutoTmp(m.Right()) {
m.SetRight(o.copyExpr(m.Right(), m.Right().Type(), false))
m.SetRight(o.copyExpr(m.Right()))
}
fallthrough
case instrumenting && n.Op() == ir.OAS2FUNC && !ir.IsBlank(m):
@@ -606,20 +615,19 @@ func (o *Order) stmt(n ir.Node) {
// that we can ensure that if op panics
// because r is zero, the panic happens before
// the map assignment.
n.SetLeft(o.safeExpr(n.Left()))
l := treecopy(n.Left(), src.NoXPos)
if l.Op() == ir.OINDEXMAP {
l.SetIndexMapLValue(false)
// DeepCopy is a big hammer here, but safeExpr
// makes sure there is nothing too deep being copied.
l1 := o.safeExpr(n.Left())
l2 := ir.DeepCopy(src.NoXPos, l1)
if l1.Op() == ir.OINDEXMAP {
l2.SetIndexMapLValue(false)
}
l = o.copyExpr(l, n.Left().Type(), false)
n.SetRight(ir.Nod(n.SubOp(), l, n.Right()))
n.SetRight(typecheck(n.Right(), ctxExpr))
n.SetRight(o.expr(n.Right(), nil))
n.SetOp(ir.OAS)
n.ResetAux()
l2 = o.copyExpr(l2)
r := ir.NodAt(n.Pos(), n.SubOp(), l2, n.Right())
r = typecheck(r, ctxExpr)
r = o.expr(r, nil)
n = ir.NodAt(n.Pos(), ir.OAS, l1, r)
n = typecheck(n, ctxStmt)
}
o.mapAssign(n)
@@ -636,8 +644,8 @@ func (o *Order) stmt(n ir.Node) {
case ir.OAS2FUNC:
t := o.markTemp()
o.exprList(n.List())
o.init(n.Right())
o.call(n.Right())
o.init(n.Rlist().First())
o.call(n.Rlist().First())
o.as2(n)
o.cleanTemp(t)
@@ -651,7 +659,7 @@ func (o *Order) stmt(n ir.Node) {
t := o.markTemp()
o.exprList(n.List())
switch r := n.Right(); r.Op() {
switch r := n.Rlist().First(); r.Op() {
case ir.ODOTTYPE2, ir.ORECV:
r.SetLeft(o.expr(r.Left(), nil))
case ir.OINDEXMAP:
@@ -668,7 +676,7 @@ func (o *Order) stmt(n ir.Node) {
o.cleanTemp(t)
// Special: does not save n onto out.
case ir.OBLOCK, ir.OEMPTY:
case ir.OBLOCK:
o.stmtList(n.List())
// Special: n->left is not an expression; save as is.
@@ -776,7 +784,7 @@ func (o *Order) stmt(n ir.Node) {
n.SetRight(o.expr(n.Right(), nil))
orderBody := true
switch n.Type().Etype {
switch n.Type().Kind() {
default:
base.Fatalf("order.stmt range %v", n.Type())
@@ -799,7 +807,7 @@ func (o *Order) stmt(n ir.Node) {
r = typecheck(r, ctxExpr)
}
n.SetRight(o.copyExpr(r, r.Type(), false))
n.SetRight(o.copyExpr(r))
case types.TMAP:
if isMapClear(n) {
@@ -814,7 +822,7 @@ func (o *Order) stmt(n ir.Node) {
// TODO(rsc): Make tmp = literal expressions reuse tmp.
// For maps tmp is just one word so it hardly matters.
r := n.Right()
n.SetRight(o.copyExpr(r, r.Type(), false))
n.SetRight(o.copyExpr(r))
// prealloc[n] is the temp for the iterator.
// hiter contains pointers and needs to be zeroed.
@@ -863,38 +871,39 @@ func (o *Order) stmt(n ir.Node) {
ir.Dump("select case", r)
base.Fatalf("unknown op in select %v", r.Op())
// If this is case x := <-ch or case x, y := <-ch, the case has
// the ODCL nodes to declare x and y. We want to delay that
// declaration (and possible allocation) until inside the case body.
// Delete the ODCL nodes here and recreate them inside the body below.
case ir.OSELRECV, ir.OSELRECV2:
if r.Colas() {
i := 0
if r.Init().Len() != 0 && r.Init().First().Op() == ir.ODCL && r.Init().First().Left() == r.Left() {
i++
}
if i < r.Init().Len() && r.Init().Index(i).Op() == ir.ODCL && r.List().Len() != 0 && r.Init().Index(i).Left() == r.List().First() {
i++
}
if i >= r.Init().Len() {
r.PtrInit().Set(nil)
}
var dst, ok, recv ir.Node
if r.Op() == ir.OSELRECV {
// case x = <-c
// case <-c (dst is ir.BlankNode)
dst, ok, recv = r.Left(), ir.BlankNode, r.Right()
} else {
// case x, ok = <-c
dst, ok, recv = r.List().First(), r.List().Second(), r.Rlist().First()
}
// If this is case x := <-ch or case x, y := <-ch, the case has
// the ODCL nodes to declare x and y. We want to delay that
// declaration (and possible allocation) until inside the case body.
// Delete the ODCL nodes here and recreate them inside the body below.
if r.Colas() {
init := r.Init().Slice()
if len(init) > 0 && init[0].Op() == ir.ODCL && init[0].Left() == dst {
init = init[1:]
}
if len(init) > 0 && init[0].Op() == ir.ODCL && init[0].Left() == ok {
init = init[1:]
}
r.PtrInit().Set(init)
}
if r.Init().Len() != 0 {
ir.DumpList("ninit", r.Init())
base.Fatalf("ninit on select recv")
}
// case x = <-c
// case x, ok = <-c
// r->left is x, r->ntest is ok, r->right is ORECV, r->right->left is c.
// r->left == N means 'case <-c'.
// c is always evaluated; x and ok are only evaluated when assigned.
r.Right().SetLeft(o.expr(r.Right().Left(), nil))
if r.Right().Left().Op() != ir.ONAME {
r.Right().SetLeft(o.copyExpr(r.Right().Left(), r.Right().Left().Type(), false))
recv.SetLeft(o.expr(recv.Left(), nil))
if recv.Left().Op() != ir.ONAME {
recv.SetLeft(o.copyExpr(recv.Left()))
}
// Introduce temporary for receive and move actual copy into case body.
@@ -903,42 +912,41 @@ func (o *Order) stmt(n ir.Node) {
// temporary per distinct type, sharing the temp among all receives
// with that temp. Similarly one ok bool could be shared among all
// the x,ok receives. Not worth doing until there's a clear need.
if r.Left() != nil && ir.IsBlank(r.Left()) {
r.SetLeft(nil)
}
if r.Left() != nil {
if !ir.IsBlank(dst) {
// use channel element type for temporary to avoid conversions,
// such as in case interfacevalue = <-intchan.
// the conversion happens in the OAS instead.
tmp1 := r.Left()
if r.Colas() {
tmp2 := ir.Nod(ir.ODCL, tmp1, nil)
tmp2 = typecheck(tmp2, ctxStmt)
n2.PtrInit().Append(tmp2)
dcl := ir.Nod(ir.ODCL, dst, nil)
dcl = typecheck(dcl, ctxStmt)
n2.PtrInit().Append(dcl)
}
r.SetLeft(o.newTemp(r.Right().Left().Type().Elem(), r.Right().Left().Type().Elem().HasPointers()))
tmp2 := ir.Nod(ir.OAS, tmp1, r.Left())
tmp2 = typecheck(tmp2, ctxStmt)
n2.PtrInit().Append(tmp2)
tmp := o.newTemp(recv.Left().Type().Elem(), recv.Left().Type().Elem().HasPointers())
as := ir.Nod(ir.OAS, dst, tmp)
as = typecheck(as, ctxStmt)
n2.PtrInit().Append(as)
dst = tmp
}
if r.List().Len() != 0 && ir.IsBlank(r.List().First()) {
r.PtrList().Set(nil)
}
if r.List().Len() != 0 {
tmp1 := r.List().First()
if !ir.IsBlank(ok) {
if r.Colas() {
tmp2 := ir.Nod(ir.ODCL, tmp1, nil)
tmp2 = typecheck(tmp2, ctxStmt)
n2.PtrInit().Append(tmp2)
dcl := ir.Nod(ir.ODCL, ok, nil)
dcl = typecheck(dcl, ctxStmt)
n2.PtrInit().Append(dcl)
}
r.PtrList().Set1(o.newTemp(types.Types[types.TBOOL], false))
tmp2 := okas(tmp1, r.List().First())
tmp2 = typecheck(tmp2, ctxStmt)
n2.PtrInit().Append(tmp2)
tmp := o.newTemp(types.Types[types.TBOOL], false)
as := ir.Nod(ir.OAS, ok, conv(tmp, ok.Type()))
as = typecheck(as, ctxStmt)
n2.PtrInit().Append(as)
ok = tmp
}
if r.Op() == ir.OSELRECV {
r.SetLeft(dst)
} else {
r.List().SetIndex(0, dst)
r.List().SetIndex(1, ok)
}
orderBlock(n2.PtrInit(), o.free)
@@ -953,11 +961,11 @@ func (o *Order) stmt(n ir.Node) {
r.SetLeft(o.expr(r.Left(), nil))
if !ir.IsAutoTmp(r.Left()) {
r.SetLeft(o.copyExpr(r.Left(), r.Left().Type(), false))
r.SetLeft(o.copyExpr(r.Left()))
}
r.SetRight(o.expr(r.Right(), nil))
if !ir.IsAutoTmp(r.Right()) {
r.SetRight(o.copyExpr(r.Right(), r.Right().Type(), false))
r.SetRight(o.copyExpr(r.Right()))
}
}
}
@@ -985,7 +993,7 @@ func (o *Order) stmt(n ir.Node) {
if instrumenting {
// Force copying to the stack so that (chan T)(nil) <- x
// is still instrumented as a read of x.
n.SetRight(o.copyExpr(n.Right(), n.Right().Type(), false))
n.SetRight(o.copyExpr(n.Right()))
} else {
n.SetRight(o.addrTemp(n.Right()))
}
@@ -1054,6 +1062,10 @@ func (o *Order) exprListInPlace(l ir.Nodes) {
// prealloc[x] records the allocation to use for x.
var prealloc = map[ir.Node]ir.Node{}
func (o *Order) exprNoLHS(n ir.Node) ir.Node {
return o.expr(n, nil)
}
// expr orders a single expression, appending side
// effects to o.out as needed.
// If this is part of an assignment lhs = *np, lhs is given.
@@ -1071,10 +1083,7 @@ func (o *Order) expr(n, lhs ir.Node) ir.Node {
switch n.Op() {
default:
n.SetLeft(o.expr(n.Left(), nil))
n.SetRight(o.expr(n.Right(), nil))
o.exprList(n.List())
o.exprList(n.Rlist())
ir.EditChildren(n, o.exprNoLHS)
// Addition of strings turns into a function call.
// Allocate a temporary to hold the strings.
@@ -1099,7 +1108,7 @@ func (o *Order) expr(n, lhs ir.Node) ir.Node {
haslit := false
for _, n1 := range n.List().Slice() {
hasbyte = hasbyte || n1.Op() == ir.OBYTES2STR
haslit = haslit || n1.Op() == ir.OLITERAL && len(n1.StringVal()) != 0
haslit = haslit || n1.Op() == ir.OLITERAL && len(ir.StringVal(n1)) != 0
}
if haslit && hasbyte {
@@ -1123,8 +1132,7 @@ func (o *Order) expr(n, lhs ir.Node) ir.Node {
needCopy = mapKeyReplaceStrConv(n.Right())
if instrumenting {
// Race detector needs the copy so it can
// call treecopy on the result.
// Race detector needs the copy.
needCopy = true
}
}
@@ -1132,7 +1140,7 @@ func (o *Order) expr(n, lhs ir.Node) ir.Node {
// key must be addressable
n.SetRight(o.mapKeyTemp(n.Left().Type(), n.Right()))
if needCopy {
n = o.copyExpr(n, n.Type(), false)
n = o.copyExpr(n)
}
// concrete type (not interface) argument might need an addressable
@@ -1157,7 +1165,7 @@ func (o *Order) expr(n, lhs ir.Node) ir.Node {
o.init(n.Left())
o.call(n.Left())
if lhs == nil || lhs.Op() != ir.ONAME || instrumenting {
n = o.copyExpr(n, n.Type(), false)
n = o.copyExpr(n)
}
} else {
n.SetLeft(o.expr(n.Left(), nil))
@@ -1227,7 +1235,7 @@ func (o *Order) expr(n, lhs ir.Node) ir.Node {
}
if lhs == nil || lhs.Op() != ir.ONAME || instrumenting {
n = o.copyExpr(n, n.Type(), false)
n = o.copyExpr(n)
}
case ir.OAPPEND:
@@ -1240,7 +1248,7 @@ func (o *Order) expr(n, lhs ir.Node) ir.Node {
}
if lhs == nil || lhs.Op() != ir.ONAME && !samesafeexpr(lhs, n.List().First()) {
n = o.copyExpr(n, n.Type(), false)
n = o.copyExpr(n)
}
case ir.OSLICE, ir.OSLICEARR, ir.OSLICESTR, ir.OSLICE3, ir.OSLICE3ARR:
@@ -1254,11 +1262,11 @@ func (o *Order) expr(n, lhs ir.Node) ir.Node {
max = o.cheapExpr(max)
n.SetSliceBounds(low, high, max)
if lhs == nil || lhs.Op() != ir.ONAME && !samesafeexpr(lhs, n.Left()) {
n = o.copyExpr(n, n.Type(), false)
n = o.copyExpr(n)
}
case ir.OCLOSURE:
if n.Transient() && n.Func().ClosureVars.Len() > 0 {
if n.Transient() && len(n.Func().ClosureVars) > 0 {
prealloc[n] = o.newTemp(closureType(n), false)
}
@@ -1271,7 +1279,7 @@ func (o *Order) expr(n, lhs ir.Node) ir.Node {
var t *types.Type
switch n.Op() {
case ir.OSLICELIT:
t = types.NewArray(n.Type().Elem(), n.Right().Int64Val())
t = types.NewArray(n.Type().Elem(), ir.Int64Val(n.Right()))
case ir.OCALLPART:
t = partialCallType(n)
}
@@ -1281,12 +1289,12 @@ func (o *Order) expr(n, lhs ir.Node) ir.Node {
case ir.ODOTTYPE, ir.ODOTTYPE2:
n.SetLeft(o.expr(n.Left(), nil))
if !isdirectiface(n.Type()) || instrumenting {
n = o.copyExpr(n, n.Type(), true)
n = o.copyExprClear(n)
}
case ir.ORECV:
n.SetLeft(o.expr(n.Left(), nil))
n = o.copyExpr(n, n.Type(), true)
n = o.copyExprClear(n)
case ir.OEQ, ir.ONE, ir.OLT, ir.OLE, ir.OGT, ir.OGE:
n.SetLeft(o.expr(n.Left(), nil))
@@ -1375,15 +1383,6 @@ func (o *Order) expr(n, lhs ir.Node) ir.Node {
return n
}
// okas creates and returns an assignment of val to ok,
// including an explicit conversion if necessary.
func okas(ok, val ir.Node) ir.Node {
if !ir.IsBlank(ok) {
val = conv(val, ok.Type())
}
return ir.Nod(ir.OAS, ok, val)
}
// as2 orders OAS2XXXX nodes. It creates temporaries to ensure left-to-right assignment.
// The caller should order the right-hand side of the assignment before calling order.as2.
// It rewrites,
@@ -1418,7 +1417,7 @@ func (o *Order) as2(n ir.Node) {
func (o *Order) okAs2(n ir.Node) {
var tmp1, tmp2 ir.Node
if !ir.IsBlank(n.List().First()) {
typ := n.Right().Type()
typ := n.Rlist().First().Type()
tmp1 = o.newTemp(typ, typ.HasPointers())
}
@@ -1435,7 +1434,7 @@ func (o *Order) okAs2(n ir.Node) {
n.List().SetFirst(tmp1)
}
if tmp2 != nil {
r := okas(n.List().Second(), tmp2)
r := ir.Nod(ir.OAS, n.List().Second(), conv(tmp2, n.List().Second().Type()))
r = typecheck(r, ctxStmt)
o.mapAssign(r)
n.List().SetSecond(tmp2)


@@ -24,14 +24,14 @@ import (
// "Portable" code generation.
var (
compilequeue []ir.Node // functions waiting to be compiled
compilequeue []*ir.Func // functions waiting to be compiled
)
func emitptrargsmap(fn ir.Node) {
if ir.FuncName(fn) == "_" || fn.Func().Nname.Sym().Linkname != "" {
func emitptrargsmap(fn *ir.Func) {
if ir.FuncName(fn) == "_" || fn.Sym().Linkname != "" {
return
}
lsym := base.Ctxt.Lookup(fn.Func().LSym.Name + ".args_stackmap")
lsym := base.Ctxt.Lookup(fn.LSym.Name + ".args_stackmap")
nptr := int(fn.Type().ArgWidth() / int64(Widthptr))
bv := bvalloc(int32(nptr) * 2)
@@ -68,7 +68,7 @@ func emitptrargsmap(fn ir.Node) {
// really means, in memory, things with pointers needing zeroing at
// the top of the stack and increasing in size.
// Non-autos sort on offset.
func cmpstackvarlt(a, b ir.Node) bool {
func cmpstackvarlt(a, b *ir.Name) bool {
if (a.Class() == ir.PAUTO) != (b.Class() == ir.PAUTO) {
return b.Class() == ir.PAUTO
}
@@ -77,8 +77,8 @@ func cmpstackvarlt(a, b ir.Node) bool {
return a.Offset() < b.Offset()
}
if a.Name().Used() != b.Name().Used() {
return a.Name().Used()
if a.Used() != b.Used() {
return a.Used()
}
ap := a.Type().HasPointers()
@@ -87,8 +87,8 @@ func cmpstackvarlt(a, b ir.Node) bool {
return ap
}
ap = a.Name().Needzero()
bp = b.Name().Needzero()
ap = a.Needzero()
bp = b.Needzero()
if ap != bp {
return ap
}
@@ -101,7 +101,7 @@ func cmpstackvarlt(a, b ir.Node) bool {
}
// byStackvar implements sort.Interface for []*Node using cmpstackvarlt.
type byStackVar []ir.Node
type byStackVar []*ir.Name
func (s byStackVar) Len() int { return len(s) }
func (s byStackVar) Less(i, j int) bool { return cmpstackvarlt(s[i], s[j]) }
@@ -110,12 +110,12 @@ func (s byStackVar) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
func (s *ssafn) AllocFrame(f *ssa.Func) {
s.stksize = 0
s.stkptrsize = 0
fn := s.curfn.Func()
fn := s.curfn
// Mark the PAUTO's unused.
for _, ln := range fn.Dcl {
if ln.Class() == ir.PAUTO {
ln.Name().SetUsed(false)
ln.SetUsed(false)
}
}
@@ -128,7 +128,7 @@ func (s *ssafn) AllocFrame(f *ssa.Func) {
scratchUsed := false
for _, b := range f.Blocks {
for _, v := range b.Values {
if n, ok := v.Aux.(ir.Node); ok {
if n, ok := v.Aux.(*ir.Name); ok {
switch n.Class() {
case ir.PPARAM, ir.PPARAMOUT:
// Don't modify nodfp; it is a global.
@@ -158,7 +158,7 @@ func (s *ssafn) AllocFrame(f *ssa.Func) {
if n.Op() != ir.ONAME || n.Class() != ir.PAUTO {
continue
}
if !n.Name().Used() {
if !n.Used() {
fn.Dcl = fn.Dcl[:i]
break
}
@@ -193,9 +193,9 @@ func (s *ssafn) AllocFrame(f *ssa.Func) {
s.stkptrsize = Rnd(s.stkptrsize, int64(Widthreg))
}
func funccompile(fn ir.Node) {
func funccompile(fn *ir.Func) {
if Curfn != nil {
base.Fatalf("funccompile %v inside %v", fn.Func().Nname.Sym(), Curfn.Func().Nname.Sym())
base.Fatalf("funccompile %v inside %v", fn.Sym(), Curfn.Sym())
}
if fn.Type() == nil {
@@ -210,21 +210,19 @@ func funccompile(fn ir.Node) {
if fn.Body().Len() == 0 {
// Initialize ABI wrappers if necessary.
initLSym(fn.Func(), false)
initLSym(fn, false)
emitptrargsmap(fn)
return
}
dclcontext = ir.PAUTO
Curfn = fn
compile(fn)
Curfn = nil
dclcontext = ir.PEXTERN
}
func compile(fn ir.Node) {
func compile(fn *ir.Func) {
errorsBefore := base.Errors()
order(fn)
if base.Errors() > errorsBefore {
@@ -234,7 +232,7 @@ func compile(fn ir.Node) {
// Set up the function's LSym early to avoid data races with the assemblers.
// Do this before walk, as walk needs the LSym to set attributes/relocations
// (e.g. in markTypeUsedInInterface).
initLSym(fn.Func(), true)
initLSym(fn, true)
walk(fn)
if base.Errors() > errorsBefore {
@@ -259,15 +257,15 @@ func compile(fn ir.Node) {
// be types of stack objects. We need to do this here
// because symbols must be allocated before the parallel
// phase of the compiler.
for _, n := range fn.Func().Dcl {
for _, n := range fn.Dcl {
switch n.Class() {
case ir.PPARAM, ir.PPARAMOUT, ir.PAUTO:
if livenessShouldTrack(n) && n.Name().Addrtaken() {
if livenessShouldTrack(n) && n.Addrtaken() {
dtypesym(n.Type())
// Also make sure we allocate a linker symbol
// for the stack object data, for the same reason.
if fn.Func().LSym.Func().StackObjects == nil {
fn.Func().LSym.Func().StackObjects = base.Ctxt.Lookup(fn.Func().LSym.Name + ".stkobj")
if fn.LSym.Func().StackObjects == nil {
fn.LSym.Func().StackObjects = base.Ctxt.Lookup(fn.LSym.Name + ".stkobj")
}
}
}
@@ -284,7 +282,7 @@ func compile(fn ir.Node) {
// If functions are not compiled immediately,
// they are enqueued in compilequeue,
// which is drained by compileFunctions.
func compilenow(fn ir.Node) bool {
func compilenow(fn *ir.Func) bool {
// Issue 38068: if this function is a method AND an inline
// candidate AND was not inlined (yet), put it onto the compile
// queue instead of compiling it immediately. This is in case we
@@ -299,8 +297,8 @@ func compilenow(fn ir.Node) bool {
// isInlinableButNotInlined returns true if 'fn' was marked as an
// inline candidate but then never inlined (presumably because we
// found no call sites).
func isInlinableButNotInlined(fn ir.Node) bool {
if fn.Func().Nname.Func().Inl == nil {
func isInlinableButNotInlined(fn *ir.Func) bool {
if fn.Inl == nil {
return false
}
if fn.Sym() == nil {
@@ -315,7 +313,7 @@ const maxStackSize = 1 << 30
// uses it to generate a plist,
// and flushes that plist to machine code.
// worker indicates which of the backend workers is doing the processing.
func compileSSA(fn ir.Node, worker int) {
func compileSSA(fn *ir.Func, worker int) {
f := buildssa(fn, worker)
// Note: check arg size to fix issue 25507.
if f.Frontend().(*ssafn).stksize >= maxStackSize || fn.Type().ArgWidth() >= maxStackSize {
@@ -343,7 +341,7 @@ func compileSSA(fn ir.Node, worker int) {
pp.Flush() // assemble, fill in boilerplate, etc.
// fieldtrack must be called after pp.Flush. See issue 20014.
fieldtrack(pp.Text.From.Sym, fn.Func().FieldTrack)
fieldtrack(pp.Text.From.Sym, fn.FieldTrack)
}
func init() {
@@ -360,7 +358,7 @@ func compileFunctions() {
sizeCalculationDisabled = true // not safe to calculate sizes concurrently
if race.Enabled {
// Randomize compilation order to try to shake out races.
tmp := make([]ir.Node, len(compilequeue))
tmp := make([]*ir.Func, len(compilequeue))
perm := rand.Perm(len(compilequeue))
for i, v := range perm {
tmp[v] = compilequeue[i]
@@ -376,7 +374,7 @@ func compileFunctions() {
}
var wg sync.WaitGroup
base.Ctxt.InParallel = true
c := make(chan ir.Node, base.Flag.LowerC)
c := make(chan *ir.Func, base.Flag.LowerC)
for i := 0; i < base.Flag.LowerC; i++ {
wg.Add(1)
go func(worker int) {
@@ -398,9 +396,10 @@ func compileFunctions() {
}
func debuginfo(fnsym *obj.LSym, infosym *obj.LSym, curfn interface{}) ([]dwarf.Scope, dwarf.InlCalls) {
fn := curfn.(ir.Node)
if fn.Func().Nname != nil {
if expect := fn.Func().Nname.Sym().Linksym(); fnsym != expect {
fn := curfn.(*ir.Func)
if fn.Nname != nil {
if expect := fn.Sym().Linksym(); fnsym != expect {
base.Fatalf("unexpected fnsym: %v != %v", fnsym, expect)
}
}
@@ -430,18 +429,25 @@ func debuginfo(fnsym *obj.LSym, infosym *obj.LSym, curfn interface{}) ([]dwarf.S
//
// These two adjustments keep toolstash -cmp working for now.
// Deciding the right answer is, as they say, future work.
isODCLFUNC := fn.Op() == ir.ODCLFUNC
//
// We can tell the difference between the old ODCLFUNC and ONAME
// cases by looking at the infosym.Name. If it's empty, DebugInfo is
// being called from (*obj.Link).populateDWARF, which used to use
// the ODCLFUNC. If it's non-empty (the name will end in $abstract),
// DebugInfo is being called from (*obj.Link).DwarfAbstractFunc,
// which used to use the ONAME form.
isODCLFUNC := infosym.Name == ""
var apdecls []ir.Node
var apdecls []*ir.Name
// Populate decls for fn.
if isODCLFUNC {
for _, n := range fn.Func().Dcl {
for _, n := range fn.Dcl {
if n.Op() != ir.ONAME { // might be OTYPE or OLITERAL
continue
}
switch n.Class() {
case ir.PAUTO:
if !n.Name().Used() {
if !n.Used() {
// Text == nil -> generating abstract function
if fnsym.Func().Text != nil {
base.Fatalf("debuginfo unused node (AllocFrame should truncate fn.Func.Dcl)")
@@ -457,7 +463,7 @@ func debuginfo(fnsym *obj.LSym, infosym *obj.LSym, curfn interface{}) ([]dwarf.S
}
}
decls, dwarfVars := createDwarfVars(fnsym, isODCLFUNC, fn.Func(), apdecls)
decls, dwarfVars := createDwarfVars(fnsym, isODCLFUNC, fn, apdecls)
// For each type referenced by the functions auto vars but not
// already referenced by a dwarf var, attach an R_USETYPE relocation to
@@ -478,7 +484,7 @@ func debuginfo(fnsym *obj.LSym, infosym *obj.LSym, curfn interface{}) ([]dwarf.S
var varScopes []ir.ScopeID
for _, decl := range decls {
pos := declPos(decl)
varScopes = append(varScopes, findScope(fn.Func().Marks, pos))
varScopes = append(varScopes, findScope(fn.Marks, pos))
}
scopes := assembleScopes(fnsym, fn, dwarfVars, varScopes)
@@ -489,7 +495,7 @@ func debuginfo(fnsym *obj.LSym, infosym *obj.LSym, curfn interface{}) ([]dwarf.S
return scopes, inlcalls
}
func declPos(decl ir.Node) src.XPos {
func declPos(decl *ir.Name) src.XPos {
if decl.Name().Defn != nil && (decl.Name().Captured() || decl.Name().Byval()) {
// It's not clear which position is correct for captured variables here:
// * decl.Pos is the wrong position for captured variables, in the inner
@@ -512,10 +518,10 @@ func declPos(decl ir.Node) src.XPos {
// createSimpleVars creates a DWARF entry for every variable declared in the
// function, claiming that they are permanently on the stack.
func createSimpleVars(fnsym *obj.LSym, apDecls []ir.Node) ([]ir.Node, []*dwarf.Var, map[ir.Node]bool) {
func createSimpleVars(fnsym *obj.LSym, apDecls []*ir.Name) ([]*ir.Name, []*dwarf.Var, map[*ir.Name]bool) {
var vars []*dwarf.Var
var decls []ir.Node
selected := make(map[ir.Node]bool)
var decls []*ir.Name
selected := make(map[*ir.Name]bool)
for _, n := range apDecls {
if ir.IsAutoTmp(n) {
continue
@@ -528,7 +534,7 @@ func createSimpleVars(fnsym *obj.LSym, apDecls []ir.Node) ([]ir.Node, []*dwarf.V
return decls, vars, selected
}
func createSimpleVar(fnsym *obj.LSym, n ir.Node) *dwarf.Var {
func createSimpleVar(fnsym *obj.LSym, n *ir.Name) *dwarf.Var {
var abbrev int
offs := n.Offset()
@@ -579,13 +585,13 @@ func createSimpleVar(fnsym *obj.LSym, n ir.Node) *dwarf.Var {
// createComplexVars creates recomposed DWARF vars with location lists,
// suitable for describing optimized code.
func createComplexVars(fnsym *obj.LSym, fn *ir.Func) ([]ir.Node, []*dwarf.Var, map[ir.Node]bool) {
func createComplexVars(fnsym *obj.LSym, fn *ir.Func) ([]*ir.Name, []*dwarf.Var, map[*ir.Name]bool) {
debugInfo := fn.DebugInfo.(*ssa.FuncDebug)
// Produce a DWARF variable entry for each user variable.
var decls []ir.Node
var decls []*ir.Name
var vars []*dwarf.Var
ssaVars := make(map[ir.Node]bool)
ssaVars := make(map[*ir.Name]bool)
for varID, dvar := range debugInfo.Vars {
n := dvar
@@ -605,11 +611,11 @@ func createComplexVars(fnsym *obj.LSym, fn *ir.Func) ([]ir.Node, []*dwarf.Var, m
// createDwarfVars processes fn, returning a list of DWARF variables and the
// Nodes they represent.
func createDwarfVars(fnsym *obj.LSym, complexOK bool, fn *ir.Func, apDecls []ir.Node) ([]ir.Node, []*dwarf.Var) {
func createDwarfVars(fnsym *obj.LSym, complexOK bool, fn *ir.Func, apDecls []*ir.Name) ([]*ir.Name, []*dwarf.Var) {
// Collect a raw list of DWARF vars.
var vars []*dwarf.Var
var decls []ir.Node
var selected map[ir.Node]bool
var decls []*ir.Name
var selected map[*ir.Name]bool
if base.Ctxt.Flag_locationlists && base.Ctxt.Flag_optimize && fn.DebugInfo != nil && complexOK {
decls, vars, selected = createComplexVars(fnsym, fn)
} else {
@@ -667,7 +673,7 @@ func createDwarfVars(fnsym *obj.LSym, complexOK bool, fn *ir.Func, apDecls []ir.
// misleading location for the param (we want pointer-to-heap
// and not stack).
// TODO(thanm): generate a better location expression
stackcopy := n.Name().Param.Stackcopy
stackcopy := n.Name().Stackcopy
if stackcopy != nil && (stackcopy.Class() == ir.PPARAM || stackcopy.Class() == ir.PPARAMOUT) {
abbrev = dwarf.DW_ABRV_PARAM_LOCLIST
isReturnValue = (stackcopy.Class() == ir.PPARAMOUT)
@@ -708,10 +714,10 @@ func createDwarfVars(fnsym *obj.LSym, complexOK bool, fn *ir.Func, apDecls []ir.
// function that is not local to the package being compiled, then the
// names of the variables may have been "versioned" to avoid conflicts
// with local vars; disregard this versioning when sorting.
func preInliningDcls(fnsym *obj.LSym) []ir.Node {
fn := base.Ctxt.DwFixups.GetPrecursorFunc(fnsym).(ir.Node)
var rdcl []ir.Node
for _, n := range fn.Func().Inl.Dcl {
func preInliningDcls(fnsym *obj.LSym) []*ir.Name {
fn := base.Ctxt.DwFixups.GetPrecursorFunc(fnsym).(*ir.Func)
var rdcl []*ir.Name
for _, n := range fn.Inl.Dcl {
c := n.Sym().Name[0]
// Avoid reporting "_" parameters, since if there are more than
// one, it can result in a collision later on, as in #23179.


@@ -7,38 +7,37 @@ package gc
import (
"cmd/compile/internal/ir"
"cmd/compile/internal/types"
"cmd/internal/src"
"reflect"
"sort"
"testing"
)
func typeWithoutPointers() *types.Type {
t := types.New(types.TSTRUCT)
f := &types.Field{Type: types.New(types.TINT)}
t.SetFields([]*types.Field{f})
return t
return types.NewStruct(types.NoPkg, []*types.Field{
types.NewField(src.NoXPos, nil, types.New(types.TINT)),
})
}
func typeWithPointers() *types.Type {
t := types.New(types.TSTRUCT)
f := &types.Field{Type: types.NewPtr(types.New(types.TINT))}
t.SetFields([]*types.Field{f})
return t
return types.NewStruct(types.NoPkg, []*types.Field{
types.NewField(src.NoXPos, nil, types.NewPtr(types.New(types.TINT))),
})
}
func markUsed(n ir.Node) ir.Node {
n.Name().SetUsed(true)
func markUsed(n *ir.Name) *ir.Name {
n.SetUsed(true)
return n
}
func markNeedZero(n ir.Node) ir.Node {
n.Name().SetNeedzero(true)
func markNeedZero(n *ir.Name) *ir.Name {
n.SetNeedzero(true)
return n
}
// Test all code paths for cmpstackvarlt.
func TestCmpstackvar(t *testing.T) {
nod := func(xoffset int64, t *types.Type, s *types.Sym, cl ir.Class) ir.Node {
nod := func(xoffset int64, t *types.Type, s *types.Sym, cl ir.Class) *ir.Name {
if s == nil {
s = &types.Sym{Name: "."}
}
@@ -49,7 +48,7 @@ func TestCmpstackvar(t *testing.T) {
return n
}
testdata := []struct {
a, b ir.Node
a, b *ir.Name
lt bool
}{
{
@@ -146,24 +145,24 @@ func TestCmpstackvar(t *testing.T) {
for _, d := range testdata {
got := cmpstackvarlt(d.a, d.b)
if got != d.lt {
t.Errorf("want %#v < %#v", d.a, d.b)
t.Errorf("want %v < %v", d.a, d.b)
}
// If we expect a < b to be true, check that b < a is false.
if d.lt && cmpstackvarlt(d.b, d.a) {
t.Errorf("unexpected %#v < %#v", d.b, d.a)
t.Errorf("unexpected %v < %v", d.b, d.a)
}
}
}
func TestStackvarSort(t *testing.T) {
nod := func(xoffset int64, t *types.Type, s *types.Sym, cl ir.Class) ir.Node {
nod := func(xoffset int64, t *types.Type, s *types.Sym, cl ir.Class) *ir.Name {
n := NewName(s)
n.SetType(t)
n.SetOffset(xoffset)
n.SetClass(cl)
return n
}
inp := []ir.Node{
inp := []*ir.Name{
nod(0, &types.Type{}, &types.Sym{}, ir.PFUNC),
nod(0, &types.Type{}, &types.Sym{}, ir.PAUTO),
nod(0, &types.Type{}, &types.Sym{}, ir.PFUNC),
@@ -178,7 +177,7 @@ func TestStackvarSort(t *testing.T) {
nod(0, &types.Type{}, &types.Sym{Name: "abc"}, ir.PAUTO),
nod(0, &types.Type{}, &types.Sym{Name: "xyz"}, ir.PAUTO),
}
want := []ir.Node{
want := []*ir.Name{
nod(0, &types.Type{}, &types.Sym{}, ir.PFUNC),
nod(0, &types.Type{}, &types.Sym{}, ir.PFUNC),
nod(10, &types.Type{}, &types.Sym{}, ir.PFUNC),


@@ -23,6 +23,14 @@ const smallBlocks = 500
const debugPhi = false
// FwdRefAux wraps an arbitrary ir.Node as an ssa.Aux for use with OpFwdref.
type FwdRefAux struct {
_ [0]func() // ensure ir.Node isn't compared for equality
N ir.Node
}
func (FwdRefAux) CanBeAnSSAAux() {}
// insertPhis finds all the places in the function where a phi is
// necessary and inserts them.
// Uses FwdRef ops to find all uses of variables, and s.defvars to find
@@ -79,7 +87,7 @@ func (s *phiState) insertPhis() {
if v.Op != ssa.OpFwdRef {
continue
}
var_ := v.Aux.(ir.Node)
var_ := v.Aux.(FwdRefAux).N
// Optimization: look back 1 block for the definition.
if len(b.Preds) == 1 {
@@ -180,6 +188,11 @@ levels:
if v.Op == ssa.OpPhi {
v.AuxInt = 0
}
// Any remaining FwdRefs are dead code.
if v.Op == ssa.OpFwdRef {
v.Op = ssa.OpUnknown
v.Aux = nil
}
}
}
}
@@ -319,7 +332,7 @@ func (s *phiState) resolveFwdRefs() {
if v.Op != ssa.OpFwdRef {
continue
}
n := s.varnum[v.Aux.(ir.Node)]
n := s.varnum[v.Aux.(FwdRefAux).N]
v.Op = ssa.OpCopy
v.Aux = nil
v.AddArg(values[n])
@@ -450,7 +463,7 @@ func (s *simplePhiState) insertPhis() {
continue
}
s.fwdrefs = append(s.fwdrefs, v)
var_ := v.Aux.(ir.Node)
var_ := v.Aux.(FwdRefAux).N
if _, ok := s.defvars[b.ID][var_]; !ok {
s.defvars[b.ID][var_] = v // treat FwdDefs as definitions.
}
@@ -464,7 +477,7 @@ loop:
v := s.fwdrefs[len(s.fwdrefs)-1]
s.fwdrefs = s.fwdrefs[:len(s.fwdrefs)-1]
b := v.Block
var_ := v.Aux.(ir.Node)
var_ := v.Aux.(FwdRefAux).N
if b == s.f.Entry {
// No variable should be live at entry.
s.s.Fatalf("Value live at entry. It shouldn't be. func %s, node %v, value %v", s.f.Name, var_, v)
@@ -531,7 +544,7 @@ func (s *simplePhiState) lookupVarOutgoing(b *ssa.Block, t *types.Type, var_ ir.
}
}
// Generate a FwdRef for the variable and return that.
v := b.NewValue0A(line, ssa.OpFwdRef, t, var_)
v := b.NewValue0A(line, ssa.OpFwdRef, t, FwdRefAux{N: var_})
s.defvars[b.ID][var_] = v
s.s.addNamedValue(var_, v)
s.fwdrefs = append(s.fwdrefs, v)


@@ -101,10 +101,10 @@ type BlockEffects struct {
// A collection of global state used by liveness analysis.
type Liveness struct {
fn ir.Node
fn *ir.Func
f *ssa.Func
vars []ir.Node
idx map[ir.Node]int32
vars []*ir.Name
idx map[*ir.Name]int32
stkptrsize int64
be []BlockEffects
@@ -212,14 +212,14 @@ func livenessShouldTrack(n ir.Node) bool {
// getvariables returns the list of on-stack variables that we need to track
// and a map for looking up indices by *Node.
func getvariables(fn ir.Node) ([]ir.Node, map[ir.Node]int32) {
var vars []ir.Node
for _, n := range fn.Func().Dcl {
func getvariables(fn *ir.Func) ([]*ir.Name, map[*ir.Name]int32) {
var vars []*ir.Name
for _, n := range fn.Dcl {
if livenessShouldTrack(n) {
vars = append(vars, n)
}
}
idx := make(map[ir.Node]int32, len(vars))
idx := make(map[*ir.Name]int32, len(vars))
for i, n := range vars {
idx[n] = int32(i)
}
@@ -276,13 +276,14 @@ func (lv *Liveness) valueEffects(v *ssa.Value) (int32, liveEffect) {
return -1, 0
}
nn := n.(*ir.Name)
// AllocFrame has dropped unused variables from
// lv.fn.Func.Dcl, but they might still be referenced by
// OpVarFoo pseudo-ops. Ignore them to prevent "lost track of
// variable" ICEs (issue 19632).
switch v.Op {
case ssa.OpVarDef, ssa.OpVarKill, ssa.OpVarLive, ssa.OpKeepAlive:
if !n.Name().Used() {
if !nn.Name().Used() {
return -1, 0
}
}
@@ -305,7 +306,7 @@ func (lv *Liveness) valueEffects(v *ssa.Value) (int32, liveEffect) {
return -1, 0
}
if pos, ok := lv.idx[n]; ok {
if pos, ok := lv.idx[nn]; ok {
return pos, effect
}
return -1, 0
@@ -323,9 +324,9 @@ func affectedNode(v *ssa.Value) (ir.Node, ssa.SymEffect) {
return n, ssa.SymWrite
case ssa.OpVarLive:
return v.Aux.(ir.Node), ssa.SymRead
return v.Aux.(*ir.Name), ssa.SymRead
case ssa.OpVarDef, ssa.OpVarKill:
return v.Aux.(ir.Node), ssa.SymWrite
return v.Aux.(*ir.Name), ssa.SymWrite
case ssa.OpKeepAlive:
n, _ := AutoVar(v.Args[0])
return n, ssa.SymRead
@@ -340,7 +341,7 @@ func affectedNode(v *ssa.Value) (ir.Node, ssa.SymEffect) {
case nil, *obj.LSym:
// ok, but no node
return nil, e
case ir.Node:
case *ir.Name:
return a, e
default:
base.Fatalf("weird aux: %s", v.LongString())
@@ -356,7 +357,7 @@ type livenessFuncCache struct {
// Constructs a new liveness structure used to hold the global state of the
// liveness computation. The cfg argument is a slice of *BasicBlocks and the
// vars argument is a slice of *Nodes.
func newliveness(fn ir.Node, f *ssa.Func, vars []ir.Node, idx map[ir.Node]int32, stkptrsize int64) *Liveness {
func newliveness(fn *ir.Func, f *ssa.Func, vars []*ir.Name, idx map[*ir.Name]int32, stkptrsize int64) *Liveness {
lv := &Liveness{
fn: fn,
f: f,
@@ -416,7 +417,7 @@ func onebitwalktype1(t *types.Type, off int64, bv bvec) {
return
}
switch t.Etype {
switch t.Kind() {
case types.TPTR, types.TUNSAFEPTR, types.TFUNC, types.TCHAN, types.TMAP:
if off&int64(Widthptr-1) != 0 {
base.Fatalf("onebitwalktype1: invalid alignment, %v", t)
@@ -482,7 +483,7 @@ func onebitwalktype1(t *types.Type, off int64, bv bvec) {
// Generates live pointer value maps for arguments and local variables. The
// this argument and the in arguments are always assumed live. The vars
// argument is a slice of *Nodes.
func (lv *Liveness) pointerMap(liveout bvec, vars []ir.Node, args, locals bvec) {
func (lv *Liveness) pointerMap(liveout bvec, vars []*ir.Name, args, locals bvec) {
for i := int32(0); ; i++ {
i = liveout.Next(i)
if i < 0 {
@@ -788,14 +789,14 @@ func (lv *Liveness) epilogue() {
// pointers to copy values back to the stack).
// TODO: if the output parameter is heap-allocated, then we
// don't need to keep the stack copy live?
if lv.fn.Func().HasDefer() {
if lv.fn.HasDefer() {
for i, n := range lv.vars {
if n.Class() == ir.PPARAMOUT {
if n.Name().IsOutputParamHeapAddr() {
// Just to be paranoid. Heap addresses are PAUTOs.
base.Fatalf("variable %v both output param and heap output param", n)
}
if n.Name().Param.Heapaddr != nil {
if n.Name().Heapaddr != nil {
// If this variable moved to the heap, then
// its stack copy is not live.
continue
@@ -891,7 +892,7 @@ func (lv *Liveness) epilogue() {
if n.Class() == ir.PPARAM {
continue // ok
}
base.Fatalf("bad live variable at entry of %v: %L", lv.fn.Func().Nname, n)
base.Fatalf("bad live variable at entry of %v: %L", lv.fn.Nname, n)
}
// Record live variables.
@@ -904,7 +905,7 @@ func (lv *Liveness) epilogue() {
}
// If we have an open-coded deferreturn call, make a liveness map for it.
if lv.fn.Func().OpenCodedDeferDisallowed() {
if lv.fn.OpenCodedDeferDisallowed() {
lv.livenessMap.deferreturn = LivenessDontCare
} else {
lv.livenessMap.deferreturn = LivenessIndex{
@@ -922,7 +923,7 @@ func (lv *Liveness) epilogue() {
// input parameters.
for j, n := range lv.vars {
if n.Class() != ir.PPARAM && lv.stackMaps[0].Get(int32(j)) {
lv.f.Fatalf("%v %L recorded as live on entry", lv.fn.Func().Nname, n)
lv.f.Fatalf("%v %L recorded as live on entry", lv.fn.Nname, n)
}
}
}
@@ -980,7 +981,7 @@ func (lv *Liveness) showlive(v *ssa.Value, live bvec) {
return
}
pos := lv.fn.Func().Nname.Pos()
pos := lv.fn.Nname.Pos()
if v != nil {
pos = v.Pos
}
@@ -1090,7 +1091,7 @@ func (lv *Liveness) printDebug() {
if b == lv.f.Entry {
live := lv.stackMaps[0]
fmt.Printf("(%s) function entry\n", base.FmtPos(lv.fn.Func().Nname.Pos()))
fmt.Printf("(%s) function entry\n", base.FmtPos(lv.fn.Nname.Pos()))
fmt.Printf("\tlive=")
printed = false
for j, n := range lv.vars {
@@ -1266,7 +1267,7 @@ func liveness(e *ssafn, f *ssa.Func, pp *Progs) LivenessMap {
}
// Emit the live pointer map data structures
ls := e.curfn.Func().LSym
ls := e.curfn.LSym
fninfo := ls.Func()
fninfo.GCArgs, fninfo.GCLocals = lv.emit()
@@ -1300,7 +1301,7 @@ func liveness(e *ssafn, f *ssa.Func, pp *Progs) LivenessMap {
// to fully initialize t.
func isfat(t *types.Type) bool {
if t != nil {
switch t.Etype {
switch t.Kind() {
case types.TSLICE, types.TSTRING,
types.TINTER: // maybe remove later
return true


@@ -60,13 +60,13 @@ func ispkgin(pkgs []string) bool {
return false
}
func instrument(fn ir.Node) {
if fn.Func().Pragma&ir.Norace != 0 {
func instrument(fn *ir.Func) {
if fn.Pragma&ir.Norace != 0 {
return
}
if !base.Flag.Race || !ispkgin(norace_inst_pkgs) {
fn.Func().SetInstrumentBody(true)
fn.SetInstrumentBody(true)
}
if base.Flag.Race {
@@ -74,8 +74,8 @@ func instrument(fn ir.Node) {
base.Pos = src.NoXPos
if thearch.LinkArch.Arch.Family != sys.AMD64 {
fn.Func().Enter.Prepend(mkcall("racefuncenterfp", nil, nil))
fn.Func().Exit.Append(mkcall("racefuncexit", nil, nil))
fn.Enter.Prepend(mkcall("racefuncenterfp", nil, nil))
fn.Exit.Append(mkcall("racefuncexit", nil, nil))
} else {
// nodpc is the PC of the caller as extracted by
@@ -83,12 +83,12 @@ func instrument(fn ir.Node) {
// This only works for amd64. This will not
// work on arm or others that might support
// race in the future.
nodpc := ir.Copy(nodfp)
nodpc := ir.Copy(nodfp).(*ir.Name)
nodpc.SetType(types.Types[types.TUINTPTR])
nodpc.SetOffset(int64(-Widthptr))
fn.Func().Dcl = append(fn.Func().Dcl, nodpc)
fn.Func().Enter.Prepend(mkcall("racefuncenter", nil, nil, nodpc))
fn.Func().Exit.Append(mkcall("racefuncexit", nil, nil))
fn.Dcl = append(fn.Dcl, nodpc)
fn.Enter.Prepend(mkcall("racefuncenter", nil, nil, nodpc))
fn.Exit.Append(mkcall("racefuncexit", nil, nil))
}
base.Pos = lno
}


@@ -49,7 +49,7 @@ func typecheckrangeExpr(n ir.Node) {
// delicate little dance. see typecheckas2
ls := n.List().Slice()
for i1, n1 := range ls {
if n1.Name() == nil || n1.Name().Defn != n {
if !ir.DeclaredBy(n1, n) {
ls[i1] = typecheck(ls[i1], ctxExpr|ctxAssign)
}
}
@@ -61,7 +61,7 @@ func typecheckrangeExpr(n ir.Node) {
var t1, t2 *types.Type
toomany := false
switch t.Etype {
switch t.Kind() {
default:
base.ErrorfAt(n.Pos(), "cannot range over %L", n.Right())
return
@@ -88,7 +88,7 @@ func typecheckrangeExpr(n ir.Node) {
case types.TSTRING:
t1 = types.Types[types.TINT]
t2 = types.Runetype
t2 = types.RuneType
}
if n.List().Len() > 2 || toomany {
@@ -115,7 +115,7 @@ func typecheckrangeExpr(n ir.Node) {
}
if v1 != nil {
if v1.Name() != nil && v1.Name().Defn == n {
if ir.DeclaredBy(v1, n) {
v1.SetType(t1)
} else if v1.Type() != nil {
if op, why := assignop(t1, v1.Type()); op == ir.OXXX {
@@ -126,7 +126,7 @@ func typecheckrangeExpr(n ir.Node) {
}
if v2 != nil {
if v2.Name() != nil && v2.Name().Defn == n {
if ir.DeclaredBy(v2, n) {
v2.SetType(t2)
} else if v2.Type() != nil {
if op, why := assignop(t2, v2.Type()); op == ir.OXXX {
@@ -157,15 +157,19 @@ func cheapComputableIndex(width int64) bool {
// simpler forms. The result must be assigned back to n.
// Node n may also be modified in place, and may also be
// the returned node.
func walkrange(n ir.Node) ir.Node {
if isMapClear(n) {
m := n.Right()
func walkrange(nrange ir.Node) ir.Node {
if isMapClear(nrange) {
m := nrange.Right()
lno := setlineno(m)
n = mapClear(m)
n := mapClear(m)
base.Pos = lno
return n
}
nfor := ir.NodAt(nrange.Pos(), ir.OFOR, nil, nil)
nfor.SetInit(nrange.Init())
nfor.SetSym(nrange.Sym())
// variable name conventions:
// ohv1, hv1, hv2: hidden (old) val 1, 2
// ha, hit: hidden aggregate, iterator
@@ -173,20 +177,19 @@ func walkrange(n ir.Node) ir.Node {
// hb: hidden bool
// a, v1, v2: not hidden aggregate, val 1, 2
t := n.Type()
t := nrange.Type()
a := n.Right()
a := nrange.Right()
lno := setlineno(a)
n.SetRight(nil)
var v1, v2 ir.Node
l := n.List().Len()
l := nrange.List().Len()
if l > 0 {
v1 = n.List().First()
v1 = nrange.List().First()
}
if l > 1 {
v2 = n.List().Second()
v2 = nrange.List().Second()
}
if ir.IsBlank(v2) {
@@ -201,24 +204,18 @@ func walkrange(n ir.Node) ir.Node {
base.Fatalf("walkrange: v2 != nil while v1 == nil")
}
// n.List has no meaning anymore, clear it
// to avoid erroneous processing by racewalk.
n.PtrList().Set(nil)
var ifGuard ir.Node
translatedLoopOp := ir.OFOR
var body []ir.Node
var init []ir.Node
switch t.Etype {
switch t.Kind() {
default:
base.Fatalf("walkrange")
case types.TARRAY, types.TSLICE:
if arrayClear(n, v1, v2, a) {
if nn := arrayClear(nrange, v1, v2, a); nn != nil {
base.Pos = lno
return n
return nn
}
// order.stmt arranged for a copy of the array/slice variable if needed.
@@ -230,8 +227,8 @@ func walkrange(n ir.Node) ir.Node {
init = append(init, ir.Nod(ir.OAS, hv1, nil))
init = append(init, ir.Nod(ir.OAS, hn, ir.Nod(ir.OLEN, ha, nil)))
n.SetLeft(ir.Nod(ir.OLT, hv1, hn))
n.SetRight(ir.Nod(ir.OAS, hv1, ir.Nod(ir.OADD, hv1, nodintconst(1))))
nfor.SetLeft(ir.Nod(ir.OLT, hv1, hn))
nfor.SetRight(ir.Nod(ir.OAS, hv1, ir.Nod(ir.OADD, hv1, nodintconst(1))))
// for range ha { body }
if v1 == nil {
@@ -245,7 +242,7 @@ func walkrange(n ir.Node) ir.Node {
}
// for v1, v2 := range ha { body }
if cheapComputableIndex(n.Type().Elem().Width) {
if cheapComputableIndex(nrange.Type().Elem().Width) {
// v1, v2 = hv1, ha[hv1]
tmp := ir.Nod(ir.OINDEX, ha, hv1)
tmp.SetBounded(true)
@@ -272,9 +269,9 @@ func walkrange(n ir.Node) ir.Node {
// Enhance the prove pass to understand this.
ifGuard = ir.Nod(ir.OIF, nil, nil)
ifGuard.SetLeft(ir.Nod(ir.OLT, hv1, hn))
translatedLoopOp = ir.OFORUNTIL
nfor.SetOp(ir.OFORUNTIL)
hp := temp(types.NewPtr(n.Type().Elem()))
hp := temp(types.NewPtr(nrange.Type().Elem()))
tmp := ir.Nod(ir.OINDEX, ha, nodintconst(0))
tmp.SetBounded(true)
init = append(init, ir.Nod(ir.OAS, hp, ir.Nod(ir.OADDR, tmp, nil)))
@@ -293,16 +290,15 @@ func walkrange(n ir.Node) ir.Node {
// end of the allocation.
a = ir.Nod(ir.OAS, hp, addptr(hp, t.Elem().Width))
a = typecheck(a, ctxStmt)
n.PtrList().Set1(a)
nfor.PtrList().Set1(a)
case types.TMAP:
// order.stmt allocated the iterator for us.
// we only use a once, so no copy needed.
ha := a
hit := prealloc[n]
hit := prealloc[nrange]
th := hit.Type()
n.SetLeft(nil)
keysym := th.Field(0).Sym // depends on layout of iterator struct. See reflect.go:hiter
elemsym := th.Field(1).Sym // ditto
@@ -310,11 +306,11 @@ func walkrange(n ir.Node) ir.Node {
fn = substArgTypes(fn, t.Key(), t.Elem(), th)
init = append(init, mkcall1(fn, nil, nil, typename(t), ha, ir.Nod(ir.OADDR, hit, nil)))
n.SetLeft(ir.Nod(ir.ONE, nodSym(ir.ODOT, hit, keysym), nodnil()))
nfor.SetLeft(ir.Nod(ir.ONE, nodSym(ir.ODOT, hit, keysym), nodnil()))
fn = syslook("mapiternext")
fn = substArgTypes(fn, th)
n.SetRight(mkcall1(fn, nil, nil, ir.Nod(ir.OADDR, hit, nil)))
nfor.SetRight(mkcall1(fn, nil, nil, ir.Nod(ir.OADDR, hit, nil)))
key := nodSym(ir.ODOT, hit, keysym)
key = ir.Nod(ir.ODEREF, key, nil)
@ -335,8 +331,6 @@ func walkrange(n ir.Node) ir.Node {
// order.stmt arranged for a copy of the channel variable.
ha := a
n.SetLeft(nil)
hv1 := temp(t.Elem())
hv1.SetTypecheck(1)
if t.Elem().HasPointers() {
@ -344,12 +338,12 @@ func walkrange(n ir.Node) ir.Node {
}
hb := temp(types.Types[types.TBOOL])
n.SetLeft(ir.Nod(ir.ONE, hb, nodbool(false)))
nfor.SetLeft(ir.Nod(ir.ONE, hb, nodbool(false)))
a := ir.Nod(ir.OAS2RECV, nil, nil)
a.SetTypecheck(1)
a.PtrList().Set2(hv1, hb)
a.SetRight(ir.Nod(ir.ORECV, ha, nil))
n.Left().PtrInit().Set1(a)
a.PtrRlist().Set1(ir.Nod(ir.ORECV, ha, nil))
nfor.Left().PtrInit().Set1(a)
if v1 == nil {
body = nil
} else {
@ -381,13 +375,13 @@ func walkrange(n ir.Node) ir.Node {
hv1 := temp(types.Types[types.TINT])
hv1t := temp(types.Types[types.TINT])
hv2 := temp(types.Runetype)
hv2 := temp(types.RuneType)
// hv1 := 0
init = append(init, ir.Nod(ir.OAS, hv1, nil))
// hv1 < len(ha)
n.SetLeft(ir.Nod(ir.OLT, hv1, ir.Nod(ir.OLEN, ha, nil)))
nfor.SetLeft(ir.Nod(ir.OLT, hv1, ir.Nod(ir.OLEN, ha, nil)))
if v1 != nil {
// hv1t = hv1
@ -397,7 +391,7 @@ func walkrange(n ir.Node) ir.Node {
// hv2 := rune(ha[hv1])
nind := ir.Nod(ir.OINDEX, ha, hv1)
nind.SetBounded(true)
body = append(body, ir.Nod(ir.OAS, hv2, conv(nind, types.Runetype)))
body = append(body, ir.Nod(ir.OAS, hv2, conv(nind, types.RuneType)))
// if hv2 < utf8.RuneSelf
nif := ir.Nod(ir.OIF, nil, nil)
@ -431,24 +425,25 @@ func walkrange(n ir.Node) ir.Node {
}
}
n.SetOp(translatedLoopOp)
typecheckslice(init, ctxStmt)
if ifGuard != nil {
ifGuard.PtrInit().Append(init...)
ifGuard = typecheck(ifGuard, ctxStmt)
} else {
n.PtrInit().Append(init...)
nfor.PtrInit().Append(init...)
}
typecheckslice(n.Left().Init().Slice(), ctxStmt)
typecheckslice(nfor.Left().Init().Slice(), ctxStmt)
n.SetLeft(typecheck(n.Left(), ctxExpr))
n.SetLeft(defaultlit(n.Left(), nil))
n.SetRight(typecheck(n.Right(), ctxStmt))
nfor.SetLeft(typecheck(nfor.Left(), ctxExpr))
nfor.SetLeft(defaultlit(nfor.Left(), nil))
nfor.SetRight(typecheck(nfor.Right(), ctxStmt))
typecheckslice(body, ctxStmt)
n.PtrBody().Prepend(body...)
nfor.PtrBody().Append(body...)
nfor.PtrBody().Append(nrange.Body().Slice()...)
var n ir.Node = nfor
if ifGuard != nil {
ifGuard.PtrBody().Set1(n)
n = ifGuard
@ -472,7 +467,7 @@ func isMapClear(n ir.Node) bool {
return false
}
if n.Op() != ir.ORANGE || n.Type().Etype != types.TMAP || n.List().Len() != 1 {
if n.Op() != ir.ORANGE || n.Type().Kind() != types.TMAP || n.List().Len() != 1 {
return false
}
@ -482,7 +477,7 @@ func isMapClear(n ir.Node) bool {
}
// Require k to be a new variable name.
if k.Name() == nil || k.Name().Defn != n {
if !ir.DeclaredBy(k, n) {
return false
}
@ -534,31 +529,31 @@ func mapClear(m ir.Node) ir.Node {
// in which the evaluation of a is side-effect-free.
//
// Parameters are as in walkrange: "for v1, v2 = range a".
func arrayClear(n, v1, v2, a ir.Node) bool {
func arrayClear(loop, v1, v2, a ir.Node) ir.Node {
if base.Flag.N != 0 || instrumenting {
return false
return nil
}
if v1 == nil || v2 != nil {
return false
return nil
}
if n.Body().Len() != 1 || n.Body().First() == nil {
return false
if loop.Body().Len() != 1 || loop.Body().First() == nil {
return nil
}
stmt := n.Body().First() // only stmt in body
stmt := loop.Body().First() // only stmt in body
if stmt.Op() != ir.OAS || stmt.Left().Op() != ir.OINDEX {
return false
return nil
}
if !samesafeexpr(stmt.Left().Left(), a) || !samesafeexpr(stmt.Left().Right(), v1) {
return false
return nil
}
elemsize := n.Type().Elem().Width
elemsize := loop.Type().Elem().Width
if elemsize <= 0 || !isZero(stmt.Right()) {
return false
return nil
}
// Convert to
@ -568,8 +563,7 @@ func arrayClear(n, v1, v2, a ir.Node) bool {
// memclr{NoHeap,Has}Pointers(hp, hn)
// i = len(a) - 1
// }
n.SetOp(ir.OIF)
n := ir.Nod(ir.OIF, nil, nil)
n.PtrBody().Set(nil)
n.SetLeft(ir.Nod(ir.ONE, ir.Nod(ir.OLEN, a, nil), nodintconst(0)))
@ -593,7 +587,7 @@ func arrayClear(n, v1, v2, a ir.Node) bool {
var fn ir.Node
if a.Type().Elem().HasPointers() {
// memclrHasPointers(hp, hn)
Curfn.Func().SetWBPos(stmt.Pos())
Curfn.SetWBPos(stmt.Pos())
fn = mkcall("memclrHasPointers", nil, nil, hp, hn)
} else {
// memclrNoHeapPointers(hp, hn)
@ -611,7 +605,7 @@ func arrayClear(n, v1, v2, a ir.Node) bool {
n.SetLeft(defaultlit(n.Left(), nil))
typecheckslice(n.Body().Slice(), ctxStmt)
n = walkstmt(n)
return true
return n
}
// addptr returns (*T)(uintptr(p) + n).
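The walkrange hunks above rewrite `for v1 := range a` over a slice into an explicit counted loop: the range expression is copied (`ha`), its length hoisted (`hn`), and the condition and post statement become `hv1 < hn; hv1 = hv1 + 1`. A hand-written sketch of that transformation (not compiler output; the `h`-prefixed names mirror the temporaries in the pass):

```go
package main

import "fmt"

// rangeSum mirrors what walkrange does to `for v1 := range a`:
// an explicit counted loop over a copy of the slice header.
func rangeSum(a []int) int {
	sum := 0
	ha := a       // order.stmt's copy of the range expression
	hn := len(ha) // hoisted length, computed once
	for hv1 := 0; hv1 < hn; hv1++ { // nfor.Left / nfor.Right above
		v1 := hv1 // ranging without a second variable yields indices
		sum += v1
	}
	return sum
}

func main() {
	fmt.Println(rangeSum([]int{10, 20, 30})) // indices 0+1+2 = 3
}
```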


@ -68,7 +68,7 @@ func imethodSize() int { return 4 + 4 } // Sizeof(runtime.imeth
func commonSize() int { return 4*Widthptr + 8 + 8 } // Sizeof(runtime._type{})
func uncommonSize(t *types.Type) int { // Sizeof(runtime.uncommontype{})
if t.Sym == nil && len(methods(t)) == 0 {
if t.Sym() == nil && len(methods(t)) == 0 {
return 0
}
return 4 + 2 + 2 + 4 + 4
@ -85,7 +85,6 @@ func bmap(t *types.Type) *types.Type {
return t.MapType().Bucket
}
bucket := types.New(types.TSTRUCT)
keytype := t.Key()
elemtype := t.Elem()
dowidth(keytype)
@ -119,7 +118,7 @@ func bmap(t *types.Type) *types.Type {
// Arrange for the bucket to have no pointers by changing
// the type of the overflow field to uintptr in this case.
// See comment on hmap.overflow in runtime/map.go.
otyp := types.NewPtr(bucket)
otyp := types.Types[types.TUNSAFEPTR]
if !elemtype.HasPointers() && !keytype.HasPointers() {
otyp = types.Types[types.TUINTPTR]
}
@ -127,8 +126,8 @@ func bmap(t *types.Type) *types.Type {
field = append(field, overflow)
// link up fields
bucket := types.NewStruct(types.NoPkg, field[:])
bucket.SetNoalg(true)
bucket.SetFields(field[:])
dowidth(bucket)
// Check invariants that map code depends on.
@ -221,9 +220,8 @@ func hmap(t *types.Type) *types.Type {
makefield("extra", types.Types[types.TUNSAFEPTR]),
}
hmap := types.New(types.TSTRUCT)
hmap := types.NewStruct(types.NoPkg, fields)
hmap.SetNoalg(true)
hmap.SetFields(fields)
dowidth(hmap)
// The size of hmap should be 48 bytes on 64 bit
@ -285,9 +283,8 @@ func hiter(t *types.Type) *types.Type {
}
// build iterator struct holding the above fields
hiter := types.New(types.TSTRUCT)
hiter := types.NewStruct(types.NoPkg, fields)
hiter.SetNoalg(true)
hiter.SetFields(fields)
dowidth(hiter)
if hiter.Width != int64(12*Widthptr) {
base.Fatalf("hash_iter size not correct %d %d", hiter.Width, 12*Widthptr)
@ -304,7 +301,7 @@ func deferstruct(stksize int64) *types.Type {
// Unlike the global makefield function, this one needs to set Pkg
// because these types might be compared (in SSA CSE sorting).
// TODO: unify this makefield and the global one above.
sym := &types.Sym{Name: name, Pkg: ir.LocalPkg}
sym := &types.Sym{Name: name, Pkg: types.LocalPkg}
return types.NewField(src.NoXPos, sym, typ)
}
argtype := types.NewArray(types.Types[types.TUINT8], stksize)
@ -332,9 +329,8 @@ func deferstruct(stksize int64) *types.Type {
}
// build struct holding the above fields
s := types.New(types.TSTRUCT)
s := types.NewStruct(types.NoPkg, fields)
s.SetNoalg(true)
s.SetFields(fields)
s.Width = widstruct(s, s, 0, 1)
s.Align = uint8(Widthptr)
return s
@ -347,7 +343,7 @@ func methodfunc(f *types.Type, receiver *types.Type) *types.Type {
if receiver != nil {
inLen++
}
in := make([]ir.Node, 0, inLen)
in := make([]*ir.Field, 0, inLen)
if receiver != nil {
d := anonfield(receiver)
@ -356,12 +352,12 @@ func methodfunc(f *types.Type, receiver *types.Type) *types.Type {
for _, t := range f.Params().Fields().Slice() {
d := anonfield(t.Type)
d.SetIsDDD(t.IsDDD())
d.IsDDD = t.IsDDD()
in = append(in, d)
}
outLen := f.Results().Fields().Len()
out := make([]ir.Node, 0, outLen)
out := make([]*ir.Field, 0, outLen)
for _, t := range f.Results().Fields().Slice() {
d := anonfield(t.Type)
out = append(out, d)
@ -448,7 +444,7 @@ func methods(t *types.Type) []*Sig {
func imethods(t *types.Type) []*Sig {
var methods []*Sig
for _, f := range t.Fields().Slice() {
if f.Type.Etype != types.TFUNC || f.Sym == nil {
if f.Type.Kind() != types.TFUNC || f.Sym == nil {
continue
}
if f.Sym.IsBlank() {
@ -495,7 +491,7 @@ func dimportpath(p *types.Pkg) {
}
str := p.Path
if p == ir.LocalPkg {
if p == types.LocalPkg {
// Note: myimportpath != "", or else dgopkgpath won't call dimportpath.
str = base.Ctxt.Pkgpath
}
@ -512,7 +508,7 @@ func dgopkgpath(s *obj.LSym, ot int, pkg *types.Pkg) int {
return duintptr(s, ot, 0)
}
if pkg == ir.LocalPkg && base.Ctxt.Pkgpath == "" {
if pkg == types.LocalPkg && base.Ctxt.Pkgpath == "" {
// If we don't know the full import path of the package being compiled
// (i.e. -p was not passed on the compiler command line), emit a reference to
// type..importpath.""., which the linker will rewrite using the correct import path.
@ -531,7 +527,7 @@ func dgopkgpathOff(s *obj.LSym, ot int, pkg *types.Pkg) int {
if pkg == nil {
return duint32(s, ot, 0)
}
if pkg == ir.LocalPkg && base.Ctxt.Pkgpath == "" {
if pkg == types.LocalPkg && base.Ctxt.Pkgpath == "" {
// If we don't know the full import path of the package being compiled
// (i.e. -p was not passed on the compiler command line), emit a reference to
// type..importpath.""., which the linker will rewrite using the correct import path.
@ -640,7 +636,7 @@ func dname(name, tag string, pkg *types.Pkg, exported bool) *obj.LSym {
// backing array of the []method field is written (by dextratypeData).
func dextratype(lsym *obj.LSym, ot int, t *types.Type, dataAdd int) int {
m := methods(t)
if t.Sym == nil && len(m) == 0 {
if t.Sym() == nil && len(m) == 0 {
return ot
}
noff := int(Rnd(int64(ot), int64(Widthptr)))
@ -672,16 +668,16 @@ func dextratype(lsym *obj.LSym, ot int, t *types.Type, dataAdd int) int {
}
func typePkg(t *types.Type) *types.Pkg {
tsym := t.Sym
tsym := t.Sym()
if tsym == nil {
switch t.Etype {
switch t.Kind() {
case types.TARRAY, types.TSLICE, types.TPTR, types.TCHAN:
if t.Elem() != nil {
tsym = t.Elem().Sym
tsym = t.Elem().Sym()
}
}
}
if tsym != nil && t != types.Types[t.Etype] && t != types.Errortype {
if tsym != nil && t != types.Types[t.Kind()] && t != types.ErrorType {
return tsym.Pkg
}
return nil
@ -753,7 +749,7 @@ func typeptrdata(t *types.Type) int64 {
return 0
}
switch t.Etype {
switch t.Kind() {
case types.TPTR,
types.TUNSAFEPTR,
types.TFUNC,
@ -823,7 +819,7 @@ func dcommontype(lsym *obj.LSym, t *types.Type) int {
var sptr *obj.LSym
if !t.IsPtr() || t.IsPtrElem() {
tptr := types.NewPtr(t)
if t.Sym != nil || methods(tptr) != nil {
if t.Sym() != nil || methods(tptr) != nil {
sptrWeak = false
}
sptr = dtypesym(tptr)
@ -855,7 +851,7 @@ func dcommontype(lsym *obj.LSym, t *types.Type) int {
if uncommonSize(t) != 0 {
tflag |= tflagUncommon
}
if t.Sym != nil && t.Sym.Name != "" {
if t.Sym() != nil && t.Sym().Name != "" {
tflag |= tflagNamed
}
if IsRegularMemory(t) {
@ -872,12 +868,12 @@ func dcommontype(lsym *obj.LSym, t *types.Type) int {
if !strings.HasPrefix(p, "*") {
p = "*" + p
tflag |= tflagExtraStar
if t.Sym != nil {
exported = types.IsExported(t.Sym.Name)
if t.Sym() != nil {
exported = types.IsExported(t.Sym().Name)
}
} else {
if t.Elem() != nil && t.Elem().Sym != nil {
exported = types.IsExported(t.Elem().Sym.Name)
if t.Elem() != nil && t.Elem().Sym() != nil {
exported = types.IsExported(t.Elem().Sym().Name)
}
}
@ -895,7 +891,7 @@ func dcommontype(lsym *obj.LSym, t *types.Type) int {
ot = duint8(lsym, ot, t.Align) // align
ot = duint8(lsym, ot, t.Align) // fieldAlign
i = kinds[t.Etype]
i = kinds[t.Kind()]
if isdirectiface(t) {
i |= objabi.KindDirectIface
}
@ -1001,7 +997,7 @@ func typename(t *types.Type) ir.Node {
}
n := ir.Nod(ir.OADDR, ir.AsNode(s.Def), nil)
n.SetType(types.NewPtr(ir.AsNode(s.Def).Type()))
n.SetType(types.NewPtr(s.Def.Type()))
n.SetTypecheck(1)
return n
}
@ -1021,7 +1017,7 @@ func itabname(t, itype *types.Type) ir.Node {
}
n := ir.Nod(ir.OADDR, ir.AsNode(s.Def), nil)
n.SetType(types.NewPtr(ir.AsNode(s.Def).Type()))
n.SetType(types.NewPtr(s.Def.Type()))
n.SetTypecheck(1)
return n
}
@ -1029,7 +1025,7 @@ func itabname(t, itype *types.Type) ir.Node {
// isreflexive reports whether t has a reflexive equality operator.
// That is, if x==x for all x of type t.
func isreflexive(t *types.Type) bool {
switch t.Etype {
switch t.Kind() {
case types.TBOOL,
types.TINT,
types.TUINT,
@ -1075,7 +1071,7 @@ func isreflexive(t *types.Type) bool {
// needkeyupdate reports whether map updates with t as a key
// need the key to be updated.
func needkeyupdate(t *types.Type) bool {
switch t.Etype {
switch t.Kind() {
case types.TBOOL, types.TINT, types.TUINT, types.TINT8, types.TUINT8, types.TINT16, types.TUINT16, types.TINT32, types.TUINT32,
types.TINT64, types.TUINT64, types.TUINTPTR, types.TPTR, types.TUNSAFEPTR, types.TCHAN:
return false
@ -1104,7 +1100,7 @@ func needkeyupdate(t *types.Type) bool {
// hashMightPanic reports whether the hash of a map key of type t might panic.
func hashMightPanic(t *types.Type) bool {
switch t.Etype {
switch t.Kind() {
case types.TINTER:
return true
@ -1128,8 +1124,8 @@ func hashMightPanic(t *types.Type) bool {
// They've been separate internally to make error messages
// better, but we have to merge them in the reflect tables.
func formalType(t *types.Type) *types.Type {
if t == types.Bytetype || t == types.Runetype {
return types.Types[t.Etype]
if t == types.ByteType || t == types.RuneType {
return types.Types[t.Kind()]
}
return t
}
@ -1152,19 +1148,19 @@ func dtypesym(t *types.Type) *obj.LSym {
// emit the type structures for int, float, etc.
tbase := t
if t.IsPtr() && t.Sym == nil && t.Elem().Sym != nil {
if t.IsPtr() && t.Sym() == nil && t.Elem().Sym() != nil {
tbase = t.Elem()
}
dupok := 0
if tbase.Sym == nil {
if tbase.Sym() == nil {
dupok = obj.DUPOK
}
if base.Ctxt.Pkgpath != "runtime" || (tbase != types.Types[tbase.Etype] && tbase != types.Bytetype && tbase != types.Runetype && tbase != types.Errortype) { // int, float, etc
if base.Ctxt.Pkgpath != "runtime" || (tbase != types.Types[tbase.Kind()] && tbase != types.ByteType && tbase != types.RuneType && tbase != types.ErrorType) { // int, float, etc
// named types from other files are defined only by those files
if tbase.Sym != nil && tbase.Sym.Pkg != ir.LocalPkg {
if tbase.Sym() != nil && tbase.Sym().Pkg != types.LocalPkg {
if i, ok := typeSymIdx[tbase]; ok {
lsym.Pkg = tbase.Sym.Pkg.Prefix
lsym.Pkg = tbase.Sym().Pkg.Prefix
if t != tbase {
lsym.SymIdx = int32(i[1])
} else {
@ -1175,13 +1171,13 @@ func dtypesym(t *types.Type) *obj.LSym {
return lsym
}
// TODO(mdempsky): Investigate whether this can happen.
if tbase.Etype == types.TFORW {
if tbase.Kind() == types.TFORW {
return lsym
}
}
ot := 0
switch t.Etype {
switch t.Kind() {
default:
ot = dcommontype(lsym, t)
ot = dextratype(lsym, ot, t, 0)
@ -1262,8 +1258,8 @@ func dtypesym(t *types.Type) *obj.LSym {
ot = dcommontype(lsym, t)
var tpkg *types.Pkg
if t.Sym != nil && t != types.Types[t.Etype] && t != types.Errortype {
tpkg = t.Sym.Pkg
if t.Sym() != nil && t != types.Types[t.Kind()] && t != types.ErrorType {
tpkg = t.Sym().Pkg
}
ot = dgopkgpath(lsym, ot, tpkg)
@ -1328,7 +1324,7 @@ func dtypesym(t *types.Type) *obj.LSym {
ot = dextratype(lsym, ot, t, 0)
case types.TPTR:
if t.Elem().Etype == types.TANY {
if t.Elem().Kind() == types.TANY {
// ../../../../runtime/type.go:/UnsafePointerType
ot = dcommontype(lsym, t)
ot = dextratype(lsym, ot, t, 0)
@ -1397,13 +1393,13 @@ func dtypesym(t *types.Type) *obj.LSym {
// When buildmode=shared, all types are in typelinks so the
// runtime can deduplicate type pointers.
keep := base.Ctxt.Flag_dynlink
if !keep && t.Sym == nil {
if !keep && t.Sym() == nil {
// For an unnamed type, we only need the link if the type can
// be created at run time by reflect.PtrTo and similar
// functions. If the type exists in the program, those
// functions must return the existing type structure rather
// than creating a new one.
switch t.Etype {
switch t.Kind() {
case types.TPTR, types.TARRAY, types.TCHAN, types.TFUNC, types.TMAP, types.TSLICE, types.TSTRUCT:
keep = true
}
@ -1541,7 +1537,7 @@ func dumpsignats() {
for _, ts := range signats {
t := ts.t
dtypesym(t)
if t.Sym != nil {
if t.Sym() != nil {
dtypesym(types.NewPtr(t))
}
}
@ -1572,7 +1568,7 @@ func dumptabs() {
}
// process ptabs
if ir.LocalPkg.Name == "main" && len(ptabs) > 0 {
if types.LocalPkg.Name == "main" && len(ptabs) > 0 {
ot := 0
s := base.Ctxt.Lookup("go.plugin.tabs")
for _, p := range ptabs {
@ -1616,7 +1612,7 @@ func dumpbasictypes() {
// another possible choice would be package main,
// but using runtime means fewer copies in object files.
if base.Ctxt.Pkgpath == "runtime" {
for i := types.EType(1); i <= types.TBOOL; i++ {
for i := types.Kind(1); i <= types.TBOOL; i++ {
dtypesym(types.NewPtr(types.Types[i]))
}
dtypesym(types.NewPtr(types.Types[types.TSTRING]))
@ -1624,9 +1620,9 @@ func dumpbasictypes() {
// emit type structs for error and func(error) string.
// The latter is the type of an auto-generated wrapper.
dtypesym(types.NewPtr(types.Errortype))
dtypesym(types.NewPtr(types.ErrorType))
dtypesym(functype(nil, []ir.Node{anonfield(types.Errortype)}, []ir.Node{anonfield(types.Types[types.TSTRING])}))
dtypesym(functype(nil, []*ir.Field{anonfield(types.ErrorType)}, []*ir.Field{anonfield(types.Types[types.TSTRING])}))
// add paths for runtime and main, which 6l imports implicitly.
dimportpath(Runtimepkg)
@ -1665,7 +1661,7 @@ func (a typesByString) Less(i, j int) bool {
// will be equal for the above checks, but different in DWARF output.
// Sort by source position to ensure deterministic order.
// See issues 27013 and 30202.
if a[i].t.Etype == types.TINTER && a[i].t.Methods().Len() > 0 {
if a[i].t.Kind() == types.TINTER && a[i].t.Methods().Len() > 0 {
return a[i].t.Methods().Index(0).Pos.Before(a[j].t.Methods().Index(0).Pos)
}
return false
@ -1821,7 +1817,7 @@ func (p *GCProg) emit(t *types.Type, offset int64) {
p.w.Ptr(offset / int64(Widthptr))
return
}
switch t.Etype {
switch t.Kind() {
default:
base.Fatalf("GCProg.emit: unexpected type %v", t)


@ -32,10 +32,10 @@ import "cmd/compile/internal/ir"
// when analyzing a set of mutually recursive functions.
type bottomUpVisitor struct {
analyze func([]ir.Node, bool)
analyze func([]*ir.Func, bool)
visitgen uint32
nodeID map[ir.Node]uint32
stack []ir.Node
nodeID map[*ir.Func]uint32
stack []*ir.Func
}
// visitBottomUp invokes analyze on the ODCLFUNC nodes listed in list.
@ -51,18 +51,18 @@ type bottomUpVisitor struct {
// If recursive is false, the list consists of only a single function and its closures.
// If recursive is true, the list may still contain only a single function,
// if that function is itself recursive.
func visitBottomUp(list []ir.Node, analyze func(list []ir.Node, recursive bool)) {
func visitBottomUp(list []ir.Node, analyze func(list []*ir.Func, recursive bool)) {
var v bottomUpVisitor
v.analyze = analyze
v.nodeID = make(map[ir.Node]uint32)
v.nodeID = make(map[*ir.Func]uint32)
for _, n := range list {
if n.Op() == ir.ODCLFUNC && !n.Func().IsHiddenClosure() {
v.visit(n)
v.visit(n.(*ir.Func))
}
}
}
func (v *bottomUpVisitor) visit(n ir.Node) uint32 {
func (v *bottomUpVisitor) visit(n *ir.Func) uint32 {
if id := v.nodeID[n]; id > 0 {
// already visited
return id
@ -80,41 +80,41 @@ func (v *bottomUpVisitor) visit(n ir.Node) uint32 {
case ir.ONAME:
if n.Class() == ir.PFUNC {
if n != nil && n.Name().Defn != nil {
if m := v.visit(n.Name().Defn); m < min {
if m := v.visit(n.Name().Defn.(*ir.Func)); m < min {
min = m
}
}
}
case ir.OMETHEXPR:
fn := methodExprName(n)
if fn != nil && fn.Name().Defn != nil {
if m := v.visit(fn.Name().Defn); m < min {
if fn != nil && fn.Defn != nil {
if m := v.visit(fn.Defn.(*ir.Func)); m < min {
min = m
}
}
case ir.ODOTMETH:
fn := methodExprName(n)
if fn != nil && fn.Op() == ir.ONAME && fn.Class() == ir.PFUNC && fn.Name().Defn != nil {
if m := v.visit(fn.Name().Defn); m < min {
if fn != nil && fn.Op() == ir.ONAME && fn.Class() == ir.PFUNC && fn.Defn != nil {
if m := v.visit(fn.Defn.(*ir.Func)); m < min {
min = m
}
}
case ir.OCALLPART:
fn := ir.AsNode(callpartMethod(n).Nname)
if fn != nil && fn.Op() == ir.ONAME && fn.Class() == ir.PFUNC && fn.Name().Defn != nil {
if m := v.visit(fn.Name().Defn); m < min {
if m := v.visit(fn.Name().Defn.(*ir.Func)); m < min {
min = m
}
}
case ir.OCLOSURE:
if m := v.visit(n.Func().Decl); m < min {
if m := v.visit(n.Func()); m < min {
min = m
}
}
return true
})
if (min == id || min == id+1) && !n.Func().IsHiddenClosure() {
if (min == id || min == id+1) && !n.IsHiddenClosure() {
// This node is the root of a strongly connected component.
// The original min passed to visitcodelist was v.nodeID[n]+1.
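The bottomUpVisitor above is a variant of Tarjan's strongly-connected-components algorithm over the call graph: each function gets a visit id, `visit` returns the minimum id reachable through calls, and when that minimum comes back to the node's own id the node is the root of an SCC. A toy sketch over a string-keyed call graph (not the compiler's `ir` types; `^uint32(0)` marks finished nodes as in the real code):

```go
package main

import "fmt"

// visitor is a minimal model of bottomUpVisitor: ids, a stack,
// and SCC extraction when min returns to the root's id.
type visitor struct {
	calls    map[string][]string // toy call graph
	id       map[string]uint32
	visitgen uint32
	stack    []string
	sccs     [][]string
}

func (v *visitor) visit(n string) uint32 {
	if id := v.id[n]; id > 0 {
		return id // already visited (or done: ^uint32(0))
	}
	v.visitgen++
	id := v.visitgen
	v.id[n] = id
	v.visitgen++
	min := v.visitgen
	v.stack = append(v.stack, n)
	for _, m := range v.calls[n] {
		if x := v.visit(m); x < min {
			min = x
		}
	}
	if min == id || min == id+1 {
		// n is the root of an SCC: pop the stack through n.
		var scc []string
		for i := len(v.stack) - 1; ; i-- {
			x := v.stack[i]
			scc = append(scc, x)
			v.id[x] = ^uint32(0) // mark done
			if x == n {
				v.stack = v.stack[:i]
				break
			}
		}
		v.sccs = append(v.sccs, scc)
	}
	return min
}

func main() {
	v := &visitor{
		calls: map[string][]string{"a": {"b"}, "b": {"a"}, "c": {"a"}},
		id:    map[string]uint32{},
	}
	v.visit("c")
	fmt.Println(len(v.sccs)) // {a,b} mutually recursive, then {c}: 2
}
```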


@ -47,36 +47,30 @@ func typecheckselect(sel ir.Node) {
}
base.ErrorfAt(pos, "select case must be receive, send or assign recv")
// convert x = <-c into OSELRECV(x, <-c).
// remove implicit conversions; the eventual assignment
// will reintroduce them.
case ir.OAS:
// convert x = <-c into OSELRECV(x, <-c).
// remove implicit conversions; the eventual assignment
// will reintroduce them.
if (n.Right().Op() == ir.OCONVNOP || n.Right().Op() == ir.OCONVIFACE) && n.Right().Implicit() {
n.SetRight(n.Right().Left())
}
if n.Right().Op() != ir.ORECV {
base.ErrorfAt(n.Pos(), "select assignment must have receive on right hand side")
break
}
n.SetOp(ir.OSELRECV)
// convert x, ok = <-c into OSELRECV2(x, <-c) with ntest=ok
case ir.OAS2RECV:
if n.Right().Op() != ir.ORECV {
// convert x, ok = <-c into OSELRECV2(x, <-c) with ntest=ok
if n.Rlist().First().Op() != ir.ORECV {
base.ErrorfAt(n.Pos(), "select assignment must have receive on right hand side")
break
}
n.SetOp(ir.OSELRECV2)
n.SetLeft(n.List().First())
n.PtrList().Set1(n.List().Second())
// convert <-c into OSELRECV(N, <-c)
case ir.ORECV:
n = ir.NodAt(n.Pos(), ir.OSELRECV, nil, n)
// convert <-c into OSELRECV(_, <-c)
n = ir.NodAt(n.Pos(), ir.OSELRECV, ir.BlankNode, n)
n.SetTypecheck(1)
ncase.SetLeft(n)
@ -134,28 +128,19 @@ func walkselectcases(cases *ir.Nodes) []ir.Node {
case ir.OSEND:
// already ok
case ir.OSELRECV, ir.OSELRECV2:
if n.Op() == ir.OSELRECV || n.List().Len() == 0 {
if n.Left() == nil {
n = n.Right()
} else {
n.SetOp(ir.OAS)
}
case ir.OSELRECV:
if ir.IsBlank(n.Left()) {
n = n.Right()
break
}
n.SetOp(ir.OAS)
if n.Left() == nil {
ir.BlankNode = typecheck(ir.BlankNode, ctxExpr|ctxAssign)
n.SetLeft(ir.BlankNode)
case ir.OSELRECV2:
if ir.IsBlank(n.List().First()) && ir.IsBlank(n.List().Second()) {
n = n.Rlist().First()
break
}
n.SetOp(ir.OAS2)
n.PtrList().Prepend(n.Left())
n.PtrRlist().Set1(n.Right())
n.SetRight(nil)
n.SetLeft(nil)
n.SetTypecheck(0)
n = typecheck(n, ctxStmt)
n.SetOp(ir.OAS2RECV)
}
l = append(l, n)
@ -176,20 +161,30 @@ func walkselectcases(cases *ir.Nodes) []ir.Node {
dflt = cas
continue
}
// Lower x, _ = <-c to x = <-c.
if n.Op() == ir.OSELRECV2 && ir.IsBlank(n.List().Second()) {
n = ir.NodAt(n.Pos(), ir.OSELRECV, n.List().First(), n.Rlist().First())
n.SetTypecheck(1)
cas.SetLeft(n)
}
switch n.Op() {
case ir.OSEND:
n.SetRight(ir.Nod(ir.OADDR, n.Right(), nil))
n.SetRight(typecheck(n.Right(), ctxExpr))
case ir.OSELRECV, ir.OSELRECV2:
if n.Op() == ir.OSELRECV2 && n.List().Len() == 0 {
n.SetOp(ir.OSELRECV)
}
if n.Left() != nil {
case ir.OSELRECV:
if !ir.IsBlank(n.Left()) {
n.SetLeft(ir.Nod(ir.OADDR, n.Left(), nil))
n.SetLeft(typecheck(n.Left(), ctxExpr))
}
case ir.OSELRECV2:
if !ir.IsBlank(n.List().First()) {
n.List().SetIndex(0, ir.Nod(ir.OADDR, n.List().First(), nil))
n.List().SetIndex(0, typecheck(n.List().First(), ctxExpr))
}
}
}
@ -204,6 +199,7 @@ func walkselectcases(cases *ir.Nodes) []ir.Node {
setlineno(n)
r := ir.Nod(ir.OIF, nil, nil)
r.PtrInit().Set(cas.Init().Slice())
var call ir.Node
switch n.Op() {
default:
base.Fatalf("select %v", n.Op())
@ -211,30 +207,30 @@ func walkselectcases(cases *ir.Nodes) []ir.Node {
case ir.OSEND:
// if selectnbsend(c, v) { body } else { default body }
ch := n.Left()
r.SetLeft(mkcall1(chanfn("selectnbsend", 2, ch.Type()), types.Types[types.TBOOL], r.PtrInit(), ch, n.Right()))
call = mkcall1(chanfn("selectnbsend", 2, ch.Type()), types.Types[types.TBOOL], r.PtrInit(), ch, n.Right())
case ir.OSELRECV:
// if selectnbrecv(&v, c) { body } else { default body }
ch := n.Right().Left()
elem := n.Left()
if elem == nil {
if ir.IsBlank(elem) {
elem = nodnil()
}
r.SetLeft(mkcall1(chanfn("selectnbrecv", 2, ch.Type()), types.Types[types.TBOOL], r.PtrInit(), elem, ch))
call = mkcall1(chanfn("selectnbrecv", 2, ch.Type()), types.Types[types.TBOOL], r.PtrInit(), elem, ch)
case ir.OSELRECV2:
// if selectnbrecv2(&v, &received, c) { body } else { default body }
ch := n.Right().Left()
elem := n.Left()
if elem == nil {
ch := n.Rlist().First().Left()
elem := n.List().First()
if ir.IsBlank(elem) {
elem = nodnil()
}
receivedp := ir.Nod(ir.OADDR, n.List().First(), nil)
receivedp := ir.Nod(ir.OADDR, n.List().Second(), nil)
receivedp = typecheck(receivedp, ctxExpr)
r.SetLeft(mkcall1(chanfn("selectnbrecv2", 2, ch.Type()), types.Types[types.TBOOL], r.PtrInit(), elem, receivedp, ch))
call = mkcall1(chanfn("selectnbrecv2", 2, ch.Type()), types.Types[types.TBOOL], r.PtrInit(), elem, receivedp, ch)
}
r.SetLeft(typecheck(r.Left(), ctxExpr))
r.SetLeft(typecheck(call, ctxExpr))
r.PtrBody().Set(cas.Body().Slice())
r.PtrRlist().Set(append(dflt.Init().Slice(), dflt.Body().Slice()...))
return []ir.Node{r, ir.Nod(ir.OBREAK, nil, nil)}
@ -288,11 +284,16 @@ func walkselectcases(cases *ir.Nodes) []ir.Node {
nsends++
c = n.Left()
elem = n.Right()
case ir.OSELRECV, ir.OSELRECV2:
case ir.OSELRECV:
nrecvs++
i = ncas - nrecvs
c = n.Right().Left()
elem = n.Left()
case ir.OSELRECV2:
nrecvs++
i = ncas - nrecvs
c = n.Rlist().First().Left()
elem = n.List().First()
}
casorder[i] = cas
@ -305,7 +306,7 @@ func walkselectcases(cases *ir.Nodes) []ir.Node {
c = convnop(c, types.Types[types.TUNSAFEPTR])
setField("c", c)
if elem != nil {
if !ir.IsBlank(elem) {
elem = convnop(elem, types.Types[types.TUNSAFEPTR])
setField("elem", elem)
}
@ -347,7 +348,7 @@ func walkselectcases(cases *ir.Nodes) []ir.Node {
r := ir.Nod(ir.OIF, cond, nil)
if n := cas.Left(); n != nil && n.Op() == ir.OSELRECV2 {
x := ir.Nod(ir.OAS, n.List().First(), recvOK)
x := ir.Nod(ir.OAS, n.List().Second(), recvOK)
x = typecheck(x, ctxStmt)
r.PtrBody().Append(x)
}
@ -381,7 +382,7 @@ var scase *types.Type
// Keep in sync with src/runtime/select.go.
func scasetype() *types.Type {
if scase == nil {
scase = tostruct([]ir.Node{
scase = tostruct([]*ir.Field{
namedfield("c", types.Types[types.TUNSAFEPTR]),
namedfield("elem", types.Types[types.TUNSAFEPTR]),
})
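The walkselectcases hunks above lower a two-case select with a default into a call to a non-blocking runtime helper ("if selectnbsend(c, v) { body } else { default body }"). The source-level shape of that rewrite can be sketched without the runtime API:

```go
package main

import "fmt"

// trySend has the shape walkselectcases targets: one send case
// plus a default. The compiler lowers the select below to
// roughly `if selectnbsend(c, v) { ... } else { ... }`.
func trySend(c chan int, v int) bool {
	select {
	case c <- v:
		return true // send succeeded without blocking
	default:
		return false // channel full or unbuffered with no receiver
	}
}

func main() {
	c := make(chan int, 1)
	fmt.Println(trySend(c, 1)) // buffer has room: true
	fmt.Println(trySend(c, 2)) // buffer now full: false
}
```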


@ -60,7 +60,8 @@ func (s *InitSchedule) tryStaticInit(n ir.Node) bool {
if n.Op() != ir.OAS {
return false
}
if ir.IsBlank(n.Left()) && candiscard(n.Right()) {
if ir.IsBlank(n.Left()) && !hasSideEffects(n.Right()) {
// Discard.
return true
}
lno := setlineno(n)
@ -78,7 +79,7 @@ func (s *InitSchedule) staticcopy(l ir.Node, r ir.Node) bool {
pfuncsym(l, r)
return true
}
if r.Class() != ir.PEXTERN || r.Sym().Pkg != ir.LocalPkg {
if r.Class() != ir.PEXTERN || r.Sym().Pkg != types.LocalPkg {
return false
}
if r.Name().Defn == nil { // probably zeroed but perhaps supplied externally and of unknown value
@ -134,7 +135,7 @@ func (s *InitSchedule) staticcopy(l ir.Node, r ir.Node) bool {
case ir.OSLICELIT:
// copy slice
a := s.inittemps[r]
slicesym(l, a, r.Right().Int64Val())
slicesym(l, a, ir.Int64Val(r.Right()))
return true
case ir.OARRAYLIT, ir.OSTRUCTLIT:
@ -213,7 +214,7 @@ func (s *InitSchedule) staticassign(l ir.Node, r ir.Node) bool {
case ir.OSTR2BYTES:
if l.Class() == ir.PEXTERN && r.Left().Op() == ir.OLITERAL {
sval := r.Left().StringVal()
sval := ir.StringVal(r.Left())
slicebytes(l, sval)
return true
}
@ -221,7 +222,7 @@ func (s *InitSchedule) staticassign(l ir.Node, r ir.Node) bool {
case ir.OSLICELIT:
s.initplan(r)
// Init slice.
bound := r.Right().Int64Val()
bound := ir.Int64Val(r.Right())
ta := types.NewArray(r.Type().Elem(), bound)
ta.SetNoalg(true)
a := staticname(ta)
@ -371,7 +372,8 @@ func staticname(t *types.Type) ir.Node {
// Don't use lookupN; it interns the resulting string, but these are all unique.
n := NewName(lookup(fmt.Sprintf("%s%d", obj.StaticNamePref, statuniqgen)))
statuniqgen++
addvar(n, t, ir.PEXTERN)
declare(n, ir.PEXTERN)
n.SetType(t)
n.Sym().Linksym().Set(obj.AttrLocal, true)
return n
}
@ -417,7 +419,7 @@ func getdyn(n ir.Node, top bool) initGenType {
if !top {
return initDynamic
}
if n.Right().Int64Val()/4 > int64(n.List().Len()) {
if ir.Int64Val(n.Right())/4 > int64(n.List().Len()) {
// <25% of entries have explicit values.
// Very rough estimation, it takes 4 bytes of instructions
// to initialize 1 byte of result. So don't use a static
@ -547,7 +549,8 @@ func fixedlit(ctxt initContext, kind initKind, n ir.Node, var_ ir.Node, init *ir
for _, r := range n.List().Slice() {
a, value := splitnode(r)
if a == ir.BlankNode && candiscard(value) {
if a == ir.BlankNode && !hasSideEffects(value) {
// Discard.
continue
}
@ -576,7 +579,7 @@ func fixedlit(ctxt initContext, kind initKind, n ir.Node, var_ ir.Node, init *ir
case initKindStatic:
genAsStatic(a)
case initKindDynamic, initKindLocalCode:
a = orderStmtInPlace(a, map[string][]ir.Node{})
a = orderStmtInPlace(a, map[string][]*ir.Name{})
a = walkstmt(a)
init.Append(a)
default:
@ -593,12 +596,12 @@ func isSmallSliceLit(n ir.Node) bool {
r := n.Right()
return smallintconst(r) && (n.Type().Elem().Width == 0 || r.Int64Val() <= smallArrayBytes/n.Type().Elem().Width)
return smallintconst(r) && (n.Type().Elem().Width == 0 || ir.Int64Val(r) <= smallArrayBytes/n.Type().Elem().Width)
}
func slicelit(ctxt initContext, n ir.Node, var_ ir.Node, init *ir.Nodes) {
// make an array type corresponding the number of elements we have
t := types.NewArray(n.Type().Elem(), n.Right().Int64Val())
t := types.NewArray(n.Type().Elem(), ir.Int64Val(n.Right()))
dowidth(t)
if ctxt == inNonInitFunction {
@ -686,8 +689,7 @@ func slicelit(ctxt initContext, n ir.Node, var_ ir.Node, init *ir.Nodes) {
a = ir.Nod(ir.OADDR, a, nil)
} else {
a = ir.Nod(ir.ONEW, nil, nil)
a.PtrList().Set1(typenod(t))
a = ir.Nod(ir.ONEW, ir.TypeNode(t), nil)
}
a = ir.Nod(ir.OAS, vauto, a)
@ -745,7 +747,7 @@ func slicelit(ctxt initContext, n ir.Node, var_ ir.Node, init *ir.Nodes) {
a = ir.Nod(ir.OAS, a, value)
a = typecheck(a, ctxStmt)
a = orderStmtInPlace(a, map[string][]ir.Node{})
a = orderStmtInPlace(a, map[string][]*ir.Name{})
a = walkstmt(a)
init.Append(a)
}
@ -754,7 +756,7 @@ func slicelit(ctxt initContext, n ir.Node, var_ ir.Node, init *ir.Nodes) {
a = ir.Nod(ir.OAS, var_, ir.Nod(ir.OSLICE, vauto, nil))
a = typecheck(a, ctxStmt)
a = orderStmtInPlace(a, map[string][]ir.Node{})
a = orderStmtInPlace(a, map[string][]*ir.Name{})
a = walkstmt(a)
init.Append(a)
}
@ -763,7 +765,7 @@ func maplit(n ir.Node, m ir.Node, init *ir.Nodes) {
// make the map var
a := ir.Nod(ir.OMAKE, nil, nil)
a.SetEsc(n.Esc())
a.PtrList().Set2(typenod(n.Type()), nodintconst(int64(n.List().Len())))
a.PtrList().Set2(ir.TypeNode(n.Type()), nodintconst(int64(n.List().Len())))
litas(m, a, init)
entries := n.List().Slice()
@ -889,9 +891,8 @@ func anylit(n ir.Node, var_ ir.Node, init *ir.Nodes) {
r = ir.Nod(ir.OADDR, n.Right(), nil)
r = typecheck(r, ctxExpr)
} else {
r = ir.Nod(ir.ONEW, nil, nil)
r.SetTypecheck(1)
r.SetType(t)
r = ir.Nod(ir.ONEW, ir.TypeNode(n.Left().Type()), nil)
r = typecheck(r, ctxExpr)
r.SetEsc(n.Esc())
}
@ -959,6 +960,9 @@ func anylit(n ir.Node, var_ ir.Node, init *ir.Nodes) {
}
}
// oaslit handles special composite literal assignments.
// It returns true if n's effects have been added to init,
// in which case n should be dropped from the program by the caller.
func oaslit(n ir.Node, init *ir.Nodes) bool {
if n.Left() == nil || n.Right() == nil {
// not a special composite literal assignment
@ -990,14 +994,12 @@ func oaslit(n ir.Node, init *ir.Nodes) bool {
anylit(n.Right(), n.Left(), init)
}
n.SetOp(ir.OEMPTY)
n.SetRight(nil)
return true
}
func getlit(lit ir.Node) int {
if smallintconst(lit) {
return int(lit.Int64Val())
return int(ir.Int64Val(lit))
}
return -1
}

File diff suppressed because it is too large

View File

@ -69,7 +69,7 @@ func setlineno(n ir.Node) src.XPos {
}
func lookup(name string) *types.Sym {
return ir.LocalPkg.Lookup(name)
return types.LocalPkg.Lookup(name)
}
// lookupN looks up the symbol starting with prefix and ending with
@ -78,7 +78,7 @@ func lookupN(prefix string, n int) *types.Sym {
var buf [20]byte // plenty long enough for all current users
copy(buf[:], prefix)
b := strconv.AppendInt(buf[:len(prefix)], int64(n), 10)
return ir.LocalPkg.LookupBytes(b)
return types.LocalPkg.LookupBytes(b)
}
// autolabel generates a new Name node for use with
@ -95,14 +95,14 @@ func autolabel(prefix string) *types.Sym {
if Curfn == nil {
base.Fatalf("autolabel outside function")
}
n := fn.Func().Label
fn.Func().Label++
n := fn.Label
fn.Label++
return lookupN(prefix, int(n))
}
// find all the exported symbols in package opkg
// and make them available in the current package
func importdot(opkg *types.Pkg, pack ir.Node) {
func importdot(opkg *types.Pkg, pack *ir.PkgName) {
n := 0
for _, s := range opkg.Syms {
if s.Def == nil {
@ -124,7 +124,7 @@ func importdot(opkg *types.Pkg, pack ir.Node) {
ir.Dump("s1def", ir.AsNode(s1.Def))
base.Fatalf("missing Name")
}
ir.AsNode(s1.Def).Name().Pack = pack
ir.AsNode(s1.Def).Name().PkgName = pack
s1.Origpkg = opkg
n++
}
@ -136,9 +136,9 @@ func importdot(opkg *types.Pkg, pack ir.Node) {
}
// newname returns a new ONAME Node associated with symbol s.
func NewName(s *types.Sym) ir.Node {
func NewName(s *types.Sym) *ir.Name {
n := ir.NewNameAt(base.Pos, s)
n.Name().Curfn = Curfn
n.Curfn = Curfn
return n
}
@ -181,43 +181,7 @@ func nodstr(s string) ir.Node {
return ir.NewLiteral(constant.MakeString(s))
}
// treecopy recursively copies n, with the exception of
// ONAME, OLITERAL, OTYPE, and ONONAME leaves.
// If pos.IsKnown(), it sets the source position of newly
// allocated nodes to pos.
func treecopy(n ir.Node, pos src.XPos) ir.Node {
if n == nil {
return nil
}
switch n.Op() {
default:
m := ir.SepCopy(n)
m.SetLeft(treecopy(n.Left(), pos))
m.SetRight(treecopy(n.Right(), pos))
m.PtrList().Set(listtreecopy(n.List().Slice(), pos))
if pos.IsKnown() {
m.SetPos(pos)
}
if m.Name() != nil && n.Op() != ir.ODCLFIELD {
ir.Dump("treecopy", n)
base.Fatalf("treecopy Name")
}
return m
case ir.OPACK:
// OPACK nodes are never valid in const value declarations,
// but allow them like any other declared symbol to avoid
// crashing (golang.org/issue/11361).
fallthrough
case ir.ONAME, ir.ONONAME, ir.OLITERAL, ir.ONIL, ir.OTYPE:
return n
}
}
func isptrto(t *types.Type, et types.EType) bool {
func isptrto(t *types.Type, et types.Kind) bool {
if t == nil {
return false
}
@ -228,7 +192,7 @@ func isptrto(t *types.Type, et types.EType) bool {
if t == nil {
return false
}
if t.Etype != et {
if t.Kind() != et {
return false
}
return true
@ -244,7 +208,7 @@ func methtype(t *types.Type) *types.Type {
// Strip away pointer if it's there.
if t.IsPtr() {
if t.Sym != nil {
if t.Sym() != nil {
return nil
}
t = t.Elem()
@ -254,15 +218,15 @@ func methtype(t *types.Type) *types.Type {
}
// Must be a named type or anonymous struct.
if t.Sym == nil && !t.IsStruct() {
if t.Sym() == nil && !t.IsStruct() {
return nil
}
// Check types.
if issimple[t.Etype] {
if issimple[t.Kind()] {
return t
}
switch t.Etype {
switch t.Kind() {
case types.TARRAY, types.TCHAN, types.TFUNC, types.TMAP, types.TSLICE, types.TSTRING, types.TSTRUCT:
return t
}
@ -277,7 +241,7 @@ func assignop(src, dst *types.Type) (ir.Op, string) {
if src == dst {
return ir.OCONVNOP, ""
}
if src == nil || dst == nil || src.Etype == types.TFORW || dst.Etype == types.TFORW || src.Orig == nil || dst.Orig == nil {
if src == nil || dst == nil || src.Kind() == types.TFORW || dst.Kind() == types.TFORW || src.Underlying() == nil || dst.Underlying() == nil {
return ir.OXXX, ""
}
@ -293,13 +257,13 @@ func assignop(src, dst *types.Type) (ir.Op, string) {
// we want to recompute the itab. Recomputing the itab ensures
// that itabs are unique (thus an interface with a compile-time
// type I has an itab with interface type I).
if types.Identical(src.Orig, dst.Orig) {
if types.Identical(src.Underlying(), dst.Underlying()) {
if src.IsEmptyInterface() {
// Conversion between two empty interfaces
// requires no code.
return ir.OCONVNOP, ""
}
if (src.Sym == nil || dst.Sym == nil) && !src.IsInterface() {
if (src.Sym() == nil || dst.Sym() == nil) && !src.IsInterface() {
// Conversion between two types, at least one unnamed,
// needs no conversion. The exception is nonempty interfaces
// which need to have their itab updated.
@ -308,7 +272,7 @@ func assignop(src, dst *types.Type) (ir.Op, string) {
}
// 3. dst is an interface type and src implements dst.
if dst.IsInterface() && src.Etype != types.TNIL {
if dst.IsInterface() && src.Kind() != types.TNIL {
var missing, have *types.Field
var ptr int
if implements(src, dst, &missing, &have, &ptr) {
@ -327,12 +291,12 @@ func assignop(src, dst *types.Type) (ir.Op, string) {
why = fmt.Sprintf(":\n\t%v does not implement %v (%v method is marked 'nointerface')", src, dst, missing.Sym)
} else if have != nil && have.Sym == missing.Sym {
why = fmt.Sprintf(":\n\t%v does not implement %v (wrong type for %v method)\n"+
"\t\thave %v%0S\n\t\twant %v%0S", src, dst, missing.Sym, have.Sym, have.Type, missing.Sym, missing.Type)
"\t\thave %v%S\n\t\twant %v%S", src, dst, missing.Sym, have.Sym, have.Type, missing.Sym, missing.Type)
} else if ptr != 0 {
why = fmt.Sprintf(":\n\t%v does not implement %v (%v method has pointer receiver)", src, dst, missing.Sym)
} else if have != nil {
why = fmt.Sprintf(":\n\t%v does not implement %v (missing %v method)\n"+
"\t\thave %v%0S\n\t\twant %v%0S", src, dst, missing.Sym, have.Sym, have.Type, missing.Sym, missing.Type)
"\t\thave %v%S\n\t\twant %v%S", src, dst, missing.Sym, have.Sym, have.Type, missing.Sym, missing.Type)
} else {
why = fmt.Sprintf(":\n\t%v does not implement %v (missing %v method)", src, dst, missing.Sym)
}
@ -345,7 +309,7 @@ func assignop(src, dst *types.Type) (ir.Op, string) {
return ir.OXXX, why
}
if src.IsInterface() && dst.Etype != types.TBLANK {
if src.IsInterface() && dst.Kind() != types.TBLANK {
var missing, have *types.Field
var ptr int
var why string
@ -359,14 +323,14 @@ func assignop(src, dst *types.Type) (ir.Op, string) {
// src and dst have identical element types, and
// either src or dst is not a named type.
if src.IsChan() && src.ChanDir() == types.Cboth && dst.IsChan() {
if types.Identical(src.Elem(), dst.Elem()) && (src.Sym == nil || dst.Sym == nil) {
if types.Identical(src.Elem(), dst.Elem()) && (src.Sym() == nil || dst.Sym() == nil) {
return ir.OCONVNOP, ""
}
}
// 5. src is the predeclared identifier nil and dst is a nillable type.
if src.Etype == types.TNIL {
switch dst.Etype {
if src.Kind() == types.TNIL {
switch dst.Kind() {
case types.TPTR,
types.TFUNC,
types.TMAP,
@ -380,7 +344,7 @@ func assignop(src, dst *types.Type) (ir.Op, string) {
// 6. rule about untyped constants - already converted by defaultlit.
// 7. Any typed value can be assigned to the blank identifier.
if dst.Etype == types.TBLANK {
if dst.Kind() == types.TBLANK {
return ir.OCONVNOP, ""
}
@ -409,7 +373,7 @@ func convertop(srcConstant bool, src, dst *types.Type) (ir.Op, string) {
return ir.OXXX, why
}
// (b) Disallow string to []T where T is go:notinheap.
if src.IsString() && dst.IsSlice() && dst.Elem().NotInHeap() && (dst.Elem().Etype == types.Bytetype.Etype || dst.Elem().Etype == types.Runetype.Etype) {
if src.IsString() && dst.IsSlice() && dst.Elem().NotInHeap() && (dst.Elem().Kind() == types.ByteType.Kind() || dst.Elem().Kind() == types.RuneType.Kind()) {
why := fmt.Sprintf(":\n\t%v is incomplete (or unallocatable)", dst.Elem())
return ir.OXXX, why
}
@ -429,21 +393,21 @@ func convertop(srcConstant bool, src, dst *types.Type) (ir.Op, string) {
}
// 2. Ignoring struct tags, src and dst have identical underlying types.
if types.IdenticalIgnoreTags(src.Orig, dst.Orig) {
if types.IdenticalIgnoreTags(src.Underlying(), dst.Underlying()) {
return ir.OCONVNOP, ""
}
// 3. src and dst are unnamed pointer types and, ignoring struct tags,
// their base types have identical underlying types.
if src.IsPtr() && dst.IsPtr() && src.Sym == nil && dst.Sym == nil {
if types.IdenticalIgnoreTags(src.Elem().Orig, dst.Elem().Orig) {
if src.IsPtr() && dst.IsPtr() && src.Sym() == nil && dst.Sym() == nil {
if types.IdenticalIgnoreTags(src.Elem().Underlying(), dst.Elem().Underlying()) {
return ir.OCONVNOP, ""
}
}
// 4. src and dst are both integer or floating point types.
if (src.IsInteger() || src.IsFloat()) && (dst.IsInteger() || dst.IsFloat()) {
if simtype[src.Etype] == simtype[dst.Etype] {
if simtype[src.Kind()] == simtype[dst.Kind()] {
return ir.OCONVNOP, ""
}
return ir.OCONV, ""
@ -451,7 +415,7 @@ func convertop(srcConstant bool, src, dst *types.Type) (ir.Op, string) {
// 5. src and dst are both complex types.
if src.IsComplex() && dst.IsComplex() {
if simtype[src.Etype] == simtype[dst.Etype] {
if simtype[src.Kind()] == simtype[dst.Kind()] {
return ir.OCONVNOP, ""
}
return ir.OCONV, ""
@ -471,10 +435,10 @@ func convertop(srcConstant bool, src, dst *types.Type) (ir.Op, string) {
}
if src.IsSlice() && dst.IsString() {
if src.Elem().Etype == types.Bytetype.Etype {
if src.Elem().Kind() == types.ByteType.Kind() {
return ir.OBYTES2STR, ""
}
if src.Elem().Etype == types.Runetype.Etype {
if src.Elem().Kind() == types.RuneType.Kind() {
return ir.ORUNES2STR, ""
}
}
@ -482,10 +446,10 @@ func convertop(srcConstant bool, src, dst *types.Type) (ir.Op, string) {
// 7. src is a string and dst is []byte or []rune.
// String to slice.
if src.IsString() && dst.IsSlice() {
if dst.Elem().Etype == types.Bytetype.Etype {
if dst.Elem().Kind() == types.ByteType.Kind() {
return ir.OSTR2BYTES, ""
}
if dst.Elem().Etype == types.Runetype.Etype {
if dst.Elem().Kind() == types.RuneType.Kind() {
return ir.OSTR2RUNES, ""
}
}
@ -503,7 +467,7 @@ func convertop(srcConstant bool, src, dst *types.Type) (ir.Op, string) {
// src is map and dst is a pointer to corresponding hmap.
// This rule is needed for the implementation detail that
// go gc maps are implemented as a pointer to a hmap struct.
if src.Etype == types.TMAP && dst.IsPtr() &&
if src.Kind() == types.TMAP && dst.IsPtr() &&
src.MapType().Hmap == dst.Elem() {
return ir.OCONVNOP, ""
}
@ -521,7 +485,7 @@ func assignconvfn(n ir.Node, t *types.Type, context func() string) ir.Node {
return n
}
if t.Etype == types.TBLANK && n.Type().Etype == types.TNIL {
if t.Kind() == types.TBLANK && n.Type().Kind() == types.TNIL {
base.Errorf("use of untyped nil")
}
@ -529,7 +493,7 @@ func assignconvfn(n ir.Node, t *types.Type, context func() string) ir.Node {
if n.Type() == nil {
return n
}
if t.Etype == types.TBLANK {
if t.Kind() == types.TBLANK {
return n
}
@ -559,7 +523,6 @@ func assignconvfn(n ir.Node, t *types.Type, context func() string) ir.Node {
r.SetType(t)
r.SetTypecheck(1)
r.SetImplicit(true)
r.SetOrig(n.Orig())
return r
}
@ -582,23 +545,6 @@ func backingArrayPtrLen(n ir.Node) (ptr, len ir.Node) {
return ptr, len
}
// labeledControl returns the control flow Node (for, switch, select)
// associated with the label n, if any.
func labeledControl(n ir.Node) ir.Node {
if n.Op() != ir.OLABEL {
base.Fatalf("labeledControl %v", n.Op())
}
ctl := n.Name().Defn
if ctl == nil {
return nil
}
switch ctl.Op() {
case ir.OFOR, ir.OFORUNTIL, ir.OSWITCH, ir.OSELECT:
return ctl
}
return nil
}
func syslook(name string) ir.Node {
s := Runtimepkg.Lookup(name)
if s == nil || s.Def == nil {
@ -653,15 +599,15 @@ func calcHasCall(n ir.Node) bool {
// When using soft-float, these ops might be rewritten to function calls
// so we ensure they are evaluated first.
case ir.OADD, ir.OSUB, ir.ONEG, ir.OMUL:
if thearch.SoftFloat && (isFloat[n.Type().Etype] || isComplex[n.Type().Etype]) {
if thearch.SoftFloat && (isFloat[n.Type().Kind()] || isComplex[n.Type().Kind()]) {
return true
}
case ir.OLT, ir.OEQ, ir.ONE, ir.OLE, ir.OGE, ir.OGT:
if thearch.SoftFloat && (isFloat[n.Left().Type().Etype] || isComplex[n.Left().Type().Etype]) {
if thearch.SoftFloat && (isFloat[n.Left().Type().Kind()] || isComplex[n.Left().Type().Kind()]) {
return true
}
case ir.OCONV:
if thearch.SoftFloat && ((isFloat[n.Type().Etype] || isComplex[n.Type().Etype]) || (isFloat[n.Left().Type().Etype] || isComplex[n.Left().Type().Etype])) {
if thearch.SoftFloat && ((isFloat[n.Type().Kind()] || isComplex[n.Type().Kind()]) || (isFloat[n.Left().Type().Kind()] || isComplex[n.Left().Type().Kind()])) {
return true
}
}
@ -855,7 +801,7 @@ func lookdot0(s *types.Sym, t *types.Type, save **types.Field, ignorecase bool)
}
u = t
if t.Sym != nil && t.IsPtr() && !t.Elem().IsPtr() {
if t.Sym() != nil && t.IsPtr() && !t.Elem().IsPtr() {
// If t is a defined pointer type, then x.m is shorthand for (*x).m.
u = t.Elem()
}
@ -1115,9 +1061,9 @@ func expandmeth(t *types.Type) {
t.AllMethods().Set(ms)
}
// Given funarg struct list, return list of ODCLFIELD Node fn args.
func structargs(tl *types.Type, mustname bool) []ir.Node {
var args []ir.Node
// Given funarg struct list, return list of fn args.
func structargs(tl *types.Type, mustname bool) []*ir.Field {
var args []*ir.Field
gen := 0
for _, t := range tl.Fields().Slice() {
s := t.Sym
@ -1127,8 +1073,8 @@ func structargs(tl *types.Type, mustname bool) []ir.Node {
gen++
}
a := symfield(s, t.Type)
a.SetPos(t.Pos)
a.SetIsDDD(t.IsDDD())
a.Pos = t.Pos
a.IsDDD = t.IsDDD()
args = append(args, a)
}
@ -1163,26 +1109,26 @@ func genwrapper(rcvr *types.Type, method *types.Field, newnam *types.Sym) {
// Only generate (*T).M wrappers for T.M in T's own package.
if rcvr.IsPtr() && rcvr.Elem() == method.Type.Recv().Type &&
rcvr.Elem().Sym != nil && rcvr.Elem().Sym.Pkg != ir.LocalPkg {
rcvr.Elem().Sym() != nil && rcvr.Elem().Sym().Pkg != types.LocalPkg {
return
}
// Only generate I.M wrappers for I in I's own package
// but keep doing it for error.Error (was issue #29304).
if rcvr.IsInterface() && rcvr.Sym != nil && rcvr.Sym.Pkg != ir.LocalPkg && rcvr != types.Errortype {
if rcvr.IsInterface() && rcvr.Sym() != nil && rcvr.Sym().Pkg != types.LocalPkg && rcvr != types.ErrorType {
return
}
base.Pos = autogeneratedPos
dclcontext = ir.PEXTERN
tfn := ir.Nod(ir.OTFUNC, nil, nil)
tfn.SetLeft(namedfield(".this", rcvr))
tfn.PtrList().Set(structargs(method.Type.Params(), true))
tfn.PtrRlist().Set(structargs(method.Type.Results(), false))
tfn := ir.NewFuncType(base.Pos,
namedfield(".this", rcvr),
structargs(method.Type.Params(), true),
structargs(method.Type.Results(), false))
fn := dclfunc(newnam, tfn)
fn.Func().SetDupok(true)
fn.SetDupok(true)
nthis := ir.AsNode(tfn.Type().Recv().Nname)
@ -1218,7 +1164,7 @@ func genwrapper(rcvr *types.Type, method *types.Field, newnam *types.Sym) {
fn.PtrBody().Append(as)
fn.PtrBody().Append(nodSym(ir.ORETJMP, nil, methodSym(methodrcvr, method.Sym)))
} else {
fn.Func().SetWrapper(true) // ignore frame for panic+recover matching
fn.SetWrapper(true) // ignore frame for panic+recover matching
call := ir.Nod(ir.OCALL, dot, nil)
call.PtrList().Set(paramNnames(tfn.Type()))
call.SetIsDDD(tfn.Type().IsVariadic())
@ -1239,18 +1185,17 @@ func genwrapper(rcvr *types.Type, method *types.Field, newnam *types.Sym) {
testdclstack()
}
fn = typecheck(fn, ctxStmt)
typecheckFunc(fn)
Curfn = fn
typecheckslice(fn.Body().Slice(), ctxStmt)
// Inline calls within (*T).M wrappers. This is safe because we only
// generate those wrappers within the same compilation unit as (T).M.
// TODO(mdempsky): Investigate why we can't enable this more generally.
if rcvr.IsPtr() && rcvr.Elem() == method.Type.Recv().Type && rcvr.Elem().Sym != nil {
if rcvr.IsPtr() && rcvr.Elem() == method.Type.Recv().Type && rcvr.Elem().Sym() != nil {
inlcalls(fn)
}
escapeFuncs([]ir.Node{fn}, false)
escapeFuncs([]*ir.Func{fn}, false)
Curfn = nil
xtop = append(xtop, fn)
@ -1269,11 +1214,11 @@ func hashmem(t *types.Type) ir.Node {
n := NewName(sym)
setNodeNameFunc(n)
n.SetType(functype(nil, []ir.Node{
n.SetType(functype(nil, []*ir.Field{
anonfield(types.NewPtr(t)),
anonfield(types.Types[types.TUINTPTR]),
anonfield(types.Types[types.TUINTPTR]),
}, []ir.Node{
}, []*ir.Field{
anonfield(types.Types[types.TUINTPTR]),
}))
return n
@ -1393,14 +1338,6 @@ func implements(t, iface *types.Type, m, samename **types.Field, ptr *int) bool
return true
}
func listtreecopy(l []ir.Node, pos src.XPos) []ir.Node {
var out []ir.Node
for _, n := range l {
out = append(out, treecopy(n, pos))
}
return out
}
func liststmt(l []ir.Node) ir.Node {
n := ir.Nod(ir.OBLOCK, nil, nil)
n.PtrList().Set(l)
@ -1417,9 +1354,9 @@ func ngotype(n ir.Node) *types.Sym {
return nil
}
// The result of addinit MUST be assigned back to n, e.g.
// n.Left = addinit(n.Left, init)
func addinit(n ir.Node, init []ir.Node) ir.Node {
// The result of initExpr MUST be assigned back to n, e.g.
// n.Left = initExpr(init, n.Left)
func initExpr(init []ir.Node, n ir.Node) ir.Node {
if len(init) == 0 {
return n
}
@ -1495,7 +1432,7 @@ func isdirectiface(t *types.Type) bool {
return false
}
switch t.Etype {
switch t.Kind() {
case types.TPTR:
// Pointers to notinheap types must be stored indirectly. See issue 42076.
return !t.Elem().NotInHeap()
@ -1534,7 +1471,7 @@ func ifaceData(pos src.XPos, n ir.Node, t *types.Type) ir.Node {
if t.IsInterface() {
base.Fatalf("ifaceData interface: %v", t)
}
ptr := nodlSym(pos, ir.OIDATA, n, nil)
ptr := ir.NodAt(pos, ir.OIDATA, n, nil)
if isdirectiface(t) {
ptr.SetType(t)
ptr.SetTypecheck(1)
@ -1552,9 +1489,9 @@ func ifaceData(pos src.XPos, n ir.Node, t *types.Type) ir.Node {
// typePos returns the position associated with t.
// This is where t was declared or where it appeared as a type expression.
func typePos(t *types.Type) src.XPos {
n := ir.AsNode(t.Nod)
if n == nil || !n.Pos().IsKnown() {
base.Fatalf("bad type: %v", t)
if pos := t.Pos(); pos.IsKnown() {
return pos
}
return n.Pos()
base.Fatalf("bad type: %v", t)
panic("unreachable")
}

View File

@ -157,7 +157,7 @@ func typecheckExprSwitch(n ir.Node) {
switch {
case t.IsMap():
nilonly = "map"
case t.Etype == types.TFUNC:
case t.Kind() == types.TFUNC:
nilonly = "func"
case t.IsSlice():
nilonly = "slice"
@ -332,7 +332,7 @@ type exprClause struct {
func (s *exprSwitch) Add(pos src.XPos, expr, jmp ir.Node) {
c := exprClause{pos: pos, lo: expr, hi: expr, jmp: jmp}
if okforcmp[s.exprname.Type().Etype] && expr.Op() == ir.OLITERAL {
if okforcmp[s.exprname.Type().Kind()] && expr.Op() == ir.OLITERAL {
s.clauses = append(s.clauses, c)
return
}
@ -365,8 +365,8 @@ func (s *exprSwitch) flush() {
// all we need here is consistency. We respect this
// sorting below.
sort.Slice(cc, func(i, j int) bool {
si := cc[i].lo.StringVal()
sj := cc[j].lo.StringVal()
si := ir.StringVal(cc[i].lo)
sj := ir.StringVal(cc[j].lo)
if len(si) != len(sj) {
return len(si) < len(sj)
}
@ -375,7 +375,7 @@ func (s *exprSwitch) flush() {
// runLen returns the string length associated with a
// particular run of exprClauses.
runLen := func(run []exprClause) int64 { return int64(len(run[0].lo.StringVal())) }
runLen := func(run []exprClause) int64 { return int64(len(ir.StringVal(run[0].lo))) }
// Collapse runs of consecutive strings with the same length.
var runs [][]exprClause
@ -411,7 +411,7 @@ func (s *exprSwitch) flush() {
merged := cc[:1]
for _, c := range cc[1:] {
last := &merged[len(merged)-1]
if last.jmp == c.jmp && last.hi.Int64Val()+1 == c.lo.Int64Val() {
if last.jmp == c.jmp && ir.Int64Val(last.hi)+1 == ir.Int64Val(c.lo) {
last.hi = c.lo
} else {
merged = append(merged, c)
@ -446,7 +446,7 @@ func (c *exprClause) test(exprname ir.Node) ir.Node {
// Optimize "switch true { ...}" and "switch false { ... }".
if ir.IsConst(exprname, constant.Bool) && !c.lo.Type().IsInterface() {
if exprname.BoolVal() {
if ir.BoolVal(exprname) {
return c.lo
} else {
return ir.NodAt(c.pos, ir.ONOT, c.lo, nil)

File diff suppressed because it is too large

View File

@ -15,7 +15,7 @@ import (
var basicTypes = [...]struct {
name string
etype types.EType
etype types.Kind
}{
{"int8", types.TINT8},
{"int16", types.TINT16},
@ -35,9 +35,9 @@ var basicTypes = [...]struct {
var typedefs = [...]struct {
name string
etype types.EType
sameas32 types.EType
sameas64 types.EType
etype types.Kind
sameas32 types.Kind
sameas64 types.Kind
}{
{"int", types.TINT, types.TINT32, types.TINT64},
{"uint", types.TUINT, types.TUINT32, types.TUINT64},
@ -65,17 +65,6 @@ var builtinFuncs = [...]struct {
{"recover", ir.ORECOVER},
}
// isBuiltinFuncName reports whether name matches a builtin function
// name.
func isBuiltinFuncName(name string) bool {
for _, fn := range &builtinFuncs {
if fn.name == name {
return true
}
}
return false
}
var unsafeFuncs = [...]struct {
name string
op ir.Op
@ -87,34 +76,82 @@ var unsafeFuncs = [...]struct {
// initUniverse initializes the universe block.
func initUniverse() {
lexinit()
typeinit()
lexinit1()
}
// lexinit initializes known symbols and the basic types.
func lexinit() {
for _, s := range &basicTypes {
etype := s.etype
if int(etype) >= len(types.Types) {
base.Fatalf("lexinit: %s bad etype", s.name)
}
s2 := ir.BuiltinPkg.Lookup(s.name)
t := types.Types[etype]
if t == nil {
t = types.New(etype)
t.Sym = s2
if etype != types.TANY && etype != types.TSTRING {
dowidth(t)
}
types.Types[etype] = t
}
s2.Def = typenod(t)
ir.AsNode(s2.Def).SetName(new(ir.Name))
if Widthptr == 0 {
base.Fatalf("typeinit before betypeinit")
}
slicePtrOffset = 0
sliceLenOffset = Rnd(slicePtrOffset+int64(Widthptr), int64(Widthptr))
sliceCapOffset = Rnd(sliceLenOffset+int64(Widthptr), int64(Widthptr))
sizeofSlice = Rnd(sliceCapOffset+int64(Widthptr), int64(Widthptr))
// string is the same as slice without the cap
sizeofString = Rnd(sliceLenOffset+int64(Widthptr), int64(Widthptr))
for et := types.Kind(0); et < types.NTYPE; et++ {
simtype[et] = et
}
types.Types[types.TANY] = types.New(types.TANY)
types.Types[types.TINTER] = types.NewInterface(types.LocalPkg, nil)
defBasic := func(kind types.Kind, pkg *types.Pkg, name string) *types.Type {
sym := pkg.Lookup(name)
n := ir.NewDeclNameAt(src.NoXPos, sym)
n.SetOp(ir.OTYPE)
t := types.NewBasic(kind, n)
n.SetType(t)
sym.Def = n
if kind != types.TANY {
dowidth(t)
}
return t
}
for _, s := range &basicTypes {
types.Types[s.etype] = defBasic(s.etype, types.BuiltinPkg, s.name)
}
for _, s := range &typedefs {
sameas := s.sameas32
if Widthptr == 8 {
sameas = s.sameas64
}
simtype[s.etype] = sameas
types.Types[s.etype] = defBasic(s.etype, types.BuiltinPkg, s.name)
}
// We create separate byte and rune types for better error messages
// rather than just creating type alias *types.Sym's for the uint8 and
// int32 types. Hence, (bytetype|runetype).Sym.isAlias() is false.
// TODO(gri) Should we get rid of this special case (at the cost
// of less informative error messages involving bytes and runes)?
// (Alternatively, we could introduce an OTALIAS node representing
// type aliases, albeit at the cost of having to deal with it everywhere).
types.ByteType = defBasic(types.TUINT8, types.BuiltinPkg, "byte")
types.RuneType = defBasic(types.TINT32, types.BuiltinPkg, "rune")
// error type
s := types.BuiltinPkg.Lookup("error")
n := ir.NewDeclNameAt(src.NoXPos, s)
n.SetOp(ir.OTYPE)
types.ErrorType = types.NewNamed(n)
types.ErrorType.SetUnderlying(makeErrorInterface())
n.SetType(types.ErrorType)
s.Def = n
dowidth(types.ErrorType)
types.Types[types.TUNSAFEPTR] = defBasic(types.TUNSAFEPTR, unsafepkg, "Pointer")
// simple aliases
simtype[types.TMAP] = types.TPTR
simtype[types.TCHAN] = types.TPTR
simtype[types.TFUNC] = types.TPTR
simtype[types.TUNSAFEPTR] = types.TPTR
for _, s := range &builtinFuncs {
s2 := ir.BuiltinPkg.Lookup(s.name)
s2 := types.BuiltinPkg.Lookup(s.name)
s2.Def = NewName(s2)
ir.AsNode(s2.Def).SetSubOp(s.op)
}
@ -125,65 +162,36 @@ func lexinit() {
ir.AsNode(s2.Def).SetSubOp(s.op)
}
types.UntypedString = types.New(types.TSTRING)
types.UntypedBool = types.New(types.TBOOL)
types.Types[types.TANY] = types.New(types.TANY)
s := ir.BuiltinPkg.Lookup("true")
s = types.BuiltinPkg.Lookup("true")
s.Def = nodbool(true)
ir.AsNode(s.Def).SetSym(lookup("true"))
ir.AsNode(s.Def).SetName(new(ir.Name))
ir.AsNode(s.Def).SetType(types.UntypedBool)
s = ir.BuiltinPkg.Lookup("false")
s = types.BuiltinPkg.Lookup("false")
s.Def = nodbool(false)
ir.AsNode(s.Def).SetSym(lookup("false"))
ir.AsNode(s.Def).SetName(new(ir.Name))
ir.AsNode(s.Def).SetType(types.UntypedBool)
s = lookup("_")
types.BlankSym = s
s.Block = -100
s.Def = NewName(s)
types.Types[types.TBLANK] = types.New(types.TBLANK)
ir.AsNode(s.Def).SetType(types.Types[types.TBLANK])
ir.BlankNode = ir.AsNode(s.Def)
ir.BlankNode.SetTypecheck(1)
s = ir.BuiltinPkg.Lookup("_")
s = types.BuiltinPkg.Lookup("_")
s.Block = -100
s.Def = NewName(s)
types.Types[types.TBLANK] = types.New(types.TBLANK)
ir.AsNode(s.Def).SetType(types.Types[types.TBLANK])
types.Types[types.TNIL] = types.New(types.TNIL)
s = ir.BuiltinPkg.Lookup("nil")
s = types.BuiltinPkg.Lookup("nil")
s.Def = nodnil()
ir.AsNode(s.Def).SetSym(s)
ir.AsNode(s.Def).SetName(new(ir.Name))
s = ir.BuiltinPkg.Lookup("iota")
s.Def = ir.Nod(ir.OIOTA, nil, nil)
ir.AsNode(s.Def).SetSym(s)
ir.AsNode(s.Def).SetName(new(ir.Name))
}
func typeinit() {
if Widthptr == 0 {
base.Fatalf("typeinit before betypeinit")
}
for et := types.EType(0); et < types.NTYPE; et++ {
simtype[et] = et
}
types.Types[types.TPTR] = types.New(types.TPTR)
dowidth(types.Types[types.TPTR])
t := types.New(types.TUNSAFEPTR)
types.Types[types.TUNSAFEPTR] = t
t.Sym = unsafepkg.Lookup("Pointer")
t.Sym.Def = typenod(t)
ir.AsNode(t.Sym.Def).SetName(new(ir.Name))
dowidth(types.Types[types.TUNSAFEPTR])
s = types.BuiltinPkg.Lookup("iota")
s.Def = ir.NewIota(base.Pos, s)
for et := types.TINT8; et <= types.TUINT64; et++ {
isInt[et] = true
@ -199,7 +207,7 @@ func typeinit() {
isComplex[types.TCOMPLEX128] = true
// initialize okfor
for et := types.EType(0); et < types.NTYPE; et++ {
for et := types.Kind(0); et < types.NTYPE; et++ {
if isInt[et] || et == types.TIDEAL {
okforeq[et] = true
okforcmp[et] = true
@ -261,8 +269,7 @@ func typeinit() {
okforcmp[types.TSTRING] = true
var i int
for i = 0; i < len(okfor); i++ {
for i := range okfor {
okfor[i] = okfornone[:]
}
@ -304,92 +311,14 @@ func typeinit() {
iscmp[ir.OLE] = true
iscmp[ir.OEQ] = true
iscmp[ir.ONE] = true
types.Types[types.TINTER] = types.New(types.TINTER) // empty interface
// simple aliases
simtype[types.TMAP] = types.TPTR
simtype[types.TCHAN] = types.TPTR
simtype[types.TFUNC] = types.TPTR
simtype[types.TUNSAFEPTR] = types.TPTR
slicePtrOffset = 0
sliceLenOffset = Rnd(slicePtrOffset+int64(Widthptr), int64(Widthptr))
sliceCapOffset = Rnd(sliceLenOffset+int64(Widthptr), int64(Widthptr))
sizeofSlice = Rnd(sliceCapOffset+int64(Widthptr), int64(Widthptr))
// string is the same as slice without the cap
sizeofString = Rnd(sliceLenOffset+int64(Widthptr), int64(Widthptr))
dowidth(types.Types[types.TSTRING])
dowidth(types.UntypedString)
}
func makeErrorInterface() *types.Type {
sig := functypefield(fakeRecvField(), nil, []*types.Field{
sig := types.NewSignature(types.NoPkg, fakeRecvField(), nil, []*types.Field{
types.NewField(src.NoXPos, nil, types.Types[types.TSTRING]),
})
method := types.NewField(src.NoXPos, lookup("Error"), sig)
t := types.New(types.TINTER)
t.SetInterface([]*types.Field{method})
return t
}
func lexinit1() {
// error type
s := ir.BuiltinPkg.Lookup("error")
types.Errortype = makeErrorInterface()
types.Errortype.Sym = s
types.Errortype.Orig = makeErrorInterface()
s.Def = typenod(types.Errortype)
dowidth(types.Errortype)
// We create separate byte and rune types for better error messages
// rather than just creating type alias *types.Sym's for the uint8 and
// int32 types. Hence, (bytetype|runetype).Sym.isAlias() is false.
// TODO(gri) Should we get rid of this special case (at the cost
// of less informative error messages involving bytes and runes)?
// (Alternatively, we could introduce an OTALIAS node representing
// type aliases, albeit at the cost of having to deal with it everywhere).
// byte alias
s = ir.BuiltinPkg.Lookup("byte")
types.Bytetype = types.New(types.TUINT8)
types.Bytetype.Sym = s
s.Def = typenod(types.Bytetype)
ir.AsNode(s.Def).SetName(new(ir.Name))
dowidth(types.Bytetype)
// rune alias
s = ir.BuiltinPkg.Lookup("rune")
types.Runetype = types.New(types.TINT32)
types.Runetype.Sym = s
s.Def = typenod(types.Runetype)
ir.AsNode(s.Def).SetName(new(ir.Name))
dowidth(types.Runetype)
// backend-dependent builtin types (e.g. int).
for _, s := range &typedefs {
s1 := ir.BuiltinPkg.Lookup(s.name)
sameas := s.sameas32
if Widthptr == 8 {
sameas = s.sameas64
}
simtype[s.etype] = sameas
t := types.New(s.etype)
t.Sym = s1
types.Types[s.etype] = t
s1.Def = typenod(t)
ir.AsNode(s1.Def).SetName(new(ir.Name))
s1.Origpkg = ir.BuiltinPkg
dowidth(t)
}
return types.NewInterface(types.NoPkg, []*types.Field{method})
}
// finishUniverse makes the universe block visible within the current package.
@ -398,7 +327,7 @@ func finishUniverse() {
// that we silently skip symbols that are already declared in the
// package block rather than emitting a redeclared symbol error.
for _, s := range ir.BuiltinPkg.Syms {
for _, s := range types.BuiltinPkg.Syms {
if s.Def == nil {
continue
}
@ -414,5 +343,5 @@ func finishUniverse() {
nodfp = NewName(lookup(".fp"))
nodfp.SetType(types.Types[types.TINT32])
nodfp.SetClass(ir.PPARAM)
nodfp.Name().SetUsed(true)
nodfp.SetUsed(true)
}

View File

@ -70,7 +70,7 @@ func evalunsafe(n ir.Node) int64 {
v += r.Offset()
default:
ir.Dump("unsafenmagic", n.Left())
base.Fatalf("impossible %#v node after dot insertion", r.Op())
base.Fatalf("impossible %v node after dot insertion", r.Op())
}
}
return v

File diff suppressed because it is too large

View File

@ -14,6 +14,18 @@ func (f *bitset8) set(mask uint8, b bool) {
}
}
// get2 returns the two bits in f at the given shift.
func (f bitset8) get2(shift uint8) uint8 {
return uint8(f>>shift) & 3
}
// set2 sets two bits in f using the bottom two bits of b.
func (f *bitset8) set2(shift uint8, b uint8) {
// Clear old bits.
*(*uint8)(f) &^= 3 << shift
// Set new bits.
*(*uint8)(f) |= uint8(b&3) << shift
}
type bitset16 uint16
func (f *bitset16) set(mask uint16, b bool) {

View File

@ -0,0 +1,100 @@
// Copyright 2020 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package ir
import (
"cmd/compile/internal/base"
"cmd/internal/src"
)
// A Node may implement the Orig and SetOrig method to
// maintain a pointer to the "unrewritten" form of a Node.
// If a Node does not implement OrigNode, it is its own Orig.
//
// Note that both SepCopy and Copy have definitions compatible
// with a Node that does not implement OrigNode: such a Node
// is its own Orig, and in that case, that's what both want to return
// anyway (SepCopy unconditionally, and Copy only when the input
// is its own Orig as well, but if the output does not implement
// OrigNode, then neither does the input, making the condition true).
type OrigNode interface {
Node
Orig() Node
SetOrig(Node)
}
// Orig returns the “original” node for n.
// If n implements OrigNode, Orig returns n.Orig().
// Otherwise Orig returns n itself.
func Orig(n Node) Node {
if n, ok := n.(OrigNode); ok {
o := n.Orig()
if o == nil {
Dump("Orig nil", n)
base.Fatalf("Orig returned nil")
}
return o
}
return n
}
// SepCopy returns a separate shallow copy of n,
// breaking any Orig link to any other nodes.
func SepCopy(n Node) Node {
n = n.copy()
if n, ok := n.(OrigNode); ok {
n.SetOrig(n)
}
return n
}
// Copy returns a shallow copy of n.
// If Orig(n) == n, then Orig(Copy(n)) == the copy.
// Otherwise the Orig link is preserved as well.
//
// The specific semantics surrounding Orig are subtle but right for most uses.
// See issues #26855 and #27765 for pitfalls.
func Copy(n Node) Node {
c := n.copy()
if n, ok := n.(OrigNode); ok && n.Orig() == n {
c.(OrigNode).SetOrig(c)
}
return c
}
func copyList(x Nodes) Nodes {
c := make([]Node, x.Len())
copy(c, x.Slice())
return AsNodes(c)
}
// DeepCopy returns a “deep” copy of n, with its entire structure copied
// (except for shared nodes like ONAME, ONONAME, OLITERAL, and OTYPE).
// If pos.IsKnown(), it sets the source position of newly allocated Nodes to pos.
func DeepCopy(pos src.XPos, n Node) Node {
var edit func(Node) Node
edit = func(x Node) Node {
switch x.Op() {
case OPACK, ONAME, ONONAME, OLITERAL, ONIL, OTYPE:
return x
}
x = Copy(x)
if pos.IsKnown() {
x.SetPos(pos)
}
EditChildren(x, edit)
return x
}
return edit(n)
}
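DeepCopy's shape, a recursive edit function that copies every node but returns shared leaves (ONAME, OLITERAL, ...) unchanged, can be sketched on a toy tree (the `node` type and the "NAME" marker are hypothetical stand-ins):

```go
package main

import "fmt"

// node is a toy expression tree. deepCopy below mirrors the structure of
// ir.DeepCopy: a recursive edit closure that copies each node while
// leaving designated "shared" leaves alone.
type node struct {
	op   string
	kids []*node
}

func deepCopy(n *node) *node {
	var edit func(*node) *node
	edit = func(x *node) *node {
		if x.op == "NAME" { // shared nodes are returned as-is
			return x
		}
		c := *x // shallow copy of this node
		c.kids = append([]*node(nil), x.kids...)
		for i, k := range c.kids {
			c.kids[i] = edit(k) // recurse into children
		}
		return &c
	}
	return edit(n)
}

func main() {
	name := &node{op: "NAME"}
	add := &node{op: "ADD", kids: []*node{name, {op: "LIT"}}}
	cp := deepCopy(add)
	fmt.Println(cp != add, cp.kids[0] == name) // fresh root, shared NAME leaf
}
```

Sharing the leaves is essential: an ONAME is the canonical representation of a variable, so duplicating it would split one variable into two.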
// DeepCopyList returns a list of deep copies (using DeepCopy) of the nodes in list.
func DeepCopyList(pos src.XPos, list []Node) []Node {
var out []Node
for _, n := range list {
out = append(out, DeepCopy(pos, n))
}
return out
}

View File

@@ -200,9 +200,9 @@ func (p *dumper) dump(x reflect.Value, depth int) {
typ := x.Type()
isNode := false
if n, ok := x.Interface().(node); ok {
if n, ok := x.Interface().(Node); ok {
isNode = true
p.printf("%s %s {", n.op.String(), p.addr(x))
p.printf("%s %s {", n.Op().String(), p.addr(x))
} else {
p.printf("%s {", typ)
}

View File

@@ -0,0 +1,860 @@
// Copyright 2020 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package ir
import (
"cmd/compile/internal/base"
"cmd/compile/internal/types"
"cmd/internal/src"
"go/constant"
)
func maybeDo(x Node, err error, do func(Node) error) error {
if x != nil && err == nil {
err = do(x)
}
return err
}
func maybeDoList(x Nodes, err error, do func(Node) error) error {
if err == nil {
err = DoList(x, do)
}
return err
}
func maybeEdit(x Node, edit func(Node) Node) Node {
if x == nil {
return x
}
return edit(x)
}
// An Expr is a Node that can appear as an expression.
type Expr interface {
Node
isExpr()
}
// A miniExpr is a miniNode with extra fields common to expressions.
// TODO(rsc): Once we are sure about the contents, compact the bools
// into a bit field and leave extra bits available for implementations
// embedding miniExpr. Right now there are ~60 unused bits sitting here.
type miniExpr struct {
miniNode
typ *types.Type
init Nodes // TODO(rsc): Don't require every Node to have an init
opt interface{} // TODO(rsc): Don't require every Node to have an opt?
flags bitset8
}
const (
miniExprHasCall = 1 << iota
miniExprImplicit
miniExprNonNil
miniExprTransient
miniExprBounded
)
func (*miniExpr) isExpr() {}
func (n *miniExpr) Type() *types.Type { return n.typ }
func (n *miniExpr) SetType(x *types.Type) { n.typ = x }
func (n *miniExpr) Opt() interface{} { return n.opt }
func (n *miniExpr) SetOpt(x interface{}) { n.opt = x }
func (n *miniExpr) HasCall() bool { return n.flags&miniExprHasCall != 0 }
func (n *miniExpr) SetHasCall(b bool) { n.flags.set(miniExprHasCall, b) }
func (n *miniExpr) Implicit() bool { return n.flags&miniExprImplicit != 0 }
func (n *miniExpr) SetImplicit(b bool) { n.flags.set(miniExprImplicit, b) }
func (n *miniExpr) NonNil() bool { return n.flags&miniExprNonNil != 0 }
func (n *miniExpr) MarkNonNil() { n.flags |= miniExprNonNil }
func (n *miniExpr) Transient() bool { return n.flags&miniExprTransient != 0 }
func (n *miniExpr) SetTransient(b bool) { n.flags.set(miniExprTransient, b) }
func (n *miniExpr) Bounded() bool { return n.flags&miniExprBounded != 0 }
func (n *miniExpr) SetBounded(b bool) { n.flags.set(miniExprBounded, b) }
func (n *miniExpr) Init() Nodes { return n.init }
func (n *miniExpr) PtrInit() *Nodes { return &n.init }
func (n *miniExpr) SetInit(x Nodes) { n.init = x }
func toNtype(x Node) Ntype {
if x == nil {
return nil
}
if _, ok := x.(Ntype); !ok {
Dump("not Ntype", x)
}
return x.(Ntype)
}
// An AddStringExpr is a string concatenation List[0] + List[1] + ... + List[len(List)-1].
type AddStringExpr struct {
miniExpr
List_ Nodes
}
func NewAddStringExpr(pos src.XPos, list []Node) *AddStringExpr {
n := &AddStringExpr{}
n.pos = pos
n.op = OADDSTR
n.List_.Set(list)
return n
}
func (n *AddStringExpr) List() Nodes { return n.List_ }
func (n *AddStringExpr) PtrList() *Nodes { return &n.List_ }
func (n *AddStringExpr) SetList(x Nodes) { n.List_ = x }
// An AddrExpr is an address-of expression &X.
// It may end up being a normal address-of or an allocation of a composite literal.
type AddrExpr struct {
miniExpr
X Node
Alloc Node // preallocated storage if any
}
func NewAddrExpr(pos src.XPos, x Node) *AddrExpr {
n := &AddrExpr{X: x}
n.op = OADDR
n.pos = pos
return n
}
func (n *AddrExpr) Left() Node { return n.X }
func (n *AddrExpr) SetLeft(x Node) { n.X = x }
func (n *AddrExpr) Right() Node { return n.Alloc }
func (n *AddrExpr) SetRight(x Node) { n.Alloc = x }
func (n *AddrExpr) SetOp(op Op) {
switch op {
default:
panic(n.no("SetOp " + op.String()))
case OADDR, OPTRLIT:
n.op = op
}
}
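This validated-setter pattern repeats for every expression type: SetOp whitelists the ops a node may legally hold and panics on anything else, catching miscategorized rewrites early. In miniature (the Op values and error text are illustrative, not the compiler's):

```go
package main

import "fmt"

type Op int

const (
	OADDR Op = iota // &X
	OPTRLIT         // pointer-to-composite-literal, a rewrite of &X
	OADD            // X + Y, not valid for an address node
)

func (o Op) String() string { return [...]string{"OADDR", "OPTRLIT", "OADD"}[o] }

type AddrExpr struct{ op Op }

// SetOp accepts only the ops valid for an address expression,
// mirroring the validation switch above.
func (n *AddrExpr) SetOp(op Op) {
	switch op {
	default:
		panic("invalid SetOp " + op.String())
	case OADDR, OPTRLIT:
		n.op = op
	}
}

func main() {
	n := &AddrExpr{}
	n.SetOp(OPTRLIT) // legal rewrite: &T{...} becomes a pointer literal
	fmt.Println(n.op == OPTRLIT)

	defer func() { fmt.Println(recover() != nil) }()
	n.SetOp(OADD) // panics: OADD is not an address op
}
```

Compared with the old single `Node` struct, where any op could be stored anywhere, the per-type whitelist turns a silent corruption into an immediate panic.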
// A BinaryExpr is a binary expression X Op Y,
// or Op(X, Y) for builtin functions that do not become calls.
type BinaryExpr struct {
miniExpr
X Node
Y Node
}
func NewBinaryExpr(pos src.XPos, op Op, x, y Node) *BinaryExpr {
n := &BinaryExpr{X: x, Y: y}
n.pos = pos
n.SetOp(op)
return n
}
func (n *BinaryExpr) Left() Node { return n.X }
func (n *BinaryExpr) SetLeft(x Node) { n.X = x }
func (n *BinaryExpr) Right() Node { return n.Y }
func (n *BinaryExpr) SetRight(y Node) { n.Y = y }
func (n *BinaryExpr) SetOp(op Op) {
switch op {
default:
panic(n.no("SetOp " + op.String()))
case OADD, OADDSTR, OAND, OANDNOT, ODIV, OEQ, OGE, OGT, OLE,
OLSH, OLT, OMOD, OMUL, ONE, OOR, ORSH, OSUB, OXOR,
OCOPY, OCOMPLEX,
OEFACE:
n.op = op
}
}
// A CallUse records how the result of the call is used:
type CallUse int
const (
_ CallUse = iota
CallUseExpr // single expression result is used
CallUseList // list of results are used
CallUseStmt // results not used - call is a statement
)
// A CallExpr is a function call X(Args).
type CallExpr struct {
miniExpr
orig Node
X Node
Args Nodes
Rargs Nodes // TODO(rsc): Delete.
Body_ Nodes // TODO(rsc): Delete.
DDD bool
Use CallUse
NoInline_ bool
}
func NewCallExpr(pos src.XPos, op Op, fun Node, args []Node) *CallExpr {
n := &CallExpr{X: fun}
n.pos = pos
n.orig = n
n.SetOp(op)
n.Args.Set(args)
return n
}
func (*CallExpr) isStmt() {}
func (n *CallExpr) Orig() Node { return n.orig }
func (n *CallExpr) SetOrig(x Node) { n.orig = x }
func (n *CallExpr) Left() Node { return n.X }
func (n *CallExpr) SetLeft(x Node) { n.X = x }
func (n *CallExpr) List() Nodes { return n.Args }
func (n *CallExpr) PtrList() *Nodes { return &n.Args }
func (n *CallExpr) SetList(x Nodes) { n.Args = x }
func (n *CallExpr) Rlist() Nodes { return n.Rargs }
func (n *CallExpr) PtrRlist() *Nodes { return &n.Rargs }
func (n *CallExpr) SetRlist(x Nodes) { n.Rargs = x }
func (n *CallExpr) IsDDD() bool { return n.DDD }
func (n *CallExpr) SetIsDDD(x bool) { n.DDD = x }
func (n *CallExpr) NoInline() bool { return n.NoInline_ }
func (n *CallExpr) SetNoInline(x bool) { n.NoInline_ = x }
func (n *CallExpr) Body() Nodes { return n.Body_ }
func (n *CallExpr) PtrBody() *Nodes { return &n.Body_ }
func (n *CallExpr) SetBody(x Nodes) { n.Body_ = x }
func (n *CallExpr) SetOp(op Op) {
switch op {
default:
panic(n.no("SetOp " + op.String()))
case OCALL, OCALLFUNC, OCALLINTER, OCALLMETH,
OAPPEND, ODELETE, OGETG, OMAKE, OPRINT, OPRINTN, ORECOVER:
n.op = op
}
}
// A CallPartExpr is a method value X.Method (uncalled); see the OCALLPART discussion in func.go.
type CallPartExpr struct {
miniExpr
Func_ *Func
X Node
Method *types.Field
}
func NewCallPartExpr(pos src.XPos, x Node, method *types.Field, fn *Func) *CallPartExpr {
n := &CallPartExpr{Func_: fn, X: x, Method: method}
n.op = OCALLPART
n.pos = pos
n.typ = fn.Type()
n.Func_ = fn
return n
}
func (n *CallPartExpr) Func() *Func { return n.Func_ }
func (n *CallPartExpr) Left() Node { return n.X }
func (n *CallPartExpr) Sym() *types.Sym { return n.Method.Sym }
func (n *CallPartExpr) SetLeft(x Node) { n.X = x }
// A ClosureExpr is a function literal expression.
type ClosureExpr struct {
miniExpr
Func_ *Func
}
func NewClosureExpr(pos src.XPos, fn *Func) *ClosureExpr {
n := &ClosureExpr{Func_: fn}
n.op = OCLOSURE
n.pos = pos
return n
}
func (n *ClosureExpr) Func() *Func { return n.Func_ }
// A ClosureReadExpr denotes reading a variable stored within a closure struct.
type ClosureReadExpr struct {
miniExpr
Offset_ int64
}
func NewClosureRead(typ *types.Type, offset int64) *ClosureReadExpr {
n := &ClosureReadExpr{Offset_: offset}
n.typ = typ
n.op = OCLOSUREREAD
return n
}
func (n *ClosureReadExpr) Type() *types.Type { return n.typ }
func (n *ClosureReadExpr) Offset() int64 { return n.Offset_ }
// A CompLitExpr is a composite literal Type{Vals}.
// Before type-checking, the type is Ntype.
type CompLitExpr struct {
miniExpr
orig Node
Ntype Ntype
List_ Nodes // initialized values
}
func NewCompLitExpr(pos src.XPos, op Op, typ Ntype, list []Node) *CompLitExpr {
n := &CompLitExpr{Ntype: typ}
n.pos = pos
n.SetOp(op)
n.List_.Set(list)
n.orig = n
return n
}
func (n *CompLitExpr) Orig() Node { return n.orig }
func (n *CompLitExpr) SetOrig(x Node) { n.orig = x }
func (n *CompLitExpr) Right() Node { return n.Ntype }
func (n *CompLitExpr) SetRight(x Node) { n.Ntype = toNtype(x) }
func (n *CompLitExpr) List() Nodes { return n.List_ }
func (n *CompLitExpr) PtrList() *Nodes { return &n.List_ }
func (n *CompLitExpr) SetList(x Nodes) { n.List_ = x }
func (n *CompLitExpr) SetOp(op Op) {
switch op {
default:
panic(n.no("SetOp " + op.String()))
case OARRAYLIT, OCOMPLIT, OMAPLIT, OSTRUCTLIT, OSLICELIT:
n.op = op
}
}
type ConstExpr struct {
miniExpr
val constant.Value
orig Node
}
func NewConstExpr(val constant.Value, orig Node) Node {
n := &ConstExpr{orig: orig, val: val}
n.op = OLITERAL
n.pos = orig.Pos()
n.SetType(orig.Type())
n.SetTypecheck(orig.Typecheck())
n.SetDiag(orig.Diag())
return n
}
func (n *ConstExpr) Sym() *types.Sym { return n.orig.Sym() }
func (n *ConstExpr) Orig() Node { return n.orig }
func (n *ConstExpr) SetOrig(orig Node) { panic(n.no("SetOrig")) }
func (n *ConstExpr) Val() constant.Value { return n.val }
// A ConvExpr is a conversion Type(X).
// It may end up being a value or a type.
type ConvExpr struct {
miniExpr
X Node
}
func NewConvExpr(pos src.XPos, op Op, typ *types.Type, x Node) *ConvExpr {
n := &ConvExpr{X: x}
n.pos = pos
n.typ = typ
n.SetOp(op)
return n
}
func (n *ConvExpr) Left() Node { return n.X }
func (n *ConvExpr) SetLeft(x Node) { n.X = x }
func (n *ConvExpr) SetOp(op Op) {
switch op {
default:
panic(n.no("SetOp " + op.String()))
case OCONV, OCONVIFACE, OCONVNOP, OBYTES2STR, OBYTES2STRTMP, ORUNES2STR, OSTR2BYTES, OSTR2BYTESTMP, OSTR2RUNES, ORUNESTR:
n.op = op
}
}
// An IndexExpr is an index expression X[Y].
type IndexExpr struct {
miniExpr
X Node
Index Node
Assigned bool
}
func NewIndexExpr(pos src.XPos, x, index Node) *IndexExpr {
n := &IndexExpr{X: x, Index: index}
n.pos = pos
n.op = OINDEX
return n
}
func (n *IndexExpr) Left() Node { return n.X }
func (n *IndexExpr) SetLeft(x Node) { n.X = x }
func (n *IndexExpr) Right() Node { return n.Index }
func (n *IndexExpr) SetRight(y Node) { n.Index = y }
func (n *IndexExpr) IndexMapLValue() bool { return n.Assigned }
func (n *IndexExpr) SetIndexMapLValue(x bool) { n.Assigned = x }
func (n *IndexExpr) SetOp(op Op) {
switch op {
default:
panic(n.no("SetOp " + op.String()))
case OINDEX, OINDEXMAP:
n.op = op
}
}
// A KeyExpr is a Key: Value composite literal key.
type KeyExpr struct {
miniExpr
Key Node
Value Node
}
func NewKeyExpr(pos src.XPos, key, value Node) *KeyExpr {
n := &KeyExpr{Key: key, Value: value}
n.pos = pos
n.op = OKEY
return n
}
func (n *KeyExpr) Left() Node { return n.Key }
func (n *KeyExpr) SetLeft(x Node) { n.Key = x }
func (n *KeyExpr) Right() Node { return n.Value }
func (n *KeyExpr) SetRight(y Node) { n.Value = y }
// A StructKeyExpr is a Field: Value composite literal key.
type StructKeyExpr struct {
miniExpr
Field *types.Sym
Value Node
Offset_ int64
}
func NewStructKeyExpr(pos src.XPos, field *types.Sym, value Node) *StructKeyExpr {
n := &StructKeyExpr{Field: field, Value: value}
n.pos = pos
n.op = OSTRUCTKEY
n.Offset_ = types.BADWIDTH
return n
}
func (n *StructKeyExpr) Sym() *types.Sym { return n.Field }
func (n *StructKeyExpr) SetSym(x *types.Sym) { n.Field = x }
func (n *StructKeyExpr) Left() Node { return n.Value }
func (n *StructKeyExpr) SetLeft(x Node) { n.Value = x }
func (n *StructKeyExpr) Offset() int64 { return n.Offset_ }
func (n *StructKeyExpr) SetOffset(x int64) { n.Offset_ = x }
// An InlinedCallExpr is an inlined function call.
type InlinedCallExpr struct {
miniExpr
Body_ Nodes
ReturnVars Nodes
}
func NewInlinedCallExpr(pos src.XPos, body, retvars []Node) *InlinedCallExpr {
n := &InlinedCallExpr{}
n.pos = pos
n.op = OINLCALL
n.Body_.Set(body)
n.ReturnVars.Set(retvars)
return n
}
func (n *InlinedCallExpr) Body() Nodes { return n.Body_ }
func (n *InlinedCallExpr) PtrBody() *Nodes { return &n.Body_ }
func (n *InlinedCallExpr) SetBody(x Nodes) { n.Body_ = x }
func (n *InlinedCallExpr) Rlist() Nodes { return n.ReturnVars }
func (n *InlinedCallExpr) PtrRlist() *Nodes { return &n.ReturnVars }
func (n *InlinedCallExpr) SetRlist(x Nodes) { n.ReturnVars = x }
// A LogicalExpr is an expression X Op Y where Op is && or ||.
// It is separate from BinaryExpr to make room for statements
// that must be executed before Y but after X.
type LogicalExpr struct {
miniExpr
X Node
Y Node
}
func NewLogicalExpr(pos src.XPos, op Op, x, y Node) *LogicalExpr {
n := &LogicalExpr{X: x, Y: y}
n.pos = pos
n.SetOp(op)
return n
}
func (n *LogicalExpr) Left() Node { return n.X }
func (n *LogicalExpr) SetLeft(x Node) { n.X = x }
func (n *LogicalExpr) Right() Node { return n.Y }
func (n *LogicalExpr) SetRight(y Node) { n.Y = y }
func (n *LogicalExpr) SetOp(op Op) {
switch op {
default:
panic(n.no("SetOp " + op.String()))
case OANDAND, OOROR:
n.op = op
}
}
// A MakeExpr is a make expression: make(Type[, Len[, Cap]]).
// Op is OMAKECHAN, OMAKEMAP, OMAKESLICE, or OMAKESLICECOPY,
// but *not* OMAKE (that's a pre-typechecking CallExpr).
type MakeExpr struct {
miniExpr
Len Node
Cap Node
}
func NewMakeExpr(pos src.XPos, op Op, len, cap Node) *MakeExpr {
n := &MakeExpr{Len: len, Cap: cap}
n.pos = pos
n.SetOp(op)
return n
}
func (n *MakeExpr) Left() Node { return n.Len }
func (n *MakeExpr) SetLeft(x Node) { n.Len = x }
func (n *MakeExpr) Right() Node { return n.Cap }
func (n *MakeExpr) SetRight(x Node) { n.Cap = x }
func (n *MakeExpr) SetOp(op Op) {
switch op {
default:
panic(n.no("SetOp " + op.String()))
case OMAKECHAN, OMAKEMAP, OMAKESLICE, OMAKESLICECOPY:
n.op = op
}
}
// A MethodExpr is a method expression T.M (where T is a type, not a value); see the OMETHEXPR discussion in func.go.
type MethodExpr struct {
miniExpr
X Node
M Node
Sym_ *types.Sym
Offset_ int64
Class_ Class
Method *types.Field
}
func NewMethodExpr(pos src.XPos, x, m Node) *MethodExpr {
n := &MethodExpr{X: x, M: m}
n.pos = pos
n.op = OMETHEXPR
n.Offset_ = types.BADWIDTH
return n
}
func (n *MethodExpr) Left() Node { return n.X }
func (n *MethodExpr) SetLeft(x Node) { n.X = x }
func (n *MethodExpr) Right() Node { return n.M }
func (n *MethodExpr) SetRight(y Node) { n.M = y }
func (n *MethodExpr) Sym() *types.Sym { return n.Sym_ }
func (n *MethodExpr) SetSym(x *types.Sym) { n.Sym_ = x }
func (n *MethodExpr) Offset() int64 { return n.Offset_ }
func (n *MethodExpr) SetOffset(x int64) { n.Offset_ = x }
func (n *MethodExpr) Class() Class { return n.Class_ }
func (n *MethodExpr) SetClass(x Class) { n.Class_ = x }
// A NilExpr represents the predefined untyped constant nil.
// (It may be copied and assigned a type, though.)
type NilExpr struct {
miniExpr
Sym_ *types.Sym // TODO: Remove
}
func NewNilExpr(pos src.XPos) *NilExpr {
n := &NilExpr{}
n.pos = pos
n.op = ONIL
return n
}
func (n *NilExpr) Sym() *types.Sym { return n.Sym_ }
func (n *NilExpr) SetSym(x *types.Sym) { n.Sym_ = x }
// A ParenExpr is a parenthesized expression (X).
// It may end up being a value or a type.
type ParenExpr struct {
miniExpr
X Node
}
func NewParenExpr(pos src.XPos, x Node) *ParenExpr {
n := &ParenExpr{X: x}
n.op = OPAREN
n.pos = pos
return n
}
func (n *ParenExpr) Left() Node { return n.X }
func (n *ParenExpr) SetLeft(x Node) { n.X = x }
func (*ParenExpr) CanBeNtype() {}
// SetOTYPE changes n to be an OTYPE node returning t,
// like all the type nodes in type.go.
func (n *ParenExpr) SetOTYPE(t *types.Type) {
n.op = OTYPE
n.typ = t
t.SetNod(n)
}
// A ResultExpr represents a direct access to a result slot on the stack frame.
type ResultExpr struct {
miniExpr
Offset_ int64
}
func NewResultExpr(pos src.XPos, typ *types.Type, offset int64) *ResultExpr {
n := &ResultExpr{Offset_: offset}
n.pos = pos
n.op = ORESULT
n.typ = typ
return n
}
func (n *ResultExpr) Offset() int64 { return n.Offset_ }
func (n *ResultExpr) SetOffset(x int64) { n.Offset_ = x }
// A SelectorExpr is a selector expression X.Sym.
type SelectorExpr struct {
miniExpr
X Node
Sel *types.Sym
Offset_ int64
Selection *types.Field
}
func NewSelectorExpr(pos src.XPos, op Op, x Node, sel *types.Sym) *SelectorExpr {
n := &SelectorExpr{X: x, Sel: sel}
n.pos = pos
n.Offset_ = types.BADWIDTH
n.SetOp(op)
return n
}
func (n *SelectorExpr) SetOp(op Op) {
switch op {
default:
panic(n.no("SetOp " + op.String()))
case ODOT, ODOTPTR, ODOTMETH, ODOTINTER, OXDOT:
n.op = op
}
}
func (n *SelectorExpr) Left() Node { return n.X }
func (n *SelectorExpr) SetLeft(x Node) { n.X = x }
func (n *SelectorExpr) Sym() *types.Sym { return n.Sel }
func (n *SelectorExpr) SetSym(x *types.Sym) { n.Sel = x }
func (n *SelectorExpr) Offset() int64 { return n.Offset_ }
func (n *SelectorExpr) SetOffset(x int64) { n.Offset_ = x }
// Before type-checking, a qualified identifier such as bytes.Buffer is a SelectorExpr.
// After type-checking it becomes a Name.
func (*SelectorExpr) CanBeNtype() {}
// A SliceExpr is a slice expression X[Low:High] or X[Low:High:Max].
type SliceExpr struct {
miniExpr
X Node
List_ Nodes // TODO(rsc): Use separate Nodes
}
func NewSliceExpr(pos src.XPos, op Op, x Node) *SliceExpr {
n := &SliceExpr{X: x}
n.pos = pos
n.op = op
return n
}
func (n *SliceExpr) Left() Node { return n.X }
func (n *SliceExpr) SetLeft(x Node) { n.X = x }
func (n *SliceExpr) List() Nodes { return n.List_ }
func (n *SliceExpr) PtrList() *Nodes { return &n.List_ }
func (n *SliceExpr) SetList(x Nodes) { n.List_ = x }
func (n *SliceExpr) SetOp(op Op) {
switch op {
default:
panic(n.no("SetOp " + op.String()))
case OSLICE, OSLICEARR, OSLICESTR, OSLICE3, OSLICE3ARR:
n.op = op
}
}
// SliceBounds returns n's slice bounds: low, high, and max in expr[low:high:max].
// n must be a slice expression. max is nil if n is a simple slice expression.
func (n *SliceExpr) SliceBounds() (low, high, max Node) {
if n.List_.Len() == 0 {
return nil, nil, nil
}
switch n.Op() {
case OSLICE, OSLICEARR, OSLICESTR:
s := n.List_.Slice()
return s[0], s[1], nil
case OSLICE3, OSLICE3ARR:
s := n.List_.Slice()
return s[0], s[1], s[2]
}
base.Fatalf("SliceBounds op %v: %v", n.Op(), n)
return nil, nil, nil
}
// SetSliceBounds sets n's slice bounds.
// n must be a slice expression. If max is non-nil, n must be a full slice expression.
func (n *SliceExpr) SetSliceBounds(low, high, max Node) {
switch n.Op() {
case OSLICE, OSLICEARR, OSLICESTR:
if max != nil {
base.Fatalf("SetSliceBounds %v given three bounds", n.Op())
}
s := n.List_.Slice()
if s == nil {
if low == nil && high == nil {
return
}
n.List_.Set2(low, high)
return
}
s[0] = low
s[1] = high
return
case OSLICE3, OSLICE3ARR:
s := n.List_.Slice()
if s == nil {
if low == nil && high == nil && max == nil {
return
}
n.List_.Set3(low, high, max)
return
}
s[0] = low
s[1] = high
s[2] = max
return
}
base.Fatalf("SetSliceBounds op %v: %v", n.Op(), n)
}
// IsSlice3 reports whether o is a slice3 op (OSLICE3, OSLICE3ARR).
// o must be a slicing op.
func (o Op) IsSlice3() bool {
switch o {
case OSLICE, OSLICEARR, OSLICESTR:
return false
case OSLICE3, OSLICE3ARR:
return true
}
base.Fatalf("IsSlice3 op %v", o)
return false
}
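The low, high, and max bounds that SliceBounds and SetSliceBounds manage correspond directly to Go's simple and full ("slice3") slice expressions:

```go
package main

import "fmt"

func main() {
	x := make([]int, 10)

	// Simple slice expression: two bounds (low, high).
	// Capacity extends to the end of the backing array: cap = len(x) - low.
	a := x[2:5]

	// Full slice expression: a third bound caps capacity at max - low.
	// This is what OSLICE3/OSLICE3ARR represent.
	b := x[2:5:6]

	fmt.Println(len(a), cap(a)) // 3 8
	fmt.Println(len(b), cap(b)) // 3 4
}
```

A nil max in SliceBounds therefore means "simple slice", matching the language distinction above.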
// A SliceHeaderExpr constructs a slice header from its parts.
type SliceHeaderExpr struct {
miniExpr
Ptr Node
LenCap_ Nodes // TODO(rsc): Split into two Node fields
}
func NewSliceHeaderExpr(pos src.XPos, typ *types.Type, ptr, len, cap Node) *SliceHeaderExpr {
n := &SliceHeaderExpr{Ptr: ptr}
n.pos = pos
n.op = OSLICEHEADER
n.typ = typ
n.LenCap_.Set2(len, cap)
return n
}
func (n *SliceHeaderExpr) Left() Node { return n.Ptr }
func (n *SliceHeaderExpr) SetLeft(x Node) { n.Ptr = x }
func (n *SliceHeaderExpr) List() Nodes { return n.LenCap_ }
func (n *SliceHeaderExpr) PtrList() *Nodes { return &n.LenCap_ }
func (n *SliceHeaderExpr) SetList(x Nodes) { n.LenCap_ = x }
// A StarExpr is a dereference expression *X.
// It may end up being a value or a type.
type StarExpr struct {
miniExpr
X Node
}
func NewStarExpr(pos src.XPos, x Node) *StarExpr {
n := &StarExpr{X: x}
n.op = ODEREF
n.pos = pos
return n
}
func (n *StarExpr) Left() Node { return n.X }
func (n *StarExpr) SetLeft(x Node) { n.X = x }
func (*StarExpr) CanBeNtype() {}
// SetOTYPE changes n to be an OTYPE node returning t,
// like all the type nodes in type.go.
func (n *StarExpr) SetOTYPE(t *types.Type) {
n.op = OTYPE
n.X = nil
n.typ = t
t.SetNod(n)
}
// A TypeAssertExpr is a type assertion X.(Type).
// Before type-checking, the type is Ntype.
type TypeAssertExpr struct {
miniExpr
X Node
Ntype Node // TODO: Should be Ntype, but reused as address of type structure
Itab Nodes // Itab[0] is itab
}
func NewTypeAssertExpr(pos src.XPos, x Node, typ Ntype) *TypeAssertExpr {
n := &TypeAssertExpr{X: x, Ntype: typ}
n.pos = pos
n.op = ODOTTYPE
return n
}
func (n *TypeAssertExpr) Left() Node { return n.X }
func (n *TypeAssertExpr) SetLeft(x Node) { n.X = x }
func (n *TypeAssertExpr) Right() Node { return n.Ntype }
func (n *TypeAssertExpr) SetRight(x Node) { n.Ntype = x } // TODO: toNtype(x)
func (n *TypeAssertExpr) List() Nodes { return n.Itab }
func (n *TypeAssertExpr) PtrList() *Nodes { return &n.Itab }
func (n *TypeAssertExpr) SetList(x Nodes) { n.Itab = x }
func (n *TypeAssertExpr) SetOp(op Op) {
switch op {
default:
panic(n.no("SetOp " + op.String()))
case ODOTTYPE, ODOTTYPE2:
n.op = op
}
}
// A UnaryExpr is a unary expression Op X,
// or Op(X) for a builtin function that does not end up being a call.
type UnaryExpr struct {
miniExpr
X Node
}
func NewUnaryExpr(pos src.XPos, op Op, x Node) *UnaryExpr {
n := &UnaryExpr{X: x}
n.pos = pos
n.SetOp(op)
return n
}
func (n *UnaryExpr) Left() Node { return n.X }
func (n *UnaryExpr) SetLeft(x Node) { n.X = x }
func (n *UnaryExpr) SetOp(op Op) {
switch op {
default:
panic(n.no("SetOp " + op.String()))
case OBITNOT, ONEG, ONOT, OPLUS, ORECV,
OALIGNOF, OCAP, OCLOSE, OIMAG, OLEN, ONEW,
OOFFSETOF, OPANIC, OREAL, OSIZEOF,
OCHECKNIL, OCFUNC, OIDATA, OITAB, ONEWOBJ, OSPTR, OVARDEF, OVARKILL, OVARLIVE:
n.op = op
}
}

File diff suppressed because it is too large

View File

@@ -0,0 +1,249 @@
// Copyright 2020 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package ir
import (
"cmd/compile/internal/base"
"cmd/compile/internal/types"
"cmd/internal/obj"
"cmd/internal/src"
)
// A Func corresponds to a single function in a Go program
// (and vice versa: each function is denoted by exactly one *Func).
//
// There are multiple nodes that represent a Func in the IR.
//
// The ONAME node (Func.Nname) is used for plain references to it.
// The ODCLFUNC node (the Func itself) is used for its declaration code.
// The OCLOSURE node (Func.OClosure) is used for a reference to a
// function literal.
//
// An imported function will have an ONAME node which points to a Func
// with an empty body.
// A declared function or method has an ODCLFUNC (the Func itself) and an ONAME.
// A function literal is represented directly by an OCLOSURE, but it also
// has an ODCLFUNC (and a matching ONAME) representing the compiled
// underlying form of the closure, which accesses the captured variables
// using a special data structure passed in a register.
//
// A method declaration is represented like functions, except f.Sym
// will be the qualified method name (e.g., "T.m") and
// f.Func.Shortname is the bare method name (e.g., "m").
//
// A method expression (T.M) is represented as an OMETHEXPR node,
// in which n.Left and n.Right point to the type and method, respectively.
// Each distinct mention of a method expression in the source code
// constructs a fresh node.
//
// A method value (t.M) is represented by ODOTMETH/ODOTINTER
// when it is called directly and by OCALLPART otherwise.
// These are like method expressions, except that for ODOTMETH/ODOTINTER,
// the method name is stored in Sym instead of Right.
// Each OCALLPART ends up being implemented as a new
// function, a bit like a closure, with its own ODCLFUNC.
// The OCALLPART uses n.Func to record the linkage to
// the generated ODCLFUNC, but there is no
// pointer from the Func back to the OCALLPART.
type Func struct {
miniNode
typ *types.Type
Body_ Nodes
iota int64
Nname *Name // ONAME node
OClosure *ClosureExpr // OCLOSURE node
Shortname *types.Sym
// Extra entry code for the function. For example, allocate and initialize
// memory for escaping parameters.
Enter Nodes
Exit Nodes
// ONAME nodes for all params/locals for this func/closure, does NOT
// include closurevars until transformclosure runs.
Dcl []*Name
ClosureEnter Nodes // list of ONAME nodes (or OADDR-of-ONAME nodes, for output parameters) of captured variables
ClosureType Node // closure representation type
ClosureVars []*Name // closure params; each has closurevar set
// Parents records the parent scope of each scope within a
// function. The root scope (0) has no parent, so the i'th
// scope's parent is stored at Parents[i-1].
Parents []ScopeID
// Marks records scope boundary changes.
Marks []Mark
FieldTrack map[*types.Sym]struct{}
DebugInfo interface{}
LSym *obj.LSym
Inl *Inline
// Closgen tracks how many closures have been generated within
// this function. Used by closurename for creating unique
// function names.
Closgen int32
Label int32 // largest auto-generated label in this function
Endlineno src.XPos
WBPos src.XPos // position of first write barrier; see SetWBPos
Pragma PragmaFlag // go:xxx function annotations
flags bitset16
NumDefers int32 // number of defer calls in the function
NumReturns int32 // number of explicit returns in the function
// nwbrCalls records the LSyms of functions called by this
// function for go:nowritebarrierrec analysis. Only filled in
// if nowritebarrierrecCheck != nil.
NWBRCalls *[]SymAndPos
}
func NewFunc(pos src.XPos) *Func {
f := new(Func)
f.pos = pos
f.op = ODCLFUNC
f.iota = -1
return f
}
func (f *Func) isStmt() {}
func (f *Func) Func() *Func { return f }
func (f *Func) Body() Nodes { return f.Body_ }
func (f *Func) PtrBody() *Nodes { return &f.Body_ }
func (f *Func) SetBody(x Nodes) { f.Body_ = x }
func (f *Func) Type() *types.Type { return f.typ }
func (f *Func) SetType(x *types.Type) { f.typ = x }
func (f *Func) Iota() int64 { return f.iota }
func (f *Func) SetIota(x int64) { f.iota = x }
func (f *Func) Sym() *types.Sym {
if f.Nname != nil {
return f.Nname.Sym()
}
return nil
}
// An Inline holds fields used for function bodies that can be inlined.
type Inline struct {
Cost int32 // heuristic cost of inlining this function
// Copies of Func.Dcl and Nbody for use during inlining.
Dcl []*Name
Body []Node
}
// A Mark represents a scope boundary.
type Mark struct {
// Pos is the position of the token that marks the scope
// change.
Pos src.XPos
// Scope identifies the innermost scope to the right of Pos.
Scope ScopeID
}
// A ScopeID represents a lexical scope within a function.
type ScopeID int32
const (
funcDupok = 1 << iota // duplicate definitions ok
funcWrapper // is method wrapper
funcNeedctxt // function uses context register (has closure variables)
funcReflectMethod // function calls reflect.Type.Method or MethodByName
// true if closure inside a function; false if a simple function or a
// closure in a global variable initialization
funcIsHiddenClosure
funcHasDefer // contains a defer statement
funcNilCheckDisabled // disable nil checks when compiling this function
funcInlinabilityChecked // inliner has already determined whether the function is inlinable
funcExportInline // include inline body in export data
funcInstrumentBody // add race/msan instrumentation during SSA construction
funcOpenCodedDeferDisallowed // can't do open-coded defers
funcClosureCalled // closure is only immediately called
)
type SymAndPos struct {
Sym *obj.LSym // LSym of callee
Pos src.XPos // line of call
}
func (f *Func) Dupok() bool { return f.flags&funcDupok != 0 }
func (f *Func) Wrapper() bool { return f.flags&funcWrapper != 0 }
func (f *Func) Needctxt() bool { return f.flags&funcNeedctxt != 0 }
func (f *Func) ReflectMethod() bool { return f.flags&funcReflectMethod != 0 }
func (f *Func) IsHiddenClosure() bool { return f.flags&funcIsHiddenClosure != 0 }
func (f *Func) HasDefer() bool { return f.flags&funcHasDefer != 0 }
func (f *Func) NilCheckDisabled() bool { return f.flags&funcNilCheckDisabled != 0 }
func (f *Func) InlinabilityChecked() bool { return f.flags&funcInlinabilityChecked != 0 }
func (f *Func) ExportInline() bool { return f.flags&funcExportInline != 0 }
func (f *Func) InstrumentBody() bool { return f.flags&funcInstrumentBody != 0 }
func (f *Func) OpenCodedDeferDisallowed() bool { return f.flags&funcOpenCodedDeferDisallowed != 0 }
func (f *Func) ClosureCalled() bool { return f.flags&funcClosureCalled != 0 }
func (f *Func) SetDupok(b bool) { f.flags.set(funcDupok, b) }
func (f *Func) SetWrapper(b bool) { f.flags.set(funcWrapper, b) }
func (f *Func) SetNeedctxt(b bool) { f.flags.set(funcNeedctxt, b) }
func (f *Func) SetReflectMethod(b bool) { f.flags.set(funcReflectMethod, b) }
func (f *Func) SetIsHiddenClosure(b bool) { f.flags.set(funcIsHiddenClosure, b) }
func (f *Func) SetHasDefer(b bool) { f.flags.set(funcHasDefer, b) }
func (f *Func) SetNilCheckDisabled(b bool) { f.flags.set(funcNilCheckDisabled, b) }
func (f *Func) SetInlinabilityChecked(b bool) { f.flags.set(funcInlinabilityChecked, b) }
func (f *Func) SetExportInline(b bool) { f.flags.set(funcExportInline, b) }
func (f *Func) SetInstrumentBody(b bool) { f.flags.set(funcInstrumentBody, b) }
func (f *Func) SetOpenCodedDeferDisallowed(b bool) { f.flags.set(funcOpenCodedDeferDisallowed, b) }
func (f *Func) SetClosureCalled(b bool) { f.flags.set(funcClosureCalled, b) }
func (f *Func) SetWBPos(pos src.XPos) {
if base.Debug.WB != 0 {
base.WarnfAt(pos, "write barrier")
}
if !f.WBPos.IsKnown() {
f.WBPos = pos
}
}
// FuncName returns the name (without the package) of the function n.
func FuncName(n Node) string {
if n == nil || n.Func() == nil || n.Func().Nname == nil {
return "<nil>"
}
return n.Func().Nname.Sym().Name
}
// PkgFuncName returns the name of the function referenced by n, with package prepended.
// This differs from the compiler's internal convention where local functions lack a package
// because the ultimate consumer of this is a human looking at an IDE; package is only empty
// if the compilation package is actually the empty string.
func PkgFuncName(n Node) string {
var s *types.Sym
if n == nil {
return "<nil>"
}
if n.Op() == ONAME {
s = n.Sym()
} else {
if n.Func() == nil || n.Func().Nname == nil {
return "<nil>"
}
s = n.Func().Nname.Sym()
}
pkg := s.Pkg
p := base.Ctxt.Pkgpath
if pkg != nil && pkg.Path != "" {
p = pkg.Path
}
if p == "" {
return s.Name
}
return p + "." + s.Name
}
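The package-prepending rule in PkgFuncName reduces to a small string function; a sketch with the Sym/Pkg plumbing flattened into plain parameters (a hypothetical helper, not the compiler's API):

```go
package main

import "fmt"

// pkgFuncName sketches the formatting rule above: prefer the symbol's
// own package path, fall back to the compilation package path, and
// omit the qualifier only when both are empty.
func pkgFuncName(pkgPath, compilePkgPath, name string) string {
	p := compilePkgPath
	if pkgPath != "" {
		p = pkgPath
	}
	if p == "" {
		return name
	}
	return p + "." + name
}

func main() {
	fmt.Println(pkgFuncName("bytes", "main", "NewBuffer")) // bytes.NewBuffer
	fmt.Println(pkgFuncName("", "main", "helper"))         // main.helper
}
```

The fallback is what makes local functions print with a package in this human-facing path, unlike the compiler's internal convention.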

View File

@@ -3,10 +3,3 @@
// license that can be found in the LICENSE file.
package ir
import "cmd/compile/internal/types"
var LocalPkg *types.Pkg // package being compiled
// BuiltinPkg is a fake package that declares the universe block.
var BuiltinPkg *types.Pkg

View File

@@ -0,0 +1,198 @@
// Copyright 2020 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:generate go run -mod=mod mknode.go
package ir
import (
"cmd/compile/internal/types"
"cmd/internal/src"
"fmt"
"go/constant"
)
// A miniNode is a minimal node implementation,
// meant to be embedded as the first field in a larger node implementation,
// at a cost of 8 bytes.
//
// A miniNode is NOT a valid Node by itself: the embedding struct
// must at the least provide:
//
// func (n *MyNode) String() string { return fmt.Sprint(n) }
// func (n *MyNode) rawCopy() Node { c := *n; return &c }
// func (n *MyNode) Format(s fmt.State, verb rune) { FmtNode(n, s, verb) }
//
// The embedding struct should also fill in n.op in its constructor,
// for more useful panic messages when invalid methods are called,
// instead of implementing Op itself.
//
type miniNode struct {
pos src.XPos // uint32
op Op // uint8
bits bitset8
esc uint16
}
func (n *miniNode) Format(s fmt.State, verb rune) { panic(1) }
func (n *miniNode) copy() Node { panic(1) }
func (n *miniNode) doChildren(do func(Node) error) error { panic(1) }
func (n *miniNode) editChildren(edit func(Node) Node) { panic(1) }
// posOr returns pos if known, or else n.pos.
// For use in DeepCopy.
func (n *miniNode) posOr(pos src.XPos) src.XPos {
if pos.IsKnown() {
return pos
}
return n.pos
}
// op can be read, but not written.
// An embedding implementation can provide a SetOp if desired.
// (The panicking SetOp is with the other panics below.)
func (n *miniNode) Op() Op { return n.op }
func (n *miniNode) Pos() src.XPos { return n.pos }
func (n *miniNode) SetPos(x src.XPos) { n.pos = x }
func (n *miniNode) Esc() uint16 { return n.esc }
func (n *miniNode) SetEsc(x uint16) { n.esc = x }
const (
miniWalkdefShift = 0
miniTypecheckShift = 2
miniInitorderShift = 4
miniDiag = 1 << 6
miniHasCall = 1 << 7 // for miniStmt
)
func (n *miniNode) Walkdef() uint8 { return n.bits.get2(miniWalkdefShift) }
func (n *miniNode) Typecheck() uint8 { return n.bits.get2(miniTypecheckShift) }
func (n *miniNode) Initorder() uint8 { return n.bits.get2(miniInitorderShift) }
func (n *miniNode) SetWalkdef(x uint8) {
if x > 3 {
panic(fmt.Sprintf("cannot SetWalkdef %d", x))
}
n.bits.set2(miniWalkdefShift, x)
}
func (n *miniNode) SetTypecheck(x uint8) {
if x > 3 {
panic(fmt.Sprintf("cannot SetTypecheck %d", x))
}
n.bits.set2(miniTypecheckShift, x)
}
func (n *miniNode) SetInitorder(x uint8) {
if x > 3 {
panic(fmt.Sprintf("cannot SetInitorder %d", x))
}
n.bits.set2(miniInitorderShift, x)
}
func (n *miniNode) Diag() bool { return n.bits&miniDiag != 0 }
func (n *miniNode) SetDiag(x bool) { n.bits.set(miniDiag, x) }
// Empty, immutable graph structure.
func (n *miniNode) Left() Node { return nil }
func (n *miniNode) Right() Node { return nil }
func (n *miniNode) Init() Nodes { return Nodes{} }
func (n *miniNode) PtrInit() *Nodes { return &immutableEmptyNodes }
func (n *miniNode) Body() Nodes { return Nodes{} }
func (n *miniNode) PtrBody() *Nodes { return &immutableEmptyNodes }
func (n *miniNode) List() Nodes { return Nodes{} }
func (n *miniNode) PtrList() *Nodes { return &immutableEmptyNodes }
func (n *miniNode) Rlist() Nodes { return Nodes{} }
func (n *miniNode) PtrRlist() *Nodes { return &immutableEmptyNodes }
func (n *miniNode) SetLeft(x Node) {
if x != nil {
panic(n.no("SetLeft"))
}
}
func (n *miniNode) SetRight(x Node) {
if x != nil {
panic(n.no("SetRight"))
}
}
func (n *miniNode) SetInit(x Nodes) {
if x != (Nodes{}) {
panic(n.no("SetInit"))
}
}
func (n *miniNode) SetBody(x Nodes) {
if x != (Nodes{}) {
panic(n.no("SetBody"))
}
}
func (n *miniNode) SetList(x Nodes) {
if x != (Nodes{}) {
panic(n.no("SetList"))
}
}
func (n *miniNode) SetRlist(x Nodes) {
if x != (Nodes{}) {
panic(n.no("SetRlist"))
}
}
// Additional functionality unavailable.
func (n *miniNode) no(name string) string { return "cannot " + name + " on " + n.op.String() }
func (n *miniNode) SetOp(Op) { panic(n.no("SetOp")) }
func (n *miniNode) SubOp() Op { panic(n.no("SubOp")) }
func (n *miniNode) SetSubOp(Op) { panic(n.no("SetSubOp")) }
func (n *miniNode) Type() *types.Type { return nil }
func (n *miniNode) SetType(*types.Type) { panic(n.no("SetType")) }
func (n *miniNode) Func() *Func { return nil }
func (n *miniNode) Name() *Name { return nil }
func (n *miniNode) Sym() *types.Sym { return nil }
func (n *miniNode) SetSym(*types.Sym) { panic(n.no("SetSym")) }
func (n *miniNode) Offset() int64 { return types.BADWIDTH }
func (n *miniNode) SetOffset(x int64) { panic(n.no("SetOffset")) }
func (n *miniNode) Class() Class { return Pxxx }
func (n *miniNode) SetClass(Class) { panic(n.no("SetClass")) }
func (n *miniNode) Likely() bool { panic(n.no("Likely")) }
func (n *miniNode) SetLikely(bool) { panic(n.no("SetLikely")) }
func (n *miniNode) SliceBounds() (low, high, max Node) {
panic(n.no("SliceBounds"))
}
func (n *miniNode) SetSliceBounds(low, high, max Node) {
panic(n.no("SetSliceBounds"))
}
func (n *miniNode) Iota() int64 { panic(n.no("Iota")) }
func (n *miniNode) SetIota(int64) { panic(n.no("SetIota")) }
func (n *miniNode) Colas() bool { return false }
func (n *miniNode) SetColas(bool) { panic(n.no("SetColas")) }
func (n *miniNode) NoInline() bool { panic(n.no("NoInline")) }
func (n *miniNode) SetNoInline(bool) { panic(n.no("SetNoInline")) }
func (n *miniNode) Transient() bool { panic(n.no("Transient")) }
func (n *miniNode) SetTransient(bool) { panic(n.no("SetTransient")) }
func (n *miniNode) Implicit() bool { return false }
func (n *miniNode) SetImplicit(bool) { panic(n.no("SetImplicit")) }
func (n *miniNode) IsDDD() bool { return false }
func (n *miniNode) SetIsDDD(bool) { panic(n.no("SetIsDDD")) }
func (n *miniNode) Embedded() bool { return false }
func (n *miniNode) SetEmbedded(bool) { panic(n.no("SetEmbedded")) }
func (n *miniNode) IndexMapLValue() bool { panic(n.no("IndexMapLValue")) }
func (n *miniNode) SetIndexMapLValue(bool) { panic(n.no("SetIndexMapLValue")) }
func (n *miniNode) ResetAux() { panic(n.no("ResetAux")) }
func (n *miniNode) HasBreak() bool { panic(n.no("HasBreak")) }
func (n *miniNode) SetHasBreak(bool) { panic(n.no("SetHasBreak")) }
func (n *miniNode) Val() constant.Value { panic(n.no("Val")) }
func (n *miniNode) SetVal(v constant.Value) { panic(n.no("SetVal")) }
func (n *miniNode) Int64Val() int64 { panic(n.no("Int64Val")) }
func (n *miniNode) Uint64Val() uint64 { panic(n.no("Uint64Val")) }
func (n *miniNode) CanInt64() bool { panic(n.no("CanInt64")) }
func (n *miniNode) BoolVal() bool { panic(n.no("BoolVal")) }
func (n *miniNode) StringVal() string { panic(n.no("StringVal")) }
func (n *miniNode) HasCall() bool { return false }
func (n *miniNode) SetHasCall(bool) { panic(n.no("SetHasCall")) }
func (n *miniNode) NonNil() bool { return false }
func (n *miniNode) MarkNonNil() { panic(n.no("MarkNonNil")) }
func (n *miniNode) Bounded() bool { return false }
func (n *miniNode) SetBounded(bool) { panic(n.no("SetBounded")) }
func (n *miniNode) Opt() interface{} { return nil }
func (n *miniNode) SetOpt(interface{}) { panic(n.no("SetOpt")) }
func (n *miniNode) MarkReadonly() { panic(n.no("MarkReadonly")) }
func (n *miniNode) TChanDir() types.ChanDir { panic(n.no("TChanDir")) }
func (n *miniNode) SetTChanDir(types.ChanDir) { panic(n.no("SetTChanDir")) }


@ -0,0 +1,174 @@
// Copyright 2020 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build ignore
package main
import (
"bytes"
"fmt"
"go/format"
"go/types"
"io/ioutil"
"log"
"strings"
"golang.org/x/tools/go/packages"
)
func main() {
cfg := &packages.Config{
Mode: packages.NeedSyntax | packages.NeedTypes,
}
pkgs, err := packages.Load(cfg, "cmd/compile/internal/ir")
if err != nil {
log.Fatal(err)
}
pkg := pkgs[0].Types
scope := pkg.Scope()
lookup := func(name string) *types.Named {
return scope.Lookup(name).(*types.TypeName).Type().(*types.Named)
}
nodeType := lookup("Node")
ntypeType := lookup("Ntype")
nodesType := lookup("Nodes")
ptrFieldType := types.NewPointer(lookup("Field"))
slicePtrFieldType := types.NewSlice(ptrFieldType)
ptrIdentType := types.NewPointer(lookup("Ident"))
var buf bytes.Buffer
fmt.Fprintln(&buf, "// Code generated by mknode.go. DO NOT EDIT.")
fmt.Fprintln(&buf)
fmt.Fprintln(&buf, "package ir")
fmt.Fprintln(&buf)
fmt.Fprintln(&buf, `import "fmt"`)
for _, name := range scope.Names() {
obj, ok := scope.Lookup(name).(*types.TypeName)
if !ok {
continue
}
typName := obj.Name()
typ, ok := obj.Type().(*types.Named).Underlying().(*types.Struct)
if !ok {
continue
}
if strings.HasPrefix(typName, "mini") || !hasMiniNode(typ) {
continue
}
fmt.Fprintf(&buf, "\n")
fmt.Fprintf(&buf, "func (n *%s) Format(s fmt.State, verb rune) { FmtNode(n, s, verb) }\n", name)
fmt.Fprintf(&buf, "func (n *%s) copy() Node { c := *n\n", name)
forNodeFields(typName, typ, func(name string, is func(types.Type) bool) {
switch {
case is(nodesType):
fmt.Fprintf(&buf, "c.%s = c.%s.Copy()\n", name, name)
case is(ptrFieldType):
fmt.Fprintf(&buf, "if c.%s != nil { c.%s = c.%s.copy() }\n", name, name, name)
case is(slicePtrFieldType):
fmt.Fprintf(&buf, "c.%s = copyFields(c.%s)\n", name, name)
}
})
fmt.Fprintf(&buf, "return &c }\n")
fmt.Fprintf(&buf, "func (n *%s) doChildren(do func(Node) error) error { var err error\n", name)
forNodeFields(typName, typ, func(name string, is func(types.Type) bool) {
switch {
case is(ptrIdentType):
fmt.Fprintf(&buf, "if n.%s != nil { err = maybeDo(n.%s, err, do) }\n", name, name)
case is(nodeType), is(ntypeType):
fmt.Fprintf(&buf, "err = maybeDo(n.%s, err, do)\n", name)
case is(nodesType):
fmt.Fprintf(&buf, "err = maybeDoList(n.%s, err, do)\n", name)
case is(ptrFieldType):
fmt.Fprintf(&buf, "err = maybeDoField(n.%s, err, do)\n", name)
case is(slicePtrFieldType):
fmt.Fprintf(&buf, "err = maybeDoFields(n.%s, err, do)\n", name)
}
})
fmt.Fprintf(&buf, "return err }\n")
fmt.Fprintf(&buf, "func (n *%s) editChildren(edit func(Node) Node) {\n", name)
forNodeFields(typName, typ, func(name string, is func(types.Type) bool) {
switch {
case is(ptrIdentType):
fmt.Fprintf(&buf, "if n.%s != nil { n.%s = edit(n.%s).(*Ident) }\n", name, name, name)
case is(nodeType):
fmt.Fprintf(&buf, "n.%s = maybeEdit(n.%s, edit)\n", name, name)
case is(ntypeType):
fmt.Fprintf(&buf, "n.%s = toNtype(maybeEdit(n.%s, edit))\n", name, name)
case is(nodesType):
fmt.Fprintf(&buf, "editList(n.%s, edit)\n", name)
case is(ptrFieldType):
fmt.Fprintf(&buf, "editField(n.%s, edit)\n", name)
case is(slicePtrFieldType):
fmt.Fprintf(&buf, "editFields(n.%s, edit)\n", name)
}
})
fmt.Fprintf(&buf, "}\n")
}
out, err := format.Source(buf.Bytes())
if err != nil {
// write out mangled source so we can see the bug.
out = buf.Bytes()
}
err = ioutil.WriteFile("node_gen.go", out, 0666)
if err != nil {
log.Fatal(err)
}
}
func forNodeFields(typName string, typ *types.Struct, f func(name string, is func(types.Type) bool)) {
for i, n := 0, typ.NumFields(); i < n; i++ {
v := typ.Field(i)
if v.Embedded() {
if typ, ok := v.Type().Underlying().(*types.Struct); ok {
forNodeFields(typName, typ, f)
continue
}
}
switch typName {
case "Func":
if strings.ToLower(strings.TrimSuffix(v.Name(), "_")) != "body" {
continue
}
case "Name":
continue
}
switch v.Name() {
case "orig":
continue
}
switch typName + "." + v.Name() {
case "AddStringExpr.Alloc":
continue
}
f(v.Name(), func(t types.Type) bool { return types.Identical(t, v.Type()) })
}
}
func hasMiniNode(typ *types.Struct) bool {
for i, n := 0, typ.NumFields(); i < n; i++ {
v := typ.Field(i)
if v.Name() == "miniNode" {
return true
}
if v.Embedded() {
if typ, ok := v.Type().Underlying().(*types.Struct); ok && hasMiniNode(typ) {
return true
}
}
}
return false
}


@ -0,0 +1,398 @@
// Copyright 2020 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package ir
import (
"cmd/compile/internal/base"
"cmd/compile/internal/types"
"cmd/internal/objabi"
"cmd/internal/src"
"go/constant"
)
// An Ident is an identifier, possibly qualified.
type Ident struct {
miniExpr
sym *types.Sym
Used bool
}
func NewIdent(pos src.XPos, sym *types.Sym) *Ident {
n := new(Ident)
n.op = ONONAME
n.pos = pos
n.sym = sym
return n
}
func (n *Ident) Sym() *types.Sym { return n.sym }
func (*Ident) CanBeNtype() {}
// Name holds Node fields used only by named nodes (ONAME, OTYPE, some OLITERAL).
type Name struct {
miniExpr
subOp Op // uint8
class Class // uint8
flags bitset16
pragma PragmaFlag // int16
sym *types.Sym
fn *Func
offset int64
val constant.Value
orig Node
embedFiles *[]string // list of embedded files, for ONAME var
PkgName *PkgName // real package for import . names
// For a local variable (not param) or extern, the initializing assignment (OAS or OAS2).
// For a closure var, the ONAME node of the outer captured variable
Defn Node
// The function, method, or closure in which local variable or param is declared.
Curfn *Func
// Unique number for ONAME nodes within a function. Function outputs
// (results) are numbered starting at one, followed by function inputs
// (parameters), and then local variables. Vargen is used to distinguish
// local variables/params with the same name.
Vargen int32
Decldepth int32 // declaration loop depth, increased for every loop or label
Ntype Ntype
Heapaddr *Name // temp holding heap address of param
// ONAME PAUTOHEAP
Stackcopy *Name // the PPARAM/PPARAMOUT on-stack slot (moved func params only)
// ONAME closure linkage
// Consider:
//
// func f() {
// x := 1 // x1
// func() {
// use(x) // x2
// func() {
// use(x) // x3
// --- parser is here ---
// }()
// }()
// }
//
// There is an original declaration of x and then a chain of mentions of x
// leading into the current function. Each time x is mentioned in a new closure,
// we create a variable representing x for use in that specific closure,
// since the way you get to x is different in each closure.
//
// Let's number the specific variables as shown in the code:
// x1 is the original x, x2 is when mentioned in the closure,
// and x3 is when mentioned in the closure in the closure.
//
// We keep these linked (assume N > 1):
//
// - x1.Defn = original declaration statement for x (like most variables)
// - x1.Innermost = current innermost closure x (in this case x3), or nil for none
// - x1.IsClosureVar() = false
//
// - xN.Defn = x1, N > 1
// - xN.IsClosureVar() = true, N > 1
// - x2.Outer = nil
// - xN.Outer = x(N-1), N > 2
//
//
// When we look up x in the symbol table, we always get x1.
// Then we can use x1.Innermost (if not nil) to get the x
// for the innermost known closure function,
// but the first reference in a closure will find either no x1.Innermost
// or an x1.Innermost with .Funcdepth < Funcdepth.
// In that case, a new xN must be created, linked in with:
//
// xN.Defn = x1
// xN.Outer = x1.Innermost
// x1.Innermost = xN
//
// When we finish the function, we'll process its closure variables
// and find xN and pop it off the list using:
//
// x1 := xN.Defn
// x1.Innermost = xN.Outer
//
// We leave x1.Innermost set so that we can still get to the original
// variable quickly. Not shown here, but once we're
// done parsing a function and no longer need xN.Outer for the
// lexical x reference links as described above, funcLit
// recomputes xN.Outer as the semantic x reference link tree,
// even filling in x in intermediate closures that might not
// have mentioned it along the way to inner closures that did.
// See funcLit for details.
//
// During the eventual compilation, then, for closure variables we have:
//
// xN.Defn = original variable
// xN.Outer = variable captured in next outward scope
// to make closure where xN appears
//
// Because of the sharding of pieces of the node, x.Defn means x.Name.Defn
	// and x.Innermost/Outer means x.Name.Innermost/Outer.
Innermost *Name
Outer *Name
}
func (n *Name) isExpr() {}
// NewNameAt returns a new ONAME Node associated with symbol s at position pos.
// The caller is responsible for setting Curfn.
func NewNameAt(pos src.XPos, sym *types.Sym) *Name {
if sym == nil {
base.Fatalf("NewNameAt nil")
}
return newNameAt(pos, ONAME, sym)
}
// NewIota returns a new OIOTA Node.
func NewIota(pos src.XPos, sym *types.Sym) *Name {
if sym == nil {
base.Fatalf("NewIota nil")
}
return newNameAt(pos, OIOTA, sym)
}
// NewDeclNameAt returns a new ONONAME Node associated with symbol s at position pos.
// The caller is responsible for setting Curfn.
func NewDeclNameAt(pos src.XPos, sym *types.Sym) *Name {
if sym == nil {
base.Fatalf("NewDeclNameAt nil")
}
return newNameAt(pos, ONONAME, sym)
}
// newNameAt is like NewNameAt but allows sym == nil.
func newNameAt(pos src.XPos, op Op, sym *types.Sym) *Name {
n := new(Name)
n.op = op
n.pos = pos
n.orig = n
n.sym = sym
return n
}
func (n *Name) Name() *Name { return n }
func (n *Name) Sym() *types.Sym { return n.sym }
func (n *Name) SetSym(x *types.Sym) { n.sym = x }
func (n *Name) SubOp() Op { return n.subOp }
func (n *Name) SetSubOp(x Op) { n.subOp = x }
func (n *Name) Class() Class { return n.class }
func (n *Name) SetClass(x Class) { n.class = x }
func (n *Name) Func() *Func { return n.fn }
func (n *Name) SetFunc(x *Func) { n.fn = x }
func (n *Name) Offset() int64 { return n.offset }
func (n *Name) SetOffset(x int64) { n.offset = x }
func (n *Name) Iota() int64 { return n.offset }
func (n *Name) SetIota(x int64) { n.offset = x }
func (*Name) CanBeNtype() {}
func (*Name) CanBeAnSSASym() {}
func (*Name) CanBeAnSSAAux() {}
func (n *Name) SetOp(op Op) {
if n.op != ONONAME {
base.Fatalf("%v already has Op %v", n, n.op)
}
switch op {
default:
panic(n.no("SetOp " + op.String()))
case OLITERAL, ONAME, OTYPE, OIOTA:
n.op = op
}
}
// Pragma returns the PragmaFlag for n, which must be for an OTYPE.
func (n *Name) Pragma() PragmaFlag { return n.pragma }
// SetPragma sets the PragmaFlag for n, which must be for an OTYPE.
func (n *Name) SetPragma(flag PragmaFlag) { n.pragma = flag }
// Alias reports whether n, which must be for an OTYPE, is a type alias.
func (n *Name) Alias() bool { return n.flags&nameAlias != 0 }
// SetAlias sets whether n, which must be for an OTYPE, is a type alias.
func (n *Name) SetAlias(alias bool) { n.flags.set(nameAlias, alias) }
// EmbedFiles returns the list of embedded files for n,
// which must be for an ONAME var.
func (n *Name) EmbedFiles() []string {
if n.embedFiles == nil {
return nil
}
return *n.embedFiles
}
// SetEmbedFiles sets the list of embedded files for n,
// which must be for an ONAME var.
func (n *Name) SetEmbedFiles(list []string) {
if n.embedFiles == nil && list == nil {
return
}
if n.embedFiles == nil {
n.embedFiles = new([]string)
}
*n.embedFiles = list
}
const (
nameCaptured = 1 << iota // is the variable captured by a closure
nameReadonly
nameByval // is the variable captured by value or by reference
nameNeedzero // if it contains pointers, needs to be zeroed on function entry
nameAutoTemp // is the variable a temporary (implies no dwarf info. reset if escapes to heap)
nameUsed // for variable declared and not used error
nameIsClosureVar // PAUTOHEAP closure pseudo-variable; original at n.Name.Defn
nameIsOutputParamHeapAddr // pointer to a result parameter's heap copy
nameAssigned // is the variable ever assigned to
nameAddrtaken // address taken, even if not moved to heap
nameInlFormal // PAUTO created by inliner, derived from callee formal
nameInlLocal // PAUTO created by inliner, derived from callee local
nameOpenDeferSlot // if temporary var storing info for open-coded defers
nameLibfuzzerExtraCounter // if PEXTERN should be assigned to __libfuzzer_extra_counters section
nameIsDDD // is function argument a ...
nameAlias // is type name an alias
)
func (n *Name) Captured() bool { return n.flags&nameCaptured != 0 }
func (n *Name) Readonly() bool { return n.flags&nameReadonly != 0 }
func (n *Name) Byval() bool { return n.flags&nameByval != 0 }
func (n *Name) Needzero() bool { return n.flags&nameNeedzero != 0 }
func (n *Name) AutoTemp() bool { return n.flags&nameAutoTemp != 0 }
func (n *Name) Used() bool { return n.flags&nameUsed != 0 }
func (n *Name) IsClosureVar() bool { return n.flags&nameIsClosureVar != 0 }
func (n *Name) IsOutputParamHeapAddr() bool { return n.flags&nameIsOutputParamHeapAddr != 0 }
func (n *Name) Assigned() bool { return n.flags&nameAssigned != 0 }
func (n *Name) Addrtaken() bool { return n.flags&nameAddrtaken != 0 }
func (n *Name) InlFormal() bool { return n.flags&nameInlFormal != 0 }
func (n *Name) InlLocal() bool { return n.flags&nameInlLocal != 0 }
func (n *Name) OpenDeferSlot() bool { return n.flags&nameOpenDeferSlot != 0 }
func (n *Name) LibfuzzerExtraCounter() bool { return n.flags&nameLibfuzzerExtraCounter != 0 }
func (n *Name) IsDDD() bool { return n.flags&nameIsDDD != 0 }
func (n *Name) SetCaptured(b bool) { n.flags.set(nameCaptured, b) }
func (n *Name) setReadonly(b bool) { n.flags.set(nameReadonly, b) }
func (n *Name) SetByval(b bool) { n.flags.set(nameByval, b) }
func (n *Name) SetNeedzero(b bool) { n.flags.set(nameNeedzero, b) }
func (n *Name) SetAutoTemp(b bool) { n.flags.set(nameAutoTemp, b) }
func (n *Name) SetUsed(b bool) { n.flags.set(nameUsed, b) }
func (n *Name) SetIsClosureVar(b bool) { n.flags.set(nameIsClosureVar, b) }
func (n *Name) SetIsOutputParamHeapAddr(b bool) { n.flags.set(nameIsOutputParamHeapAddr, b) }
func (n *Name) SetAssigned(b bool) { n.flags.set(nameAssigned, b) }
func (n *Name) SetAddrtaken(b bool) { n.flags.set(nameAddrtaken, b) }
func (n *Name) SetInlFormal(b bool) { n.flags.set(nameInlFormal, b) }
func (n *Name) SetInlLocal(b bool) { n.flags.set(nameInlLocal, b) }
func (n *Name) SetOpenDeferSlot(b bool) { n.flags.set(nameOpenDeferSlot, b) }
func (n *Name) SetLibfuzzerExtraCounter(b bool) { n.flags.set(nameLibfuzzerExtraCounter, b) }
func (n *Name) SetIsDDD(b bool) { n.flags.set(nameIsDDD, b) }
// MarkReadonly indicates that n is an ONAME with readonly contents.
func (n *Name) MarkReadonly() {
if n.Op() != ONAME {
base.Fatalf("Node.MarkReadonly %v", n.Op())
}
n.Name().setReadonly(true)
// Mark the linksym as readonly immediately
// so that the SSA backend can use this information.
// It will be overridden later during dumpglobls.
n.Sym().Linksym().Type = objabi.SRODATA
}
// Val returns the constant.Value for the node.
func (n *Name) Val() constant.Value {
if n.val == nil {
return constant.MakeUnknown()
}
return n.val
}
// SetVal sets the constant.Value for the node,
// which must not have been used with SetOpt.
func (n *Name) SetVal(v constant.Value) {
if n.op != OLITERAL {
panic(n.no("SetVal"))
}
AssertValidTypeForConst(n.Type(), v)
n.val = v
}
// SameSource reports whether two nodes refer to the same source
// element.
//
// It exists to help incrementally migrate the compiler towards
// allowing the introduction of IdentExpr (#42990). Once we have
// IdentExpr, it will no longer be safe to directly compare Node
// values to tell if they refer to the same Name. Instead, code will
// need to explicitly get references to the underlying Name object(s),
// and compare those instead.
//
// It will still be safe to compare Nodes directly for checking if two
// nodes are syntactically the same. The SameSource function exists to
// indicate code that intentionally compares Nodes for syntactic
// equality as opposed to code that has yet to be updated in
// preparation for IdentExpr.
func SameSource(n1, n2 Node) bool {
return n1 == n2
}
// Uses reports whether expression x is a (direct) use of the given
// variable.
func Uses(x Node, v *Name) bool {
if v == nil || v.Op() != ONAME {
base.Fatalf("Uses bad Name: %v", v)
}
return x.Op() == ONAME && x.Name() == v
}
// DeclaredBy reports whether expression x refers (directly) to a
// variable that was declared by the given statement.
func DeclaredBy(x, stmt Node) bool {
if stmt == nil {
base.Fatalf("DeclaredBy nil")
}
return x.Op() == ONAME && SameSource(x.Name().Defn, stmt)
}
// The Class of a variable/function describes the "storage class"
// of a variable or function. During parsing, storage classes are
// called declaration contexts.
type Class uint8
//go:generate stringer -type=Class
const (
Pxxx Class = iota // no class; used during ssa conversion to indicate pseudo-variables
PEXTERN // global variables
PAUTO // local variables
PAUTOHEAP // local variables or parameters moved to heap
PPARAM // input arguments
PPARAMOUT // output results
PFUNC // global functions
// Careful: Class is stored in three bits in Node.flags.
_ = uint((1 << 3) - iota) // static assert for iota <= (1 << 3)
)
// A PkgName is an identifier referring to an imported package.
type PkgName struct {
miniNode
sym *types.Sym
Pkg *types.Pkg
Used bool
}
func (p *PkgName) Sym() *types.Sym { return p.sym }
func (*PkgName) CanBeNtype() {}
func NewPkgName(pos src.XPos, sym *types.Sym, pkg *types.Pkg) *PkgName {
p := &PkgName{sym: sym, Pkg: pkg}
p.op = OPACK
p.pos = pos
return p
}

File diff suppressed because it is too large

File diff suppressed because it is too large


@ -56,118 +56,117 @@ func _() {
_ = x[OCOPY-45]
_ = x[ODCL-46]
_ = x[ODCLFUNC-47]
_ = x[ODCLFIELD-48]
_ = x[ODCLCONST-49]
_ = x[ODCLTYPE-50]
_ = x[ODELETE-51]
_ = x[ODOT-52]
_ = x[ODOTPTR-53]
_ = x[ODOTMETH-54]
_ = x[ODOTINTER-55]
_ = x[OXDOT-56]
_ = x[ODOTTYPE-57]
_ = x[ODOTTYPE2-58]
_ = x[OEQ-59]
_ = x[ONE-60]
_ = x[OLT-61]
_ = x[OLE-62]
_ = x[OGE-63]
_ = x[OGT-64]
_ = x[ODEREF-65]
_ = x[OINDEX-66]
_ = x[OINDEXMAP-67]
_ = x[OKEY-68]
_ = x[OSTRUCTKEY-69]
_ = x[OLEN-70]
_ = x[OMAKE-71]
_ = x[OMAKECHAN-72]
_ = x[OMAKEMAP-73]
_ = x[OMAKESLICE-74]
_ = x[OMAKESLICECOPY-75]
_ = x[OMUL-76]
_ = x[ODIV-77]
_ = x[OMOD-78]
_ = x[OLSH-79]
_ = x[ORSH-80]
_ = x[OAND-81]
_ = x[OANDNOT-82]
_ = x[ONEW-83]
_ = x[ONEWOBJ-84]
_ = x[ONOT-85]
_ = x[OBITNOT-86]
_ = x[OPLUS-87]
_ = x[ONEG-88]
_ = x[OOROR-89]
_ = x[OPANIC-90]
_ = x[OPRINT-91]
_ = x[OPRINTN-92]
_ = x[OPAREN-93]
_ = x[OSEND-94]
_ = x[OSLICE-95]
_ = x[OSLICEARR-96]
_ = x[OSLICESTR-97]
_ = x[OSLICE3-98]
_ = x[OSLICE3ARR-99]
_ = x[OSLICEHEADER-100]
_ = x[ORECOVER-101]
_ = x[ORECV-102]
_ = x[ORUNESTR-103]
_ = x[OSELRECV-104]
_ = x[OSELRECV2-105]
_ = x[OIOTA-106]
_ = x[OREAL-107]
_ = x[OIMAG-108]
_ = x[OCOMPLEX-109]
_ = x[OALIGNOF-110]
_ = x[OOFFSETOF-111]
_ = x[OSIZEOF-112]
_ = x[OMETHEXPR-113]
_ = x[ODCLCONST-48]
_ = x[ODCLTYPE-49]
_ = x[ODELETE-50]
_ = x[ODOT-51]
_ = x[ODOTPTR-52]
_ = x[ODOTMETH-53]
_ = x[ODOTINTER-54]
_ = x[OXDOT-55]
_ = x[ODOTTYPE-56]
_ = x[ODOTTYPE2-57]
_ = x[OEQ-58]
_ = x[ONE-59]
_ = x[OLT-60]
_ = x[OLE-61]
_ = x[OGE-62]
_ = x[OGT-63]
_ = x[ODEREF-64]
_ = x[OINDEX-65]
_ = x[OINDEXMAP-66]
_ = x[OKEY-67]
_ = x[OSTRUCTKEY-68]
_ = x[OLEN-69]
_ = x[OMAKE-70]
_ = x[OMAKECHAN-71]
_ = x[OMAKEMAP-72]
_ = x[OMAKESLICE-73]
_ = x[OMAKESLICECOPY-74]
_ = x[OMUL-75]
_ = x[ODIV-76]
_ = x[OMOD-77]
_ = x[OLSH-78]
_ = x[ORSH-79]
_ = x[OAND-80]
_ = x[OANDNOT-81]
_ = x[ONEW-82]
_ = x[ONEWOBJ-83]
_ = x[ONOT-84]
_ = x[OBITNOT-85]
_ = x[OPLUS-86]
_ = x[ONEG-87]
_ = x[OOROR-88]
_ = x[OPANIC-89]
_ = x[OPRINT-90]
_ = x[OPRINTN-91]
_ = x[OPAREN-92]
_ = x[OSEND-93]
_ = x[OSLICE-94]
_ = x[OSLICEARR-95]
_ = x[OSLICESTR-96]
_ = x[OSLICE3-97]
_ = x[OSLICE3ARR-98]
_ = x[OSLICEHEADER-99]
_ = x[ORECOVER-100]
_ = x[ORECV-101]
_ = x[ORUNESTR-102]
_ = x[OSELRECV-103]
_ = x[OSELRECV2-104]
_ = x[OIOTA-105]
_ = x[OREAL-106]
_ = x[OIMAG-107]
_ = x[OCOMPLEX-108]
_ = x[OALIGNOF-109]
_ = x[OOFFSETOF-110]
_ = x[OSIZEOF-111]
_ = x[OMETHEXPR-112]
_ = x[OSTMTEXPR-113]
_ = x[OBLOCK-114]
_ = x[OBREAK-115]
_ = x[OCASE-116]
_ = x[OCONTINUE-117]
_ = x[ODEFER-118]
_ = x[OEMPTY-119]
_ = x[OFALL-120]
_ = x[OFOR-121]
_ = x[OFORUNTIL-122]
_ = x[OGOTO-123]
_ = x[OIF-124]
_ = x[OLABEL-125]
_ = x[OGO-126]
_ = x[ORANGE-127]
_ = x[ORETURN-128]
_ = x[OSELECT-129]
_ = x[OSWITCH-130]
_ = x[OTYPESW-131]
_ = x[OTCHAN-132]
_ = x[OTMAP-133]
_ = x[OTSTRUCT-134]
_ = x[OTINTER-135]
_ = x[OTFUNC-136]
_ = x[OTARRAY-137]
_ = x[ODDD-138]
_ = x[OINLCALL-139]
_ = x[OEFACE-140]
_ = x[OITAB-141]
_ = x[OIDATA-142]
_ = x[OSPTR-143]
_ = x[OCLOSUREVAR-144]
_ = x[OCFUNC-145]
_ = x[OCHECKNIL-146]
_ = x[OVARDEF-147]
_ = x[OVARKILL-148]
_ = x[OVARLIVE-149]
_ = x[ORESULT-150]
_ = x[OINLMARK-151]
_ = x[ORETJMP-152]
_ = x[OGETG-153]
_ = x[OEND-154]
_ = x[OFALL-119]
_ = x[OFOR-120]
_ = x[OFORUNTIL-121]
_ = x[OGOTO-122]
_ = x[OIF-123]
_ = x[OLABEL-124]
_ = x[OGO-125]
_ = x[ORANGE-126]
_ = x[ORETURN-127]
_ = x[OSELECT-128]
_ = x[OSWITCH-129]
_ = x[OTYPESW-130]
_ = x[OTCHAN-131]
_ = x[OTMAP-132]
_ = x[OTSTRUCT-133]
_ = x[OTINTER-134]
_ = x[OTFUNC-135]
_ = x[OTARRAY-136]
_ = x[OTSLICE-137]
_ = x[OINLCALL-138]
_ = x[OEFACE-139]
_ = x[OITAB-140]
_ = x[OIDATA-141]
_ = x[OSPTR-142]
_ = x[OCLOSUREREAD-143]
_ = x[OCFUNC-144]
_ = x[OCHECKNIL-145]
_ = x[OVARDEF-146]
_ = x[OVARKILL-147]
_ = x[OVARLIVE-148]
_ = x[ORESULT-149]
_ = x[OINLMARK-150]
_ = x[ORETJMP-151]
_ = x[OGETG-152]
_ = x[OEND-153]
}
const _Op_name = "XXXNAMENONAMETYPEPACKLITERALNILADDSUBORXORADDSTRADDRANDANDAPPENDBYTES2STRBYTES2STRTMPRUNES2STRSTR2BYTESSTR2BYTESTMPSTR2RUNESASAS2AS2DOTTYPEAS2FUNCAS2MAPRAS2RECVASOPCALLCALLFUNCCALLMETHCALLINTERCALLPARTCAPCLOSECLOSURECOMPLITMAPLITSTRUCTLITARRAYLITSLICELITPTRLITCONVCONVIFACECONVNOPCOPYDCLDCLFUNCDCLFIELDDCLCONSTDCLTYPEDELETEDOTDOTPTRDOTMETHDOTINTERXDOTDOTTYPEDOTTYPE2EQNELTLEGEGTDEREFINDEXINDEXMAPKEYSTRUCTKEYLENMAKEMAKECHANMAKEMAPMAKESLICEMAKESLICECOPYMULDIVMODLSHRSHANDANDNOTNEWNEWOBJNOTBITNOTPLUSNEGORORPANICPRINTPRINTNPARENSENDSLICESLICEARRSLICESTRSLICE3SLICE3ARRSLICEHEADERRECOVERRECVRUNESTRSELRECVSELRECV2IOTAREALIMAGCOMPLEXALIGNOFOFFSETOFSIZEOFMETHEXPRBLOCKBREAKCASECONTINUEDEFEREMPTYFALLFORFORUNTILGOTOIFLABELGORANGERETURNSELECTSWITCHTYPESWTCHANTMAPTSTRUCTTINTERTFUNCTARRAYDDDINLCALLEFACEITABIDATASPTRCLOSUREVARCFUNCCHECKNILVARDEFVARKILLVARLIVERESULTINLMARKRETJMPGETGEND"
const _Op_name = "XXXNAMENONAMETYPEPACKLITERALNILADDSUBORXORADDSTRADDRANDANDAPPENDBYTES2STRBYTES2STRTMPRUNES2STRSTR2BYTESSTR2BYTESTMPSTR2RUNESASAS2AS2DOTTYPEAS2FUNCAS2MAPRAS2RECVASOPCALLCALLFUNCCALLMETHCALLINTERCALLPARTCAPCLOSECLOSURECOMPLITMAPLITSTRUCTLITARRAYLITSLICELITPTRLITCONVCONVIFACECONVNOPCOPYDCLDCLFUNCDCLCONSTDCLTYPEDELETEDOTDOTPTRDOTMETHDOTINTERXDOTDOTTYPEDOTTYPE2EQNELTLEGEGTDEREFINDEXINDEXMAPKEYSTRUCTKEYLENMAKEMAKECHANMAKEMAPMAKESLICEMAKESLICECOPYMULDIVMODLSHRSHANDANDNOTNEWNEWOBJNOTBITNOTPLUSNEGORORPANICPRINTPRINTNPARENSENDSLICESLICEARRSLICESTRSLICE3SLICE3ARRSLICEHEADERRECOVERRECVRUNESTRSELRECVSELRECV2IOTAREALIMAGCOMPLEXALIGNOFOFFSETOFSIZEOFMETHEXPRSTMTEXPRBLOCKBREAKCASECONTINUEDEFERFALLFORFORUNTILGOTOIFLABELGORANGERETURNSELECTSWITCHTYPESWTCHANTMAPTSTRUCTTINTERTFUNCTARRAYTSLICEINLCALLEFACEITABIDATASPTRCLOSUREREADCFUNCCHECKNILVARDEFVARKILLVARLIVERESULTINLMARKRETJMPGETGEND"
var _Op_index = [...]uint16{0, 3, 7, 13, 17, 21, 28, 31, 34, 37, 39, 42, 48, 52, 58, 64, 73, 85, 94, 103, 115, 124, 126, 129, 139, 146, 153, 160, 164, 168, 176, 184, 193, 201, 204, 209, 216, 223, 229, 238, 246, 254, 260, 264, 273, 280, 284, 287, 294, 302, 310, 317, 323, 326, 332, 339, 347, 351, 358, 366, 368, 370, 372, 374, 376, 378, 383, 388, 396, 399, 408, 411, 415, 423, 430, 439, 452, 455, 458, 461, 464, 467, 470, 476, 479, 485, 488, 494, 498, 501, 505, 510, 515, 521, 526, 530, 535, 543, 551, 557, 566, 577, 584, 588, 595, 602, 610, 614, 618, 622, 629, 636, 644, 650, 658, 663, 668, 672, 680, 685, 690, 694, 697, 705, 709, 711, 716, 718, 723, 729, 735, 741, 747, 752, 756, 763, 769, 774, 780, 783, 790, 795, 799, 804, 808, 818, 823, 831, 837, 844, 851, 857, 864, 870, 874, 877}
var _Op_index = [...]uint16{0, 3, 7, 13, 17, 21, 28, 31, 34, 37, 39, 42, 48, 52, 58, 64, 73, 85, 94, 103, 115, 124, 126, 129, 139, 146, 153, 160, 164, 168, 176, 184, 193, 201, 204, 209, 216, 223, 229, 238, 246, 254, 260, 264, 273, 280, 284, 287, 294, 302, 309, 315, 318, 324, 331, 339, 343, 350, 358, 360, 362, 364, 366, 368, 370, 375, 380, 388, 391, 400, 403, 407, 415, 422, 431, 444, 447, 450, 453, 456, 459, 462, 468, 471, 477, 480, 486, 490, 493, 497, 502, 507, 513, 518, 522, 527, 535, 543, 549, 558, 569, 576, 580, 587, 594, 602, 606, 610, 614, 621, 628, 636, 642, 650, 658, 663, 668, 672, 680, 685, 689, 692, 700, 704, 706, 711, 713, 718, 724, 730, 736, 742, 747, 751, 758, 764, 769, 775, 781, 788, 793, 797, 802, 806, 817, 822, 830, 836, 843, 850, 856, 863, 869, 873, 876}
func (i Op) String() string {
if i >= Op(len(_Op_index)-1) {

View File

@ -20,10 +20,8 @@ func TestSizeof(t *testing.T) {
_32bit uintptr // size on 32bit platforms
_64bit uintptr // size on 64bit platforms
}{
-	{Func{}, 152, 280},
-	{Name{}, 44, 80},
-	{Param{}, 44, 88},
-	{node{}, 88, 152},
+	{Func{}, 168, 288},
+	{Name{}, 124, 216},
}
for _, tt := range tests {

View File

@ -0,0 +1,541 @@
// Copyright 2020 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package ir
import (
"cmd/compile/internal/types"
"cmd/internal/src"
)
// A Decl is a declaration of a const, type, or var. (A declared func is a Func.)
type Decl struct {
miniNode
X Node // the thing being declared
}
func NewDecl(pos src.XPos, op Op, x Node) *Decl {
n := &Decl{X: x}
n.pos = pos
switch op {
default:
panic("invalid Decl op " + op.String())
case ODCL, ODCLCONST, ODCLTYPE:
n.op = op
}
return n
}
func (*Decl) isStmt() {}
func (n *Decl) Left() Node { return n.X }
func (n *Decl) SetLeft(x Node) { n.X = x }
// A Stmt is a Node that can appear as a statement.
// This includes statement-like expressions such as f().
//
// (It's possible it should include <-c, but that would require
// splitting ORECV out of UnaryExpr, which hasn't yet been
// necessary. Maybe instead we will introduce ExprStmt at
// some point.)
type Stmt interface {
Node
isStmt()
}
// A miniStmt is a miniNode with extra fields common to statements.
type miniStmt struct {
miniNode
init Nodes
}
func (*miniStmt) isStmt() {}
func (n *miniStmt) Init() Nodes { return n.init }
func (n *miniStmt) SetInit(x Nodes) { n.init = x }
func (n *miniStmt) PtrInit() *Nodes { return &n.init }
func (n *miniStmt) HasCall() bool { return n.bits&miniHasCall != 0 }
func (n *miniStmt) SetHasCall(b bool) { n.bits.set(miniHasCall, b) }
// An AssignListStmt is an assignment statement with
// more than one item on at least one side: Lhs = Rhs.
// If Def is true, the assignment is a :=.
type AssignListStmt struct {
miniStmt
Lhs Nodes
Def bool
Rhs Nodes
Offset_ int64 // for initorder
}
func NewAssignListStmt(pos src.XPos, op Op, lhs, rhs []Node) *AssignListStmt {
n := &AssignListStmt{}
n.pos = pos
n.SetOp(op)
n.Lhs.Set(lhs)
n.Rhs.Set(rhs)
n.Offset_ = types.BADWIDTH
return n
}
func (n *AssignListStmt) List() Nodes { return n.Lhs }
func (n *AssignListStmt) PtrList() *Nodes { return &n.Lhs }
func (n *AssignListStmt) SetList(x Nodes) { n.Lhs = x }
func (n *AssignListStmt) Rlist() Nodes { return n.Rhs }
func (n *AssignListStmt) PtrRlist() *Nodes { return &n.Rhs }
func (n *AssignListStmt) SetRlist(x Nodes) { n.Rhs = x }
func (n *AssignListStmt) Colas() bool { return n.Def }
func (n *AssignListStmt) SetColas(x bool) { n.Def = x }
func (n *AssignListStmt) Offset() int64 { return n.Offset_ }
func (n *AssignListStmt) SetOffset(x int64) { n.Offset_ = x }
func (n *AssignListStmt) SetOp(op Op) {
switch op {
default:
panic(n.no("SetOp " + op.String()))
case OAS2, OAS2DOTTYPE, OAS2FUNC, OAS2MAPR, OAS2RECV, OSELRECV2:
n.op = op
}
}
// An AssignStmt is a simple assignment statement: X = Y.
// If Def is true, the assignment is a :=.
type AssignStmt struct {
miniStmt
X Node
Def bool
Y Node
Offset_ int64 // for initorder
}
func NewAssignStmt(pos src.XPos, x, y Node) *AssignStmt {
n := &AssignStmt{X: x, Y: y}
n.pos = pos
n.op = OAS
n.Offset_ = types.BADWIDTH
return n
}
func (n *AssignStmt) Left() Node { return n.X }
func (n *AssignStmt) SetLeft(x Node) { n.X = x }
func (n *AssignStmt) Right() Node { return n.Y }
func (n *AssignStmt) SetRight(y Node) { n.Y = y }
func (n *AssignStmt) Colas() bool { return n.Def }
func (n *AssignStmt) SetColas(x bool) { n.Def = x }
func (n *AssignStmt) Offset() int64 { return n.Offset_ }
func (n *AssignStmt) SetOffset(x int64) { n.Offset_ = x }
func (n *AssignStmt) SetOp(op Op) {
switch op {
default:
panic(n.no("SetOp " + op.String()))
case OAS, OSELRECV:
n.op = op
}
}
// An AssignOpStmt is an AsOp= assignment statement: X AsOp= Y.
type AssignOpStmt struct {
miniStmt
typ *types.Type
X Node
AsOp Op // OADD etc
Y Node
IncDec bool // actually ++ or --
}
func NewAssignOpStmt(pos src.XPos, op Op, x, y Node) *AssignOpStmt {
n := &AssignOpStmt{AsOp: op, X: x, Y: y}
n.pos = pos
n.op = OASOP
return n
}
func (n *AssignOpStmt) Left() Node { return n.X }
func (n *AssignOpStmt) SetLeft(x Node) { n.X = x }
func (n *AssignOpStmt) Right() Node { return n.Y }
func (n *AssignOpStmt) SetRight(y Node) { n.Y = y }
func (n *AssignOpStmt) SubOp() Op { return n.AsOp }
func (n *AssignOpStmt) SetSubOp(x Op) { n.AsOp = x }
func (n *AssignOpStmt) Implicit() bool { return n.IncDec }
func (n *AssignOpStmt) SetImplicit(b bool) { n.IncDec = b }
func (n *AssignOpStmt) Type() *types.Type { return n.typ }
func (n *AssignOpStmt) SetType(x *types.Type) { n.typ = x }
// A BlockStmt is a block: { List }.
type BlockStmt struct {
miniStmt
List_ Nodes
}
func NewBlockStmt(pos src.XPos, list []Node) *BlockStmt {
n := &BlockStmt{}
n.pos = pos
n.op = OBLOCK
n.List_.Set(list)
return n
}
func (n *BlockStmt) List() Nodes { return n.List_ }
func (n *BlockStmt) PtrList() *Nodes { return &n.List_ }
func (n *BlockStmt) SetList(x Nodes) { n.List_ = x }
// A BranchStmt is a break, continue, fallthrough, or goto statement.
//
// For back-end code generation, Op may also be RETJMP (return+jump),
// in which case the label names another function entirely.
type BranchStmt struct {
miniStmt
Label *types.Sym // label if present
}
func NewBranchStmt(pos src.XPos, op Op, label *types.Sym) *BranchStmt {
switch op {
case OBREAK, OCONTINUE, OFALL, OGOTO, ORETJMP:
// ok
default:
panic("NewBranch " + op.String())
}
n := &BranchStmt{Label: label}
n.pos = pos
n.op = op
return n
}
func (n *BranchStmt) Sym() *types.Sym { return n.Label }
func (n *BranchStmt) SetSym(sym *types.Sym) { n.Label = sym }
// A CaseStmt is a case statement in a switch or select: case List: Body.
type CaseStmt struct {
miniStmt
Vars Nodes // declared variable for this case in type switch
List_ Nodes // list of expressions for switch, early select
Comm Node // communication case (Exprs[0]) after select is type-checked
Body_ Nodes
}
func NewCaseStmt(pos src.XPos, list, body []Node) *CaseStmt {
n := &CaseStmt{}
n.pos = pos
n.op = OCASE
n.List_.Set(list)
n.Body_.Set(body)
return n
}
func (n *CaseStmt) List() Nodes { return n.List_ }
func (n *CaseStmt) PtrList() *Nodes { return &n.List_ }
func (n *CaseStmt) SetList(x Nodes) { n.List_ = x }
func (n *CaseStmt) Body() Nodes { return n.Body_ }
func (n *CaseStmt) PtrBody() *Nodes { return &n.Body_ }
func (n *CaseStmt) SetBody(x Nodes) { n.Body_ = x }
func (n *CaseStmt) Rlist() Nodes { return n.Vars }
func (n *CaseStmt) PtrRlist() *Nodes { return &n.Vars }
func (n *CaseStmt) SetRlist(x Nodes) { n.Vars = x }
func (n *CaseStmt) Left() Node { return n.Comm }
func (n *CaseStmt) SetLeft(x Node) { n.Comm = x }
// A ForStmt is a non-range for loop: for Init; Cond; Post { Body }
// Op can be OFOR or OFORUNTIL (!Cond).
type ForStmt struct {
miniStmt
Label *types.Sym
Cond Node
Late Nodes
Post Node
Body_ Nodes
HasBreak_ bool
}
func NewForStmt(pos src.XPos, init []Node, cond, post Node, body []Node) *ForStmt {
n := &ForStmt{Cond: cond, Post: post}
n.pos = pos
n.op = OFOR
n.init.Set(init)
n.Body_.Set(body)
return n
}
func (n *ForStmt) Sym() *types.Sym { return n.Label }
func (n *ForStmt) SetSym(x *types.Sym) { n.Label = x }
func (n *ForStmt) Left() Node { return n.Cond }
func (n *ForStmt) SetLeft(x Node) { n.Cond = x }
func (n *ForStmt) Right() Node { return n.Post }
func (n *ForStmt) SetRight(x Node) { n.Post = x }
func (n *ForStmt) Body() Nodes { return n.Body_ }
func (n *ForStmt) PtrBody() *Nodes { return &n.Body_ }
func (n *ForStmt) SetBody(x Nodes) { n.Body_ = x }
func (n *ForStmt) List() Nodes { return n.Late }
func (n *ForStmt) PtrList() *Nodes { return &n.Late }
func (n *ForStmt) SetList(x Nodes) { n.Late = x }
func (n *ForStmt) HasBreak() bool { return n.HasBreak_ }
func (n *ForStmt) SetHasBreak(b bool) { n.HasBreak_ = b }
func (n *ForStmt) SetOp(op Op) {
if op != OFOR && op != OFORUNTIL {
panic(n.no("SetOp " + op.String()))
}
n.op = op
}
// A GoDeferStmt is a go or defer statement: go Call / defer Call.
//
// The two opcodes use a single syntax because the implementations
// are very similar: both are concerned with saving Call and running it
// in a different context (a separate goroutine or a later time).
type GoDeferStmt struct {
miniStmt
Call Node
}
func NewGoDeferStmt(pos src.XPos, op Op, call Node) *GoDeferStmt {
n := &GoDeferStmt{Call: call}
n.pos = pos
switch op {
case ODEFER, OGO:
n.op = op
default:
panic("NewGoDeferStmt " + op.String())
}
return n
}
func (n *GoDeferStmt) Left() Node { return n.Call }
func (n *GoDeferStmt) SetLeft(x Node) { n.Call = x }
// An IfStmt is an if statement: if Init; Cond { Then } else { Else }.
type IfStmt struct {
miniStmt
Cond Node
Body_ Nodes
Else Nodes
Likely_ bool // code layout hint
}
func NewIfStmt(pos src.XPos, cond Node, body, els []Node) *IfStmt {
n := &IfStmt{Cond: cond}
n.pos = pos
n.op = OIF
n.Body_.Set(body)
n.Else.Set(els)
return n
}
func (n *IfStmt) Left() Node { return n.Cond }
func (n *IfStmt) SetLeft(x Node) { n.Cond = x }
func (n *IfStmt) Body() Nodes { return n.Body_ }
func (n *IfStmt) PtrBody() *Nodes { return &n.Body_ }
func (n *IfStmt) SetBody(x Nodes) { n.Body_ = x }
func (n *IfStmt) Rlist() Nodes { return n.Else }
func (n *IfStmt) PtrRlist() *Nodes { return &n.Else }
func (n *IfStmt) SetRlist(x Nodes) { n.Else = x }
func (n *IfStmt) Likely() bool { return n.Likely_ }
func (n *IfStmt) SetLikely(x bool) { n.Likely_ = x }
// An InlineMarkStmt is a marker placed just before an inlined body.
type InlineMarkStmt struct {
miniStmt
Index int64
}
func NewInlineMarkStmt(pos src.XPos, index int64) *InlineMarkStmt {
n := &InlineMarkStmt{Index: index}
n.pos = pos
n.op = OINLMARK
return n
}
func (n *InlineMarkStmt) Offset() int64 { return n.Index }
func (n *InlineMarkStmt) SetOffset(x int64) { n.Index = x }
// A LabelStmt is a label statement (just the label, not including the statement it labels).
type LabelStmt struct {
miniStmt
Label *types.Sym // "Label:"
}
func NewLabelStmt(pos src.XPos, label *types.Sym) *LabelStmt {
n := &LabelStmt{Label: label}
n.pos = pos
n.op = OLABEL
return n
}
func (n *LabelStmt) Sym() *types.Sym { return n.Label }
func (n *LabelStmt) SetSym(x *types.Sym) { n.Label = x }
// A RangeStmt is a range loop: for Vars = range X { Stmts }
// Op is ORANGE.
type RangeStmt struct {
miniStmt
Label *types.Sym
Vars Nodes // TODO(rsc): Replace with Key, Value Node
Def bool
X Node
Body_ Nodes
HasBreak_ bool
typ *types.Type // TODO(rsc): Remove - use X.Type() instead
}
func NewRangeStmt(pos src.XPos, vars []Node, x Node, body []Node) *RangeStmt {
n := &RangeStmt{X: x}
n.pos = pos
n.op = ORANGE
n.Vars.Set(vars)
n.Body_.Set(body)
return n
}
func (n *RangeStmt) Sym() *types.Sym { return n.Label }
func (n *RangeStmt) SetSym(x *types.Sym) { n.Label = x }
func (n *RangeStmt) Right() Node { return n.X }
func (n *RangeStmt) SetRight(x Node) { n.X = x }
func (n *RangeStmt) Body() Nodes { return n.Body_ }
func (n *RangeStmt) PtrBody() *Nodes { return &n.Body_ }
func (n *RangeStmt) SetBody(x Nodes) { n.Body_ = x }
func (n *RangeStmt) List() Nodes { return n.Vars }
func (n *RangeStmt) PtrList() *Nodes { return &n.Vars }
func (n *RangeStmt) SetList(x Nodes) { n.Vars = x }
func (n *RangeStmt) HasBreak() bool { return n.HasBreak_ }
func (n *RangeStmt) SetHasBreak(b bool) { n.HasBreak_ = b }
func (n *RangeStmt) Colas() bool { return n.Def }
func (n *RangeStmt) SetColas(b bool) { n.Def = b }
func (n *RangeStmt) Type() *types.Type { return n.typ }
func (n *RangeStmt) SetType(x *types.Type) { n.typ = x }
// A ReturnStmt is a return statement.
type ReturnStmt struct {
miniStmt
orig Node // for typecheckargs rewrite
Results Nodes // return list
}
func NewReturnStmt(pos src.XPos, results []Node) *ReturnStmt {
n := &ReturnStmt{}
n.pos = pos
n.op = ORETURN
n.orig = n
n.Results.Set(results)
return n
}
func (n *ReturnStmt) Orig() Node { return n.orig }
func (n *ReturnStmt) SetOrig(x Node) { n.orig = x }
func (n *ReturnStmt) List() Nodes { return n.Results }
func (n *ReturnStmt) PtrList() *Nodes { return &n.Results }
func (n *ReturnStmt) SetList(x Nodes) { n.Results = x }
func (n *ReturnStmt) IsDDD() bool { return false } // typecheckargs asks
// A SelectStmt is a block: { Cases }.
type SelectStmt struct {
miniStmt
Label *types.Sym
Cases Nodes
HasBreak_ bool
// TODO(rsc): Instead of recording here, replace with a block?
Compiled Nodes // compiled form, after walkswitch
}
func NewSelectStmt(pos src.XPos, cases []Node) *SelectStmt {
n := &SelectStmt{}
n.pos = pos
n.op = OSELECT
n.Cases.Set(cases)
return n
}
func (n *SelectStmt) List() Nodes { return n.Cases }
func (n *SelectStmt) PtrList() *Nodes { return &n.Cases }
func (n *SelectStmt) SetList(x Nodes) { n.Cases = x }
func (n *SelectStmt) Sym() *types.Sym { return n.Label }
func (n *SelectStmt) SetSym(x *types.Sym) { n.Label = x }
func (n *SelectStmt) HasBreak() bool { return n.HasBreak_ }
func (n *SelectStmt) SetHasBreak(x bool) { n.HasBreak_ = x }
func (n *SelectStmt) Body() Nodes { return n.Compiled }
func (n *SelectStmt) PtrBody() *Nodes { return &n.Compiled }
func (n *SelectStmt) SetBody(x Nodes) { n.Compiled = x }
// A SendStmt is a send statement: X <- Y.
type SendStmt struct {
miniStmt
Chan Node
Value Node
}
func NewSendStmt(pos src.XPos, ch, value Node) *SendStmt {
n := &SendStmt{Chan: ch, Value: value}
n.pos = pos
n.op = OSEND
return n
}
func (n *SendStmt) Left() Node { return n.Chan }
func (n *SendStmt) SetLeft(x Node) { n.Chan = x }
func (n *SendStmt) Right() Node { return n.Value }
func (n *SendStmt) SetRight(y Node) { n.Value = y }
// A SwitchStmt is a switch statement: switch Init; Expr { Cases }.
type SwitchStmt struct {
miniStmt
Tag Node
Cases Nodes // list of *CaseStmt
Label *types.Sym
HasBreak_ bool
// TODO(rsc): Instead of recording here, replace with a block?
Compiled Nodes // compiled form, after walkswitch
}
func NewSwitchStmt(pos src.XPos, tag Node, cases []Node) *SwitchStmt {
n := &SwitchStmt{Tag: tag}
n.pos = pos
n.op = OSWITCH
n.Cases.Set(cases)
return n
}
func (n *SwitchStmt) Left() Node { return n.Tag }
func (n *SwitchStmt) SetLeft(x Node) { n.Tag = x }
func (n *SwitchStmt) List() Nodes { return n.Cases }
func (n *SwitchStmt) PtrList() *Nodes { return &n.Cases }
func (n *SwitchStmt) SetList(x Nodes) { n.Cases = x }
func (n *SwitchStmt) Body() Nodes { return n.Compiled }
func (n *SwitchStmt) PtrBody() *Nodes { return &n.Compiled }
func (n *SwitchStmt) SetBody(x Nodes) { n.Compiled = x }
func (n *SwitchStmt) Sym() *types.Sym { return n.Label }
func (n *SwitchStmt) SetSym(x *types.Sym) { n.Label = x }
func (n *SwitchStmt) HasBreak() bool { return n.HasBreak_ }
func (n *SwitchStmt) SetHasBreak(x bool) { n.HasBreak_ = x }
// A TypeSwitchGuard is the [Name :=] X.(type) in a type switch.
type TypeSwitchGuard struct {
miniNode
Tag *Ident
X Node
Used bool
}
func NewTypeSwitchGuard(pos src.XPos, tag *Ident, x Node) *TypeSwitchGuard {
n := &TypeSwitchGuard{Tag: tag, X: x}
n.pos = pos
n.op = OTYPESW
return n
}
func (n *TypeSwitchGuard) Left() Node {
if n.Tag == nil {
return nil
}
return n.Tag
}
func (n *TypeSwitchGuard) SetLeft(x Node) {
if x == nil {
n.Tag = nil
return
}
n.Tag = x.(*Ident)
}
func (n *TypeSwitchGuard) Right() Node { return n.X }
func (n *TypeSwitchGuard) SetRight(x Node) { n.X = x }
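Every statement type in the file above follows the same two conventions: embed `miniStmt` (itself embedding `miniNode`) to share the fields common to all statements, and validate the opcode inside the constructor so an ill-formed node can never be built. A self-contained toy version of that pattern, with illustrative names rather than the real `ir` API:

```go
package main

import "fmt"

type Op int

const (
	OBREAK Op = iota
	OCONTINUE
	OGOTO
)

func (o Op) String() string {
	names := [...]string{"break", "continue", "goto"}
	if o < 0 || int(o) >= len(names) {
		return fmt.Sprintf("Op(%d)", int(o))
	}
	return names[o]
}

type Node interface{ isStmt() }

// miniStmt carries what every statement shares: its opcode and an
// init list of statements run before the statement itself.
type miniStmt struct {
	op   Op
	init []Node
}

func (*miniStmt) isStmt() {}

// BranchStmt adds only its own field on top of the shared core.
type BranchStmt struct {
	miniStmt
	Label string
}

// NewBranchStmt rejects opcodes that are not branches, mirroring the
// defensive constructors in the diff above.
func NewBranchStmt(op Op, label string) *BranchStmt {
	switch op {
	case OBREAK, OCONTINUE, OGOTO:
		// ok
	default:
		panic("NewBranchStmt " + op.String())
	}
	n := &BranchStmt{Label: label}
	n.op = op
	return n
}

func main() {
	n := NewBranchStmt(OGOTO, "retry")
	fmt.Println(n.op, n.Label) // goto retry
}
```

Because the only way to obtain a `*BranchStmt` is through the constructor, every consumer can assume `n.op` is one of the branch opcodes without rechecking.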

View File

@ -0,0 +1,340 @@
// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package ir
import (
"cmd/compile/internal/base"
"cmd/compile/internal/types"
"cmd/internal/src"
"fmt"
)
// Nodes that represent the syntax of a type before type-checking.
// After type-checking, they serve only as shells around a *types.Type.
// Calling TypeNode converts a *types.Type to a Node shell.
// An Ntype is a Node that syntactically looks like a type.
// It can be the raw syntax for a type before typechecking,
// or it can be an OTYPE with Type() set to a *types.Type.
// Note that syntax doesn't guarantee it's a type: an expression
// like *fmt is an Ntype (we don't know whether names are types yet),
// but at least 1+1 is not an Ntype.
type Ntype interface {
Node
CanBeNtype()
}
// A miniType is a minimal type syntax Node implementation,
// to be embedded as the first field in a larger node implementation.
type miniType struct {
miniNode
typ *types.Type
}
func (*miniType) CanBeNtype() {}
func (n *miniType) Type() *types.Type { return n.typ }
// setOTYPE changes n to be an OTYPE node returning t.
// Rewriting the node in place this way should not be strictly
// necessary (we should be able to update the uses with
// proper OTYPE nodes), but it's mostly harmless and easy
// to keep doing for now.
//
// setOTYPE also records t.Nod = self if t.Nod is not already set.
// (Some types are shared by multiple OTYPE nodes, so only
// the first such node is used as t.Nod.)
func (n *miniType) setOTYPE(t *types.Type, self Node) {
if n.typ != nil {
panic(n.op.String() + " SetType: type already set")
}
n.op = OTYPE
n.typ = t
t.SetNod(self)
}
func (n *miniType) Sym() *types.Sym { return nil } // for Format OTYPE
func (n *miniType) Implicit() bool { return false } // for Format OTYPE
// A ChanType represents a chan Elem syntax with the direction Dir.
type ChanType struct {
miniType
Elem Node
Dir types.ChanDir
}
func NewChanType(pos src.XPos, elem Node, dir types.ChanDir) *ChanType {
n := &ChanType{Elem: elem, Dir: dir}
n.op = OTCHAN
n.pos = pos
return n
}
func (n *ChanType) SetOTYPE(t *types.Type) {
n.setOTYPE(t, n)
n.Elem = nil
}
// A MapType represents a map[Key]Value type syntax.
type MapType struct {
miniType
Key Node
Elem Node
}
func NewMapType(pos src.XPos, key, elem Node) *MapType {
n := &MapType{Key: key, Elem: elem}
n.op = OTMAP
n.pos = pos
return n
}
func (n *MapType) SetOTYPE(t *types.Type) {
n.setOTYPE(t, n)
n.Key = nil
n.Elem = nil
}
// A StructType represents a struct { ... } type syntax.
type StructType struct {
miniType
Fields []*Field
}
func NewStructType(pos src.XPos, fields []*Field) *StructType {
n := &StructType{Fields: fields}
n.op = OTSTRUCT
n.pos = pos
return n
}
func (n *StructType) SetOTYPE(t *types.Type) {
n.setOTYPE(t, n)
n.Fields = nil
}
func deepCopyFields(pos src.XPos, fields []*Field) []*Field {
var out []*Field
for _, f := range fields {
out = append(out, f.deepCopy(pos))
}
return out
}
// An InterfaceType represents an interface { ... } type syntax.
type InterfaceType struct {
miniType
Methods []*Field
}
func NewInterfaceType(pos src.XPos, methods []*Field) *InterfaceType {
n := &InterfaceType{Methods: methods}
n.op = OTINTER
n.pos = pos
return n
}
func (n *InterfaceType) SetOTYPE(t *types.Type) {
n.setOTYPE(t, n)
n.Methods = nil
}
// A FuncType represents a func(Args) Results type syntax.
type FuncType struct {
miniType
Recv *Field
Params []*Field
Results []*Field
}
func NewFuncType(pos src.XPos, rcvr *Field, args, results []*Field) *FuncType {
n := &FuncType{Recv: rcvr, Params: args, Results: results}
n.op = OTFUNC
n.pos = pos
return n
}
func (n *FuncType) SetOTYPE(t *types.Type) {
n.setOTYPE(t, n)
n.Recv = nil
n.Params = nil
n.Results = nil
}
// A Field is a declared struct field, interface method, or function argument.
// It is not a Node.
type Field struct {
Pos src.XPos
Sym *types.Sym
Ntype Ntype
Type *types.Type
Embedded bool
IsDDD bool
Note string
Decl *Name
}
func NewField(pos src.XPos, sym *types.Sym, ntyp Ntype, typ *types.Type) *Field {
return &Field{Pos: pos, Sym: sym, Ntype: ntyp, Type: typ}
}
func (f *Field) String() string {
var typ string
if f.Type != nil {
typ = fmt.Sprint(f.Type)
} else {
typ = fmt.Sprint(f.Ntype)
}
if f.Sym != nil {
return fmt.Sprintf("%v %v", f.Sym, typ)
}
return typ
}
func (f *Field) copy() *Field {
c := *f
return &c
}
func copyFields(list []*Field) []*Field {
out := make([]*Field, len(list))
copy(out, list)
for i, f := range out {
out[i] = f.copy()
}
return out
}
func maybeDoField(f *Field, err error, do func(Node) error) error {
if f != nil {
if err == nil && f.Decl != nil {
err = do(f.Decl)
}
if err == nil && f.Ntype != nil {
err = do(f.Ntype)
}
}
return err
}
func maybeDoFields(list []*Field, err error, do func(Node) error) error {
if err != nil {
return err
}
for _, f := range list {
err = maybeDoField(f, err, do)
if err != nil {
return err
}
}
return err
}
func editField(f *Field, edit func(Node) Node) {
if f == nil {
return
}
if f.Decl != nil {
f.Decl = edit(f.Decl).(*Name)
}
if f.Ntype != nil {
f.Ntype = toNtype(edit(f.Ntype))
}
}
func editFields(list []*Field, edit func(Node) Node) {
for _, f := range list {
editField(f, edit)
}
}
func (f *Field) deepCopy(pos src.XPos) *Field {
if f == nil {
return nil
}
fpos := pos
if !pos.IsKnown() {
fpos = f.Pos
}
decl := f.Decl
if decl != nil {
decl = DeepCopy(pos, decl).(*Name)
}
ntype := f.Ntype
if ntype != nil {
ntype = DeepCopy(pos, ntype).(Ntype)
}
// No keyed literal here: if a new struct field is added, we want this to stop compiling.
return &Field{fpos, f.Sym, ntype, f.Type, f.Embedded, f.IsDDD, f.Note, decl}
}
// A SliceType represents a []Elem type syntax.
// If DDD is true, it's the ...Elem at the end of a function list.
type SliceType struct {
miniType
Elem Node
DDD bool
}
func NewSliceType(pos src.XPos, elem Node) *SliceType {
n := &SliceType{Elem: elem}
n.op = OTSLICE
n.pos = pos
return n
}
func (n *SliceType) SetOTYPE(t *types.Type) {
n.setOTYPE(t, n)
n.Elem = nil
}
// An ArrayType represents a [Len]Elem type syntax.
// If Len is nil, the type is a [...]Elem in an array literal.
type ArrayType struct {
miniType
Len Node
Elem Node
}
func NewArrayType(pos src.XPos, size Node, elem Node) *ArrayType {
n := &ArrayType{Len: size, Elem: elem}
n.op = OTARRAY
n.pos = pos
return n
}
func (n *ArrayType) SetOTYPE(t *types.Type) {
n.setOTYPE(t, n)
n.Len = nil
n.Elem = nil
}
// A typeNode is a Node wrapper for type t.
type typeNode struct {
miniNode
typ *types.Type
}
func newTypeNode(pos src.XPos, typ *types.Type) *typeNode {
n := &typeNode{typ: typ}
n.pos = pos
n.op = OTYPE
return n
}
func (n *typeNode) Type() *types.Type { return n.typ }
func (n *typeNode) Sym() *types.Sym { return n.typ.Sym() }
func (n *typeNode) CanBeNtype() {}
// TypeNode returns the Node representing the type t.
func TypeNode(t *types.Type) Ntype {
if n := t.Obj(); n != nil {
if n.Type() != t {
base.Fatalf("type skew: %v has type %v, but expected %v", n, n.Type(), t)
}
return n.(Ntype)
}
return newTypeNode(src.NoXPos, t)
}
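The `copy`/`copyFields` helpers above encode a subtlety worth calling out: `copy(out, list)` on a `[]*Field` duplicates only the pointers, so each element must be cloned afterwards or the two slices would keep mutating the same `Field` structs. A runnable standalone illustration (toy `Field`, not the compiler's):

```go
package main

import "fmt"

type Field struct {
	Name string
	Note string
}

func (f *Field) copy() *Field {
	c := *f // shallow copy of the struct value
	return &c
}

func copyFields(list []*Field) []*Field {
	out := make([]*Field, len(list))
	copy(out, list) // copies the pointers...
	for i, f := range out {
		out[i] = f.copy() // ...so each one is replaced with a clone
	}
	return out
}

func main() {
	orig := []*Field{{Name: "x"}, {Name: "y"}}
	dup := copyFields(orig)
	dup[0].Note = "edited"
	fmt.Println(orig[0].Note == "") // true: the original is untouched
}
```

Without the per-element clone loop, `dup[0].Note = "edited"` would also change `orig[0]`, since both slices would point at the same struct.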

View File

@ -32,7 +32,7 @@ func ConstValue(n Node) interface{} {
case constant.String:
return constant.StringVal(v)
case constant.Int:
-		return Int64Val(n.Type(), v)
+		return IntVal(n.Type(), v)
case constant.Float:
return Float64Val(v)
case constant.Complex:
@ -42,7 +42,7 @@ func ConstValue(n Node) interface{} {
// int64Val returns v converted to int64.
// Note: if t is uint64, very large values will be converted to negative int64.
-func Int64Val(t *types.Type, v constant.Value) int64 {
+func IntVal(t *types.Type, v constant.Value) int64 {
if t.IsUnsigned() {
if x, ok := constant.Uint64Val(v); ok {
return int64(x)
@ -73,7 +73,7 @@ func AssertValidTypeForConst(t *types.Type, v constant.Value) {
func ValidTypeForConst(t *types.Type, v constant.Value) bool {
switch v.Kind() {
case constant.Unknown:
-		return OKForConst[t.Etype]
+		return OKForConst[t.Kind()]
case constant.Bool:
return t.IsBoolean()
case constant.String:
@ -92,7 +92,7 @@ func ValidTypeForConst(t *types.Type, v constant.Value) bool {
// nodlit returns a new untyped constant with value v.
func NewLiteral(v constant.Value) Node {
-	n := Nod(OLITERAL, nil, nil)
+	n := newNameAt(base.Pos, OLITERAL, nil)
if k := v.Kind(); k != constant.Unknown {
n.SetType(idealType(k))
n.SetVal(v)
@ -118,3 +118,59 @@ func idealType(ct constant.Kind) *types.Type {
}
var OKForConst [types.NTYPE]bool
// CanInt64 reports whether it is safe to call Int64Val() on n.
func CanInt64(n Node) bool {
if !IsConst(n, constant.Int) {
return false
}
// if the value inside n cannot be represented as an int64, the
// return value of Int64 is undefined
_, ok := constant.Int64Val(n.Val())
return ok
}
// Int64Val returns n as an int64.
// n must be an integer or rune constant.
func Int64Val(n Node) int64 {
if !IsConst(n, constant.Int) {
base.Fatalf("Int64Val(%v)", n)
}
x, ok := constant.Int64Val(n.Val())
if !ok {
base.Fatalf("Int64Val(%v)", n)
}
return x
}
// Uint64Val returns n as an uint64.
// n must be an integer or rune constant.
func Uint64Val(n Node) uint64 {
if !IsConst(n, constant.Int) {
base.Fatalf("Uint64Val(%v)", n)
}
x, ok := constant.Uint64Val(n.Val())
if !ok {
base.Fatalf("Uint64Val(%v)", n)
}
return x
}
// BoolVal returns n as a bool.
// n must be a boolean constant.
func BoolVal(n Node) bool {
if !IsConst(n, constant.Bool) {
base.Fatalf("BoolVal(%v)", n)
}
return constant.BoolVal(n.Val())
}
// StringVal returns the value of a literal string Node as a string.
// n must be a string constant.
func StringVal(n Node) string {
if !IsConst(n, constant.String) {
base.Fatalf("StringVal(%v)", n)
}
return constant.StringVal(n.Val())
}

View File

@ -0,0 +1,244 @@
// Copyright 2020 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// IR visitors for walking the IR tree.
//
// The lowest level helpers are DoChildren and EditChildren,
// which nodes help implement (TODO(rsc): eventually) and
// provide control over whether and when recursion happens
// during the walk of the IR.
//
// Although these are both useful directly, two simpler patterns
// are fairly common and also provided: Inspect and Find.
package ir
import (
"errors"
)
// DoChildren calls do(x) on each of n's non-nil child nodes x.
// If any call returns a non-nil error, DoChildren stops and returns that error.
// Otherwise, DoChildren returns nil.
//
// Note that DoChildren(n, do) only calls do(x) for n's immediate children.
// If x's children should be processed, then do(x) must call DoChildren(x, do).
//
// DoChildren allows constructing general traversals of the IR graph
// that can stop early if needed. The most general usage is:
//
// var do func(ir.Node) error
// do = func(x ir.Node) error {
// ... processing BEFORE visiting children ...
// if ... should visit children ... {
// ir.DoChildren(x, do)
// ... processing AFTER visiting children ...
// }
// if ... should stop parent DoChildren call from visiting siblings ... {
// return non-nil error
// }
// return nil
// }
// do(root)
//
// Since DoChildren does not generate any errors itself, if the do function
// never wants to stop the traversal, it can assume that DoChildren itself
// will always return nil, simplifying to:
//
// var do func(ir.Node) error
// do = func(x ir.Node) error {
// ... processing BEFORE visiting children ...
// if ... should visit children ... {
// ir.DoChildren(x, do)
// }
// ... processing AFTER visiting children ...
// return nil
// }
// do(root)
//
// The Inspect function illustrates a further simplification of the pattern,
// only considering processing before visiting children, and letting
// that processing decide whether children are visited at all:
//
// func Inspect(n ir.Node, inspect func(ir.Node) bool) {
// var do func(ir.Node) error
// do = func(x ir.Node) error {
// if inspect(x) {
// ir.DoChildren(x, do)
// }
// return nil
// }
// if n != nil {
// do(n)
// }
// }
//
// The Find function illustrates a different simplification of the pattern,
// visiting each node and then its children, recursively, until finding
// a node x such that find(x) returns a non-nil result,
// at which point the entire traversal stops:
//
// func Find(n ir.Node, find func(ir.Node) interface{}) interface{} {
// stop := errors.New("stop")
// var found interface{}
// var do func(ir.Node) error
// do = func(x ir.Node) error {
// if v := find(x); v != nil {
// found = v
// return stop
// }
// return ir.DoChildren(x, do)
// }
// do(n)
// return found
// }
//
// Inspect and Find are presented above as examples of how to use
// DoChildren effectively, but of course, usage that fits within the
// simplifications captured by Inspect or Find will be best served
// by directly calling the ones provided by this package.
func DoChildren(n Node, do func(Node) error) error {
if n == nil {
return nil
}
return n.doChildren(do)
}
// DoList calls f on each non-nil node x in the list, in list order.
// If any call returns a non-nil error, DoList stops and returns that error.
// Otherwise DoList returns nil.
//
// Note that DoList only calls do on the nodes in the list, not their children.
// If x's children should be processed, do(x) must call DoChildren(x, do) itself.
func DoList(list Nodes, do func(Node) error) error {
for _, x := range list.Slice() {
if x != nil {
if err := do(x); err != nil {
return err
}
}
}
return nil
}
// Inspect visits each node x in the IR tree rooted at n
// in a depth-first preorder traversal, calling inspect on each node visited.
// If inspect(x) returns false, then Inspect skips over x's children.
//
// Note that the meaning of the result of the callback function
// passed to Inspect differs from that of Find.
// During Find, if find(x) returns a non-nil value, Find stops the entire traversal.
// During Inspect, if inspect(x) returns false, then Inspect skips x's children
// but continues with the remainder of the tree (x's siblings and so on).
func Inspect(n Node, inspect func(Node) bool) {
var do func(Node) error
do = func(x Node) error {
if inspect(x) {
DoChildren(x, do)
}
return nil
}
if n != nil {
do(n)
}
}
// InspectList calls Inspect(x, inspect) for each node x in the list.
func InspectList(list Nodes, inspect func(Node) bool) {
for _, x := range list.Slice() {
Inspect(x, inspect)
}
}
var stop = errors.New("stop")
// Find looks for a non-nil node x in the IR tree rooted at n
// for which find(x) returns a non-nil value.
// Find considers nodes in a depth-first, preorder traversal.
// When Find finds a node x such that find(x) != nil,
// Find ends the traversal and returns the value of find(x) immediately.
// Otherwise Find returns nil.
func Find(n Node, find func(Node) interface{}) interface{} {
if n == nil {
return nil
}
var found interface{}
var do func(Node) error
do = func(x Node) error {
if v := find(x); v != nil {
found = v
return stop
}
return DoChildren(x, do)
}
do(n)
return found
}
// FindList calls Find(x, find) for each node x in the list, in order.
// If any call find(x) returns a non-nil result, FindList stops and
// returns that result, skipping the remainder of the list.
// Otherwise FindList returns nil.
func FindList(list Nodes, find func(Node) interface{}) interface{} {
for _, x := range list.Slice() {
if v := Find(x, find); v != nil {
return v
}
}
return nil
}
// EditChildren edits the child nodes of n, replacing each child x with edit(x).
//
// Note that EditChildren(n, edit) only calls edit(x) for n's immediate children.
// If x's children should be processed, then edit(x) must call EditChildren(x, edit).
//
// EditChildren allows constructing general editing passes of the IR graph.
// The most general usage is:
//
// var edit func(ir.Node) ir.Node
// edit = func(x ir.Node) ir.Node {
// ... processing BEFORE editing children ...
// if ... should edit children ... {
// EditChildren(x, edit)
// ... processing AFTER editing children ...
// }
// ... return x ...
// }
// n = edit(n)
//
// EditChildren edits the node in place. To edit a copy, call Copy first.
// As an example, a simple deep copy implementation would be:
//
// func deepCopy(n ir.Node) ir.Node {
// var edit func(ir.Node) ir.Node
// edit = func(x ir.Node) ir.Node {
// x = ir.Copy(x)
// ir.EditChildren(x, edit)
// return x
// }
// return edit(n)
// }
//
// Of course, in this case it is better to call ir.DeepCopy than to build one anew.
func EditChildren(n Node, edit func(Node) Node) {
if n == nil {
return
}
n.editChildren(edit)
}
// editList calls edit on each non-nil node x in the list,
// saving the result of edit back into the list.
//
// Note that editList only calls edit on the nodes in the list, not their children.
// If x's children should be processed, edit(x) must call EditChildren(x, edit) itself.
func editList(list Nodes, edit func(Node) Node) {
s := list.Slice()
for i, x := range list.Slice() {
if x != nil {
s[i] = edit(x)
}
}
}

View File

@@ -51,7 +51,7 @@ func want(t *testing.T, out string, desired string) {
func wantN(t *testing.T, out string, desired string, n int) {
if strings.Count(out, desired) != n {
t.Errorf("expected exactly %d occurences of %s in \n%s", n, desired, out)
t.Errorf("expected exactly %d occurrences of %s in \n%s", n, desired, out)
}
}
@@ -132,7 +132,7 @@ func TestLogOpt(t *testing.T) {
// Check at both 1 and 8-byte alignments.
t.Run("Copy", func(t *testing.T) {
const copyCode = `package x
func s128a1(x *[128]int8) [128]int8 {
func s128a1(x *[128]int8) [128]int8 {
return *x
}
func s127a1(x *[127]int8) [127]int8 {
@@ -219,7 +219,7 @@ func s15a8(x *[15]int64) [15]int64 {
`{"location":{"uri":"file://tmpdir/file.go","range":{"start":{"line":4,"character":11},"end":{"line":4,"character":11}}},"message":"inlineLoc"},`+
`{"location":{"uri":"file://tmpdir/file.go","range":{"start":{"line":9,"character":13},"end":{"line":9,"character":13}}},"message":"escflow: from \u0026y.b (address-of)"},`+
`{"location":{"uri":"file://tmpdir/file.go","range":{"start":{"line":4,"character":9},"end":{"line":4,"character":9}}},"message":"inlineLoc"},`+
`{"location":{"uri":"file://tmpdir/file.go","range":{"start":{"line":9,"character":13},"end":{"line":9,"character":13}}},"message":"escflow: from ~R0 = \u003cN\u003e (assign-pair)"},`+
`{"location":{"uri":"file://tmpdir/file.go","range":{"start":{"line":9,"character":13},"end":{"line":9,"character":13}}},"message":"escflow: from ~R0 = \u0026y.b (assign-pair)"},`+
`{"location":{"uri":"file://tmpdir/file.go","range":{"start":{"line":9,"character":3},"end":{"line":9,"character":3}}},"message":"escflow: flow: ~r2 = ~R0:"},`+
`{"location":{"uri":"file://tmpdir/file.go","range":{"start":{"line":9,"character":3},"end":{"line":9,"character":3}}},"message":"escflow: from return (*int)(~R0) (return)"}]}`)
})

View File

@@ -289,7 +289,7 @@ func ssaGenValue(s *gc.SSAGenState, v *ssa.Value) {
case *obj.LSym:
wantreg = "SB"
gc.AddAux(&p.From, v)
case ir.Node:
case *ir.Name:
wantreg = "SP"
gc.AddAux(&p.From, v)
case nil:

View File

@@ -263,7 +263,7 @@ func ssaGenValue(s *gc.SSAGenState, v *ssa.Value) {
case *obj.LSym:
wantreg = "SB"
gc.AddAux(&p.From, v)
case ir.Node:
case *ir.Name:
wantreg = "SP"
gc.AddAux(&p.From, v)
case nil:

View File

@@ -324,7 +324,7 @@ func ssaGenValue(s *gc.SSAGenState, v *ssa.Value) {
case *obj.LSym:
wantreg = "SB"
gc.AddAux(&p.From, v)
case ir.Node:
case *ir.Name:
wantreg = "SP"
gc.AddAux(&p.From, v)
case nil:

View File

@@ -52,7 +52,7 @@ type Block struct {
Controls [2]*Value
// Auxiliary info for the block. Its value depends on the Kind.
Aux interface{}
Aux Aux
AuxInt int64
// The unordered set of Values that define the operation of this block.

View File

@@ -161,7 +161,7 @@ func checkFunc(f *Func) {
f.Fatalf("value %v has an AuxInt that encodes a NaN", v)
}
case auxString:
if _, ok := v.Aux.(string); !ok {
if _, ok := v.Aux.(stringAux); !ok {
f.Fatalf("value %v has Aux type %T, want string", v, v.Aux)
}
canHaveAux = true

View File

@@ -139,7 +139,7 @@ type Frontend interface {
// Auto returns a Node for an auto variable of the given type.
// The SSA compiler uses this function to allocate space for spills.
Auto(src.XPos, *types.Type) ir.Node
Auto(src.XPos, *types.Type) *ir.Name
// Given the name for a compound type, returns the name we should use
// for the parts of that compound type.

View File

@@ -275,7 +275,7 @@ func lt2Cmp(isLt bool) types.Cmp {
return types.CMPgt
}
type auxmap map[interface{}]int32
type auxmap map[Aux]int32
func cmpVal(v, w *Value, auxIDs auxmap) types.Cmp {
// Try to order these comparison by cost (cheaper first)

View File

@@ -14,6 +14,8 @@ type tstAux struct {
s string
}
func (*tstAux) CanBeAnSSAAux() {}
// This tests for a bug found when partitioning, but not sorting by the Aux value.
func TestCSEAuxPartitionBug(t *testing.T) {
c := testConfig(t)

View File

@@ -137,9 +137,9 @@ func dse(f *Func) {
// reaches stores then we delete all the stores. The other operations will then
// be eliminated by the dead code elimination pass.
func elimDeadAutosGeneric(f *Func) {
addr := make(map[*Value]ir.Node) // values that the address of the auto reaches
elim := make(map[*Value]ir.Node) // values that could be eliminated if the auto is
used := make(map[ir.Node]bool) // used autos that must be kept
addr := make(map[*Value]*ir.Name) // values that the address of the auto reaches
elim := make(map[*Value]*ir.Name) // values that could be eliminated if the auto is
used := make(map[*ir.Name]bool) // used autos that must be kept
// visit the value and report whether any of the maps are updated
visit := func(v *Value) (changed bool) {
@@ -147,7 +147,7 @@ func elimDeadAutosGeneric(f *Func) {
switch v.Op {
case OpAddr, OpLocalAddr:
// Propagate the address if it points to an auto.
n, ok := v.Aux.(ir.Node)
n, ok := v.Aux.(*ir.Name)
if !ok || n.Class() != ir.PAUTO {
return
}
@@ -158,7 +158,7 @@ func elimDeadAutosGeneric(f *Func) {
return
case OpVarDef, OpVarKill:
// v should be eliminated if we eliminate the auto.
n, ok := v.Aux.(ir.Node)
n, ok := v.Aux.(*ir.Name)
if !ok || n.Class() != ir.PAUTO {
return
}
@@ -174,7 +174,7 @@ func elimDeadAutosGeneric(f *Func) {
// for open-coded defers from being removed (since they
// may not be used by the inline code, but will be used by
// panic processing).
n, ok := v.Aux.(ir.Node)
n, ok := v.Aux.(*ir.Name)
if !ok || n.Class() != ir.PAUTO {
return
}
@@ -222,7 +222,7 @@ func elimDeadAutosGeneric(f *Func) {
}
// Propagate any auto addresses through v.
var node ir.Node
var node *ir.Name
for _, a := range args {
if n, ok := addr[a]; ok && !used[n] {
if node == nil {
@@ -299,11 +299,11 @@ func elimUnreadAutos(f *Func) {
// Loop over all ops that affect autos taking note of which
// autos we need and also stores that we might be able to
// eliminate.
seen := make(map[ir.Node]bool)
seen := make(map[*ir.Name]bool)
var stores []*Value
for _, b := range f.Blocks {
for _, v := range b.Values {
n, ok := v.Aux.(ir.Node)
n, ok := v.Aux.(*ir.Name)
if !ok {
continue
}
@@ -335,7 +335,7 @@ func elimUnreadAutos(f *Func) {
// Eliminate stores to unread autos.
for _, store := range stores {
n, _ := store.Aux.(ir.Node)
n, _ := store.Aux.(*ir.Name)
if seen[n] {
continue
}

View File

@@ -25,7 +25,7 @@ type FuncDebug struct {
// Slots is all the slots used in the debug info, indexed by their SlotID.
Slots []LocalSlot
// The user variables, indexed by VarID.
Vars []ir.Node
Vars []*ir.Name
// The slots that make up each variable, indexed by VarID.
VarSlots [][]SlotID
// The location list data, indexed by VarID. Must be processed by PutLocationList.
@@ -143,13 +143,13 @@ func (loc VarLoc) absent() bool {
var BlockStart = &Value{
ID: -10000,
Op: OpInvalid,
Aux: "BlockStart",
Aux: StringToAux("BlockStart"),
}
var BlockEnd = &Value{
ID: -20000,
Op: OpInvalid,
Aux: "BlockEnd",
Aux: StringToAux("BlockEnd"),
}
// RegisterSet is a bitmap of registers, indexed by Register.num.
@@ -166,7 +166,7 @@ func (s *debugState) logf(msg string, args ...interface{}) {
type debugState struct {
// See FuncDebug.
slots []LocalSlot
vars []ir.Node
vars []*ir.Name
varSlots [][]SlotID
lists [][]byte
@@ -190,7 +190,7 @@ type debugState struct {
// The pending location list entry for each user variable, indexed by VarID.
pendingEntries []pendingEntry
varParts map[ir.Node][]SlotID
varParts map[*ir.Name][]SlotID
blockDebug []BlockDebug
pendingSlotLocs []VarLoc
liveSlots []liveSlot
@@ -347,7 +347,7 @@ func BuildFuncDebug(ctxt *obj.Link, f *Func, loggingEnabled bool, stackOffset fu
}
if state.varParts == nil {
state.varParts = make(map[ir.Node][]SlotID)
state.varParts = make(map[*ir.Name][]SlotID)
} else {
for n := range state.varParts {
delete(state.varParts, n)
@@ -380,7 +380,7 @@ func BuildFuncDebug(ctxt *obj.Link, f *Func, loggingEnabled bool, stackOffset fu
for _, b := range f.Blocks {
for _, v := range b.Values {
if v.Op == OpVarDef || v.Op == OpVarKill {
n := v.Aux.(ir.Node)
n := v.Aux.(*ir.Name)
if ir.IsSynthetic(n) {
continue
}
@@ -718,7 +718,7 @@ func (state *debugState) processValue(v *Value, vSlots []SlotID, vReg *Register)
switch {
case v.Op == OpVarDef, v.Op == OpVarKill:
n := v.Aux.(ir.Node)
n := v.Aux.(*ir.Name)
if ir.IsSynthetic(n) {
break
}

View File

@@ -69,7 +69,7 @@ func expandCalls(f *Func) {
// intPairTypes returns the pair of 32-bit int types needed to encode a 64-bit integer type on a target
// that has no 64-bit integer registers.
intPairTypes := func(et types.EType) (tHi, tLo *types.Type) {
intPairTypes := func(et types.Kind) (tHi, tLo *types.Type) {
tHi = typ.UInt32
if et == types.TINT64 {
tHi = typ.Int32
@@ -294,7 +294,7 @@ func expandCalls(f *Func) {
case OpStructSelect:
w := selector.Args[0]
var ls []LocalSlot
if w.Type.Etype != types.TSTRUCT { // IData artifact
if w.Type.Kind() != types.TSTRUCT { // IData artifact
ls = rewriteSelect(leaf, w, offset)
} else {
ls = rewriteSelect(leaf, w, offset+w.Type.FieldOff(int(selector.AuxInt)))
@@ -383,7 +383,7 @@ func expandCalls(f *Func) {
decomposeOne func(pos src.XPos, b *Block, base, source, mem *Value, t1 *types.Type, offArg, offStore int64) *Value,
decomposeTwo func(pos src.XPos, b *Block, base, source, mem *Value, t1, t2 *types.Type, offArg, offStore int64) *Value) *Value {
u := source.Type
switch u.Etype {
switch u.Kind() {
case types.TARRAY:
elem := u.Elem()
for i := int64(0); i < u.NumElem(); i++ {
@@ -403,7 +403,7 @@ func expandCalls(f *Func) {
if t.Width == regSize {
break
}
tHi, tLo := intPairTypes(t.Etype)
tHi, tLo := intPairTypes(t.Kind())
mem = decomposeOne(pos, b, base, source, mem, tHi, source.AuxInt+hiOffset, offset+hiOffset)
pos = pos.WithNotStmt()
return decomposeOne(pos, b, base, source, mem, tLo, source.AuxInt+lowOffset, offset+lowOffset)
@@ -491,7 +491,7 @@ func expandCalls(f *Func) {
return storeArgOrLoad(pos, b, base, source.Args[0], mem, t.Elem(), offset)
case OpInt64Make:
tHi, tLo := intPairTypes(t.Etype)
tHi, tLo := intPairTypes(t.Kind())
mem = storeArgOrLoad(pos, b, base, source.Args[0], mem, tHi, offset+hiOffset)
pos = pos.WithNotStmt()
return storeArgOrLoad(pos, b, base, source.Args[1], mem, tLo, offset+lowOffset)
@@ -524,7 +524,7 @@ func expandCalls(f *Func) {
}
// For nodes that cannot be taken apart -- OpSelectN, other structure selectors.
switch t.Etype {
switch t.Kind() {
case types.TARRAY:
elt := t.Elem()
if source.Type != t && t.NumElem() == 1 && elt.Width == t.Width && t.Width == regSize {
@@ -576,7 +576,7 @@ func expandCalls(f *Func) {
if t.Width == regSize {
break
}
tHi, tLo := intPairTypes(t.Etype)
tHi, tLo := intPairTypes(t.Kind())
sel := source.Block.NewValue1(pos, OpInt64Hi, tHi, source)
mem = storeArgOrLoad(pos, b, base, sel, mem, tHi, offset+hiOffset)
pos = pos.WithNotStmt()
@@ -873,7 +873,7 @@ func expandCalls(f *Func) {
offset := int64(0)
switch v.Op {
case OpStructSelect:
if w.Type.Etype == types.TSTRUCT {
if w.Type.Kind() == types.TSTRUCT {
offset = w.Type.FieldOff(int(v.AuxInt))
} else { // Immediate interface data artifact, offset is zero.
f.Fatalf("Expand calls interface data problem, func %s, v=%s, w=%s\n", f.Name, v.LongString(), w.LongString())

View File

@@ -12,7 +12,6 @@ import (
"cmd/internal/obj/s390x"
"cmd/internal/obj/x86"
"cmd/internal/src"
"fmt"
"testing"
)
@@ -69,7 +68,7 @@ type TestFrontend struct {
func (TestFrontend) StringData(s string) *obj.LSym {
return nil
}
func (TestFrontend) Auto(pos src.XPos, t *types.Type) ir.Node {
func (TestFrontend) Auto(pos src.XPos, t *types.Type) *ir.Name {
n := ir.NewNameAt(pos, &types.Sym{Name: "aFakeAuto"})
n.SetClass(ir.PAUTO)
return n
@@ -138,24 +137,11 @@ func init() {
// Initialize just enough of the universe and the types package to make our tests function.
// TODO(josharian): move universe initialization to the types package,
// so this test setup can share it.
types.Tconv = func(t *types.Type, flag, mode int) string {
return t.Etype.String()
}
types.Sconv = func(s *types.Sym, flag, mode int) string {
return "sym"
}
types.FormatSym = func(sym *types.Sym, s fmt.State, verb rune, mode int) {
fmt.Fprintf(s, "sym")
}
types.FormatType = func(t *types.Type, s fmt.State, verb rune, mode int) {
fmt.Fprintf(s, "%v", t.Etype)
}
types.Dowidth = func(t *types.Type) {}
for _, typ := range [...]struct {
width int64
et types.EType
et types.Kind
}{
{1, types.TINT8},
{1, types.TUINT8},

View File

@@ -377,13 +377,7 @@ func (b *Block) NewValue0I(pos src.XPos, op Op, t *types.Type, auxint int64) *Va
}
// NewValue returns a new value in the block with no arguments and an aux value.
func (b *Block) NewValue0A(pos src.XPos, op Op, t *types.Type, aux interface{}) *Value {
if _, ok := aux.(int64); ok {
// Disallow int64 aux values. They should be in the auxint field instead.
// Maybe we want to allow this at some point, but for now we disallow it
// to prevent errors like using NewValue1A instead of NewValue1I.
b.Fatalf("aux field has int64 type op=%s type=%s aux=%v", op, t, aux)
}
func (b *Block) NewValue0A(pos src.XPos, op Op, t *types.Type, aux Aux) *Value {
v := b.Func.newValue(op, t, b, pos)
v.AuxInt = 0
v.Aux = aux
@@ -392,7 +386,7 @@ func (b *Block) NewValue0A(pos src.XPos, op Op, t *types.Type, aux interface{})
}
// NewValue returns a new value in the block with no arguments and both an auxint and aux values.
func (b *Block) NewValue0IA(pos src.XPos, op Op, t *types.Type, auxint int64, aux interface{}) *Value {
func (b *Block) NewValue0IA(pos src.XPos, op Op, t *types.Type, auxint int64, aux Aux) *Value {
v := b.Func.newValue(op, t, b, pos)
v.AuxInt = auxint
v.Aux = aux
@@ -421,7 +415,7 @@ func (b *Block) NewValue1I(pos src.XPos, op Op, t *types.Type, auxint int64, arg
}
// NewValue1A returns a new value in the block with one argument and an aux value.
func (b *Block) NewValue1A(pos src.XPos, op Op, t *types.Type, aux interface{}, arg *Value) *Value {
func (b *Block) NewValue1A(pos src.XPos, op Op, t *types.Type, aux Aux, arg *Value) *Value {
v := b.Func.newValue(op, t, b, pos)
v.AuxInt = 0
v.Aux = aux
@@ -432,7 +426,7 @@ func (b *Block) NewValue1A(pos src.XPos, op Op, t *types.Type, aux interface{},
}
// NewValue1IA returns a new value in the block with one argument and both an auxint and aux values.
func (b *Block) NewValue1IA(pos src.XPos, op Op, t *types.Type, auxint int64, aux interface{}, arg *Value) *Value {
func (b *Block) NewValue1IA(pos src.XPos, op Op, t *types.Type, auxint int64, aux Aux, arg *Value) *Value {
v := b.Func.newValue(op, t, b, pos)
v.AuxInt = auxint
v.Aux = aux
@@ -455,7 +449,7 @@ func (b *Block) NewValue2(pos src.XPos, op Op, t *types.Type, arg0, arg1 *Value)
}
// NewValue2A returns a new value in the block with two arguments and one aux values.
func (b *Block) NewValue2A(pos src.XPos, op Op, t *types.Type, aux interface{}, arg0, arg1 *Value) *Value {
func (b *Block) NewValue2A(pos src.XPos, op Op, t *types.Type, aux Aux, arg0, arg1 *Value) *Value {
v := b.Func.newValue(op, t, b, pos)
v.AuxInt = 0
v.Aux = aux
@@ -480,7 +474,7 @@ func (b *Block) NewValue2I(pos src.XPos, op Op, t *types.Type, auxint int64, arg
}
// NewValue2IA returns a new value in the block with two arguments and both an auxint and aux values.
func (b *Block) NewValue2IA(pos src.XPos, op Op, t *types.Type, auxint int64, aux interface{}, arg0, arg1 *Value) *Value {
func (b *Block) NewValue2IA(pos src.XPos, op Op, t *types.Type, auxint int64, aux Aux, arg0, arg1 *Value) *Value {
v := b.Func.newValue(op, t, b, pos)
v.AuxInt = auxint
v.Aux = aux
@@ -521,7 +515,7 @@ func (b *Block) NewValue3I(pos src.XPos, op Op, t *types.Type, auxint int64, arg
}
// NewValue3A returns a new value in the block with three argument and an aux value.
func (b *Block) NewValue3A(pos src.XPos, op Op, t *types.Type, aux interface{}, arg0, arg1, arg2 *Value) *Value {
func (b *Block) NewValue3A(pos src.XPos, op Op, t *types.Type, aux Aux, arg0, arg1, arg2 *Value) *Value {
v := b.Func.newValue(op, t, b, pos)
v.AuxInt = 0
v.Aux = aux
@@ -633,7 +627,7 @@ func (f *Func) ConstNil(t *types.Type) *Value {
}
func (f *Func) ConstEmptyString(t *types.Type) *Value {
v := f.constVal(OpConstString, t, constEmptyStringMagic, false)
v.Aux = ""
v.Aux = StringToAux("")
return v
}
func (f *Func) ConstOffPtrSP(t *types.Type, c int64, sp *Value) *Value {
@@ -790,10 +784,10 @@ func (f *Func) spSb() (sp, sb *Value) {
}
}
if sb == nil {
sb = f.Entry.NewValue0(initpos, OpSB, f.Config.Types.Uintptr)
sb = f.Entry.NewValue0(initpos.WithNotStmt(), OpSB, f.Config.Types.Uintptr)
}
if sp == nil {
sp = f.Entry.NewValue0(initpos, OpSP, f.Config.Types.Uintptr)
sp = f.Entry.NewValue0(initpos.WithNotStmt(), OpSP, f.Config.Types.Uintptr)
}
return
}

View File

@@ -232,7 +232,7 @@ func Bloc(name string, entries ...interface{}) bloc {
}
// Valu defines a value in a block.
func Valu(name string, op Op, t *types.Type, auxint int64, aux interface{}, args ...string) valu {
func Valu(name string, op Op, t *types.Type, auxint int64, aux Aux, args ...string) valu {
return valu{name, op, t, auxint, aux, args}
}
@@ -277,7 +277,7 @@ type valu struct {
op Op
t *types.Type
auxint int64
aux interface{}
aux Aux
args []string
}
@@ -402,12 +402,12 @@ func TestEquiv(t *testing.T) {
cfg.Fun("entry",
Bloc("entry",
Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Valu("a", OpConst64, cfg.config.Types.Int64, 0, 14),
Valu("a", OpConstString, cfg.config.Types.String, 0, StringToAux("foo")),
Exit("mem"))),
cfg.Fun("entry",
Bloc("entry",
Valu("mem", OpInitMem, types.TypeMem, 0, nil),
Valu("a", OpConst64, cfg.config.Types.Int64, 0, 26),
Valu("a", OpConstString, cfg.config.Types.String, 0, StringToAux("bar")),
Exit("mem"))),
},
// value args different

View File

@@ -531,6 +531,7 @@
// fold ADDL into LEAL
(ADDLconst [c] (LEAL [d] {s} x)) && is32Bit(int64(c)+int64(d)) => (LEAL [c+d] {s} x)
(LEAL [c] {s} (ADDLconst [d] x)) && is32Bit(int64(c)+int64(d)) => (LEAL [c+d] {s} x)
(ADDLconst [c] x:(SP)) => (LEAL [c] x) // so it is rematerializeable
(LEAL [c] {s} (ADDL x y)) && x.Op != OpSB && y.Op != OpSB => (LEAL1 [c] {s} x y)
(ADDL x (LEAL [c] {s} y)) && x.Op != OpSB && y.Op != OpSB => (LEAL1 [c] {s} x y)

View File

@@ -760,8 +760,8 @@
(MUL (MOVWconst [c]) (MOVWconst [d])) => (MOVWconst [c*d])
(MULA (MOVWconst [c]) (MOVWconst [d]) a) => (ADDconst [c*d] a)
(MULS (MOVWconst [c]) (MOVWconst [d]) a) => (SUBconst [c*d] a)
(Select0 (CALLudiv (MOVWconst [c]) (MOVWconst [d]))) => (MOVWconst [int32(uint32(c)/uint32(d))])
(Select1 (CALLudiv (MOVWconst [c]) (MOVWconst [d]))) => (MOVWconst [int32(uint32(c)%uint32(d))])
(Select0 (CALLudiv (MOVWconst [c]) (MOVWconst [d]))) && d != 0 => (MOVWconst [int32(uint32(c)/uint32(d))])
(Select1 (CALLudiv (MOVWconst [c]) (MOVWconst [d]))) && d != 0 => (MOVWconst [int32(uint32(c)%uint32(d))])
(ANDconst [c] (MOVWconst [d])) => (MOVWconst [c&d])
(ANDconst [c] (ANDconst [d] x)) => (ANDconst [c&d] x)
(ORconst [c] (MOVWconst [d])) => (MOVWconst [c|d])
@@ -1369,38 +1369,38 @@
(LE (CMPconst [0] l:(ADDshiftLLreg x y z)) yes no) && l.Uses==1 => (LEnoov (CMNshiftLLreg x y z) yes no)
(LE (CMPconst [0] l:(ADDshiftRLreg x y z)) yes no) && l.Uses==1 => (LEnoov (CMNshiftRLreg x y z) yes no)
(LE (CMPconst [0] l:(ADDshiftRAreg x y z)) yes no) && l.Uses==1 => (LEnoov (CMNshiftRAreg x y z) yes no)
(LT (CMPconst [0] l:(AND x y)) yes no) && l.Uses==1 => (LT (TST x y) yes no)
(LT (CMPconst [0] l:(ANDconst [c] x)) yes no) && l.Uses==1 => (LT (TSTconst [c] x) yes no)
(LT (CMPconst [0] l:(ANDshiftLL x y [c])) yes no) && l.Uses==1 => (LT (TSTshiftLL x y [c]) yes no)
(LT (CMPconst [0] l:(ANDshiftRL x y [c])) yes no) && l.Uses==1 => (LT (TSTshiftRL x y [c]) yes no)
(LT (CMPconst [0] l:(ANDshiftRA x y [c])) yes no) && l.Uses==1 => (LT (TSTshiftRA x y [c]) yes no)
(LT (CMPconst [0] l:(ANDshiftLLreg x y z)) yes no) && l.Uses==1 => (LT (TSTshiftLLreg x y z) yes no)
(LT (CMPconst [0] l:(ANDshiftRLreg x y z)) yes no) && l.Uses==1 => (LT (TSTshiftRLreg x y z) yes no)
(LT (CMPconst [0] l:(ANDshiftRAreg x y z)) yes no) && l.Uses==1 => (LT (TSTshiftRAreg x y z) yes no)
(LE (CMPconst [0] l:(AND x y)) yes no) && l.Uses==1 => (LE (TST x y) yes no)
(LE (CMPconst [0] l:(ANDconst [c] x)) yes no) && l.Uses==1 => (LE (TSTconst [c] x) yes no)
(LE (CMPconst [0] l:(ANDshiftLL x y [c])) yes no) && l.Uses==1 => (LE (TSTshiftLL x y [c]) yes no)
(LE (CMPconst [0] l:(ANDshiftRL x y [c])) yes no) && l.Uses==1 => (LE (TSTshiftRL x y [c]) yes no)
(LE (CMPconst [0] l:(ANDshiftRA x y [c])) yes no) && l.Uses==1 => (LE (TSTshiftRA x y [c]) yes no)
(LE (CMPconst [0] l:(ANDshiftLLreg x y z)) yes no) && l.Uses==1 => (LE (TSTshiftLLreg x y z) yes no)
(LE (CMPconst [0] l:(ANDshiftRLreg x y z)) yes no) && l.Uses==1 => (LE (TSTshiftRLreg x y z) yes no)
(LE (CMPconst [0] l:(ANDshiftRAreg x y z)) yes no) && l.Uses==1 => (LE (TSTshiftRAreg x y z) yes no)
(LT (CMPconst [0] l:(XOR x y)) yes no) && l.Uses==1 => (LT (TEQ x y) yes no)
(LT (CMPconst [0] l:(XORconst [c] x)) yes no) && l.Uses==1 => (LT (TEQconst [c] x) yes no)
(LT (CMPconst [0] l:(XORshiftLL x y [c])) yes no) && l.Uses==1 => (LT (TEQshiftLL x y [c]) yes no)
(LT (CMPconst [0] l:(XORshiftRL x y [c])) yes no) && l.Uses==1 => (LT (TEQshiftRL x y [c]) yes no)
(LT (CMPconst [0] l:(XORshiftRA x y [c])) yes no) && l.Uses==1 => (LT (TEQshiftRA x y [c]) yes no)
(LT (CMPconst [0] l:(XORshiftLLreg x y z)) yes no) && l.Uses==1 => (LT (TEQshiftLLreg x y z) yes no)
(LT (CMPconst [0] l:(XORshiftRLreg x y z)) yes no) && l.Uses==1 => (LT (TEQshiftRLreg x y z) yes no)
(LT (CMPconst [0] l:(XORshiftRAreg x y z)) yes no) && l.Uses==1 => (LT (TEQshiftRAreg x y z) yes no)
(LE (CMPconst [0] l:(XOR x y)) yes no) && l.Uses==1 => (LE (TEQ x y) yes no)
(LE (CMPconst [0] l:(XORconst [c] x)) yes no) && l.Uses==1 => (LE (TEQconst [c] x) yes no)
(LE (CMPconst [0] l:(XORshiftLL x y [c])) yes no) && l.Uses==1 => (LE (TEQshiftLL x y [c]) yes no)
(LE (CMPconst [0] l:(XORshiftRL x y [c])) yes no) && l.Uses==1 => (LE (TEQshiftRL x y [c]) yes no)
(LE (CMPconst [0] l:(XORshiftRA x y [c])) yes no) && l.Uses==1 => (LE (TEQshiftRA x y [c]) yes no)
(LE (CMPconst [0] l:(XORshiftLLreg x y z)) yes no) && l.Uses==1 => (LE (TEQshiftLLreg x y z) yes no)
(LE (CMPconst [0] l:(XORshiftRLreg x y z)) yes no) && l.Uses==1 => (LE (TEQshiftRLreg x y z) yes no)
(LE (CMPconst [0] l:(XORshiftRAreg x y z)) yes no) && l.Uses==1 => (LE (TEQshiftRAreg x y z) yes no)
(LT (CMPconst [0] l:(AND x y)) yes no) && l.Uses==1 => (LTnoov (TST x y) yes no)
(LT (CMPconst [0] l:(ANDconst [c] x)) yes no) && l.Uses==1 => (LTnoov (TSTconst [c] x) yes no)
(LT (CMPconst [0] l:(ANDshiftLL x y [c])) yes no) && l.Uses==1 => (LTnoov (TSTshiftLL x y [c]) yes no)
(LT (CMPconst [0] l:(ANDshiftRL x y [c])) yes no) && l.Uses==1 => (LTnoov (TSTshiftRL x y [c]) yes no)
(LT (CMPconst [0] l:(ANDshiftRA x y [c])) yes no) && l.Uses==1 => (LTnoov (TSTshiftRA x y [c]) yes no)
(LT (CMPconst [0] l:(ANDshiftLLreg x y z)) yes no) && l.Uses==1 => (LTnoov (TSTshiftLLreg x y z) yes no)
(LT (CMPconst [0] l:(ANDshiftRLreg x y z)) yes no) && l.Uses==1 => (LTnoov (TSTshiftRLreg x y z) yes no)
(LT (CMPconst [0] l:(ANDshiftRAreg x y z)) yes no) && l.Uses==1 => (LTnoov (TSTshiftRAreg x y z) yes no)
(LE (CMPconst [0] l:(AND x y)) yes no) && l.Uses==1 => (LEnoov (TST x y) yes no)
(LE (CMPconst [0] l:(ANDconst [c] x)) yes no) && l.Uses==1 => (LEnoov (TSTconst [c] x) yes no)
(LE (CMPconst [0] l:(ANDshiftLL x y [c])) yes no) && l.Uses==1 => (LEnoov (TSTshiftLL x y [c]) yes no)
(LE (CMPconst [0] l:(ANDshiftRL x y [c])) yes no) && l.Uses==1 => (LEnoov (TSTshiftRL x y [c]) yes no)
(LE (CMPconst [0] l:(ANDshiftRA x y [c])) yes no) && l.Uses==1 => (LEnoov (TSTshiftRA x y [c]) yes no)
(LE (CMPconst [0] l:(ANDshiftLLreg x y z)) yes no) && l.Uses==1 => (LEnoov (TSTshiftLLreg x y z) yes no)
(LE (CMPconst [0] l:(ANDshiftRLreg x y z)) yes no) && l.Uses==1 => (LEnoov (TSTshiftRLreg x y z) yes no)
(LE (CMPconst [0] l:(ANDshiftRAreg x y z)) yes no) && l.Uses==1 => (LEnoov (TSTshiftRAreg x y z) yes no)
(LT (CMPconst [0] l:(XOR x y)) yes no) && l.Uses==1 => (LTnoov (TEQ x y) yes no)
(LT (CMPconst [0] l:(XORconst [c] x)) yes no) && l.Uses==1 => (LTnoov (TEQconst [c] x) yes no)
(LT (CMPconst [0] l:(XORshiftLL x y [c])) yes no) && l.Uses==1 => (LTnoov (TEQshiftLL x y [c]) yes no)
(LT (CMPconst [0] l:(XORshiftRL x y [c])) yes no) && l.Uses==1 => (LTnoov (TEQshiftRL x y [c]) yes no)
(LT (CMPconst [0] l:(XORshiftRA x y [c])) yes no) && l.Uses==1 => (LTnoov (TEQshiftRA x y [c]) yes no)
(LT (CMPconst [0] l:(XORshiftLLreg x y z)) yes no) && l.Uses==1 => (LTnoov (TEQshiftLLreg x y z) yes no)
(LT (CMPconst [0] l:(XORshiftRLreg x y z)) yes no) && l.Uses==1 => (LTnoov (TEQshiftRLreg x y z) yes no)
(LT (CMPconst [0] l:(XORshiftRAreg x y z)) yes no) && l.Uses==1 => (LTnoov (TEQshiftRAreg x y z) yes no)
(LE (CMPconst [0] l:(XOR x y)) yes no) && l.Uses==1 => (LEnoov (TEQ x y) yes no)
(LE (CMPconst [0] l:(XORconst [c] x)) yes no) && l.Uses==1 => (LEnoov (TEQconst [c] x) yes no)
(LE (CMPconst [0] l:(XORshiftLL x y [c])) yes no) && l.Uses==1 => (LEnoov (TEQshiftLL x y [c]) yes no)
(LE (CMPconst [0] l:(XORshiftRL x y [c])) yes no) && l.Uses==1 => (LEnoov (TEQshiftRL x y [c]) yes no)
(LE (CMPconst [0] l:(XORshiftRA x y [c])) yes no) && l.Uses==1 => (LEnoov (TEQshiftRA x y [c]) yes no)
(LE (CMPconst [0] l:(XORshiftLLreg x y z)) yes no) && l.Uses==1 => (LEnoov (TEQshiftLLreg x y z) yes no)
(LE (CMPconst [0] l:(XORshiftRLreg x y z)) yes no) && l.Uses==1 => (LEnoov (TEQshiftRLreg x y z) yes no)
(LE (CMPconst [0] l:(XORshiftRAreg x y z)) yes no) && l.Uses==1 => (LEnoov (TEQshiftRAreg x y z) yes no)
(GT (CMPconst [0] l:(SUB x y)) yes no) && l.Uses==1 => (GTnoov (CMP x y) yes no)
(GT (CMPconst [0] l:(MULS x y a)) yes no) && l.Uses==1 => (GTnoov (CMP a (MUL <x.Type> x y)) yes no)
(GT (CMPconst [0] l:(SUBconst [c] x)) yes no) && l.Uses==1 => (GTnoov (CMPconst [c] x) yes no)
@@ -1436,39 +1436,39 @@
(GE (CMPconst [0] l:(ADDshiftLLreg x y z)) yes no) && l.Uses==1 => (GEnoov (CMNshiftLLreg x y z) yes no)
(GE (CMPconst [0] l:(ADDshiftRLreg x y z)) yes no) && l.Uses==1 => (GEnoov (CMNshiftRLreg x y z) yes no)
(GE (CMPconst [0] l:(ADDshiftRAreg x y z)) yes no) && l.Uses==1 => (GEnoov (CMNshiftRAreg x y z) yes no)
(GT (CMPconst [0] l:(AND x y)) yes no) && l.Uses==1 => (GT (TST x y) yes no)
(GT (CMPconst [0] l:(MULA x y a)) yes no) && l.Uses==1 => (GTnoov (CMN a (MUL <x.Type> x y)) yes no)
(GT (CMPconst [0] l:(ANDconst [c] x)) yes no) && l.Uses==1 => (GT (TSTconst [c] x) yes no)
(GT (CMPconst [0] l:(ANDshiftLL x y [c])) yes no) && l.Uses==1 => (GT (TSTshiftLL x y [c]) yes no)
(GT (CMPconst [0] l:(ANDshiftRL x y [c])) yes no) && l.Uses==1 => (GT (TSTshiftRL x y [c]) yes no)
(GT (CMPconst [0] l:(ANDshiftRA x y [c])) yes no) && l.Uses==1 => (GT (TSTshiftRA x y [c]) yes no)
(GT (CMPconst [0] l:(ANDshiftLLreg x y z)) yes no) && l.Uses==1 => (GT (TSTshiftLLreg x y z) yes no)
(GT (CMPconst [0] l:(ANDshiftRLreg x y z)) yes no) && l.Uses==1 => (GT (TSTshiftRLreg x y z) yes no)
(GT (CMPconst [0] l:(ANDshiftRAreg x y z)) yes no) && l.Uses==1 => (GT (TSTshiftRAreg x y z) yes no)
(GE (CMPconst [0] l:(AND x y)) yes no) && l.Uses==1 => (GE (TST x y) yes no)
(GE (CMPconst [0] l:(ANDconst [c] x)) yes no) && l.Uses==1 => (GE (TSTconst [c] x) yes no)
(GE (CMPconst [0] l:(ANDshiftLL x y [c])) yes no) && l.Uses==1 => (GE (TSTshiftLL x y [c]) yes no)
(GE (CMPconst [0] l:(ANDshiftRL x y [c])) yes no) && l.Uses==1 => (GE (TSTshiftRL x y [c]) yes no)
(GE (CMPconst [0] l:(ANDshiftRA x y [c])) yes no) && l.Uses==1 => (GE (TSTshiftRA x y [c]) yes no)
(GE (CMPconst [0] l:(ANDshiftLLreg x y z)) yes no) && l.Uses==1 => (GE (TSTshiftLLreg x y z) yes no)
(GE (CMPconst [0] l:(ANDshiftRLreg x y z)) yes no) && l.Uses==1 => (GE (TSTshiftRLreg x y z) yes no)
(GE (CMPconst [0] l:(ANDshiftRAreg x y z)) yes no) && l.Uses==1 => (GE (TSTshiftRAreg x y z) yes no)
(GT (CMPconst [0] l:(XOR x y)) yes no) && l.Uses==1 => (GT (TEQ x y) yes no)
(GT (CMPconst [0] l:(XORconst [c] x)) yes no) && l.Uses==1 => (GT (TEQconst [c] x) yes no)
(GT (CMPconst [0] l:(XORshiftLL x y [c])) yes no) && l.Uses==1 => (GT (TEQshiftLL x y [c]) yes no)
(GT (CMPconst [0] l:(XORshiftRL x y [c])) yes no) && l.Uses==1 => (GT (TEQshiftRL x y [c]) yes no)
(GT (CMPconst [0] l:(XORshiftRA x y [c])) yes no) && l.Uses==1 => (GT (TEQshiftRA x y [c]) yes no)
(GT (CMPconst [0] l:(XORshiftLLreg x y z)) yes no) && l.Uses==1 => (GT (TEQshiftLLreg x y z) yes no)
(GT (CMPconst [0] l:(XORshiftRLreg x y z)) yes no) && l.Uses==1 => (GT (TEQshiftRLreg x y z) yes no)
(GT (CMPconst [0] l:(XORshiftRAreg x y z)) yes no) && l.Uses==1 => (GT (TEQshiftRAreg x y z) yes no)
(GE (CMPconst [0] l:(XOR x y)) yes no) && l.Uses==1 => (GE (TEQ x y) yes no)
(GE (CMPconst [0] l:(XORconst [c] x)) yes no) && l.Uses==1 => (GE (TEQconst [c] x) yes no)
(GE (CMPconst [0] l:(XORshiftLL x y [c])) yes no) && l.Uses==1 => (GE (TEQshiftLL x y [c]) yes no)
(GE (CMPconst [0] l:(XORshiftRL x y [c])) yes no) && l.Uses==1 => (GE (TEQshiftRL x y [c]) yes no)
(GE (CMPconst [0] l:(XORshiftRA x y [c])) yes no) && l.Uses==1 => (GE (TEQshiftRA x y [c]) yes no)
(GE (CMPconst [0] l:(XORshiftLLreg x y z)) yes no) && l.Uses==1 => (GE (TEQshiftLLreg x y z) yes no)
(GE (CMPconst [0] l:(XORshiftRLreg x y z)) yes no) && l.Uses==1 => (GE (TEQshiftRLreg x y z) yes no)
(GE (CMPconst [0] l:(XORshiftRAreg x y z)) yes no) && l.Uses==1 => (GE (TEQshiftRAreg x y z) yes no)
(GT (CMPconst [0] l:(AND x y)) yes no) && l.Uses==1 => (GTnoov (TST x y) yes no)
(GT (CMPconst [0] l:(ANDconst [c] x)) yes no) && l.Uses==1 => (GTnoov (TSTconst [c] x) yes no)
(GT (CMPconst [0] l:(ANDshiftLL x y [c])) yes no) && l.Uses==1 => (GTnoov (TSTshiftLL x y [c]) yes no)
(GT (CMPconst [0] l:(ANDshiftRL x y [c])) yes no) && l.Uses==1 => (GTnoov (TSTshiftRL x y [c]) yes no)
(GT (CMPconst [0] l:(ANDshiftRA x y [c])) yes no) && l.Uses==1 => (GTnoov (TSTshiftRA x y [c]) yes no)
(GT (CMPconst [0] l:(ANDshiftLLreg x y z)) yes no) && l.Uses==1 => (GTnoov (TSTshiftLLreg x y z) yes no)
(GT (CMPconst [0] l:(ANDshiftRLreg x y z)) yes no) && l.Uses==1 => (GTnoov (TSTshiftRLreg x y z) yes no)
(GT (CMPconst [0] l:(ANDshiftRAreg x y z)) yes no) && l.Uses==1 => (GTnoov (TSTshiftRAreg x y z) yes no)
(GE (CMPconst [0] l:(AND x y)) yes no) && l.Uses==1 => (GEnoov (TST x y) yes no)
(GE (CMPconst [0] l:(ANDconst [c] x)) yes no) && l.Uses==1 => (GEnoov (TSTconst [c] x) yes no)
(GE (CMPconst [0] l:(ANDshiftLL x y [c])) yes no) && l.Uses==1 => (GEnoov (TSTshiftLL x y [c]) yes no)
(GE (CMPconst [0] l:(ANDshiftRL x y [c])) yes no) && l.Uses==1 => (GEnoov (TSTshiftRL x y [c]) yes no)
(GE (CMPconst [0] l:(ANDshiftRA x y [c])) yes no) && l.Uses==1 => (GEnoov (TSTshiftRA x y [c]) yes no)
(GE (CMPconst [0] l:(ANDshiftLLreg x y z)) yes no) && l.Uses==1 => (GEnoov (TSTshiftLLreg x y z) yes no)
(GE (CMPconst [0] l:(ANDshiftRLreg x y z)) yes no) && l.Uses==1 => (GEnoov (TSTshiftRLreg x y z) yes no)
(GE (CMPconst [0] l:(ANDshiftRAreg x y z)) yes no) && l.Uses==1 => (GEnoov (TSTshiftRAreg x y z) yes no)
(GT (CMPconst [0] l:(XOR x y)) yes no) && l.Uses==1 => (GTnoov (TEQ x y) yes no)
(GT (CMPconst [0] l:(XORconst [c] x)) yes no) && l.Uses==1 => (GTnoov (TEQconst [c] x) yes no)
(GT (CMPconst [0] l:(XORshiftLL x y [c])) yes no) && l.Uses==1 => (GTnoov (TEQshiftLL x y [c]) yes no)
(GT (CMPconst [0] l:(XORshiftRL x y [c])) yes no) && l.Uses==1 => (GTnoov (TEQshiftRL x y [c]) yes no)
(GT (CMPconst [0] l:(XORshiftRA x y [c])) yes no) && l.Uses==1 => (GTnoov (TEQshiftRA x y [c]) yes no)
(GT (CMPconst [0] l:(XORshiftLLreg x y z)) yes no) && l.Uses==1 => (GTnoov (TEQshiftLLreg x y z) yes no)
(GT (CMPconst [0] l:(XORshiftRLreg x y z)) yes no) && l.Uses==1 => (GTnoov (TEQshiftRLreg x y z) yes no)
(GT (CMPconst [0] l:(XORshiftRAreg x y z)) yes no) && l.Uses==1 => (GTnoov (TEQshiftRAreg x y z) yes no)
(GE (CMPconst [0] l:(XOR x y)) yes no) && l.Uses==1 => (GEnoov (TEQ x y) yes no)
(GE (CMPconst [0] l:(XORconst [c] x)) yes no) && l.Uses==1 => (GEnoov (TEQconst [c] x) yes no)
(GE (CMPconst [0] l:(XORshiftLL x y [c])) yes no) && l.Uses==1 => (GEnoov (TEQshiftLL x y [c]) yes no)
(GE (CMPconst [0] l:(XORshiftRL x y [c])) yes no) && l.Uses==1 => (GEnoov (TEQshiftRL x y [c]) yes no)
(GE (CMPconst [0] l:(XORshiftRA x y [c])) yes no) && l.Uses==1 => (GEnoov (TEQshiftRA x y [c]) yes no)
(GE (CMPconst [0] l:(XORshiftLLreg x y z)) yes no) && l.Uses==1 => (GEnoov (TEQshiftLLreg x y z) yes no)
(GE (CMPconst [0] l:(XORshiftRLreg x y z)) yes no) && l.Uses==1 => (GEnoov (TEQshiftRLreg x y z) yes no)
(GE (CMPconst [0] l:(XORshiftRAreg x y z)) yes no) && l.Uses==1 => (GEnoov (TEQshiftRAreg x y z) yes no)
(MOVBUload [off] {sym} (SB) _) && symIsRO(sym) => (MOVWconst [int32(read8(sym, int64(off)))])
(MOVHUload [off] {sym} (SB) _) && symIsRO(sym) => (MOVWconst [int32(read16(sym, int64(off), config.ctxt.Arch.ByteOrder))])

Some files were not shown because too many files have changed in this diff.