go/test/prove.go

// errorcheck -0 -d=ssa/prove/debug=1

// +build amd64

// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package main

import "math"
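
// f0 expects that, once a[0] and a[6] have each been accessed once, the
// remaining constant-index stores are proved in bounds.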
func f0(a []int) int {
	a[0] = 1
	a[0] = 1 // ERROR "Proved IsInBounds$"
	a[6] = 1
	a[6] = 1 // ERROR "Proved IsInBounds$"
	a[5] = 1 // ERROR "Proved IsInBounds$"
	a[5] = 1 // ERROR "Proved IsInBounds$"
	return 13
}
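
// f1 expects the len(a) <= 5 guard to prove the stores at indices 0 and 5 in
// bounds; a[6] needs one explicit check, after which the second store to it
// is proved.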
func f1(a []int) int {
	if len(a) <= 5 {
		return 18
	}
	a[0] = 1 // ERROR "Proved IsInBounds$"
	a[0] = 1 // ERROR "Proved IsInBounds$"
	a[6] = 1
	a[6] = 1 // ERROR "Proved IsInBounds$"
	a[5] = 1 // ERROR "Proved IsInBounds$"
	a[5] = 1 // ERROR "Proved IsInBounds$"
	return 26
}
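
// f1b expects explicit comparisons of i against 0, 10, and len(a), and of the
// unsigned index j against uint(len(a)), to prove the loads in bounds.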
func f1b(a []int, i int, j uint) int {
	if i >= 0 && i < len(a) {
		return a[i] // ERROR "Proved IsInBounds$"
	}
	if i >= 10 && i < len(a) {
		return a[i] // ERROR "Proved IsInBounds$"
	}
	if i >= 10 && i < len(a) {
		return a[i] // ERROR "Proved IsInBounds$"
	}
	if i >= 10 && i < len(a) {
		return a[i-10] // ERROR "Proved IsInBounds$"
	}
	if j < uint(len(a)) {
		return a[j] // ERROR "Proved IsInBounds$"
	}
	return 0
}
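
// f1c expects no bounds check to be removed here: d comes from a conversion
// that wraps around, so it must not be trusted as a lower bound on i.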
func f1c(a []int, i int64) int {
	c := uint64(math.MaxInt64 + 10) // overflows int
	d := int64(c)
	if i >= d && i < int64(len(a)) {
		// d overflows, should not be handled.
		return a[i]
	}
	return 0
}
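
// f2 expects the range loop to be recognized as an induction variable with
// limits [0,?), and the second a[i+1] store to be proved in bounds once the
// first has been checked.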
func f2(a []int) int {
	for i := range a { // ERROR "Induction variable: limits \[0,\?\), increment 1$"
		a[i+1] = i
		a[i+1] = i // ERROR "Proved IsInBounds$"
	}
	return 34
}
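
// f3 expects the unsigned loop condition i < uint(len(a)) to prove the store
// a[i] in bounds.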
func f3(a []uint) int {
	for i := uint(0); i < uint(len(a)); i++ {
		a[i] = i // ERROR "Proved IsInBounds$"
	}
	return 41
}
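
// f4a expects a < b to disprove a == b and a > b, and to prove a repeated
// a < b.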
func f4a(a, b, c int) int {
	if a < b {
		if a == b { // ERROR "Disproved Eq64$"
			return 47
		}
		if a > b { // ERROR "Disproved Less64$"
			return 50
		}
		if a < b { // ERROR "Proved Less64$"
			return 53
		}
		// We can't get to this point and prove knows that, so
		// there's no message for the next (obvious) branch.
		if a != a {
			return 56
		}
		return 61
	}
	return 63
}
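
// f4b and f4c expect a <= b combined with a >= b to prove a == b and to
// disprove a != b.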
func f4b(a, b, c int) int {
	if a <= b {
		if a >= b {
			if a == b { // ERROR "Proved Eq64$"
				return 70
			}
			return 75
		}
		return 77
	}
	return 79
}

func f4c(a, b, c int) int {
	if a <= b {
		if a >= b {
			if a != b { // ERROR "Disproved Neq64$"
				return 73
			}
			return 75
		}
		return 77
	}
	return 79
}
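
// f4d expects the facts a < b and a < c from the outer conditions to prove
// the same comparisons when they are repeated in the inner blocks.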
func f4d(a, b, c int) int {
	if a < b {
		if a < c {
if a < b { // ERROR "Proved Less64$"
if a < c { // ERROR "Proved Less64$"
return 87
}
return 89
}
return 91
}
return 93
}
return 95
}
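// f4e: b > a is the same relation as a < b, so the inner test is proved.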
func f4e(a, b, c int) int {
if a < b {
if b > a { // ERROR "Proved Less64$"
return 101
}
return 103
}
return 105
}
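// f4f: combining the dominating a <= b with each outcome of b > a decides the
// equality tests below.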
func f4f(a, b, c int) int {
if a <= b {
if b > a {
if b == a { // ERROR "Disproved Eq64$"
return 112
}
return 114
}
if b >= a { // ERROR "Proved Leq64$"
if b == a { // ERROR "Proved Eq64$"
return 118
}
return 120
}
return 122
}
return 124
}
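// f5: a == b implies a <= b.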
func f5(a, b uint) int {
if a == b {
if a <= b { // ERROR "Proved Leq64U$"
return 130
}
return 132
}
return 134
}
// These comparisons are compile time constants.
func f6a(a uint8) int {
if a < a { // ERROR "Disproved Less8U$"
return 140
}
return 151
}
func f6b(a uint8) int {
if a < a { // ERROR "Disproved Less8U$"
return 140
}
return 151
}
func f6x(a uint8) int {
if a > a { // ERROR "Disproved Less8U$"
return 143
}
return 151
}
func f6d(a uint8) int {
if a <= a { // ERROR "Proved Leq8U$"
return 146
}
return 151
}
func f6e(a uint8) int {
if a >= a { // ERROR "Proved Leq8U$"
return 149
}
return 151
}
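// f7: inside the b < len(a) branch the repeated comparison is proved, and once
// the first a[b] store has passed its bounds check, the second store's check is
// proved as well.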
func f7(a []int, b int) int {
if b < len(a) {
a[b] = 3
if b < len(a) { // ERROR "Proved Less64$"
a[b] = 5 // ERROR "Proved IsInBounds$"
}
}
return 161
}
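// f8: for unsigned a and b, ruling out a == b and a > b leaves only a < b.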
func f8(a, b uint) int {
if a == b {
return 166
}
if a > b {
return 169
}
if a < b { // ERROR "Proved Less64U$"
return 172
}
return 174
}
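// f9: after the "if a" branch returns, a is known false, so the first leg of
// the short-circuited a || b is disproved.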
func f9(a, b bool) int {
if a {
return 1
}
if a || b { // ERROR "Disproved Arg$"
return 2
}
return 3
}
func f10(a string) int {
n := len(a)
// We optimize comparisons with small constant strings (see cmd/compile/internal/gc/walk.go),
// so this string literal must be long.
if a[:n>>1] == "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa" {
return 0
}
return 1
}
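// In the f11 functions, an earlier identical index or slice expression
// establishes the bounds facts that prove the later one.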
func f11a(a []int, i int) {
useInt(a[i])
useInt(a[i]) // ERROR "Proved IsInBounds$"
}
func f11b(a []int, i int) {
useSlice(a[i:])
useSlice(a[i:]) // ERROR "Proved IsSliceInBounds$"
}
func f11c(a []int, i int) {
useSlice(a[:i])
useSlice(a[:i]) // ERROR "Proved IsSliceInBounds$"
}
func f11d(a []int, i int) {
useInt(a[2*i+7])
useInt(a[2*i+7]) // ERROR "Proved IsInBounds$"
}
func f12(a []int, b int) {
useSlice(a[:b])
}
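// f13a: with a > 12 established, every comparison of a against 12 below is decided.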
func f13a(a, b, c int, x bool) int {
if a > 12 {
if x {
if a < 12 { // ERROR "Disproved Less64$"
return 1
}
}
if x {
if a <= 12 { // ERROR "Disproved Leq64$"
return 2
}
}
if x {
if a == 12 { // ERROR "Disproved Eq64$"
return 3
}
}
if x {
if a >= 12 { // ERROR "Proved Leq64$"
return 4
}
}
if x {
if a > 12 { // ERROR "Proved Less64$"
return 5
}
}
return 6
}
return 0
}
func f13b(a int, x bool) int {
if a == -9 {
if x {
if a < -9 { // ERROR "Disproved Less64$"
return 7
}
}
if x {
if a <= -9 { // ERROR "Proved Leq64$"
return 8
}
}
if x {
if a == -9 { // ERROR "Proved Eq64$"
return 9
}
}
if x {
if a >= -9 { // ERROR "Proved Leq64$"
return 10
}
}
if x {
if a > -9 { // ERROR "Disproved Less64$"
return 11
}
}
return 12
}
return 0
}
func f13c(a int, x bool) int {
if a < 90 {
if x {
if a < 90 { // ERROR "Proved Less64$"
return 13
}
}
if x {
if a <= 90 { // ERROR "Proved Leq64$"
return 14
}
}
if x {
if a == 90 { // ERROR "Disproved Eq64$"
return 15
}
}
if x {
if a >= 90 { // ERROR "Disproved Leq64$"
return 16
}
}
if x {
if a > 90 { // ERROR "Disproved Less64$"
return 17
}
}
return 18
}
return 0
}
func f13d(a int) int {
if a < 5 {
if a < 9 { // ERROR "Proved Less64$"
return 1
}
}
return 0
}
func f13e(a int) int {
if a > 9 {
if a > 5 { // ERROR "Proved Less64$"
return 1
}
}
return 0
}
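// f13f: a > math.MaxInt64 can never be true for an int64, so this block is
// unreachable; the contradictory facts nevertheless disprove a == 0 below.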
func f13f(a int64) int64 {
if a > math.MaxInt64 {
if a == 0 { // ERROR "Disproved Eq64$"
return 1
}
}
return 0
}
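// f13g: once a < 3 and a > 3 have both returned, a == 3 is the only possibility left.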
func f13g(a int) int {
if a < 3 {
return 5
}
if a > 3 {
return 6
}
if a == 3 { // ERROR "Proved Eq64$"
return 7
}
return 8
}
func f13h(a int) int {
if a < 3 {
if a > 1 {
if a == 2 { // ERROR "Proved Eq64$"
return 5
}
}
}
return 0
}
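// f13i: for an unsigned a, ruling out a == 0 proves a > 0.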
func f13i(a uint) int {
if a == 0 {
return 1
}
if a > 0 { // ERROR "Proved Less64U$"
return 2
}
return 3
}
func f14(p, q *int, a []int) {
// This crazy ordering usually gives i1 the lowest value ID,
// j the middle value ID, and i2 the highest value ID.
// That used to confuse CSE because it ordered the args
// of the two + ops below differently.
// That in turn foiled bounds check elimination.
i1 := *p
j := *q
i2 := *p
useInt(a[i1+j])
useInt(a[i2+j]) // ERROR "Proved IsInBounds$"
}
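// f15: a successful s[x:] guarantees 0 <= x <= len(s), and since len(s) <= cap(s),
// s[:x] is in bounds as well.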
func f15(s []int, x int) {
useSlice(s[x:])
useSlice(s[:x]) // ERROR "Proved IsSliceInBounds$"
}
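// f16: len(s) >= 10 combined with the derived fact len(s) <= cap(s) proves that
// s[:10] is within capacity.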
func f16(s []int) []int {
if len(s) >= 10 {
return s[:10] // ERROR "Proved IsSliceInBounds$"
}
return nil
}
func f17(b []int) {
for i := 0; i < len(b); i++ { // ERROR "Induction variable: limits \[0,\?\), increment 1$"
// This tests for i <= cap, which we can only prove
// using the derived relation between len and cap.
// This depends on finding the contradiction, since we
// don't query this condition directly.
useSlice(b[:i]) // ERROR "Proved IsSliceInBounds$"
}
}
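// f18 checks that the bounds checks implied by b[x] and b[y] let prove
// disprove the later comparisons against len(b).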
func f18(b []int, x int, y uint) {
_ = b[x]
_ = b[y]
if x > len(b) { // ERROR "Disproved Less64$"
return
}
if y > uint(len(b)) { // ERROR "Disproved Less64U$"
return
}
if int(y) > len(b) { // ERROR "Disproved Less64$"
return
}
}
func f19() (e int64, err error) {
// Issue 29502: slice[:0] is incorrectly disproved.
var stack []int64
stack = append(stack, 123)
if len(stack) > 1 {
panic("too many elements")
}
last := len(stack) - 1
e = stack[last]
// Buggy compiler prints "Disproved Leq64" for the next line.
stack = stack[:last] // ERROR "Proved IsSliceInBounds"
return e, nil
}
func sm1(b []int, x int) {
// Test constant argument to slicemask.
useSlice(b[2:8]) // ERROR "Proved slicemask not needed$"
// Test non-constant argument with known limits.
if cap(b) > 10 {
useSlice(b[2:])
}
}
func lim1(x, y, z int) {
// Test relations between signed and unsigned limits.
if x > 5 {
if uint(x) > 5 { // ERROR "Proved Less64U$"
return
}
}
if y >= 0 && y < 4 {
if uint(y) > 4 { // ERROR "Disproved Less64U$"
return
}
if uint(y) < 5 { // ERROR "Proved Less64U$"
return
}
}
if z < 4 {
if uint(z) > 4 { // Not provable without disjunctions.
return
}
}
}
// fence1-4 correspond to the four fence-post implications.
func fence1(b []int, x, y int) {
// Test proofs that rely on fence-post implications.
if x+1 > y {
if x < y { // ERROR "Disproved Less64$"
return
}
}
if len(b) < cap(b) {
// This eliminates the growslice path.
b = append(b, 1) // ERROR "Disproved Less64U$"
}
}
func fence2(x, y int) {
if x-1 < y {
if x > y { // ERROR "Disproved Less64$"
return
}
}
}
func fence3(b, c []int, x, y int64) {
if x-1 >= y {
if x <= y { // Can't prove because x may have wrapped.
return
}
}
if x != math.MinInt64 && x-1 >= y {
if x <= y { // ERROR "Disproved Leq64$"
return
}
}
c[len(c)-1] = 0 // Can't prove because len(c) might be 0
if n := len(b); n > 0 {
b[n-1] = 0 // ERROR "Proved IsInBounds$"
}
}
func fence4(x, y int64) {
if x >= y+1 {
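// Can't prove because y+1 may have wrapped when y == math.MaxInt64.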
if x <= y {
return
}
}
if y != math.MaxInt64 && x >= y+1 {
if x <= y { // ERROR "Disproved Leq64$"
return
}
}
}
// Check transitive relations
func trans1(x, y int64) {
if x > 5 {
if y > x {
if y > 2 { // ERROR "Proved Less64$"
return
}
} else if y == x {
if y > 5 { // ERROR "Proved Less64$"
return
}
}
}
if x >= 10 {
if y > x {
if y > 10 { // ERROR "Proved Less64$"
return
}
}
}
}
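// trans2 checks that a bounds check on a[i] also covers b[i] once
// len(a) == len(b) is known.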
func trans2(a, b []int, i int) {
if len(a) != len(b) {
return
}
_ = a[i]
_ = b[i] // ERROR "Proved IsInBounds$"
}
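// trans3 is like trans2, but uses len(a) <= len(b) instead of equality.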
func trans3(a, b []int, i int) {
if len(a) > len(b) {
return
}
_ = a[i]
_ = b[i] // ERROR "Proved IsInBounds$"
}
func trans4(b []byte, x int) {
// Issue #42603: slice len/cap transitive relations.
switch x {
case 0:
if len(b) < 20 {
return
}
_ = b[:2] // ERROR "Proved IsSliceInBounds$"
case 1:
if len(b) < 40 {
return
}
_ = b[:2] // ERROR "Proved IsSliceInBounds$"
}
}
// Derived from nat.cmp
func natcmp(x, y []uint) (r int) {
m := len(x)
n := len(y)
if m != n || m == 0 {
return
}
i := m - 1
for i > 0 && // ERROR "Induction variable: limits \(0,\?\], increment 1$"
x[i] == // ERROR "Proved IsInBounds$"
y[i] { // ERROR "Proved IsInBounds$"
i--
}
switch {
case x[i] < // todo, cannot prove this because it's dominated by i<=0 || x[i]==y[i]
y[i]: // ERROR "Proved IsInBounds$"
r = -1
case x[i] > // ERROR "Proved IsInBounds$"
y[i]: // ERROR "Proved IsInBounds$"
r = 1
}
return
}
func suffix(s, suffix string) bool {
// todo, we're still not able to drop the bounds check here in the general case
return len(s) >= len(suffix) && s[len(s)-len(suffix):] == suffix
}
func constsuffix(s string) bool {
return suffix(s, "abc") // ERROR "Proved IsSliceInBounds$"
}
// oforuntil tests the pattern created by OFORUNTIL blocks. These are
// handled by addLocalInductiveFacts rather than findIndVar.
func oforuntil(b []int) {
i := 0
if len(b) > i {
top:
println(b[i]) // ERROR "Induction variable: limits \[0,\?\), increment 1$" "Proved IsInBounds$"
i++
if i < len(b) {
goto top
}
}
}
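// atexit walks a slice backwards while reslicing it; the decreasing
// induction variable keeps both the index and the reslice in bounds.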
func atexit(foobar []func()) {
for i := len(foobar) - 1; i >= 0; i-- { // ERROR "Induction variable: limits \[0,\?\], increment 1"
f := foobar[i]
foobar = foobar[:i] // ERROR "IsSliceInBounds"
f()
}
}
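// make1 checks that the length passed to make bounds the loop index.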
func make1(n int) []int {
s := make([]int, n)
for i := 0; i < n; i++ { // ERROR "Induction variable: limits \[0,\?\), increment 1"
s[i] = 1 // ERROR "Proved IsInBounds$"
}
return s
}
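// make2 is the range-loop version of make1.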
func make2(n int) []int {
s := make([]int, n)
for i := range s { // ERROR "Induction variable: limits \[0,\?\), increment 1"
s[i] = 1 // ERROR "Proved IsInBounds$"
}
return s
}
// The range tests below test the index variable of range loops.
// range1 compiles to the "efficiently indexable" form of a range loop.
func range1(b []int) {
for i, v := range b { // ERROR "Induction variable: limits \[0,\?\), increment 1$"
b[i] = v + 1 // ERROR "Proved IsInBounds$"
if i < len(b) { // ERROR "Proved Less64$"
println("x")
}
if i >= 0 { // ERROR "Proved Leq64$"
println("x")
}
}
}
// range2 elements are larger, so they use the general form of a range loop.
func range2(b [][32]int) {
for i, v := range b {
b[i][0] = v[0] + 1 // ERROR "Induction variable: limits \[0,\?\), increment 1$" "Proved IsInBounds$"
if i < len(b) { // ERROR "Proved Less64$"
println("x")
}
if i >= 0 { // ERROR "Proved Leq64$"
println("x")
}
}
}
// signHint1-2 test whether the hint (int >= 0) is propagated into the loop.
func signHint1(i int, data []byte) {
if i >= 0 {
for i < len(data) { // ERROR "Induction variable: limits \[\?,\?\), increment 1$"
_ = data[i] // ERROR "Proved IsInBounds$"
i++
}
}
}
func signHint2(b []byte, n int) {
if n < 0 {
panic("")
}
_ = b[25]
for i := n; i <= 25; i++ { // ERROR "Induction variable: limits \[\?,25\], increment 1$"
b[i] = 123 // ERROR "Proved IsInBounds$"
}
}
// indexGT0 tests whether prove learns int index >= 0 from bounds check.
func indexGT0(b []byte, n int) {
_ = b[n]
_ = b[25]
for i := n; i <= 25; i++ { // ERROR "Induction variable: limits \[\?,25\], increment 1$"
b[i] = 123 // ERROR "Proved IsInBounds$"
}
}
// Induction variable in unrolled loop.
func unrollUpExcl(a []int) int {
var i, x int
for i = 0; i < len(a)-1; i += 2 { // ERROR "Induction variable: limits \[0,\?\), increment 2$"
x += a[i] // ERROR "Proved IsInBounds$"
x += a[i+1]
}
if i == len(a)-1 {
x += a[i]
}
return x
}
// Induction variable in unrolled loop.
func unrollUpIncl(a []int) int {
var i, x int
for i = 0; i <= len(a)-2; i += 2 { // ERROR "Induction variable: limits \[0,\?\], increment 2$"
x += a[i]
x += a[i+1]
}
if i == len(a)-1 {
x += a[i]
}
return x
}
// Induction variable in unrolled loop.
func unrollDownExcl0(a []int) int {
var i, x int
for i = len(a) - 1; i > 0; i -= 2 { // ERROR "Induction variable: limits \(0,\?\], increment 2$"
x += a[i] // ERROR "Proved IsInBounds$"
x += a[i-1] // ERROR "Proved IsInBounds$"
}
if i == 0 {
x += a[i]
}
return x
}
// Induction variable in unrolled loop.
func unrollDownExcl1(a []int) int {
var i, x int
for i = len(a) - 1; i >= 1; i -= 2 { // ERROR "Induction variable: limits \[1,\?\], increment 2$"
x += a[i] // ERROR "Proved IsInBounds$"
x += a[i-1] // ERROR "Proved IsInBounds$"
}
if i == 0 {
x += a[i]
}
return x
}
// Induction variable in unrolled loop.
func unrollDownInclStep(a []int) int {
var i, x int
for i = len(a); i >= 2; i -= 2 { // ERROR "Induction variable: limits \[2,\?\], increment 2$"
x += a[i-1] // ERROR "Proved IsInBounds$"
x += a[i-2]
}
if i == 1 {
x += a[i-1]
}
return x
}
// Not an induction variable (step too large)
func unrollExclStepTooLarge(a []int) int {
var i, x int
for i = 0; i < len(a)-1; i += 3 {
x += a[i]
x += a[i+1]
}
if i == len(a)-1 {
x += a[i]
}
return x
}
// Not an induction variable (step too large)
func unrollInclStepTooLarge(a []int) int {
var i, x int
for i = 0; i <= len(a)-2; i += 3 {
x += a[i]
x += a[i+1]
}
if i == len(a)-1 {
x += a[i]
}
return x
}
// Not an induction variable (min too small, iterating down)
func unrollDecMin(a []int) int {
var i, x int
for i = len(a); i >= math.MinInt64; i -= 2 {
x += a[i-1]
x += a[i-2]
}
if i == 1 { // ERROR "Disproved Eq64$"
x += a[i-1]
}
return x
}
// Not an induction variable (min too small, iterating up -- perhaps could allow, but why bother?)
func unrollIncMin(a []int) int {
var i, x int
for i = len(a); i >= math.MinInt64; i += 2 {
x += a[i-1]
x += a[i-2]
}
if i == 1 { // ERROR "Disproved Eq64$"
x += a[i-1]
}
return x
}
cmd/compile: handle sign/zero extensions in prove, via update method

Array accesses with index types smaller than the machine word size may involve a sign or zero extension of the index value before bounds checking. Currently, this defeats prove because the facts about the original index value don't flow through the sign/zero extension.

This CL fixes this by looking back through value-preserving sign/zero extensions when adding facts via Update and, where appropriate, applying the same facts using the pre-extension value. This fix is enhanced by also looking back through value-preserving extensions within ft.isNonNegative to infer whether the extended value is known to be non-negative. Without this additional isNonNegative enhancement, this logic is rendered significantly less effective by the limitation discussed in the next paragraph.

In Update, the application of facts to pre-extension values is limited to cases where the domain of the new fact is consistent with the type of the pre-extension value. There may be cases where this cross-domain passing of facts is valid, but distinguishing them from the invalid cases is difficult for me to reason about and to implement. Assessing which cases to allow requires details about the context and inferences behind the fact being applied which are not available within Update. Additional difficulty arises from the fact that the SSA does not currently differentiate extensions added by the compiler for indexing operations, extensions added by the compiler for implicit conversions, or explicit extensions from the source.

Examples of some cases that would need to be filtered correctly for cross-domain facts:

(1) A uint8 is zero-extended to int for indexing (a value-preserving zeroExt). When, if ever, can signed domain facts learned about the int be applied to the uint8?

(2) An int8 is sign-extended to int16 (value-preserving) for an equality comparison. Equality comparison facts are currently always learned in both the signed and unsigned domains. When, if ever, can the unsigned facts learned about the int16, from the int16 != int16 comparison, be applied to the original int8?

This is an alternative to CL 122695 and CL 174309. Compared to CL 122695, this CL differs in that the facts added about the pre-extension value will pass through the Update method, where additional inferences are processed (e.g. fence-post implications, see #29964). CL 174309 is limited to bounds checks, so is narrower in application, and makes the code harder to read.

Fixes #26292.
Fixes #29964.
Fixes #15074.

Removes 238 bounds checks from std/cmd.

Change-Id: I1f87c32ee672bfb8be397b27eab7a4c2f304893f
Reviewed-on: https://go-review.googlesource.com/c/go/+/174704
Run-TryBot: Zach Jones <zachj1@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Giovanni Bajo <rasky@develer.com>
2019-04-09 22:19:43 +00:00
// The 4 xxxxExtNto64 functions below test whether prove is looking
// through value-preserving sign/zero extensions of index values (issue #26292).
// Look through all extensions
func signExtNto64(x []int, j8 int8, j16 int16, j32 int32) int {
if len(x) < 22 {
return 0
}
if j8 >= 0 && j8 < 22 {
return x[j8] // ERROR "Proved IsInBounds$"
}
if j16 >= 0 && j16 < 22 {
return x[j16] // ERROR "Proved IsInBounds$"
}
if j32 >= 0 && j32 < 22 {
return x[j32] // ERROR "Proved IsInBounds$"
}
return 0
}
func zeroExtNto64(x []int, j8 uint8, j16 uint16, j32 uint32) int {
if len(x) < 22 {
return 0
}
if j8 >= 0 && j8 < 22 {
return x[j8] // ERROR "Proved IsInBounds$"
}
if j16 >= 0 && j16 < 22 {
return x[j16] // ERROR "Proved IsInBounds$"
}
if j32 >= 0 && j32 < 22 {
return x[j32] // ERROR "Proved IsInBounds$"
}
return 0
}
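// widenIdxSketch makes the widening explicit in source; it is an illustrative
// sketch only, not part of the original test, and carries no errorcheck
// annotations. The compiler inserts the same zero-extension implicitly when
// j8 is used to index x, so the fact j8 < 22 has to be carried through that
// extension (as described in the commit message above) for the bounds check
// on the access to be removable.
func widenIdxSketch(x []int, j8 uint8) int {
	if len(x) < 22 {
		return 0
	}
	if j8 < 22 {
		j := int(j8) // explicit form of the zero-extension the compiler adds
		return x[j]
	}
	return 0
}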
// Process fence-post implications through 32to64 extensions (issue #29964)
func signExt32to64Fence(x []int, j int32) int {
if x[j] != 0 {
return 1
}
if j > 0 && x[j-1] != 0 { // ERROR "Proved IsInBounds$"
return 1
}
return 0
}
func zeroExt32to64Fence(x []int, j uint32) int {
if x[j] != 0 {
return 1
}
if j > 0 && x[j-1] != 0 { // ERROR "Proved IsInBounds$"
return 1
}
return 0
}
cmd/compile: make poset use sufficient conditions for OrderedOrEqual

When assessing whether A <= B, the poset's OrderedOrEqual has a passing condition which permits A <= B, but is not sufficient to infer that A <= B. This CL removes that incorrect passing condition.

Having identified that A and B are in the poset, the method will report that A <= B if any of these three conditions are true:

(1) A and B are the same node in the poset.
    - This means we know that A == B.

(2) There is a directed path, strict or not, from A -> B.
    - This means we know that, at least, A <= B, but A < B is possible.

(3) There is a directed path from B -> A, AND that path has no strict edges.
    - This means we know that B <= A, but do not know that B < A.

In condition (3), we do not have enough information to say that A <= B, rather we only know that B == A (which satisfies A <= B) is possible. The way I understand it, a strict edge shows a known, strictly-ordered relation (<) but the lack of a strict edge does not show the lack of a strictly-ordered relation.

The difference is highlighted by the example in #34802, where a bounds check is incorrectly removed by prove, such that negative indexes into a slice succeed:

n := make([]int, 1)
for i := -1; i <= 0; i++ {
	fmt.Printf("i is %d\n", i)
	n[i] = 1 // No Bounds check, program runs, assignment to n[-1] succeeds!!
}

When prove is checking the negative/failed branch from the bounds check at n[i], in the signed domain we learn (0 > i || i >= len(n)). Because prove can't learn the OR condition, we check whether we know that i is non-negative so we can learn something, namely that i >= len(n). Prove uses the poset to check whether we know that i is non-negative.

At this point the poset holds the following relations as a directed graph:

-1 <= i <= 0
-1 < 0

In poset.OrderedOrEqual, we are testing for 0 <= i. In this case, condition (3) above is true because there is a non-strict path from i -> 0, and that path does NOT have any strict edges. Because this condition is true, the poset reports to prove that i is known to be >= 0.

Knowing, incorrectly, that i >= 0, prove learns from the failed bounds check that i >= len(n) in the signed domain. When the slice, n, was created, prove learned that len(n) == 1. Because i is also the induction variable for the loop, upon entering the loop, prove previously learned that i is in [-1,0]. So when prove attempts to learn from the failed bounds check, it finds the new fact, i > len(n), unsatisfiable given that it previously learned that i <= 0 and len(n) = 1.

Fixes #34802

Change-Id: I235f4224bef97700c3aa5c01edcc595eb9f13afc
Reviewed-on: https://go-review.googlesource.com/c/go/+/200759
Run-TryBot: Zach Jones <zachj1@gmail.com>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Giovanni Bajo <rasky@develer.com>
Reviewed-by: Keith Randall <khr@golang.org>
2019-10-11 15:04:47 +00:00
// Ensure that bounds checks with negative indexes are not incorrectly removed.
func negIndex() {
n := make([]int, 1)
for i := -1; i <= 0; i++ { // ERROR "Induction variable: limits \[-1,0\], increment 1$"
n[i] = 1
}
}
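// negIndexDemo shows the run-time consequence of keeping that bounds check;
// it is an illustrative sketch, not part of the original test, and carries no
// errorcheck annotations. With the corrected poset logic the check on n[i] is
// retained, so the i == -1 iteration panics instead of silently writing
// outside the slice.
func negIndexDemo() (panicked bool) {
	defer func() { panicked = recover() != nil }()
	n := make([]int, 1)
	for i := -1; i <= 0; i++ {
		n[i] = 1 // must panic when i == -1
	}
	return
}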
func negIndex2(n int) {
a := make([]int, 5)
b := make([]int, 5)
c := make([]int, 5)
for i := -1; i <= 0; i-- {
b[i] = i
n++
if n > 10 {
break
}
}
useSlice(a)
useSlice(c)
}
cmd/compile: in prove, zero right shifts of positive int by #bits - 1

Taking over Zach's CL 212277. Just cleaned up and added a test.

For a positive, signed integer, an arithmetic right shift of count (bit-width - 1) equals zero. e.g. int64(22) >> 63 -> 0. This CL makes prove replace these right shifts with a zero-valued constant.

These shifts may arise in source code explicitly, but can also be created by the generic rewrite of signed division by a power of 2.

// Signed divide by power of 2.
// n / c = n >> log(c) if n >= 0
//       = (n+c-1) >> log(c) if n < 0
// We conditionally add c-1 by adding n>>63>>(64-log(c)) (first shift signed, second shift unsigned).
(Div64 <t> n (Const64 [c])) && isPowerOfTwo(c)
  -> (Rsh64x64
       (Add64 <t> n (Rsh64Ux64 <t> (Rsh64x64 <t> n (Const64 <typ.UInt64> [63])) (Const64 <typ.UInt64> [64-log2(c)])))
       (Const64 <typ.UInt64> [log2(c)]))

If n is known to be positive, this rewrite includes an extra Add and 2 extra Rsh. This CL will allow prove to replace one of the extra Rsh with a 0. That replacement then allows lateopt to remove all the unnecessary fixups from the generic rewrite.

There is a rewrite rule to handle this case directly:

(Div64 n (Const64 [c])) && isNonNegative(n) && isPowerOfTwo(c)
  -> (Rsh64Ux64 n (Const64 <typ.UInt64> [log2(c)]))

But this implementation of isNonNegative really only handles constants and a few special operations like len/cap. The division could be handled if the factsTable version of isNonNegative were available. Unfortunately, the first opt pass happens before prove even has a chance to deduce the numerator is non-negative, so the generic rewrite has already fired and created the extra Ops discussed above.

Fixes #36159

By Printf count, this zeroes 137 right shifts when building std and cmd.

Change-Id: Iab486910ac9d7cfb86ace2835456002732b384a2
Reviewed-on: https://go-review.googlesource.com/c/go/+/232857
Run-TryBot: Keith Randall <khr@golang.org>
TryBot-Result: Gobot Gobot <gobot@golang.org>
Reviewed-by: Cherry Zhang <cherryyz@google.com>
2020-05-07 20:44:51 +00:00
// Check that prove is zeroing these right shifts of positive ints by bit-width - 1.
// e.g. (Rsh64x64 <t> n (Const64 <typ.UInt64> [63])) && ft.isNonNegative(n) -> 0
func sh64(n int64) int64 {
if n < 0 {
return n
}
return n >> 63 // ERROR "Proved Rsh64x64 shifts to zero"
}
func sh32(n int32) int32 {
if n < 0 {
return n
}
return n >> 31 // ERROR "Proved Rsh32x64 shifts to zero"
}
func sh32x64(n int32) int32 {
if n < 0 {
return n
}
return n >> uint64(31) // ERROR "Proved Rsh32x64 shifts to zero"
}
func sh16(n int16) int16 {
if n < 0 {
return n
}
return n >> 15 // ERROR "Proved Rsh16x64 shifts to zero"
}
func sh64noopt(n int64) int64 {
return n >> 63 // not optimized; n could be negative
}
// These cases are division of a positive signed integer by a power of 2.
// The opt pass doesn't have sufficient information to see that n is positive.
// So, instead, opt rewrites the division with a less-than-optimal replacement.
// Prove, which can see that n is nonnegative, cannot see the division because
// opt, an earlier pass, has already replaced it.
// The fix for this issue allows prove to zero a right shift that was added as
// part of the less-than-optimal rewrite. That change by prove then allows
// lateopt to clean up all the unnecessary parts of the original division
// replacement. See issue #36159.
func divShiftClean(n int) int {
if n < 0 {
return n
}
return n / int(8) // ERROR "Proved Rsh64x64 shifts to zero"
}
func divShiftClean64(n int64) int64 {
if n < 0 {
return n
}
return n / int64(16) // ERROR "Proved Rsh64x64 shifts to zero"
}
func divShiftClean32(n int32) int32 {
if n < 0 {
return n
}
return n / int32(16) // ERROR "Proved Rsh32x64 shifts to zero"
}
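// divShiftExpanded spells out by hand the fixup sequence that the generic
// rewrite produces for n/8 with a signed numerator; it is an illustrative
// sketch of the rewrite described above divShiftClean, not part of the
// original test, and carries no errorcheck annotations. When n is known to be
// nonnegative, the n>>63 term is zero, which is exactly the shift prove
// replaces with a constant, letting lateopt delete the whole fixup and leave
// a plain shift.
func divShiftExpanded(n int64) int64 {
	fixup := int64(uint64(n>>63) >> (64 - 3)) // 7 when n < 0, 0 when n >= 0
	return (n + fixup) >> 3                   // equals n / 8 for every int64 n
}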
//go:noinline
func useInt(a int) {
}
//go:noinline
func useSlice(a []int) {
}
func main() {
}