Auto merge of #23884 - Manishearth:rollup, r=Manishearth

- Successful merges: #23558, #23813, #23826, #23836, #23839, #23846, #23852, #23859, #23862, #23865, #23866, #23869, #23874
- Failed merges:
This commit is contained in:
bors 2015-03-31 05:47:35 +00:00
commit b3317d6891
46 changed files with 518 additions and 226 deletions

View file

@ -125,13 +125,18 @@ ONLY_RLIB_rustc_bitflags := 1
# On channels where the only usable crate is std, only build documentation for
# std. This keeps distributions small and doesn't clutter up the API docs with
# confusing internal details from the crates behind the facade.
#
# (Disabled while cmr figures out how to change rustdoc to make reexports work
# slightly nicer. Otherwise, all cross-crate links to Vec will go to
# libcollections, breaking them, and [src] links for anything reexported will
# not work.)
ifeq ($(CFG_RELEASE_CHANNEL),stable)
DOC_CRATES := std
else
ifeq ($(CFG_RELEASE_CHANNEL),beta)
DOC_CRATES := std
else
#ifeq ($(CFG_RELEASE_CHANNEL),stable)
#DOC_CRATES := std
#else
#ifeq ($(CFG_RELEASE_CHANNEL),beta)
#DOC_CRATES := std
#else
DOC_CRATES := $(filter-out rustc, \
$(filter-out rustc_trans, \
$(filter-out rustc_typeck, \
@ -143,8 +148,8 @@ DOC_CRATES := $(filter-out rustc, \
$(filter-out log, \
$(filter-out getopts, \
$(filter-out syntax, $(CRATES))))))))))))
endif
endif
#endif
#endif
COMPILER_DOC_CRATES := rustc rustc_trans rustc_borrowck rustc_resolve \
rustc_typeck rustc_driver syntax rustc_privacy \
rustc_lint

View file

@ -1188,12 +1188,15 @@ the guarantee that these issues are never caused by safe code.
* Data races
* Dereferencing a null/dangling raw pointer
* Mutating an immutable value/reference without `UnsafeCell`
* Reads of [undef](http://llvm.org/docs/LangRef.html#undefined-values)
(uninitialized) memory
* Breaking the [pointer aliasing
rules](http://llvm.org/docs/LangRef.html#pointer-aliasing-rules)
with raw pointers (a subset of the rules used by C)
* `&mut` and `&` follow LLVM's scoped [noalias] model, except if the `&T`
contains an `UnsafeCell<U>`. Unsafe code must not violate these aliasing
guarantees.
* Mutating an immutable value/reference without `UnsafeCell<U>`
* Invoking undefined behavior via compiler intrinsics:
* Indexing outside of the bounds of an object with `std::ptr::offset`
(`offset` intrinsic), with
@ -1210,6 +1213,8 @@ the guarantee that these issues are never caused by safe code.
code. Rust's failure system is not compatible with exception handling in
other languages. Unwinding must be caught and handled at FFI boundaries.
[noalias]: http://llvm.org/docs/LangRef.html#noalias
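As an editorial aside (not part of the diff): the `UnsafeCell` exception in the list above is the one legal route to mutation behind a shared reference. A minimal sketch, assuming nothing beyond the standard library:

```rust
use std::cell::UnsafeCell;

// Mutation through `&self` is permitted *because* the state lives in an
// UnsafeCell, which opts the contents out of the noalias guarantees.
struct Counter {
    count: UnsafeCell<u32>,
}

impl Counter {
    fn bump(&self) {
        // Sound only because no other reference to the contents is live
        // while this raw pointer is dereferenced.
        unsafe { *self.count.get() += 1; }
    }

    fn get(&self) -> u32 {
        unsafe { *self.count.get() }
    }
}

fn main() {
    let c = Counter { count: UnsafeCell::new(0) };
    c.bump();
    assert_eq!(c.get(), 1);
}
```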
##### Behaviour not considered unsafe
This is a list of behaviour not considered *unsafe* in Rust terms, but that may

View file

@ -181,17 +181,23 @@ impl Circle {
}
struct CircleBuilder {
coordinate: f64,
x: f64,
y: f64,
radius: f64,
}
impl CircleBuilder {
fn new() -> CircleBuilder {
CircleBuilder { coordinate: 0.0, radius: 0.0, }
CircleBuilder { x: 0.0, y: 0.0, radius: 0.0, }
}
fn coordinate(&mut self, coordinate: f64) -> &mut CircleBuilder {
self.coordinate = coordinate;
fn x(&mut self, coordinate: f64) -> &mut CircleBuilder {
self.x = coordinate;
self
}
fn y(&mut self, coordinate: f64) -> &mut CircleBuilder {
self.y = coordinate;
self
}
@ -201,18 +207,20 @@ impl CircleBuilder {
}
fn finalize(&self) -> Circle {
Circle { x: self.coordinate, y: self.coordinate, radius: self.radius }
Circle { x: self.x, y: self.y, radius: self.radius }
}
}
fn main() {
let c = CircleBuilder::new()
.coordinate(10.0)
.radius(5.0)
.x(1.0)
.y(2.0)
.radius(2.0)
.finalize();
println!("area: {}", c.area());
println!("x: {}", c.x);
println!("y: {}", c.y);
}
```

View file

@ -472,10 +472,15 @@ thread-safe counterpart of `Rc<T>`.
## Lifetime Elision
Earlier, we mentioned *lifetime elision*, a feature of Rust which allows you to
not write lifetime annotations in certain circumstances. All references have a
lifetime, and so if you elide a lifetime (like `&T` instead of `&'a T`), Rust
will do three things to determine what those lifetimes should be.
Rust supports powerful local type inference in function bodies, but it's
forbidden in item signatures so that the types can be worked out from the
item signature alone. However, for ergonomic reasons a very restricted
secondary inference algorithm called “lifetime elision” applies in function
signatures. It infers only from the signature components themselves and not
from the body of the function, it infers only lifetime parameters, and it does
this with only three easily memorizable and unambiguous rules. This makes
lifetime elision a shorthand for writing an item signature, while not hiding
away the actual types involved as full local inference would if applied to it.
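To see what those rules buy (an editorial sketch, not from the book text): with a single input lifetime, elision assigns it to every output lifetime, so the two signatures below mean the same thing.

```rust
// Elided: `fn first_word(s: &str) -> &str` expands to the explicit
// version below under the single-input-lifetime rule.
fn first_word(s: &str) -> &str {
    s.split(' ').next().unwrap_or("")
}

fn first_word_explicit<'a>(s: &'a str) -> &'a str {
    s.split(' ').next().unwrap_or("")
}

fn main() {
    assert_eq!(first_word("hello world"), "hello");
    assert_eq!(first_word_explicit("hello world"), "hello");
}
```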
When talking about lifetime elision, we use the term *input lifetime* and
*output lifetime*. An *input lifetime* is a lifetime associated with a parameter

View file

@ -231,7 +231,7 @@ pub fn add_two(a: i32) -> i32 {
}
#[cfg(test)]
mod tests {
mod test {
use super::add_two;
#[test]
@ -241,7 +241,7 @@ mod tests {
}
```
There's a few changes here. The first is the introduction of a `mod tests` with
There's a few changes here. The first is the introduction of a `mod test` with
a `cfg` attribute. The module allows us to group all of our tests together,
and also to define helper functions, if needed, that don't become part of the
rest of our crate. The `cfg` attribute only compiles our test code if we're
@ -260,7 +260,7 @@ pub fn add_two(a: i32) -> i32 {
}
#[cfg(test)]
mod tests {
mod test {
use super::*;
#[test]
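For reference, the complete pattern these hunks converge on, assembled as it would appear in the book's example:

```rust
pub fn add_two(a: i32) -> i32 {
    a + 2
}

#[cfg(test)]
mod test {
    // Pull everything from the parent module into scope for testing.
    use super::*;

    #[test]
    fn it_works() {
        assert_eq!(4, add_two(2));
    }
}
```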

View file

@ -301,7 +301,7 @@ pub unsafe fn reallocate(ptr: *mut u8, old_size: usize, size: usize, align: usiz
libc::realloc(ptr as *mut libc::c_void, size as libc::size_t) as *mut u8
} else {
let new_ptr = allocate(size, align);
ptr::copy(new_ptr, ptr, cmp::min(size, old_size));
ptr::copy(ptr, new_ptr, cmp::min(size, old_size));
deallocate(ptr, old_size, align);
new_ptr
}

View file

@ -1133,13 +1133,13 @@ unsafe fn push_edge(&mut self, edge: Node<K, V>) {
#[inline]
unsafe fn insert_kv(&mut self, index: usize, key: K, val: V) -> &mut V {
ptr::copy(
self.keys_mut().as_mut_ptr().offset(index as isize + 1),
self.keys().as_ptr().offset(index as isize),
self.keys_mut().as_mut_ptr().offset(index as isize + 1),
self.len() - index
);
ptr::copy(
self.vals_mut().as_mut_ptr().offset(index as isize + 1),
self.vals().as_ptr().offset(index as isize),
self.vals_mut().as_mut_ptr().offset(index as isize + 1),
self.len() - index
);
@ -1155,8 +1155,8 @@ unsafe fn insert_kv(&mut self, index: usize, key: K, val: V) -> &mut V {
#[inline]
unsafe fn insert_edge(&mut self, index: usize, edge: Node<K, V>) {
ptr::copy(
self.edges_mut().as_mut_ptr().offset(index as isize + 1),
self.edges().as_ptr().offset(index as isize),
self.edges_mut().as_mut_ptr().offset(index as isize + 1),
self.len() - index
);
ptr::write(self.edges_mut().get_unchecked_mut(index), edge);
@ -1188,13 +1188,13 @@ unsafe fn remove_kv(&mut self, index: usize) -> (K, V) {
let val = ptr::read(self.vals().get_unchecked(index));
ptr::copy(
self.keys_mut().as_mut_ptr().offset(index as isize),
self.keys().as_ptr().offset(index as isize + 1),
self.keys_mut().as_mut_ptr().offset(index as isize),
self.len() - index - 1
);
ptr::copy(
self.vals_mut().as_mut_ptr().offset(index as isize),
self.vals().as_ptr().offset(index as isize + 1),
self.vals_mut().as_mut_ptr().offset(index as isize),
self.len() - index - 1
);
@ -1209,8 +1209,8 @@ unsafe fn remove_edge(&mut self, index: usize) -> Node<K, V> {
let edge = ptr::read(self.edges().get_unchecked(index));
ptr::copy(
self.edges_mut().as_mut_ptr().offset(index as isize),
self.edges().as_ptr().offset(index as isize + 1),
self.edges_mut().as_mut_ptr().offset(index as isize),
// index can be == len+1, so do the +1 first to avoid underflow.
(self.len() + 1) - index
);
@ -1237,19 +1237,19 @@ fn split(&mut self) -> (K, V, Node<K, V>) {
right._len = self.len() / 2;
let right_offset = self.len() - right.len();
ptr::copy_nonoverlapping(
right.keys_mut().as_mut_ptr(),
self.keys().as_ptr().offset(right_offset as isize),
right.keys_mut().as_mut_ptr(),
right.len()
);
ptr::copy_nonoverlapping(
right.vals_mut().as_mut_ptr(),
self.vals().as_ptr().offset(right_offset as isize),
right.vals_mut().as_mut_ptr(),
right.len()
);
if !self.is_leaf() {
ptr::copy_nonoverlapping(
right.edges_mut().as_mut_ptr(),
self.edges().as_ptr().offset(right_offset as isize),
right.edges_mut().as_mut_ptr(),
right.len() + 1
);
}
@ -1278,19 +1278,19 @@ fn absorb(&mut self, key: K, val: V, mut right: Node<K, V>) {
ptr::write(self.vals_mut().get_unchecked_mut(old_len), val);
ptr::copy_nonoverlapping(
self.keys_mut().as_mut_ptr().offset(old_len as isize + 1),
right.keys().as_ptr(),
self.keys_mut().as_mut_ptr().offset(old_len as isize + 1),
right.len()
);
ptr::copy_nonoverlapping(
self.vals_mut().as_mut_ptr().offset(old_len as isize + 1),
right.vals().as_ptr(),
self.vals_mut().as_mut_ptr().offset(old_len as isize + 1),
right.len()
);
if !self.is_leaf() {
ptr::copy_nonoverlapping(
self.edges_mut().as_mut_ptr().offset(old_len as isize + 1),
right.edges().as_ptr(),
self.edges_mut().as_mut_ptr().offset(old_len as isize + 1),
right.len() + 1
);
}

View file

@ -1320,10 +1320,10 @@ fn insertion_sort<T, F>(v: &mut [T], mut compare: F) where F: FnMut(&T, &T) -> O
if i != j {
let tmp = ptr::read(read_ptr);
ptr::copy(buf_v.offset(j + 1),
&*buf_v.offset(j),
ptr::copy(&*buf_v.offset(j),
buf_v.offset(j + 1),
(i - j) as usize);
ptr::copy_nonoverlapping(buf_v.offset(j), &tmp, 1);
ptr::copy_nonoverlapping(&tmp, buf_v.offset(j), 1);
mem::forget(tmp);
}
}
@ -1396,10 +1396,10 @@ fn merge_sort<T, F>(v: &mut [T], mut compare: F) where F: FnMut(&T, &T) -> Order
// j + 1 could be `len` (for the last `i`), but in
// that case, `i == j` so we don't copy. The
// `.offset(j)` is always in bounds.
ptr::copy(buf_dat.offset(j + 1),
&*buf_dat.offset(j),
ptr::copy(&*buf_dat.offset(j),
buf_dat.offset(j + 1),
i - j as usize);
ptr::copy_nonoverlapping(buf_dat.offset(j), read_ptr, 1);
ptr::copy_nonoverlapping(read_ptr, buf_dat.offset(j), 1);
}
}
}
@ -1447,11 +1447,11 @@ fn merge_sort<T, F>(v: &mut [T], mut compare: F) where F: FnMut(&T, &T) -> Order
if left == right_start {
// the number remaining in this run.
let elems = (right_end as usize - right as usize) / mem::size_of::<T>();
ptr::copy_nonoverlapping(out, &*right, elems);
ptr::copy_nonoverlapping(&*right, out, elems);
break;
} else if right == right_end {
let elems = (right_start as usize - left as usize) / mem::size_of::<T>();
ptr::copy_nonoverlapping(out, &*left, elems);
ptr::copy_nonoverlapping(&*left, out, elems);
break;
}
@ -1465,7 +1465,7 @@ fn merge_sort<T, F>(v: &mut [T], mut compare: F) where F: FnMut(&T, &T) -> Order
} else {
step(&mut left)
};
ptr::copy_nonoverlapping(out, &*to_copy, 1);
ptr::copy_nonoverlapping(&*to_copy, out, 1);
step(&mut out);
}
}
@ -1479,7 +1479,7 @@ fn merge_sort<T, F>(v: &mut [T], mut compare: F) where F: FnMut(&T, &T) -> Order
// write the result to `v` in one go, so that there are never two copies
// of the same object in `v`.
unsafe {
ptr::copy_nonoverlapping(v.as_mut_ptr(), &*buf_dat, len);
ptr::copy_nonoverlapping(&*buf_dat, v.as_mut_ptr(), len);
}
// increment the pointer, returning the old pointer.

View file

@ -592,8 +592,8 @@ pub fn remove(&mut self, idx: usize) -> char {
let ch = self.char_at(idx);
let next = idx + ch.len_utf8();
unsafe {
ptr::copy(self.vec.as_mut_ptr().offset(idx as isize),
self.vec.as_ptr().offset(next as isize),
ptr::copy(self.vec.as_ptr().offset(next as isize),
self.vec.as_mut_ptr().offset(idx as isize),
len - next);
self.vec.set_len(len - (next - idx));
}
@ -622,11 +622,11 @@ pub fn insert(&mut self, idx: usize, ch: char) {
let amt = ch.encode_utf8(&mut bits).unwrap();
unsafe {
ptr::copy(self.vec.as_mut_ptr().offset((idx + amt) as isize),
self.vec.as_ptr().offset(idx as isize),
ptr::copy(self.vec.as_ptr().offset(idx as isize),
self.vec.as_mut_ptr().offset((idx + amt) as isize),
len - idx);
ptr::copy(self.vec.as_mut_ptr().offset(idx as isize),
bits.as_ptr(),
ptr::copy(bits.as_ptr(),
self.vec.as_mut_ptr().offset(idx as isize),
amt);
self.vec.set_len(len + amt);
}
@ -1019,11 +1019,28 @@ fn as_ref(&self) -> &str {
#[stable(feature = "rust1", since = "1.0.0")]
impl<'a> From<&'a str> for String {
#[inline]
fn from(s: &'a str) -> String {
s.to_string()
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<'a> From<&'a str> for Cow<'a, str> {
#[inline]
fn from(s: &'a str) -> Cow<'a, str> {
Cow::Borrowed(s)
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl<'a> From<String> for Cow<'a, str> {
#[inline]
fn from(s: String) -> Cow<'a, str> {
Cow::Owned(s)
}
}
#[stable(feature = "rust1", since = "1.0.0")]
impl Into<Vec<u8>> for String {
fn into(self) -> Vec<u8> {
@ -1031,7 +1048,7 @@ fn into(self) -> Vec<u8> {
}
}
#[stable(feature = "rust1", since = "1.0.0")]
#[unstable(feature = "into_cow", reason = "may be replaced by `convert::Into`")]
impl IntoCow<'static, str> for String {
#[inline]
fn into_cow(self) -> Cow<'static, str> {
@ -1039,7 +1056,7 @@ fn into_cow(self) -> Cow<'static, str> {
}
}
#[stable(feature = "rust1", since = "1.0.0")]
#[unstable(feature = "into_cow", reason = "may be replaced by `convert::Into`")]
impl<'a> IntoCow<'a, str> for &'a str {
#[inline]
fn into_cow(self) -> Cow<'a, str> {
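The conversions these new impls enable, sketched in today's Rust (the `matches!` macro postdates this commit):

```rust
use std::borrow::Cow;

fn main() {
    let s: String = String::from("hello");        // From<&str> for String
    let borrowed: Cow<str> = Cow::from("hello");  // From<&str> for Cow
    let owned: Cow<str> = Cow::from(s);           // From<String> for Cow

    assert_eq!(borrowed, owned); // Cow compares by contents
    assert!(matches!(borrowed, Cow::Borrowed(_)));
    assert!(matches!(owned, Cow::Owned(_)));
}
```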

View file

@ -260,16 +260,17 @@ pub unsafe fn from_raw_parts(ptr: *mut T, length: usize,
/// Creates a vector by copying the elements from a raw pointer.
///
/// This function will copy `elts` contiguous elements starting at `ptr` into a new allocation
/// owned by the returned `Vec<T>`. The elements of the buffer are copied into the vector
/// without cloning, as if `ptr::read()` were called on them.
/// This function will copy `elts` contiguous elements starting at `ptr`
/// into a new allocation owned by the returned `Vec<T>`. The elements of
/// the buffer are copied into the vector without cloning, as if
/// `ptr::read()` were called on them.
#[inline]
#[unstable(feature = "collections",
reason = "may be better expressed via composition")]
pub unsafe fn from_raw_buf(ptr: *const T, elts: usize) -> Vec<T> {
let mut dst = Vec::with_capacity(elts);
dst.set_len(elts);
ptr::copy_nonoverlapping(dst.as_mut_ptr(), ptr, elts);
ptr::copy_nonoverlapping(ptr, dst.as_mut_ptr(), elts);
dst
}
@ -288,8 +289,9 @@ pub fn capacity(&self) -> usize {
self.cap
}
/// Reserves capacity for at least `additional` more elements to be inserted in the given
/// `Vec<T>`. The collection may reserve more space to avoid frequent reallocations.
/// Reserves capacity for at least `additional` more elements to be inserted
/// in the given `Vec<T>`. The collection may reserve more space to avoid
/// frequent reallocations.
///
/// # Panics
///
@ -541,7 +543,7 @@ pub fn insert(&mut self, index: usize, element: T) {
let p = self.as_mut_ptr().offset(index as isize);
// Shift everything over to make space. (Duplicating the
// `index`th element into two consecutive places.)
ptr::copy(p.offset(1), &*p, len - index);
ptr::copy(&*p, p.offset(1), len - index);
// Write it in, overwriting the first copy of the `index`th
// element.
ptr::write(&mut *p, element);
@ -579,7 +581,7 @@ pub fn remove(&mut self, index: usize) -> T {
ret = ptr::read(ptr);
// Shift everything down to fill in that spot.
ptr::copy(ptr, &*ptr.offset(1), len - index - 1);
ptr::copy(&*ptr.offset(1), ptr, len - index - 1);
}
self.set_len(len - 1);
ret
@ -721,8 +723,8 @@ pub fn append(&mut self, other: &mut Self) {
let len = self.len();
unsafe {
ptr::copy_nonoverlapping(
self.get_unchecked_mut(len),
other.as_ptr(),
self.get_unchecked_mut(len),
other.len());
}
@ -1042,8 +1044,8 @@ pub fn split_off(&mut self, at: usize) -> Self {
other.set_len(other_len);
ptr::copy_nonoverlapping(
other.as_mut_ptr(),
self.as_ptr().offset(at as isize),
other.as_mut_ptr(),
other.len());
}
other

View file

@ -142,8 +142,8 @@ unsafe fn copy(&self, dst: usize, src: usize, len: usize) {
debug_assert!(src + len <= self.cap, "dst={} src={} len={} cap={}", dst, src, len,
self.cap);
ptr::copy(
self.ptr.offset(dst as isize),
self.ptr.offset(src as isize),
self.ptr.offset(dst as isize),
len);
}
@ -155,8 +155,8 @@ unsafe fn copy_nonoverlapping(&self, dst: usize, src: usize, len: usize) {
debug_assert!(src + len <= self.cap, "dst={} src={} len={} cap={}", dst, src, len,
self.cap);
ptr::copy_nonoverlapping(
self.ptr.offset(dst as isize),
self.ptr.offset(src as isize),
self.ptr.offset(dst as isize),
len);
}
}
@ -1361,21 +1361,21 @@ pub fn split_off(&mut self, at: usize) -> Self {
// `at` lies in the first half.
let amount_in_first = first_len - at;
ptr::copy_nonoverlapping(*other.ptr,
first_half.as_ptr().offset(at as isize),
ptr::copy_nonoverlapping(first_half.as_ptr().offset(at as isize),
*other.ptr,
amount_in_first);
// just take all of the second half.
ptr::copy_nonoverlapping(other.ptr.offset(amount_in_first as isize),
second_half.as_ptr(),
ptr::copy_nonoverlapping(second_half.as_ptr(),
other.ptr.offset(amount_in_first as isize),
second_len);
} else {
// `at` lies in the second half, need to factor in the elements we skipped
// in the first half.
let offset = at - first_len;
let amount_in_second = second_len - offset;
ptr::copy_nonoverlapping(*other.ptr,
second_half.as_ptr().offset(offset as isize),
ptr::copy_nonoverlapping(second_half.as_ptr().offset(offset as isize),
*other.ptr,
amount_in_second);
}
}

View file

@ -48,33 +48,30 @@
//! For example,
//!
//! ```
//! # #![feature(os, old_io, old_path)]
//! #![feature(core)]
//! use std::error::FromError;
//! use std::old_io::{File, IoError};
//! use std::os::{MemoryMap, MapError};
//! use std::old_path::Path;
//! use std::{io, str};
//! use std::fs::File;
//!
//! enum MyError {
//! Io(IoError),
//! Map(MapError)
//! Io(io::Error),
//! Utf8(str::Utf8Error),
//! }
//!
//! impl FromError<IoError> for MyError {
//! fn from_error(err: IoError) -> MyError {
//! MyError::Io(err)
//! }
//! impl FromError<io::Error> for MyError {
//! fn from_error(err: io::Error) -> MyError { MyError::Io(err) }
//! }
//!
//! impl FromError<MapError> for MyError {
//! fn from_error(err: MapError) -> MyError {
//! MyError::Map(err)
//! }
//! impl FromError<str::Utf8Error> for MyError {
//! fn from_error(err: str::Utf8Error) -> MyError { MyError::Utf8(err) }
//! }
//!
//! #[allow(unused_variables)]
//! fn open_and_map() -> Result<(), MyError> {
//! let f = try!(File::open(&Path::new("foo.txt")));
//! let m = try!(MemoryMap::new(0, &[]));
//! let b = b"foo.txt";
//! let s = try!(str::from_utf8(b));
//! let f = try!(File::open(s));
//!
//! // do something interesting here...
//! Ok(())
//! }

View file

@ -316,8 +316,8 @@ struct Filler<'a> {
impl<'a> fmt::Write for Filler<'a> {
fn write_str(&mut self, s: &str) -> fmt::Result {
slice::bytes::copy_memory(&mut self.buf[(*self.end)..],
s.as_bytes());
slice::bytes::copy_memory(s.as_bytes(),
&mut self.buf[(*self.end)..]);
*self.end += s.len();
Ok(())
}

View file

@ -293,9 +293,9 @@
/// let mut t: T = mem::uninitialized();
///
/// // Perform the swap, `&mut` pointers never alias
/// ptr::copy_nonoverlapping(&mut t, &*x, 1);
/// ptr::copy_nonoverlapping(x, &*y, 1);
/// ptr::copy_nonoverlapping(y, &t, 1);
/// ptr::copy_nonoverlapping(x, &mut t, 1);
/// ptr::copy_nonoverlapping(y, x, 1);
/// ptr::copy_nonoverlapping(&t, y, 1);
///
/// // y and t now point to the same thing, but we need to completely forget `tmp`
/// // because it's no longer relevant.
@ -304,6 +304,12 @@
/// }
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
#[cfg(not(stage0))]
pub fn copy_nonoverlapping<T>(src: *const T, dst: *mut T, count: usize);
/// dox
#[stable(feature = "rust1", since = "1.0.0")]
#[cfg(stage0)]
pub fn copy_nonoverlapping<T>(dst: *mut T, src: *const T, count: usize);
/// Copies `count * size_of<T>` bytes from `src` to `dst`. The source
@ -329,12 +335,18 @@
/// unsafe fn from_buf_raw<T>(ptr: *const T, elts: usize) -> Vec<T> {
/// let mut dst = Vec::with_capacity(elts);
/// dst.set_len(elts);
/// ptr::copy(dst.as_mut_ptr(), ptr, elts);
/// ptr::copy(ptr, dst.as_mut_ptr(), elts);
/// dst
/// }
/// ```
///
#[stable(feature = "rust1", since = "1.0.0")]
#[cfg(not(stage0))]
pub fn copy<T>(src: *const T, dst: *mut T, count: usize);
/// dox
#[stable(feature = "rust1", since = "1.0.0")]
#[cfg(stage0)]
pub fn copy<T>(dst: *mut T, src: *const T, count: usize);
/// Invokes memset on the specified pointer, setting `count * size_of::<T>()`
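For orientation, a minimal sketch (editorial, not part of the diff) of the new `src`-first argument order that the rest of this rollup migrates callers to:

```rust
use std::ptr;

fn main() {
    let src = [10u8, 20, 30, 40];
    let mut dst = [0u8; 4];
    unsafe {
        // New order: source, then destination, then count -- the call
        // now reads as "copy from src to dst".
        ptr::copy_nonoverlapping(src.as_ptr(), dst.as_mut_ptr(), src.len());
    }
    assert_eq!(dst, src);
}
```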

View file

@ -229,9 +229,9 @@ pub fn swap<T>(x: &mut T, y: &mut T) {
let mut t: T = uninitialized();
// Perform the swap, `&mut` pointers never alias
ptr::copy_nonoverlapping(&mut t, &*x, 1);
ptr::copy_nonoverlapping(x, &*y, 1);
ptr::copy_nonoverlapping(y, &t, 1);
ptr::copy_nonoverlapping(&*x, &mut t, 1);
ptr::copy_nonoverlapping(&*y, x, 1);
ptr::copy_nonoverlapping(&t, y, 1);
// y and t now point to the same thing, but we need to completely forget `t`
// because it's no longer relevant.

View file

@ -333,7 +333,7 @@ pub fn expect(self, msg: &str) -> T {
}
}
/// Moves the value `v` out of the `Option<T>` if the content of the `Option<T>` is a `Some(v)`.
/// Moves the value `v` out of the `Option<T>` if it is `Some(v)`.
///
/// # Panics
///

View file

@ -104,11 +104,28 @@
// FIXME #19649: intrinsic docs don't render, so these have no docs :(
#[stable(feature = "rust1", since = "1.0.0")]
#[cfg(not(stage0))]
pub use intrinsics::copy_nonoverlapping;
/// dox
#[cfg(stage0)]
#[stable(feature = "rust1", since = "1.0.0")]
pub unsafe fn copy_nonoverlapping<T>(src: *const T, dst: *mut T, count: usize) {
intrinsics::copy_nonoverlapping(dst, src, count)
}
#[stable(feature = "rust1", since = "1.0.0")]
#[cfg(not(stage0))]
pub use intrinsics::copy;
/// dox
#[cfg(stage0)]
#[stable(feature = "rust1", since = "1.0.0")]
pub unsafe fn copy<T>(src: *const T, dst: *mut T, count: usize) {
intrinsics::copy(dst, src, count)
}
#[stable(feature = "rust1", since = "1.0.0")]
pub use intrinsics::write_bytes;
@ -167,12 +184,11 @@ pub unsafe fn zero_memory<T>(dst: *mut T, count: usize) {
pub unsafe fn swap<T>(x: *mut T, y: *mut T) {
// Give ourselves some scratch space to work with
let mut tmp: T = mem::uninitialized();
let t: *mut T = &mut tmp;
// Perform the swap
copy_nonoverlapping(t, &*x, 1);
copy(x, &*y, 1); // `x` and `y` may overlap
copy_nonoverlapping(y, &*t, 1);
copy_nonoverlapping(x, &mut tmp, 1);
copy(y, x, 1); // `x` and `y` may overlap
copy_nonoverlapping(&tmp, y, 1);
// y and t now point to the same thing, but we need to completely forget `tmp`
// because it's no longer relevant.
@ -208,7 +224,7 @@ pub unsafe fn replace<T>(dest: *mut T, mut src: T) -> T {
#[stable(feature = "rust1", since = "1.0.0")]
pub unsafe fn read<T>(src: *const T) -> T {
let mut tmp: T = mem::uninitialized();
copy_nonoverlapping(&mut tmp, src, 1);
copy_nonoverlapping(src, &mut tmp, 1);
tmp
}

View file

@ -1577,14 +1577,14 @@ fn set_memory(&mut self, value: u8) {
///
/// Panics if the length of `dst` is less than the length of `src`.
#[inline]
pub fn copy_memory(dst: &mut [u8], src: &[u8]) {
pub fn copy_memory(src: &[u8], dst: &mut [u8]) {
let len_src = src.len();
assert!(dst.len() >= len_src);
// `dst` is unaliasable, so we know statically it doesn't overlap
// with `src`.
unsafe {
ptr::copy_nonoverlapping(dst.as_mut_ptr(),
src.as_ptr(),
ptr::copy_nonoverlapping(src.as_ptr(),
dst.as_mut_ptr(),
len_src);
}
}
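A sketch of the new call shape; since `slice::bytes::copy_memory` was unstable (and has since been removed), this defines a stand-in with the same post-change `(src, dst)` signature:

```rust
// Stand-in mirroring the post-change signature of copy_memory:
// source first, destination second; panics if dst is too short.
fn copy_memory(src: &[u8], dst: &mut [u8]) {
    assert!(dst.len() >= src.len());
    dst[..src.len()].copy_from_slice(src);
}

fn main() {
    let src = [1u8, 2, 3];
    let mut dst = [0u8; 4];
    copy_memory(&src, &mut dst);
    assert_eq!(dst, [1, 2, 3, 0]);
}
```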

View file

@ -35,18 +35,15 @@ struct Pair {
let v0 = vec![32000u16, 32001u16, 32002u16];
let mut v1 = vec![0u16, 0u16, 0u16];
copy(v1.as_mut_ptr().offset(1),
v0.as_ptr().offset(1), 1);
copy(v0.as_ptr().offset(1), v1.as_mut_ptr().offset(1), 1);
assert!((v1[0] == 0u16 &&
v1[1] == 32001u16 &&
v1[2] == 0u16));
copy(v1.as_mut_ptr(),
v0.as_ptr().offset(2), 1);
copy(v0.as_ptr().offset(2), v1.as_mut_ptr(), 1);
assert!((v1[0] == 32002u16 &&
v1[1] == 32001u16 &&
v1[2] == 0u16));
copy(v1.as_mut_ptr().offset(2),
v0.as_ptr(), 1);
copy(v0.as_ptr(), v1.as_mut_ptr().offset(2), 1);
assert!((v1[0] == 32002u16 &&
v1[1] == 32001u16 &&
v1[2] == 32000u16));

View file

@ -449,21 +449,21 @@ pub fn doc_as_u8(d: Doc) -> u8 {
pub fn doc_as_u16(d: Doc) -> u16 {
assert_eq!(d.end, d.start + 2);
let mut b = [0; 2];
bytes::copy_memory(&mut b, &d.data[d.start..d.end]);
bytes::copy_memory(&d.data[d.start..d.end], &mut b);
unsafe { (*(b.as_ptr() as *const u16)).to_be() }
}
pub fn doc_as_u32(d: Doc) -> u32 {
assert_eq!(d.end, d.start + 4);
let mut b = [0; 4];
bytes::copy_memory(&mut b, &d.data[d.start..d.end]);
bytes::copy_memory(&d.data[d.start..d.end], &mut b);
unsafe { (*(b.as_ptr() as *const u32)).to_be() }
}
pub fn doc_as_u64(d: Doc) -> u64 {
assert_eq!(d.end, d.start + 8);
let mut b = [0; 8];
bytes::copy_memory(&mut b, &d.data[d.start..d.end]);
bytes::copy_memory(&d.data[d.start..d.end], &mut b);
unsafe { (*(b.as_ptr() as *const u64)).to_be() }
}
@ -938,7 +938,7 @@ pub fn end_tag(&mut self) -> EncodeResult {
{
let last_size_pos = last_size_pos as usize;
let data = &self.writer.get_ref()[last_size_pos+4..cur_pos as usize];
bytes::copy_memory(&mut buf, data);
bytes::copy_memory(data, &mut buf);
}
// overwrite the size and data and continue

View file

@ -62,7 +62,7 @@
fn u32_from_be_bytes(bytes: &[u8]) -> u32 {
let mut b = [0; 4];
bytes::copy_memory(&mut b, &bytes[..4]);
bytes::copy_memory(&bytes[..4], &mut b);
unsafe { (*(b.as_ptr() as *const u32)).to_be() }
}

View file

@ -885,6 +885,11 @@ fn walk_autoref(&mut self,
}
}
// When this returns true, it means that the expression *is* a
// method-call (i.e. via the operator-overload). This true result
// also implies that walk_overloaded_operator already took care of
// recursively processing the input arguments, and thus the caller
// should not do so.
fn walk_overloaded_operator(&mut self,
expr: &ast::Expr,
receiver: &ast::Expr,

View file

@ -71,6 +71,8 @@
pub use self::deref_kind::*;
pub use self::categorization::*;
use self::Aliasability::*;
use middle::check_const;
use middle::def;
use middle::region;
@ -295,23 +297,29 @@ fn node_method_origin(&self, method_call: ty::MethodCall)
impl MutabilityCategory {
pub fn from_mutbl(m: ast::Mutability) -> MutabilityCategory {
match m {
let ret = match m {
MutImmutable => McImmutable,
MutMutable => McDeclared
}
};
debug!("MutabilityCategory::{}({:?}) => {:?}",
"from_mutbl", m, ret);
ret
}
pub fn from_borrow_kind(borrow_kind: ty::BorrowKind) -> MutabilityCategory {
match borrow_kind {
let ret = match borrow_kind {
ty::ImmBorrow => McImmutable,
ty::UniqueImmBorrow => McImmutable,
ty::MutBorrow => McDeclared,
}
};
debug!("MutabilityCategory::{}({:?}) => {:?}",
"from_borrow_kind", borrow_kind, ret);
ret
}
pub fn from_pointer_kind(base_mutbl: MutabilityCategory,
ptr: PointerKind) -> MutabilityCategory {
match ptr {
fn from_pointer_kind(base_mutbl: MutabilityCategory,
ptr: PointerKind) -> MutabilityCategory {
let ret = match ptr {
Unique => {
base_mutbl.inherit()
}
@ -321,11 +329,14 @@ pub fn from_pointer_kind(base_mutbl: MutabilityCategory,
UnsafePtr(m) => {
MutabilityCategory::from_mutbl(m)
}
}
};
debug!("MutabilityCategory::{}({:?}, {:?}) => {:?}",
"from_pointer_kind", base_mutbl, ptr, ret);
ret
}
fn from_local(tcx: &ty::ctxt, id: ast::NodeId) -> MutabilityCategory {
match tcx.map.get(id) {
let ret = match tcx.map.get(id) {
ast_map::NodeLocal(p) | ast_map::NodeArg(p) => match p.node {
ast::PatIdent(bind_mode, _, _) => {
if bind_mode == ast::BindByValue(ast::MutMutable) {
@ -337,30 +348,39 @@ fn from_local(tcx: &ty::ctxt, id: ast::NodeId) -> MutabilityCategory {
_ => tcx.sess.span_bug(p.span, "expected identifier pattern")
},
_ => tcx.sess.span_bug(tcx.map.span(id), "expected identifier pattern")
}
};
debug!("MutabilityCategory::{}(tcx, id={:?}) => {:?}",
"from_local", id, ret);
ret
}
pub fn inherit(&self) -> MutabilityCategory {
match *self {
let ret = match *self {
McImmutable => McImmutable,
McDeclared => McInherited,
McInherited => McInherited,
}
};
debug!("{:?}.inherit() => {:?}", self, ret);
ret
}
pub fn is_mutable(&self) -> bool {
match *self {
let ret = match *self {
McImmutable => false,
McInherited => true,
McDeclared => true,
}
};
debug!("{:?}.is_mutable() => {:?}", self, ret);
ret
}
pub fn is_immutable(&self) -> bool {
match *self {
let ret = match *self {
McImmutable => true,
McDeclared | McInherited => false
}
};
debug!("{:?}.is_immutable() => {:?}", self, ret);
ret
}
pub fn to_user_str(&self) -> &'static str {
@ -733,7 +753,9 @@ fn cat_upvar(&self,
}
};
Ok(Rc::new(cmt_result))
let ret = Rc::new(cmt_result);
debug!("cat_upvar ret={}", ret.repr(self.tcx()));
Ok(ret)
}
fn env_deref(&self,
@ -794,14 +816,18 @@ fn env_deref(&self,
McDeclared | McInherited => { }
}
cmt_ {
let ret = cmt_ {
id: id,
span: span,
cat: cat_deref(Rc::new(cmt_result), 0, env_ptr),
mutbl: deref_mutbl,
ty: var_ty,
note: NoteClosureEnv(upvar_id)
}
};
debug!("env_deref ret {}", ret.repr(self.tcx()));
ret
}
pub fn cat_rvalue_node(&self,
@ -831,7 +857,9 @@ pub fn cat_rvalue_node(&self,
}
}
};
self.cat_rvalue(id, span, re, expr_ty)
let ret = self.cat_rvalue(id, span, re, expr_ty);
debug!("cat_rvalue_node ret {}", ret.repr(self.tcx()));
ret
}
pub fn cat_rvalue(&self,
@ -839,14 +867,16 @@ pub fn cat_rvalue(&self,
span: Span,
temp_scope: ty::Region,
expr_ty: Ty<'tcx>) -> cmt<'tcx> {
Rc::new(cmt_ {
let ret = Rc::new(cmt_ {
id:cmt_id,
span:span,
cat:cat_rvalue(temp_scope),
mutbl:McDeclared,
ty:expr_ty,
note: NoteNone
})
});
debug!("cat_rvalue ret {}", ret.repr(self.tcx()));
ret
}
pub fn cat_field<N:ast_node>(&self,
@ -855,14 +885,16 @@ pub fn cat_field<N:ast_node>(&self,
f_name: ast::Name,
f_ty: Ty<'tcx>)
-> cmt<'tcx> {
Rc::new(cmt_ {
let ret = Rc::new(cmt_ {
id: node.id(),
span: node.span(),
mutbl: base_cmt.mutbl.inherit(),
cat: cat_interior(base_cmt, InteriorField(NamedField(f_name))),
ty: f_ty,
note: NoteNone
})
});
debug!("cat_field ret {}", ret.repr(self.tcx()));
ret
}
pub fn cat_tup_field<N:ast_node>(&self,
@ -871,14 +903,16 @@ pub fn cat_tup_field<N:ast_node>(&self,
f_idx: usize,
f_ty: Ty<'tcx>)
-> cmt<'tcx> {
Rc::new(cmt_ {
let ret = Rc::new(cmt_ {
id: node.id(),
span: node.span(),
mutbl: base_cmt.mutbl.inherit(),
cat: cat_interior(base_cmt, InteriorField(PositionalField(f_idx))),
ty: f_ty,
note: NoteNone
})
});
debug!("cat_tup_field ret {}", ret.repr(self.tcx()));
ret
}
fn cat_deref<N:ast_node>(&self,
@ -913,10 +947,14 @@ fn cat_deref<N:ast_node>(&self,
};
let base_cmt_ty = base_cmt.ty;
match ty::deref(base_cmt_ty, true) {
Some(mt) => self.cat_deref_common(node, base_cmt, deref_cnt,
Some(mt) => {
let ret = self.cat_deref_common(node, base_cmt, deref_cnt,
mt.ty,
deref_context,
/* implicit: */ false),
/* implicit: */ false);
debug!("cat_deref ret {}", ret.repr(self.tcx()));
ret
}
None => {
debug!("Explicit deref of non-derefable type: {}",
base_cmt_ty.repr(self.tcx()));
@ -954,14 +992,16 @@ fn cat_deref_common<N:ast_node>(&self,
(base_cmt.mutbl.inherit(), cat_interior(base_cmt, interior))
}
};
Ok(Rc::new(cmt_ {
let ret = Rc::new(cmt_ {
id: node.id(),
span: node.span(),
cat: cat,
mutbl: m,
ty: deref_ty,
note: NoteNone
}))
});
debug!("cat_deref_common ret {}", ret.repr(self.tcx()));
Ok(ret)
}
pub fn cat_index<N:ast_node>(&self,
@ -1009,8 +1049,10 @@ pub fn cat_index<N:ast_node>(&self,
};
let m = base_cmt.mutbl.inherit();
return Ok(interior(elt, base_cmt.clone(), base_cmt.ty,
m, context, element_ty));
let ret = interior(elt, base_cmt.clone(), base_cmt.ty,
m, context, element_ty);
debug!("cat_index ret {}", ret.repr(self.tcx()));
return Ok(ret);
fn interior<'tcx, N: ast_node>(elt: &N,
of_cmt: cmt<'tcx>,
@ -1039,14 +1081,14 @@ fn deref_vec<N:ast_node>(&self,
context: InteriorOffsetKind)
-> McResult<cmt<'tcx>>
{
match try!(deref_kind(base_cmt.ty, Some(context))) {
let ret = match try!(deref_kind(base_cmt.ty, Some(context))) {
deref_ptr(ptr) => {
// for unique ptrs, we inherit mutability from the
// owning reference.
let m = MutabilityCategory::from_pointer_kind(base_cmt.mutbl, ptr);
// the deref is explicit in the resulting cmt
Ok(Rc::new(cmt_ {
Rc::new(cmt_ {
id:elt.id(),
span:elt.span(),
cat:cat_deref(base_cmt.clone(), 0, ptr),
@ -1056,13 +1098,15 @@ fn deref_vec<N:ast_node>(&self,
None => self.tcx().sess.bug("Found non-derefable type")
},
note: NoteNone
}))
})
}
deref_interior(_) => {
Ok(base_cmt)
base_cmt
}
}
};
debug!("deref_vec ret {}", ret.repr(self.tcx()));
Ok(ret)
}
/// Given a pattern P like: `[_, ..Q, _]`, where `vec_cmt` is the cmt for `P`, `slice_pat` is
@ -1112,14 +1156,16 @@ pub fn cat_imm_interior<N:ast_node>(&self,
interior_ty: Ty<'tcx>,
interior: InteriorKind)
-> cmt<'tcx> {
Rc::new(cmt_ {
let ret = Rc::new(cmt_ {
id: node.id(),
span: node.span(),
mutbl: base_cmt.mutbl.inherit(),
cat: cat_interior(base_cmt, interior),
ty: interior_ty,
note: NoteNone
})
});
debug!("cat_imm_interior ret={}", ret.repr(self.tcx()));
ret
}
pub fn cat_downcast<N:ast_node>(&self,
@ -1128,14 +1174,16 @@ pub fn cat_downcast<N:ast_node>(&self,
downcast_ty: Ty<'tcx>,
variant_did: ast::DefId)
-> cmt<'tcx> {
Rc::new(cmt_ {
let ret = Rc::new(cmt_ {
id: node.id(),
span: node.span(),
mutbl: base_cmt.mutbl.inherit(),
cat: cat_downcast(base_cmt, variant_did),
ty: downcast_ty,
note: NoteNone
})
});
debug!("cat_downcast ret={}", ret.repr(self.tcx()));
ret
}
pub fn cat_pattern<F>(&self, cmt: cmt<'tcx>, pat: &ast::Pat, mut op: F) -> McResult<()>
@ -1341,17 +1389,25 @@ fn overloaded_method_return_ty(&self,
}
}
#[derive(Copy)]
#[derive(Copy, Clone, Debug)]
pub enum InteriorSafety {
InteriorUnsafe,
InteriorSafe
}
#[derive(Copy)]
#[derive(Clone, Debug)]
pub enum Aliasability {
FreelyAliasable(AliasableReason),
NonAliasable,
ImmutableUnique(Box<Aliasability>),
}
#[derive(Copy, Clone, Debug)]
pub enum AliasableReason {
AliasableBorrowed,
AliasableClosure(ast::NodeId), // Aliasable due to capture Fn closure env
AliasableOther,
UnaliasableImmutable, // Created as needed upon seeing ImmutableUnique
AliasableStatic(InteriorSafety),
AliasableStaticMut(InteriorSafety),
}
@ -1380,9 +1436,9 @@ pub fn guarantor(&self) -> cmt<'tcx> {
}
}
/// Returns `Some(_)` if this lvalue represents a freely aliasable pointer type.
/// Returns `FreelyAliasable(_)` if this lvalue represents a freely aliasable pointer type.
pub fn freely_aliasable(&self, ctxt: &ty::ctxt<'tcx>)
-> Option<AliasableReason> {
-> Aliasability {
// Maybe non-obvious: copied upvars can only be considered
// non-aliasable in once closures, since any other kind can be
// aliased and eventually reused.
@ -1393,17 +1449,27 @@ pub fn freely_aliasable(&self, ctxt: &ty::ctxt<'tcx>)
cat_deref(ref b, _, BorrowedPtr(ty::UniqueImmBorrow, _)) |
cat_deref(ref b, _, Implicit(ty::UniqueImmBorrow, _)) |
cat_downcast(ref b, _) |
cat_deref(ref b, _, Unique) |
cat_interior(ref b, _) => {
// Aliasability depends on base cmt
b.freely_aliasable(ctxt)
}
cat_deref(ref b, _, Unique) => {
let sub = b.freely_aliasable(ctxt);
if b.mutbl.is_mutable() {
// Aliasability depends on base cmt alone
sub
} else {
// Do not allow mutation through an immutable box.
ImmutableUnique(Box::new(sub))
}
}
cat_rvalue(..) |
cat_local(..) |
cat_upvar(..) |
cat_deref(_, _, UnsafePtr(..)) => { // yes, it's aliasable, but...
None
NonAliasable
}
cat_static_item(..) => {
@ -1414,17 +1480,18 @@ pub fn freely_aliasable(&self, ctxt: &ty::ctxt<'tcx>)
};
if self.mutbl.is_mutable() {
Some(AliasableStaticMut(int_safe))
FreelyAliasable(AliasableStaticMut(int_safe))
} else {
Some(AliasableStatic(int_safe))
FreelyAliasable(AliasableStatic(int_safe))
}
}
cat_deref(ref base, _, BorrowedPtr(ty::ImmBorrow, _)) |
cat_deref(ref base, _, Implicit(ty::ImmBorrow, _)) => {
match base.cat {
cat_upvar(Upvar{ id, .. }) => Some(AliasableClosure(id.closure_expr_id)),
_ => Some(AliasableBorrowed)
cat_upvar(Upvar{ id, .. }) =>
FreelyAliasable(AliasableClosure(id.closure_expr_id)),
_ => FreelyAliasable(AliasableBorrowed)
}
}
}

View file

@ -139,15 +139,15 @@ fn input<F>(&mut self, input: &[u8], mut func: F) where
let buffer_remaining = size - self.buffer_idx;
if input.len() >= buffer_remaining {
copy_memory(
&mut self.buffer[self.buffer_idx..size],
&input[..buffer_remaining]);
&input[..buffer_remaining],
&mut self.buffer[self.buffer_idx..size]);
self.buffer_idx = 0;
func(&self.buffer);
i += buffer_remaining;
} else {
copy_memory(
&mut self.buffer[self.buffer_idx..self.buffer_idx + input.len()],
input);
input,
&mut self.buffer[self.buffer_idx..self.buffer_idx + input.len()]);
self.buffer_idx += input.len();
return;
}
@ -165,8 +165,8 @@ fn input<F>(&mut self, input: &[u8], mut func: F) where
// be empty.
let input_remaining = input.len() - i;
copy_memory(
&mut self.buffer[..input_remaining],
&input[i..]);
&input[i..],
&mut self.buffer[..input_remaining]);
self.buffer_idx += input_remaining;
}

View file

@ -943,13 +943,20 @@ fn check_for_aliasability_violation<'a, 'tcx>(this: &CheckLoanCtxt<'a, 'tcx>,
cmt: mc::cmt<'tcx>)
-> bool {
match cmt.freely_aliasable(this.tcx()) {
None => {
mc::Aliasability::NonAliasable => {
return true;
}
Some(mc::AliasableStaticMut(..)) => {
mc::Aliasability::FreelyAliasable(mc::AliasableStaticMut(..)) => {
return true;
}
Some(cause) => {
mc::Aliasability::ImmutableUnique(_) => {
this.bccx.report_aliasability_violation(
span,
MutabilityViolation,
mc::AliasableReason::UnaliasableImmutable);
return false;
}
mc::Aliasability::FreelyAliasable(cause) => {
this.bccx.report_aliasability_violation(
span,
MutabilityViolation,

View file

@ -151,10 +151,11 @@ fn mutate(&mut self,
assignee_cmt: mc::cmt<'tcx>,
mode: euv::MutateMode)
{
debug!("mutate(assignment_id={}, assignee_cmt={})",
assignment_id, assignee_cmt.repr(self.tcx()));
let opt_lp = opt_loan_path(&assignee_cmt);
debug!("mutate(assignment_id={}, assignee_cmt={}) opt_lp={:?}",
assignment_id, assignee_cmt.repr(self.tcx()), opt_lp);
match opt_loan_path(&assignee_cmt) {
match opt_lp {
Some(lp) => {
gather_moves::gather_assignment(self.bccx, &self.move_data,
assignment_id, assignment_span,
@ -181,12 +182,16 @@ fn check_aliasability<'a, 'tcx>(bccx: &BorrowckCtxt<'a, 'tcx>,
req_kind: ty::BorrowKind)
-> Result<(),()> {
match (cmt.freely_aliasable(bccx.tcx), req_kind) {
(None, _) => {
let aliasability = cmt.freely_aliasable(bccx.tcx);
debug!("check_aliasability aliasability={:?} req_kind={:?}",
aliasability, req_kind);
match (aliasability, req_kind) {
(mc::Aliasability::NonAliasable, _) => {
/* Uniquely accessible path -- OK for `&` and `&mut` */
Ok(())
}
(Some(mc::AliasableStatic(safety)), ty::ImmBorrow) => {
(mc::Aliasability::FreelyAliasable(mc::AliasableStatic(safety)), ty::ImmBorrow) => {
// Borrow of an immutable static item:
match safety {
mc::InteriorUnsafe => {
@ -202,13 +207,20 @@ fn check_aliasability<'a, 'tcx>(bccx: &BorrowckCtxt<'a, 'tcx>,
}
}
}
(Some(mc::AliasableStaticMut(..)), _) => {
(mc::Aliasability::FreelyAliasable(mc::AliasableStaticMut(..)), _) => {
// Even touching a static mut is considered unsafe. We assume the
// user knows what they're doing in these cases.
Ok(())
}
(Some(alias_cause), ty::UniqueImmBorrow) |
(Some(alias_cause), ty::MutBorrow) => {
(mc::Aliasability::ImmutableUnique(_), ty::MutBorrow) => {
bccx.report_aliasability_violation(
borrow_span,
BorrowViolation(loan_cause),
mc::AliasableReason::UnaliasableImmutable);
Err(())
}
(mc::Aliasability::FreelyAliasable(alias_cause), ty::UniqueImmBorrow) |
(mc::Aliasability::FreelyAliasable(alias_cause), ty::MutBorrow) => {
bccx.report_aliasability_violation(
borrow_span,
BorrowViolation(loan_cause),
@ -376,7 +388,8 @@ fn check_mutability<'a, 'tcx>(bccx: &BorrowckCtxt<'a, 'tcx>,
req_kind: ty::BorrowKind)
-> Result<(),()> {
//! Implements the M-* rules in README.md.
debug!("check_mutability(cause={:?} cmt={} req_kind={:?}",
cause, cmt.repr(bccx.tcx), req_kind);
match req_kind {
ty::UniqueImmBorrow | ty::ImmBorrow => {
match cmt.mutbl {

View file

@ -844,6 +844,12 @@ pub fn report_aliasability_violation(&self,
&format!("{} in an aliasable location",
prefix));
}
mc::AliasableReason::UnaliasableImmutable => {
self.tcx.sess.span_err(
span,
&format!("{} in an immutable container",
prefix));
}
mc::AliasableClosure(id) => {
self.tcx.sess.span_err(span,
&format!("{} in a captured outer \

View file

@ -39,7 +39,7 @@
use util::nodemap::{FnvHashMap, NodeSet};
use lint::{Level, Context, LintPass, LintArray, Lint};
use std::collections::BitSet;
use std::collections::{HashSet, BitSet};
use std::collections::hash_map::Entry::{Occupied, Vacant};
use std::num::SignedInt;
use std::{cmp, slice};
@ -1437,6 +1437,9 @@ pub struct MissingDoc {
/// Stack of whether #[doc(hidden)] is set
/// at each level which has lint attributes.
doc_hidden_stack: Vec<bool>,
/// Private traits or trait items that leaked through. Don't check their methods.
private_traits: HashSet<ast::NodeId>,
}
impl MissingDoc {
@ -1445,6 +1448,7 @@ pub fn new() -> MissingDoc {
struct_def_stack: vec!(),
in_variant: false,
doc_hidden_stack: vec!(false),
private_traits: HashSet::new(),
}
}
@ -1531,18 +1535,46 @@ fn check_item(&mut self, cx: &Context, it: &ast::Item) {
ast::ItemMod(..) => "a module",
ast::ItemEnum(..) => "an enum",
ast::ItemStruct(..) => "a struct",
ast::ItemTrait(..) => "a trait",
ast::ItemTrait(_, _, _, ref items) => {
// Issue #11592, traits are always considered exported, even when private.
if it.vis == ast::Visibility::Inherited {
self.private_traits.insert(it.id);
for itm in items {
self.private_traits.insert(itm.id);
}
return
}
"a trait"
},
ast::ItemTy(..) => "a type alias",
ast::ItemImpl(_, _, _, Some(ref trait_ref), _, ref impl_items) => {
// If the trait is private, add the impl items to private_traits so they don't get
// reported for missing docs.
let real_trait = ty::trait_ref_to_def_id(cx.tcx, trait_ref);
match cx.tcx.map.find(real_trait.node) {
Some(ast_map::NodeItem(item)) => if item.vis == ast::Visibility::Inherited {
for itm in impl_items {
self.private_traits.insert(itm.id);
}
},
_ => { }
}
return
},
_ => return
};
self.check_missing_docs_attrs(cx, Some(it.id), &it.attrs, it.span, desc);
}
fn check_trait_item(&mut self, cx: &Context, trait_item: &ast::TraitItem) {
if self.private_traits.contains(&trait_item.id) { return }
let desc = match trait_item.node {
ast::MethodTraitItem(..) => "a trait method",
ast::TypeTraitItem(..) => "an associated type"
};
self.check_missing_docs_attrs(cx, Some(trait_item.id),
&trait_item.attrs,
trait_item.span, desc);

View file

@ -398,8 +398,8 @@ pub fn trans_intrinsic_call<'a, 'blk, 'tcx>(mut bcx: Block<'blk, 'tcx>,
false,
false,
*substs.types.get(FnSpace, 0),
llargs[0],
llargs[1],
llargs[0],
llargs[2],
call_debug_location)
}
@ -408,8 +408,8 @@ pub fn trans_intrinsic_call<'a, 'blk, 'tcx>(mut bcx: Block<'blk, 'tcx>,
true,
false,
*substs.types.get(FnSpace, 0),
llargs[0],
llargs[1],
llargs[0],
llargs[2],
call_debug_location)
}

View file

@ -5080,7 +5080,21 @@ fn param<'a, 'tcx>(ccx: &CrateCtxt<'a, 'tcx>, n: u32) -> Ty<'tcx> {
mutbl: ast::MutImmutable
}))
}
"copy" | "copy_nonoverlapping" |
"copy" | "copy_nonoverlapping" => {
(1,
vec!(
ty::mk_ptr(tcx, ty::mt {
ty: param(ccx, 0),
mutbl: ast::MutImmutable
}),
ty::mk_ptr(tcx, ty::mt {
ty: param(ccx, 0),
mutbl: ast::MutMutable
}),
tcx.types.usize,
),
ty::mk_nil(tcx))
}
"volatile_copy_memory" | "volatile_copy_nonoverlapping_memory" => {
(1,
vec!(

View file

@ -243,8 +243,9 @@ fn drop(&mut self) {
if should_panic && out.status.success() {
panic!("test executable succeeded when it should have failed");
} else if !should_panic && !out.status.success() {
panic!("test executable failed:\n{:?}",
str::from_utf8(&out.stdout));
panic!("test executable failed:\n{}\n{}",
str::from_utf8(&out.stdout).unwrap_or(""),
str::from_utf8(&out.stderr).unwrap_or(""));
}
}
}

View file

@ -480,8 +480,8 @@ pub fn full(&self) -> &FullBucket<K, V, M> {
pub fn shift(mut self) -> Option<GapThenFull<K, V, M>> {
unsafe {
*self.gap.raw.hash = mem::replace(&mut *self.full.raw.hash, EMPTY_BUCKET);
ptr::copy_nonoverlapping(self.gap.raw.key, self.full.raw.key, 1);
ptr::copy_nonoverlapping(self.gap.raw.val, self.full.raw.val, 1);
ptr::copy_nonoverlapping(self.full.raw.key, self.gap.raw.key, 1);
ptr::copy_nonoverlapping(self.full.raw.val, self.gap.raw.val, 1);
}
let FullBucket { raw: prev_raw, idx: prev_idx, .. } = self.full;

View file

@ -177,8 +177,8 @@ fn flush_buf(&mut self) -> io::Result<()> {
if written > 0 {
// NB: would be better expressed as .remove(0..n) if it existed
unsafe {
ptr::copy(self.buf.as_mut_ptr(),
self.buf.as_ptr().offset(written as isize),
ptr::copy(self.buf.as_ptr().offset(written as isize),
self.buf.as_mut_ptr(),
len - written);
}
}

View file

@ -151,7 +151,7 @@ fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
// there (left), and what will be appended on the end (right)
let space = self.inner.len() - pos as usize;
let (left, right) = buf.split_at(cmp::min(space, buf.len()));
slice::bytes::copy_memory(&mut self.inner[(pos as usize)..], left);
slice::bytes::copy_memory(left, &mut self.inner[(pos as usize)..]);
self.inner.push_all(right);
// Bump us forward

View file

@ -149,7 +149,7 @@ impl<'a> Read for &'a [u8] {
fn read(&mut self, buf: &mut [u8]) -> io::Result<usize> {
let amt = cmp::min(buf.len(), self.len());
let (a, b) = self.split_at(amt);
slice::bytes::copy_memory(buf, a);
slice::bytes::copy_memory(a, buf);
*self = b;
Ok(amt)
}
@ -170,7 +170,7 @@ impl<'a> Write for &'a mut [u8] {
fn write(&mut self, data: &[u8]) -> io::Result<usize> {
let amt = cmp::min(data.len(), self.len());
let (a, b) = mem::replace(self, &mut []).split_at_mut(amt);
slice::bytes::copy_memory(a, &data[..amt]);
slice::bytes::copy_memory(&data[..amt], a);
*self = b;
Ok(amt)
}

View file

@ -64,6 +64,10 @@ macro_rules! panic {
///
/// Equivalent to the `println!` macro except that a newline is not printed at
/// the end of the message.
///
/// Note that stdout is frequently line-buffered by default so it may be
/// necessary to use `io::stdout().flush()` to ensure the output is emitted
/// immediately.
#[macro_export]
#[stable(feature = "rust1", since = "1.0.0")]
#[allow_internal_unstable]
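A small sketch of the buffering caveat that the added doc note describes (editorial example, not part of the diff):

```rust
use std::io::{self, Write};

fn main() -> io::Result<()> {
    // print! emits no trailing newline; with line-buffered stdout the
    // text may not appear until explicitly flushed.
    print!("processing... ");
    io::stdout().flush()?;
    println!("done");
    Ok(())
}
```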

View file

@ -118,7 +118,7 @@ fn read(&mut self, buf: &mut [u8]) -> IoResult<usize> {
let nread = {
let available = try!(self.fill_buf());
let nread = cmp::min(available.len(), buf.len());
slice::bytes::copy_memory(buf, &available[..nread]);
slice::bytes::copy_memory(&available[..nread], buf);
nread
};
self.pos += nread;
@ -225,7 +225,7 @@ fn write_all(&mut self, buf: &[u8]) -> IoResult<()> {
self.inner.as_mut().unwrap().write_all(buf)
} else {
let dst = &mut self.buf[self.pos..];
slice::bytes::copy_memory(dst, buf);
slice::bytes::copy_memory(buf, dst);
self.pos += buf.len();
Ok(())
}

View file

@ -91,7 +91,7 @@ fn read(&mut self, buf: &mut [u8]) -> IoResult<usize> {
Some(src) => {
let dst = &mut buf[num_read..];
let count = cmp::min(src.len(), dst.len());
bytes::copy_memory(dst, &src[..count]);
bytes::copy_memory(&src[..count], dst);
count
},
None => 0,

View file

@ -171,7 +171,7 @@ pub fn u64_from_be_bytes(data: &[u8], start: usize, size: usize) -> u64 {
unsafe {
let ptr = data.as_ptr().offset(start as isize);
let out = buf.as_mut_ptr();
copy_nonoverlapping(out.offset((8 - size) as isize), ptr, size);
copy_nonoverlapping(ptr, out.offset((8 - size) as isize), size);
(*(out as *const u64)).to_be()
}
}

View file

@ -168,7 +168,7 @@ fn read(&mut self, buf: &mut [u8]) -> IoResult<usize> {
let input = &self.buf[self.pos.. self.pos + write_len];
let output = &mut buf[..write_len];
assert_eq!(input.len(), output.len());
slice::bytes::copy_memory(output, input);
slice::bytes::copy_memory(input, output);
}
self.pos += write_len;
assert!(self.pos <= self.buf.len());
@ -212,7 +212,7 @@ fn read(&mut self, buf: &mut [u8]) -> IoResult<usize> {
{
let input = &self[..write_len];
let output = &mut buf[.. write_len];
slice::bytes::copy_memory(output, input);
slice::bytes::copy_memory(input, output);
}
*self = &self[write_len..];
@ -287,13 +287,13 @@ fn write_all(&mut self, src: &[u8]) -> IoResult<()> {
let src_len = src.len();
if dst_len >= src_len {
slice::bytes::copy_memory(dst, src);
slice::bytes::copy_memory(src, dst);
self.pos += src_len;
Ok(())
} else {
slice::bytes::copy_memory(dst, &src[..dst_len]);
slice::bytes::copy_memory(&src[..dst_len], dst);
self.pos += dst_len;
@ -360,7 +360,7 @@ fn read(&mut self, buf: &mut [u8]) -> IoResult<usize> {
let input = &self.buf[self.pos.. self.pos + write_len];
let output = &mut buf[..write_len];
assert_eq!(input.len(), output.len());
slice::bytes::copy_memory(output, input);
slice::bytes::copy_memory(input, output);
}
self.pos += write_len;
assert!(self.pos <= self.buf.len());

View file

@ -344,8 +344,8 @@ pub fn into_string_lossy(mut self) -> String {
Some((surrogate_pos, _)) => {
pos = surrogate_pos + 3;
slice::bytes::copy_memory(
UTF8_REPLACEMENT_CHARACTER,
&mut self.bytes[surrogate_pos .. pos],
UTF8_REPLACEMENT_CHARACTER
);
},
None => return unsafe { String::from_utf8_unchecked(self.bytes) }

View file

@ -126,10 +126,9 @@ fn make(&mut self, n: usize) -> IoResult<()> {
let mut buf = repeat(0).take(alu_len + LINE_LEN).collect::<Vec<_>>();
let alu: &[u8] = self.alu.as_bytes();
copy_memory(&mut buf, alu);
copy_memory(alu, &mut buf);
let buf_len = buf.len();
copy_memory(&mut buf[alu_len..buf_len],
&alu[..LINE_LEN]);
copy_memory(&alu[..LINE_LEN], &mut buf[alu_len..buf_len]);
let mut pos = 0;
let mut bytes;

View file

@ -181,8 +181,8 @@ fn reverse_complement(seq: &mut [u8], tables: &Tables) {
let mut i = LINE_LEN;
while i < len {
unsafe {
copy(seq.as_mut_ptr().offset((i - off + 1) as isize),
seq.as_ptr().offset((i - off) as isize), off);
copy(seq.as_ptr().offset((i - off) as isize),
seq.as_mut_ptr().offset((i - off + 1) as isize), off);
*seq.get_unchecked_mut(i - off) = b'\n';
}
i += LINE_LEN + 1;

View file

@ -9,26 +9,48 @@
// except according to those terms.
// This tests that we can't modify Box<&mut T> contents while they
// are borrowed.
// are borrowed (#14498).
//
// Also includes tests of the errors reported when the Box in question
// is immutable (#14270).
#![feature(box_syntax)]
struct A { a: isize }
struct B<'a> { a: Box<&'a mut isize> }
fn indirect_write_to_imm_box() {
let mut x: isize = 1;
let y: Box<_> = box &mut x;
let p = &y;
***p = 2; //~ ERROR cannot assign to data in an immutable container
drop(p);
}
fn borrow_in_var_from_var() {
let mut x: isize = 1;
let mut y: Box<_> = box &mut x;
let p = &y;
let q = &***p;
**y = 2; //~ ERROR cannot assign to `**y` because it is borrowed
drop(p);
drop(q);
}
fn borrow_in_var_from_var_via_imm_box() {
let mut x: isize = 1;
let y: Box<_> = box &mut x;
let p = &y;
let q = &***p;
**y = 2; //~ ERROR cannot assign to `**y` because it is borrowed
//~^ ERROR cannot assign to data in an immutable container
drop(p);
drop(q);
}
fn borrow_in_var_from_field() {
let mut x = A { a: 1 };
let y: Box<_> = box &mut x.a;
let mut y: Box<_> = box &mut x.a;
let p = &y;
let q = &***p;
**y = 2; //~ ERROR cannot assign to `**y` because it is borrowed
@ -36,19 +58,41 @@ fn borrow_in_var_from_field() {
drop(q);
}
fn borrow_in_var_from_field_via_imm_box() {
let mut x = A { a: 1 };
let y: Box<_> = box &mut x.a;
let p = &y;
let q = &***p;
**y = 2; //~ ERROR cannot assign to `**y` because it is borrowed
//~^ ERROR cannot assign to data in an immutable container
drop(p);
drop(q);
}
fn borrow_in_field_from_var() {
let mut x: isize = 1;
let mut y = B { a: box &mut x };
let p = &y.a;
let q = &***p;
**y.a = 2; //~ ERROR cannot assign to `**y.a` because it is borrowed
drop(p);
drop(q);
}
fn borrow_in_field_from_var_via_imm_box() {
let mut x: isize = 1;
let y = B { a: box &mut x };
let p = &y.a;
let q = &***p;
**y.a = 2; //~ ERROR cannot assign to `**y.a` because it is borrowed
//~^ ERROR cannot assign to data in an immutable container
drop(p);
drop(q);
}
fn borrow_in_field_from_field() {
let mut x = A { a: 1 };
let y = B { a: box &mut x.a };
let mut y = B { a: box &mut x.a };
let p = &y.a;
let q = &***p;
**y.a = 2; //~ ERROR cannot assign to `**y.a` because it is borrowed
@ -56,9 +100,25 @@ fn borrow_in_field_from_field() {
drop(q);
}
fn main() {
borrow_in_var_from_var();
borrow_in_var_from_field();
borrow_in_field_from_var();
borrow_in_field_from_field();
fn borrow_in_field_from_field_via_imm_box() {
let mut x = A { a: 1 };
let y = B { a: box &mut x.a };
let p = &y.a;
let q = &***p;
**y.a = 2; //~ ERROR cannot assign to `**y.a` because it is borrowed
//~^ ERROR cannot assign to data in an immutable container
drop(p);
drop(q);
}
fn main() {
indirect_write_to_imm_box();
borrow_in_var_from_var();
borrow_in_var_from_var_via_imm_box();
borrow_in_var_from_field();
borrow_in_var_from_field_via_imm_box();
borrow_in_field_from_var();
borrow_in_field_from_var_via_imm_box();
borrow_in_field_from_field();
borrow_in_field_from_field_via_imm_box();
}

View file

@ -0,0 +1,20 @@
// Copyright 2015 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
//! Ensure the private trait Bar isn't complained about.
#![deny(missing_docs)]
mod foo {
trait Bar { fn bar(&self) { } }
impl Bar for i8 { fn bar(&self) { } }
}
fn main() { }

View file

@ -26,7 +26,7 @@ trait MyWriter {
impl<'a> MyWriter for &'a mut [u8] {
fn my_write(&mut self, buf: &[u8]) -> IoResult<()> {
slice::bytes::copy_memory(*self, buf);
slice::bytes::copy_memory(buf, *self);
let write_len = buf.len();
unsafe {