Auto merge of #26173 - pnkfelix:fsk-trans-nzmove-take3, r=nikomatsakis

Add dropflag hints (stack-local booleans) for unfragmented paths in trans.  Part of #5016.

Added code to maintain these hints at runtime, and to conditionalize drop-filling and calls to destructors.

In this early stage of my incremental implementation strategy, we are using hints, so we are always free to leave out a flag for a path -- then we just pass `None` as the dropflag hint in the corresponding schedule cleanup call. But, once a path has a hint, we must at least maintain it: i.e. if the hint exists, we must ensure it is never set to "moved" if the data in question might actually have been initialized. It remains sound to conservatively set the hint to "initialized" as long as the true drop-flag embedded in the value itself is up-to-date.
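To make the hint semantics concrete, here is a hedged, source-level sketch (ordinary Rust, not compiler code; the function and names are hypothetical) of the situation a hint byte describes:

```rust
// `s` is an unfragmented local: it is only ever moved or dropped whole,
// so a single stack-local hint byte can describe its state.
fn example(cond: bool) {
    let s = String::from("payload"); // hint starts as "initialized"
    if cond {
        drop(s); // whole-value move: the hint may be set to "moved" here
    }
    // Scope end: consult the hint. If it says "moved", skip the
    // destructor; otherwise run it (or, conservatively, fall back to the
    // drop-flag embedded in the value itself).
}

fn main() {
    example(true);
    example(false);
}
```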

I hope the commit series has been broken up to be readable; most of the commits in the series should build (though I did not always check this).

----

Oh, I think this technically qualifies as a:
[breaking-change]
because it removes drop-filling in some cases where one could previously observe it. That should only affect `unsafe` code; no safe code should be able to inspect whether the drop-fill was present or not. For an example of code that needed to change to account for this, see commit a81c24ae0216ab47df59acd724f8a33124fb6d97 (a unit test of the `move_val_init` intrinsic). I have not encountered code in the wild that needed to be updated for this change, since it only skips the drop-filling on *moved* values, not on dropped ones, and so even types that use `unsafe_no_drop_flag` should be unchanged by this particular PR. (Their time will come later.)
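For contrast, this is roughly the shape of the unsafe probe that such code relied on, mirroring the old test deleted at the bottom of this diff (the live assertion would also need the unstable `filling_drop` feature):

```rust
use std::mem;

fn main() {
    let x: Box<u32> = Box::new(1);
    // Keep a raw view of the box's pointer word.
    let p: *const usize = unsafe { mem::transmute(&x) };
    let _y = x; // move out of `x`
    // Before this PR, the moved-from slot held the drop-fill pattern and
    // the following held; with non-zeroing moves it is no longer
    // guaranteed:
    // unsafe { assert_eq!(*p, mem::POST_DROP_USIZE); }
    let _ = p;
}
```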
Merged by bors as commit 661a5ad38e on 2015-07-28 15:15:00 +00:00.
19 changed files with 832 additions and 164 deletions

----

@ -957,6 +957,44 @@ pub struct ctxt<'tcx> {
/// Maps a cast expression to its kind. This is keyed on the
/// *from* expression of the cast, not the cast itself.
pub cast_kinds: RefCell<NodeMap<cast::CastKind>>,
/// Maps Fn items to a collection of fragment infos.
///
/// The main goal is to identify data (each of which may be moved
/// or assigned) whose subparts are not moved nor assigned
/// (i.e. their state is *unfragmented*) and corresponding ast
/// nodes where the path to that data is moved or assigned.
///
/// In the long term, unfragmented values will have their
/// destructor entirely driven by a single stack-local drop-flag,
/// and their parents, the collections of the unfragmented values
/// (or more simply, "fragmented values"), are mapped to the
/// corresponding collections of stack-local drop-flags.
///
/// (However, in the short term that is not the case; e.g. some
/// unfragmented paths still need to be zeroed, namely when they
/// reference parent data from an outer scope that was not
/// entirely moved, and therefore that needs to be zeroed so that
/// we do not get double-drop when we hit the end of the parent
/// scope.)
///
/// Also: currently the table solely holds keys for node-ids of
/// unfragmented values (see `FragmentInfo` enum definition), but
/// longer-term we will need to also store mappings from
/// fragmented data to the set of unfragmented pieces that
/// constitute it.
pub fragment_infos: RefCell<DefIdMap<Vec<FragmentInfo>>>,
}
/// Describes the fragment-state associated with a NodeId.
///
/// Currently only unfragmented paths have entries in the table,
/// but longer-term this enum is expected to expand to also
/// include data for fragmented paths.
#[derive(Copy, Clone, Debug)]
pub enum FragmentInfo {
Moved { var: NodeId, move_expr: NodeId },
Assigned { var: NodeId, assign_expr: NodeId, assignee_id: NodeId },
}
impl<'tcx> ctxt<'tcx> {
@ -3498,6 +3536,7 @@ pub fn create_and_enter<F, R>(s: Session,
const_qualif_map: RefCell::new(NodeMap()),
custom_coerce_unsized_kinds: RefCell::new(DefIdMap()),
cast_kinds: RefCell::new(NodeMap()),
fragment_infos: RefCell::new(DefIdMap()),
}, f)
}
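As a hedged illustration of which paths get entries at this stage: only whole-variable moves and assignments qualify, while moves out of subpaths (the `LpExtend` case skipped in `fragments.rs` below) fragment their parent and record nothing:

```rust
// Hypothetical source, annotated with the expected table entries.
fn main() {
    let u = String::from("whole");
    let _a = u; // `u` moved as a whole: FragmentInfo::Moved entry

    let mut v = String::new();
    v = String::from("x"); // whole-variable assignment: FragmentInfo::Assigned
    drop(v);

    let t = (String::new(), String::new());
    let _b = t.0; // move out of a subpath fragments `t`: no entry for now
}
```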

----

@ -594,6 +594,8 @@ fn parse_passes(slot: &mut Passes, v: Option<&str>) -> bool {
"Force drop flag checks on or off"),
trace_macros: bool = (false, parse_bool,
"For every macro invocation, print its name and arguments"),
disable_nonzeroing_move_hints: bool = (false, parse_bool,
"Force nonzeroing move optimization off"),
}
pub fn default_lib_output() -> CrateType {
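(For reference: debugging options are spelled with hyphens on the command line, so this should be reachable as `-Z disable-nonzeroing-move-hints`.)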

----

@ -272,6 +272,9 @@ pub fn unstable_options(&self) -> bool {
pub fn print_enum_sizes(&self) -> bool {
self.opts.debugging_opts.print_enum_sizes
}
pub fn nonzeroing_move_hints(&self) -> bool {
!self.opts.debugging_opts.disable_nonzeroing_move_hints
}
pub fn sysroot<'a>(&'a self) -> &'a Path {
match self.opts.maybe_sysroot {
Some (ref sysroot) => sysroot,

----

@ -86,7 +86,7 @@ fn helper<'tcx>(loan_path: &Rc<LoanPath<'tcx>>) -> Option<Rc<LoanPath<'tcx>>> {
struct CheckLoanCtxt<'a, 'tcx: 'a> {
bccx: &'a BorrowckCtxt<'a, 'tcx>,
dfcx_loans: &'a LoanDataFlow<'a, 'tcx>,
move_data: move_data::FlowedMoveData<'a, 'tcx>,
move_data: &'a move_data::FlowedMoveData<'a, 'tcx>,
all_loans: &'a [Loan<'tcx>],
param_env: &'a ty::ParameterEnvironment<'a, 'tcx>,
}
@ -191,7 +191,7 @@ fn decl_without_init(&mut self, _id: ast::NodeId, _span: Span) { }
pub fn check_loans<'a, 'b, 'c, 'tcx>(bccx: &BorrowckCtxt<'a, 'tcx>,
dfcx_loans: &LoanDataFlow<'b, 'tcx>,
move_data: move_data::FlowedMoveData<'c, 'tcx>,
move_data: &move_data::FlowedMoveData<'c, 'tcx>,
all_loans: &[Loan<'tcx>],
fn_id: ast::NodeId,
decl: &ast::FnDecl,

----

@ -15,7 +15,7 @@
use self::Fragment::*;
use borrowck::InteriorKind::{InteriorField, InteriorElement};
use borrowck::LoanPath;
use borrowck::{self, LoanPath};
use borrowck::LoanPathKind::{LpVar, LpUpvar, LpDowncast, LpExtend};
use borrowck::LoanPathElem::{LpDeref, LpInterior};
use borrowck::move_data::InvalidMovePathIndex;
@ -59,6 +59,84 @@ fn loan_path_user_string(&self, move_data: &MoveData) -> String {
}
}
pub fn build_unfragmented_map(this: &mut borrowck::BorrowckCtxt,
move_data: &MoveData,
id: ast::NodeId) {
let fr = &move_data.fragments.borrow();
// For now, don't care about other kinds of fragments; the precise
// classification of all paths for non-zeroing *drop* needs them,
// but the loose approximation used by non-zeroing moves does not.
let moved_leaf_paths = fr.moved_leaf_paths();
let assigned_leaf_paths = fr.assigned_leaf_paths();
let mut fragment_infos = Vec::with_capacity(moved_leaf_paths.len());
let find_var_id = |move_path_index: MovePathIndex| -> Option<ast::NodeId> {
let lp = move_data.path_loan_path(move_path_index);
match lp.kind {
LpVar(var_id) => Some(var_id),
LpUpvar(ty::UpvarId { var_id, closure_expr_id }) => {
// The `var_id` is unique *relative to* the current function.
// (Check that we are indeed talking about the same function.)
assert_eq!(id, closure_expr_id);
Some(var_id)
}
LpDowncast(..) | LpExtend(..) => {
// This simple implementation of non-zeroing move does
// not attempt to deal with tracking substructure
// accurately in the general case.
None
}
}
};
let moves = move_data.moves.borrow();
for &move_path_index in moved_leaf_paths {
let var_id = match find_var_id(move_path_index) {
None => continue,
Some(var_id) => var_id,
};
move_data.each_applicable_move(move_path_index, |move_index| {
let info = ty::FragmentInfo::Moved {
var: var_id,
move_expr: moves[move_index.get()].id,
};
debug!("fragment_infos push({:?} \
due to move_path_index: {} move_index: {}",
info, move_path_index.get(), move_index.get());
fragment_infos.push(info);
true
});
}
for &move_path_index in assigned_leaf_paths {
let var_id = match find_var_id(move_path_index) {
None => continue,
Some(var_id) => var_id,
};
let var_assigns = move_data.var_assignments.borrow();
for var_assign in var_assigns.iter()
.filter(|&assign| assign.path == move_path_index)
{
let info = ty::FragmentInfo::Assigned {
var: var_id,
assign_expr: var_assign.id,
assignee_id: var_assign.assignee_id,
};
debug!("fragment_infos push({:?} due to var_assignment", info);
fragment_infos.push(info);
}
}
let mut fraginfo_map = this.tcx.fragment_infos.borrow_mut();
let fn_did = ast::DefId { krate: ast::LOCAL_CRATE, node: id };
let prev = fraginfo_map.insert(fn_did, fragment_infos);
assert!(prev.is_none());
}
pub struct FragmentSets {
/// During move_data construction, `moved_leaf_paths` tracks paths
/// that have been used directly by being moved out of. When
@ -103,6 +181,14 @@ pub fn new() -> FragmentSets {
}
}
pub fn moved_leaf_paths(&self) -> &[MovePathIndex] {
&self.moved_leaf_paths
}
pub fn assigned_leaf_paths(&self) -> &[MovePathIndex] {
&self.assigned_leaf_paths
}
pub fn add_move(&mut self, path_index: MovePathIndex) {
self.moved_leaf_paths.push(path_index);
}

----

@ -166,10 +166,13 @@ fn borrowck_fn(this: &mut BorrowckCtxt,
this.tcx,
sp,
id);
move_data::fragments::build_unfragmented_map(this,
&flowed_moves.move_data,
id);
check_loans::check_loans(this,
&loan_dfcx,
flowed_moves,
&flowed_moves,
&all_loans[..],
id,
decl,

----

@ -159,6 +159,9 @@ pub struct Assignment {
/// span of node where assignment occurs
pub span: Span,
/// id for l-value expression on lhs of assignment
pub assignee_id: ast::NodeId,
}
#[derive(Copy, Clone)]
@ -412,6 +415,7 @@ pub fn add_assignment(&self,
path: path_index,
id: assign_id,
span: span,
assignee_id: assignee_id,
};
if self.is_var_path(path_index) {

----

@ -205,7 +205,7 @@
use trans::build::{Not, Store, Sub, add_comment};
use trans::build;
use trans::callee;
use trans::cleanup::{self, CleanupMethods};
use trans::cleanup::{self, CleanupMethods, DropHintMethods};
use trans::common::*;
use trans::consts;
use trans::datum::*;
@ -330,11 +330,35 @@ pub enum OptResult<'blk, 'tcx: 'blk> {
#[derive(Clone, Copy, PartialEq)]
pub enum TransBindingMode {
/// By-value binding for a copy type: copies from matched data
/// into a fresh LLVM alloca.
TrByCopy(/* llbinding */ ValueRef),
TrByMove,
/// By-value binding for a non-copy type where we copy into a
/// fresh LLVM alloca; this most accurately reflects the language
/// semantics (e.g. it properly handles overwrites of the matched
/// input), but potentially injects an unwanted copy.
TrByMoveIntoCopy(/* llbinding */ ValueRef),
/// Binding a non-copy type by reference under the hood; this is
/// a codegen optimization to avoid unnecessary memory traffic.
TrByMoveRef,
/// By-ref binding exposed in the original source input.
TrByRef,
}
impl TransBindingMode {
/// If binding by making a fresh copy, returns the alloca that it
/// will copy into; otherwise returns None.
fn alloca_if_copy(&self) -> Option<ValueRef> {
match *self {
TrByCopy(llbinding) | TrByMoveIntoCopy(llbinding) => Some(llbinding),
TrByMoveRef | TrByRef => None,
}
}
}
/// Information about a pattern binding:
/// - `llmatch` is a pointer to a stack slot. The stack slot contains a
/// pointer into the value being matched. Hence, llmatch has type `T**`
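A hedged sketch of which bindings should land in which mode, assuming the classification done in `create_bindings_map` further down:

```rust
fn main() {
    let pair = (String::from("s"), 7u32);
    match pair {
        (s, n) => {
            // `n: u32` is Copy                       -> TrByCopy
            // `s: String` moves, is never reassigned -> TrByMoveRef
            let _ = (s, n);
        }
    }
    // A by-value binding of a non-Copy type that is reassigned inside
    // the arm gets the copy-out treatment       -> TrByMoveIntoCopy
    match String::from("t") {
        mut t => { t = String::from("u"); let _ = t; }
    }
    // By-ref bindings keep the pointer itself   -> TrByRef
    match 5i32 { ref _r => {} }
}
```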
@ -393,16 +417,45 @@ fn has_nested_bindings(m: &[Match], col: usize) -> bool {
return false;
}
// As noted in `fn match_datum`, we should eventually pass around a
// `Datum<Lvalue>` for the `val`; but until we get to that point, this
// `MatchInput` struct will serve -- it has everything `Datum<Lvalue>`
// does except for the type field.
#[derive(Copy, Clone)]
pub struct MatchInput { val: ValueRef, lval: Lvalue }
impl<'tcx> Datum<'tcx, Lvalue> {
pub fn match_input(&self) -> MatchInput {
MatchInput {
val: self.val,
lval: self.kind,
}
}
}
impl MatchInput {
fn from_val(val: ValueRef) -> MatchInput {
MatchInput {
val: val,
lval: Lvalue::new("MatchInput::from_val"),
}
}
fn to_datum<'tcx>(self, ty: Ty<'tcx>) -> Datum<'tcx, Lvalue> {
Datum::new(self.val, ty, self.lval)
}
}
fn expand_nested_bindings<'a, 'p, 'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
m: &[Match<'a, 'p, 'blk, 'tcx>],
col: usize,
val: ValueRef)
val: MatchInput)
-> Vec<Match<'a, 'p, 'blk, 'tcx>> {
debug!("expand_nested_bindings(bcx={}, m={:?}, col={}, val={})",
bcx.to_str(),
m,
col,
bcx.val_to_string(val));
bcx.val_to_string(val.val));
let _indenter = indenter();
m.iter().map(|br| {
@ -411,7 +464,7 @@ fn expand_nested_bindings<'a, 'p, 'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
loop {
pat = match pat.node {
ast::PatIdent(_, ref path, Some(ref inner)) => {
bound_ptrs.push((path.node, val));
bound_ptrs.push((path.node, val.val));
&**inner
},
_ => break
@ -433,7 +486,7 @@ fn enter_match<'a, 'b, 'p, 'blk, 'tcx, F>(bcx: Block<'blk, 'tcx>,
dm: &DefMap,
m: &[Match<'a, 'p, 'blk, 'tcx>],
col: usize,
val: ValueRef,
val: MatchInput,
mut e: F)
-> Vec<Match<'a, 'p, 'blk, 'tcx>> where
F: FnMut(&[&'p ast::Pat]) -> Option<Vec<&'p ast::Pat>>,
@ -442,7 +495,7 @@ fn enter_match<'a, 'b, 'p, 'blk, 'tcx, F>(bcx: Block<'blk, 'tcx>,
bcx.to_str(),
m,
col,
bcx.val_to_string(val));
bcx.val_to_string(val.val));
let _indenter = indenter();
m.iter().filter_map(|br| {
@ -452,7 +505,7 @@ fn enter_match<'a, 'b, 'p, 'blk, 'tcx, F>(bcx: Block<'blk, 'tcx>,
match this.node {
ast::PatIdent(_, ref path, None) => {
if pat_is_binding(dm, &*this) {
bound_ptrs.push((path.node, val));
bound_ptrs.push((path.node, val.val));
}
}
ast::PatVec(ref before, Some(ref slice), ref after) => {
@ -479,13 +532,13 @@ fn enter_default<'a, 'p, 'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
dm: &DefMap,
m: &[Match<'a, 'p, 'blk, 'tcx>],
col: usize,
val: ValueRef)
val: MatchInput)
-> Vec<Match<'a, 'p, 'blk, 'tcx>> {
debug!("enter_default(bcx={}, m={:?}, col={}, val={})",
bcx.to_str(),
m,
col,
bcx.val_to_string(val));
bcx.val_to_string(val.val));
let _indenter = indenter();
// Collect all of the matches that can match against anything.
@ -536,14 +589,14 @@ fn enter_opt<'a, 'p, 'blk, 'tcx>(
opt: &Opt,
col: usize,
variant_size: usize,
val: ValueRef)
val: MatchInput)
-> Vec<Match<'a, 'p, 'blk, 'tcx>> {
debug!("enter_opt(bcx={}, m={:?}, opt={:?}, col={}, val={})",
bcx.to_str(),
m,
*opt,
col,
bcx.val_to_string(val));
bcx.val_to_string(val.val));
let _indenter = indenter();
let ctor = match opt {
@ -639,11 +692,11 @@ struct ExtractedBlock<'blk, 'tcx: 'blk> {
fn extract_variant_args<'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
repr: &adt::Repr<'tcx>,
disr_val: ty::Disr,
val: ValueRef)
val: MatchInput)
-> ExtractedBlock<'blk, 'tcx> {
let _icx = push_ctxt("match::extract_variant_args");
let args = (0..adt::num_args(repr, disr_val)).map(|i| {
adt::trans_field_ptr(bcx, repr, val, disr_val, i)
adt::trans_field_ptr(bcx, repr, val.val, disr_val, i)
}).collect();
ExtractedBlock { vals: args, bcx: bcx }
@ -651,13 +704,13 @@ fn extract_variant_args<'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
/// Helper for converting from the ValueRef that we pass around in the match code, which is always
/// an lvalue, into a Datum. Eventually we should just pass around a Datum and be done with it.
fn match_datum<'tcx>(val: ValueRef, left_ty: Ty<'tcx>) -> Datum<'tcx, Lvalue> {
Datum::new(val, left_ty, Lvalue)
fn match_datum<'tcx>(val: MatchInput, left_ty: Ty<'tcx>) -> Datum<'tcx, Lvalue> {
val.to_datum(left_ty)
}
fn bind_subslice_pat(bcx: Block,
pat_id: ast::NodeId,
val: ValueRef,
val: MatchInput,
offset_left: usize,
offset_right: usize) -> ValueRef {
let _icx = push_ctxt("match::bind_subslice_pat");
@ -687,7 +740,7 @@ fn extract_vec_elems<'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
left_ty: Ty<'tcx>,
before: usize,
after: usize,
val: ValueRef)
val: MatchInput)
-> ExtractedBlock<'blk, 'tcx> {
let _icx = push_ctxt("match::extract_vec_elems");
let vec_datum = match_datum(val, left_ty);
@ -888,32 +941,79 @@ fn insert_lllocals<'blk, 'tcx>(mut bcx: Block<'blk, 'tcx>,
cs: Option<cleanup::ScopeId>)
-> Block<'blk, 'tcx> {
for (&ident, &binding_info) in bindings_map {
let llval = match binding_info.trmode {
let (llval, aliases_other_state) = match binding_info.trmode {
// By value mut binding for a copy type: load from the ptr
// into the matched value and copy to our alloca
TrByCopy(llbinding) => {
TrByCopy(llbinding) |
TrByMoveIntoCopy(llbinding) => {
let llval = Load(bcx, binding_info.llmatch);
let datum = Datum::new(llval, binding_info.ty, Lvalue);
let lvalue = match binding_info.trmode {
TrByCopy(..) =>
Lvalue::new("_match::insert_lllocals"),
TrByMoveIntoCopy(..) => {
// match_input moves from the input into a
// separate stack slot.
//
// E.g. consider moving the value `D(A)` out
// of the tuple `(D(A), D(B))` and into the
// local variable `x` via the pattern `(x,_)`,
// leaving the remainder of the tuple `(_,
// D(B))` still to be dropped in the future.
//
// Thus, here we must zero the place that
// we are moving *from*, because we do not yet
// track drop flags for a fragmented parent
// match input expression.
//
// Longer term we will be able to map the move
// into `(x, _)` up to the parent path that
// owns the whole tuple, and mark the
// corresponding stack-local drop-flag
// tracking the first component of the tuple.
let hint_kind = HintKind::ZeroAndMaintain;
Lvalue::new_with_hint("_match::insert_lllocals (match_input)",
bcx, binding_info.id, hint_kind)
}
_ => unreachable!(),
};
let datum = Datum::new(llval, binding_info.ty, lvalue);
call_lifetime_start(bcx, llbinding);
bcx = datum.store_to(bcx, llbinding);
if let Some(cs) = cs {
bcx.fcx.schedule_lifetime_end(cs, llbinding);
}
llbinding
(llbinding, false)
},
// By value move bindings: load from the ptr into the matched value
TrByMove => Load(bcx, binding_info.llmatch),
TrByMoveRef => (Load(bcx, binding_info.llmatch), true),
// By ref binding: use the ptr into the matched value
TrByRef => binding_info.llmatch
TrByRef => (binding_info.llmatch, true),
};
let datum = Datum::new(llval, binding_info.ty, Lvalue);
// A local that aliases some other state must be zeroed, since
// the other state (e.g. some parent data that we matched
// into) will still have its subcomponents (such as this
// local) destructed at the end of the parent's scope. Longer
// term, we will properly map such parents to the set of
// unique drop flags for its fragments.
let hint_kind = if aliases_other_state {
HintKind::ZeroAndMaintain
} else {
HintKind::DontZeroJustUse
};
let lvalue = Lvalue::new_with_hint("_match::insert_lllocals (local)",
bcx,
binding_info.id,
hint_kind);
let datum = Datum::new(llval, binding_info.ty, lvalue);
if let Some(cs) = cs {
let opt_datum = lvalue.dropflag_hint(bcx);
bcx.fcx.schedule_lifetime_end(cs, binding_info.llmatch);
bcx.fcx.schedule_drop_and_fill_mem(cs, llval, binding_info.ty);
bcx.fcx.schedule_drop_and_fill_mem(cs, llval, binding_info.ty, opt_datum);
}
debug!("binding {} to {}", binding_info.id, bcx.val_to_string(llval));
@ -927,7 +1027,7 @@ fn compile_guard<'a, 'p, 'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
guard_expr: &ast::Expr,
data: &ArmData<'p, 'blk, 'tcx>,
m: &[Match<'a, 'p, 'blk, 'tcx>],
vals: &[ValueRef],
vals: &[MatchInput],
chk: &FailureHandler,
has_genuine_default: bool)
-> Block<'blk, 'tcx> {
@ -935,7 +1035,7 @@ fn compile_guard<'a, 'p, 'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
bcx.to_str(),
guard_expr,
m,
vals.iter().map(|v| bcx.val_to_string(*v)).collect::<Vec<_>>().join(", "));
vals.iter().map(|v| bcx.val_to_string(v.val)).collect::<Vec<_>>().join(", "));
let _indenter = indenter();
let mut bcx = insert_lllocals(bcx, &data.bindings_map, None);
@ -944,8 +1044,8 @@ fn compile_guard<'a, 'p, 'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
let val = val.to_llbool(bcx);
for (_, &binding_info) in &data.bindings_map {
if let TrByCopy(llbinding) = binding_info.trmode {
call_lifetime_end(bcx, llbinding);
if let Some(llbinding) = binding_info.trmode.alloca_if_copy() {
call_lifetime_end(bcx, llbinding)
}
}
@ -974,13 +1074,13 @@ fn compile_guard<'a, 'p, 'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
fn compile_submatch<'a, 'p, 'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
m: &[Match<'a, 'p, 'blk, 'tcx>],
vals: &[ValueRef],
vals: &[MatchInput],
chk: &FailureHandler,
has_genuine_default: bool) {
debug!("compile_submatch(bcx={}, m={:?}, vals=[{}])",
bcx.to_str(),
m,
vals.iter().map(|v| bcx.val_to_string(*v)).collect::<Vec<_>>().join(", "));
vals.iter().map(|v| bcx.val_to_string(v.val)).collect::<Vec<_>>().join(", "));
let _indenter = indenter();
let _icx = push_ctxt("match::compile_submatch");
let mut bcx = bcx;
@ -1040,10 +1140,10 @@ fn compile_submatch<'a, 'p, 'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
fn compile_submatch_continue<'a, 'p, 'blk, 'tcx>(mut bcx: Block<'blk, 'tcx>,
m: &[Match<'a, 'p, 'blk, 'tcx>],
vals: &[ValueRef],
vals: &[MatchInput],
chk: &FailureHandler,
col: usize,
val: ValueRef,
val: MatchInput,
has_genuine_default: bool) {
let fcx = bcx.fcx;
let tcx = bcx.tcx();
@ -1073,13 +1173,13 @@ fn compile_submatch_continue<'a, 'p, 'blk, 'tcx>(mut bcx: Block<'blk, 'tcx>,
let repr = adt::represent_type(bcx.ccx(), left_ty);
let arg_count = adt::num_args(&*repr, 0);
let (arg_count, struct_val) = if type_is_sized(bcx.tcx(), left_ty) {
(arg_count, val)
(arg_count, val.val)
} else {
// For an unsized ADT (i.e. DST struct), we need to treat
// the last field specially: instead of simply passing a
// ValueRef pointing to that field, as with all the others,
// we skip it and instead construct a 'fat ptr' below.
(arg_count - 1, Load(bcx, expr::get_dataptr(bcx, val)))
(arg_count - 1, Load(bcx, expr::get_dataptr(bcx, val.val)))
};
let mut field_vals: Vec<ValueRef> = (0..arg_count).map(|ix|
adt::trans_field_ptr(bcx, &*repr, struct_val, 0, ix)
@ -1098,7 +1198,7 @@ fn compile_submatch_continue<'a, 'p, 'blk, 'tcx>(mut bcx: Block<'blk, 'tcx>,
let llty = type_of::type_of(bcx.ccx(), unsized_ty);
let scratch = alloca_no_lifetime(bcx, llty, "__struct_field_fat_ptr");
let data = adt::trans_field_ptr(bcx, &*repr, struct_val, 0, arg_count);
let len = Load(bcx, expr::get_len(bcx, val));
let len = Load(bcx, expr::get_len(bcx, val.val));
Store(bcx, data, expr::get_dataptr(bcx, scratch));
Store(bcx, len, expr::get_len(bcx, scratch));
field_vals.push(scratch);
@ -1107,7 +1207,7 @@ fn compile_submatch_continue<'a, 'p, 'blk, 'tcx>(mut bcx: Block<'blk, 'tcx>,
}
Some(field_vals)
} else if any_uniq_pat(m, col) || any_region_pat(m, col) {
Some(vec!(Load(bcx, val)))
Some(vec!(Load(bcx, val.val)))
} else {
match left_ty.sty {
ty::TyArray(_, n) => {
@ -1124,7 +1224,9 @@ fn compile_submatch_continue<'a, 'p, 'blk, 'tcx>(mut bcx: Block<'blk, 'tcx>,
&check_match::Single, col,
field_vals.len())
);
let mut vals = field_vals;
let mut vals: Vec<_> = field_vals.into_iter()
.map(|v|MatchInput::from_val(v))
.collect();
vals.push_all(&vals_left);
compile_submatch(bcx, &pats, &vals, chk, has_genuine_default);
return;
@ -1136,12 +1238,12 @@ fn compile_submatch_continue<'a, 'p, 'blk, 'tcx>(mut bcx: Block<'blk, 'tcx>,
let opts = get_branches(bcx, m, col);
debug!("options={:?}", opts);
let mut kind = NoBranch;
let mut test_val = val;
let mut test_val = val.val;
debug!("test_val={}", bcx.val_to_string(test_val));
if !opts.is_empty() {
match opts[0] {
ConstantValue(..) | ConstantRange(..) => {
test_val = load_if_immediate(bcx, val, left_ty);
test_val = load_if_immediate(bcx, val.val, left_ty);
kind = if left_ty.is_integral() {
Switch
} else {
@ -1149,12 +1251,12 @@ fn compile_submatch_continue<'a, 'p, 'blk, 'tcx>(mut bcx: Block<'blk, 'tcx>,
};
}
Variant(_, ref repr, _, _) => {
let (the_kind, val_opt) = adt::trans_switch(bcx, &**repr, val);
let (the_kind, val_opt) = adt::trans_switch(bcx, &**repr, val.val);
kind = the_kind;
if let Some(tval) = val_opt { test_val = tval; }
}
SliceLengthEqual(..) | SliceLengthGreaterOrEqual(..) => {
let (_, len) = tvec::get_base_and_len(bcx, val, left_ty);
let (_, len) = tvec::get_base_and_len(bcx, val.val, left_ty);
test_val = len;
kind = Switch;
}
@ -1278,7 +1380,9 @@ fn compile_submatch_continue<'a, 'p, 'blk, 'tcx>(mut bcx: Block<'blk, 'tcx>,
ConstantValue(..) | ConstantRange(..) => ()
}
let opt_ms = enter_opt(opt_cx, pat_id, dm, m, opt, col, size, val);
let mut opt_vals = unpacked;
let mut opt_vals: Vec<_> = unpacked.into_iter()
.map(|v|MatchInput::from_val(v))
.collect();
opt_vals.push_all(&vals_left[..]);
compile_submatch(opt_cx,
&opt_ms[..],
@ -1415,25 +1519,30 @@ fn create_bindings_map<'blk, 'tcx>(bcx: Block<'blk, 'tcx>, pat: &ast::Pat,
let llmatch;
let trmode;
let moves_by_default = variable_ty.moves_by_default(&param_env, span);
match bm {
ast::BindByValue(_)
if !variable_ty.moves_by_default(&param_env, span) || reassigned =>
ast::BindByValue(_) if !moves_by_default || reassigned =>
{
llmatch = alloca_no_lifetime(bcx,
llvariable_ty.ptr_to(),
"__llmatch");
trmode = TrByCopy(alloca_no_lifetime(bcx,
llvariable_ty,
&bcx.name(name)));
llvariable_ty.ptr_to(),
"__llmatch");
let llcopy = alloca_no_lifetime(bcx,
llvariable_ty,
&bcx.name(name));
trmode = if moves_by_default {
TrByMoveIntoCopy(llcopy)
} else {
TrByCopy(llcopy)
};
}
ast::BindByValue(_) => {
// in this case, the final type of the variable will be T,
// but during matching we need to store a *T as explained
// above
llmatch = alloca_no_lifetime(bcx,
llvariable_ty.ptr_to(),
&bcx.name(name));
trmode = TrByMove;
llvariable_ty.ptr_to(),
&bcx.name(name));
trmode = TrByMoveRef;
}
ast::BindByRef(_) => {
llmatch = alloca_no_lifetime(bcx,
@ -1517,7 +1626,7 @@ fn trans_match_inner<'blk, 'tcx>(scope_cx: Block<'blk, 'tcx>,
&& arm.pats.last().unwrap().node == ast::PatWild(ast::PatWildSingle)
});
compile_submatch(bcx, &matches[..], &[discr_datum.val], &chk, has_default);
compile_submatch(bcx, &matches[..], &[discr_datum.match_input()], &chk, has_default);
let mut arm_cxs = Vec::new();
for arm_data in &arm_datas {
@ -1556,7 +1665,26 @@ fn create_dummy_locals<'blk, 'tcx>(mut bcx: Block<'blk, 'tcx>,
let scope = cleanup::var_scope(tcx, p_id);
bcx = mk_binding_alloca(
bcx, p_id, path1.node.name, scope, (),
|(), bcx, llval, ty| { drop_done_fill_mem(bcx, llval, ty); bcx });
"_match::store_local::create_dummy_locals",
|(), bcx, Datum { val: llval, ty, kind }| {
// Dummy-locals start out uninitialized, so set their
// drop-flag hints (if any) to "moved."
if let Some(hint) = kind.dropflag_hint(bcx) {
let moved_hint = adt::DTOR_MOVED_HINT as usize;
debug!("store moved_hint={} for hint={:?}, uninitialized dummy",
moved_hint, hint);
Store(bcx, C_u8(bcx.fcx.ccx, moved_hint), hint.to_value().value());
}
if kind.drop_flag_info.must_zero() {
// if no drop-flag hint, or the hint requires
// we maintain the embedded drop-flag, then
// mark embedded drop-flag(s) as moved
// (i.e. "already dropped").
drop_done_fill_mem(bcx, llval, ty);
}
bcx
});
});
bcx
}
@ -1578,8 +1706,9 @@ fn create_dummy_locals<'blk, 'tcx>(mut bcx: Block<'blk, 'tcx>,
let var_scope = cleanup::var_scope(tcx, local.id);
return mk_binding_alloca(
bcx, pat.id, ident.name, var_scope, (),
|(), bcx, v, _| expr::trans_into(bcx, &**init_expr,
expr::SaveIn(v)));
"_match::store_local",
|(), bcx, Datum { val: v, .. }| expr::trans_into(bcx, &**init_expr,
expr::SaveIn(v)));
}
None => {}
@ -1592,7 +1721,7 @@ fn create_dummy_locals<'blk, 'tcx>(mut bcx: Block<'blk, 'tcx>,
add_comment(bcx, "creating zeroable ref llval");
}
let var_scope = cleanup::var_scope(tcx, local.id);
bind_irrefutable_pat(bcx, pat, init_datum.val, var_scope)
bind_irrefutable_pat(bcx, pat, init_datum.match_input(), var_scope)
}
None => {
create_dummy_locals(bcx, pat)
@ -1605,24 +1734,26 @@ fn mk_binding_alloca<'blk, 'tcx, A, F>(bcx: Block<'blk, 'tcx>,
name: ast::Name,
cleanup_scope: cleanup::ScopeId,
arg: A,
caller_name: &'static str,
populate: F)
-> Block<'blk, 'tcx> where
F: FnOnce(A, Block<'blk, 'tcx>, ValueRef, Ty<'tcx>) -> Block<'blk, 'tcx>,
F: FnOnce(A, Block<'blk, 'tcx>, Datum<'tcx, Lvalue>) -> Block<'blk, 'tcx>,
{
let var_ty = node_id_type(bcx, p_id);
// Allocate memory on stack for the binding.
let llval = alloc_ty(bcx, var_ty, &bcx.name(name));
let lvalue = Lvalue::new_with_hint(caller_name, bcx, p_id, HintKind::DontZeroJustUse);
let datum = Datum::new(llval, var_ty, lvalue);
// Subtle: be sure that we *populate* the memory *before*
// we schedule the cleanup.
let bcx = populate(arg, bcx, llval, var_ty);
let bcx = populate(arg, bcx, datum);
bcx.fcx.schedule_lifetime_end(cleanup_scope, llval);
bcx.fcx.schedule_drop_mem(cleanup_scope, llval, var_ty);
bcx.fcx.schedule_drop_mem(cleanup_scope, llval, var_ty, lvalue.dropflag_hint(bcx));
// Now that memory is initialized and has cleanup scheduled,
// create the datum and insert into the local variable map.
let datum = Datum::new(llval, var_ty, Lvalue);
// insert datum into the local variable map.
bcx.fcx.lllocals.borrow_mut().insert(p_id, datum);
bcx
}
@ -1641,7 +1772,7 @@ fn mk_binding_alloca<'blk, 'tcx, A, F>(bcx: Block<'blk, 'tcx>,
/// - val: the value being matched -- must be an lvalue (by ref, with cleanup)
pub fn bind_irrefutable_pat<'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
pat: &ast::Pat,
val: ValueRef,
val: MatchInput,
cleanup_scope: cleanup::ScopeId)
-> Block<'blk, 'tcx> {
debug!("bind_irrefutable_pat(bcx={}, pat={:?})",
@ -1667,12 +1798,13 @@ pub fn bind_irrefutable_pat<'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
// map.
bcx = mk_binding_alloca(
bcx, pat.id, path1.node.name, cleanup_scope, (),
|(), bcx, llval, ty| {
"_match::bind_irrefutable_pat",
|(), bcx, Datum { val: llval, ty, kind: _ }| {
match pat_binding_mode {
ast::BindByValue(_) => {
// By value binding: move the value that `val`
// points at into the binding's stack slot.
let d = Datum::new(val, ty, Lvalue);
let d = val.to_datum(ty);
d.store_to(bcx, llval)
}
@ -1680,10 +1812,10 @@ pub fn bind_irrefutable_pat<'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
// By ref binding: the value of the variable
// is the pointer `val` itself or fat pointer referenced by `val`
if type_is_fat_ptr(bcx.tcx(), ty) {
expr::copy_fat_ptr(bcx, val, llval);
expr::copy_fat_ptr(bcx, val.val, llval);
}
else {
Store(bcx, val, llval);
Store(bcx, val.val, llval);
}
bcx
@ -1708,8 +1840,11 @@ pub fn bind_irrefutable_pat<'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
val);
if let Some(ref sub_pat) = *sub_pats {
for (i, &argval) in args.vals.iter().enumerate() {
bcx = bind_irrefutable_pat(bcx, &*sub_pat[i],
argval, cleanup_scope);
bcx = bind_irrefutable_pat(
bcx,
&*sub_pat[i],
MatchInput::from_val(argval),
cleanup_scope);
}
}
}
@ -1723,9 +1858,12 @@ pub fn bind_irrefutable_pat<'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
let repr = adt::represent_node(bcx, pat.id);
for (i, elem) in elems.iter().enumerate() {
let fldptr = adt::trans_field_ptr(bcx, &*repr,
val, 0, i);
bcx = bind_irrefutable_pat(bcx, &**elem,
fldptr, cleanup_scope);
val.val, 0, i);
bcx = bind_irrefutable_pat(
bcx,
&**elem,
MatchInput::from_val(fldptr),
cleanup_scope);
}
}
}
@ -1742,26 +1880,42 @@ pub fn bind_irrefutable_pat<'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
expr::with_field_tys(tcx, pat_ty, Some(pat.id), |discr, field_tys| {
for f in fields {
let ix = tcx.field_idx_strict(f.node.ident.name, field_tys);
let fldptr = adt::trans_field_ptr(bcx, &*pat_repr, val,
discr, ix);
bcx = bind_irrefutable_pat(bcx, &*f.node.pat, fldptr, cleanup_scope);
let fldptr = adt::trans_field_ptr(
bcx,
&*pat_repr,
val.val,
discr,
ix);
bcx = bind_irrefutable_pat(bcx,
&*f.node.pat,
MatchInput::from_val(fldptr),
cleanup_scope);
}
})
}
ast::PatTup(ref elems) => {
let repr = adt::represent_node(bcx, pat.id);
for (i, elem) in elems.iter().enumerate() {
let fldptr = adt::trans_field_ptr(bcx, &*repr, val, 0, i);
bcx = bind_irrefutable_pat(bcx, &**elem, fldptr, cleanup_scope);
let fldptr = adt::trans_field_ptr(bcx, &*repr, val.val, 0, i);
bcx = bind_irrefutable_pat(
bcx,
&**elem,
MatchInput::from_val(fldptr),
cleanup_scope);
}
}
ast::PatBox(ref inner) => {
let llbox = Load(bcx, val);
bcx = bind_irrefutable_pat(bcx, &**inner, llbox, cleanup_scope);
let llbox = Load(bcx, val.val);
bcx = bind_irrefutable_pat(
bcx, &**inner, MatchInput::from_val(llbox), cleanup_scope);
}
ast::PatRegion(ref inner, _) => {
let loaded_val = Load(bcx, val);
bcx = bind_irrefutable_pat(bcx, &**inner, loaded_val, cleanup_scope);
let loaded_val = Load(bcx, val.val);
bcx = bind_irrefutable_pat(
bcx,
&**inner,
MatchInput::from_val(loaded_val),
cleanup_scope);
}
ast::PatVec(ref before, ref slice, ref after) => {
let pat_ty = node_id_type(bcx, pat.id);
@ -1780,9 +1934,13 @@ pub fn bind_irrefutable_pat<'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
.chain(slice.iter())
.chain(after.iter())
.zip(extracted.vals)
.fold(bcx, |bcx, (inner, elem)|
bind_irrefutable_pat(bcx, &**inner, elem, cleanup_scope)
);
.fold(bcx, |bcx, (inner, elem)| {
bind_irrefutable_pat(
bcx,
&**inner,
MatchInput::from_val(elem),
cleanup_scope)
});
}
ast::PatMac(..) => {
bcx.sess().span_bug(pat.span, "unexpanded macro");

----

@ -163,6 +163,20 @@ macro_rules! repeat_u8_as_u64 {
(repeat_u8_as_u32!($name) as u64)) }
}
/// `DTOR_NEEDED_HINT` is a stack-local hint that just means
/// "we do not know whether the destructor has run or not; check the
/// drop-flag embedded in the value itself."
pub const DTOR_NEEDED_HINT: u8 = 0x3d;
/// `DTOR_MOVED_HINT` is a stack-local hint that means "this value has
/// definitely been moved; you do not need to run its destructor."
///
/// (However, for now, such values may still end up being explicitly
/// zeroed by the generated code; this is the distinction between
/// `datum::DropFlagInfo::ZeroAndMaintain` versus
/// `datum::DropFlagInfo::DontZeroJustUse`.)
pub const DTOR_MOVED_HINT: u8 = 0x2d;
pub const DTOR_NEEDED: u8 = 0xd4;
pub const DTOR_NEEDED_U32: u32 = repeat_u8_as_u32!(DTOR_NEEDED);
pub const DTOR_NEEDED_U64: u64 = repeat_u8_as_u64!(DTOR_NEEDED);
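For reference, a hedged sketch of the check these two hint values enable, written as ordinary Rust rather than the LLVM IR that trans emits (compare `drop_ty_core` in `glue.rs` below):

```rust
const DTOR_NEEDED_HINT: u8 = 0x3d;
const DTOR_MOVED_HINT: u8 = 0x2d;

// If the hint byte says "definitely moved", skip the destructor entirely;
// otherwise call into the drop glue, which may still consult the
// drop-flag embedded in the value itself.
unsafe fn maybe_drop<T>(hint: u8, slot: *mut T) {
    if hint != DTOR_MOVED_HINT {
        std::ptr::drop_in_place(slot);
    }
}

struct Noisy;
impl Drop for Noisy {
    fn drop(&mut self) { println!("destructor ran"); }
}

fn main() {
    let mut a = Noisy;
    let mut b = Noisy;
    unsafe {
        maybe_drop(DTOR_MOVED_HINT, &mut a as *mut Noisy);  // skipped
        maybe_drop(DTOR_NEEDED_HINT, &mut b as *mut Noisy); // runs once
    }
    std::mem::forget(a); // both already handled above; suppress the
    std::mem::forget(b); // automatic scope-end drops
}
```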
@ -1083,7 +1097,7 @@ pub fn trans_drop_flag_ptr<'blk, 'tcx>(mut bcx: Block<'blk, 'tcx>,
));
bcx = fold_variants(bcx, r, val, |variant_cx, st, value| {
let ptr = struct_field_ptr(variant_cx, st, value, (st.fields.len() - 1), false);
datum::Datum::new(ptr, ptr_ty, datum::Lvalue)
datum::Datum::new(ptr, ptr_ty, datum::Lvalue::new("adt::trans_drop_flag_ptr"))
.store_to(variant_cx, scratch.val)
});
let expr_datum = scratch.to_expr_datum();

----

@ -51,12 +51,11 @@
use trans::build::*;
use trans::builder::{Builder, noname};
use trans::callee;
use trans::cleanup::CleanupMethods;
use trans::cleanup;
use trans::cleanup::{self, CleanupMethods, DropHint};
use trans::closure;
use trans::common::{Block, C_bool, C_bytes_in_context, C_i32, C_int, C_integral};
use trans::common::{C_null, C_struct_in_context, C_u64, C_u8, C_undef};
use trans::common::{CrateContext, FunctionContext};
use trans::common::{CrateContext, DropFlagHintsMap, FunctionContext};
use trans::common::{Result, NodeIdAndSpan};
use trans::common::{node_id_type, return_type_is_void};
use trans::common::{type_is_immediate, type_is_zero_size, val_ty};
@ -88,7 +87,7 @@
use libc::c_uint;
use std::ffi::{CStr, CString};
use std::cell::{Cell, RefCell};
use std::collections::HashSet;
use std::collections::{HashMap, HashSet};
use std::mem;
use std::str;
use std::{i8, i16, i32, i64};
@ -1235,6 +1234,7 @@ pub fn new_fn_ctxt<'a, 'tcx>(ccx: &'a CrateContext<'a, 'tcx>,
caller_expects_out_pointer: uses_outptr,
lllocals: RefCell::new(NodeMap()),
llupvars: RefCell::new(NodeMap()),
lldropflag_hints: RefCell::new(DropFlagHintsMap::new()),
id: id,
param_substs: param_substs,
span: sp,
@ -1283,6 +1283,54 @@ pub fn init_function<'a, 'tcx>(fcx: &'a FunctionContext<'a, 'tcx>,
}
}
// Create the drop-flag hints for every unfragmented path in the function.
let tcx = fcx.ccx.tcx();
let fn_did = ast::DefId { krate: ast::LOCAL_CRATE, node: fcx.id };
let mut hints = fcx.lldropflag_hints.borrow_mut();
let fragment_infos = tcx.fragment_infos.borrow();
// Intern table for drop-flag hint datums.
let mut seen = HashMap::new();
if let Some(fragment_infos) = fragment_infos.get(&fn_did) {
for &info in fragment_infos {
let make_datum = |id| {
let init_val = C_u8(fcx.ccx, adt::DTOR_NEEDED_HINT as usize);
let llname = &format!("dropflag_hint_{}", id);
debug!("adding hint {}", llname);
let ptr = alloc_ty(entry_bcx, tcx.types.u8, llname);
Store(entry_bcx, init_val, ptr);
let ty = tcx.mk_ptr(ty::TypeAndMut { ty: tcx.types.u8, mutbl: ast::MutMutable });
let flag = datum::Lvalue::new_dropflag_hint("base::init_function");
let datum = datum::Datum::new(ptr, ty, flag);
datum
};
let (var, datum) = match info {
ty::FragmentInfo::Moved { var, .. } |
ty::FragmentInfo::Assigned { var, .. } => {
let datum = seen.get(&var).cloned().unwrap_or_else(|| {
let datum = make_datum(var);
seen.insert(var, datum.clone());
datum
});
(var, datum)
}
};
match info {
ty::FragmentInfo::Moved { move_expr: expr_id, .. } => {
debug!("FragmentInfo::Moved insert drop hint for {}", expr_id);
hints.insert(expr_id, DropHint::new(var, datum));
}
ty::FragmentInfo::Assigned { assignee_id: expr_id, .. } => {
debug!("FragmentInfo::Assigned insert drop hint for {}", expr_id);
hints.insert(expr_id, DropHint::new(var, datum));
}
}
}
}
entry_bcx
}
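Worth noting: the `seen` table above interns one hint alloca per source variable, so every move and assignment site recorded for that variable ends up reading and writing the same stack-local byte.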
@ -1335,9 +1383,9 @@ pub fn create_datums_for_fn_args<'a, 'tcx>(mut bcx: Block<'a, 'tcx>,
let llarg = get_param(fcx.llfn, idx);
idx += 1;
bcx.fcx.schedule_lifetime_end(arg_scope_id, llarg);
bcx.fcx.schedule_drop_mem(arg_scope_id, llarg, arg_ty);
bcx.fcx.schedule_drop_mem(arg_scope_id, llarg, arg_ty, None);
datum::Datum::new(llarg, arg_ty, datum::Lvalue)
datum::Datum::new(llarg, arg_ty, datum::Lvalue::new("create_datum_for_fn_args"))
} else if common::type_is_fat_ptr(bcx.tcx(), arg_ty) {
let data = get_param(fcx.llfn, idx);
let extra = get_param(fcx.llfn, idx + 1);
@ -1408,7 +1456,7 @@ pub fn create_datums_for_fn_args<'a, 'tcx>(mut bcx: Block<'a, 'tcx>,
} else {
// General path. Copy out the values that are used in the
// pattern.
_match::bind_irrefutable_pat(bcx, pat, arg_datum.val, arg_scope_id)
_match::bind_irrefutable_pat(bcx, pat, arg_datum.match_input(), arg_scope_id)
};
debuginfo::create_argument_metadata(bcx, &args[i]);
}

----

@ -124,6 +124,7 @@
use trans::build;
use trans::common;
use trans::common::{Block, FunctionContext, NodeIdAndSpan};
use trans::datum::{Datum, Lvalue};
use trans::debuginfo::{DebugLoc, ToDebugLoc};
use trans::glue;
use middle::region;
@ -212,6 +213,29 @@ pub enum ScopeId {
CustomScope(CustomScopeIndex)
}
#[derive(Copy, Clone, Debug)]
pub struct DropHint<K>(pub ast::NodeId, pub K);
pub type DropHintDatum<'tcx> = DropHint<Datum<'tcx, Lvalue>>;
pub type DropHintValue = DropHint<ValueRef>;
impl<K> DropHint<K> {
pub fn new(id: ast::NodeId, k: K) -> DropHint<K> { DropHint(id, k) }
}
impl DropHint<ValueRef> {
pub fn value(&self) -> ValueRef { self.1 }
}
pub trait DropHintMethods {
type ValueKind;
fn to_value(&self) -> Self::ValueKind;
}
impl<'tcx> DropHintMethods for DropHintDatum<'tcx> {
type ValueKind = DropHintValue;
fn to_value(&self) -> DropHintValue { DropHint(self.0, self.1.val) }
}
impl<'blk, 'tcx> CleanupMethods<'blk, 'tcx> for FunctionContext<'blk, 'tcx> {
/// Invoked when we start to trans the code contained within a new cleanup scope.
fn push_ast_cleanup_scope(&self, debug_loc: NodeIdAndSpan) {
@ -382,14 +406,17 @@ fn schedule_lifetime_end(&self,
fn schedule_drop_mem(&self,
cleanup_scope: ScopeId,
val: ValueRef,
ty: Ty<'tcx>) {
ty: Ty<'tcx>,
drop_hint: Option<DropHintDatum<'tcx>>) {
if !self.type_needs_drop(ty) { return; }
let drop_hint = drop_hint.map(|hint|hint.to_value());
let drop = box DropValue {
is_immediate: false,
val: val,
ty: ty,
fill_on_drop: false,
skip_dtor: false,
drop_hint: drop_hint,
};
debug!("schedule_drop_mem({:?}, val={}, ty={:?}) fill_on_drop={} skip_dtor={}",
@ -406,23 +433,28 @@ fn schedule_drop_mem(&self,
fn schedule_drop_and_fill_mem(&self,
cleanup_scope: ScopeId,
val: ValueRef,
ty: Ty<'tcx>) {
ty: Ty<'tcx>,
drop_hint: Option<DropHintDatum<'tcx>>) {
if !self.type_needs_drop(ty) { return; }
let drop_hint = drop_hint.map(|datum|datum.to_value());
let drop = box DropValue {
is_immediate: false,
val: val,
ty: ty,
fill_on_drop: true,
skip_dtor: false,
drop_hint: drop_hint,
};
debug!("schedule_drop_and_fill_mem({:?}, val={}, ty={:?}, fill_on_drop={}, skip_dtor={})",
debug!("schedule_drop_and_fill_mem({:?}, val={}, ty={:?},
fill_on_drop={}, skip_dtor={}, has_drop_hint={})",
cleanup_scope,
self.ccx.tn().val_to_string(val),
ty,
drop.fill_on_drop,
drop.skip_dtor);
drop.skip_dtor,
drop_hint.is_some());
self.schedule_clean(cleanup_scope, drop as CleanupObj);
}
@ -446,6 +478,7 @@ fn schedule_drop_adt_contents(&self,
ty: ty,
fill_on_drop: false,
skip_dtor: true,
drop_hint: None,
};
debug!("schedule_drop_adt_contents({:?}, val={}, ty={:?}) fill_on_drop={} skip_dtor={}",
@ -465,13 +498,14 @@ fn schedule_drop_immediate(&self,
ty: Ty<'tcx>) {
if !self.type_needs_drop(ty) { return; }
let drop = box DropValue {
let drop = Box::new(DropValue {
is_immediate: true,
val: val,
ty: ty,
fill_on_drop: false,
skip_dtor: false,
};
drop_hint: None,
});
debug!("schedule_drop_immediate({:?}, val={}, ty={:?}) fill_on_drop={} skip_dtor={}",
cleanup_scope,
@ -976,6 +1010,7 @@ pub struct DropValue<'tcx> {
ty: Ty<'tcx>,
fill_on_drop: bool,
skip_dtor: bool,
drop_hint: Option<DropHintValue>,
}
impl<'tcx> Cleanup<'tcx> for DropValue<'tcx> {
@ -1000,7 +1035,7 @@ fn trans<'blk>(&self,
let bcx = if self.is_immediate {
glue::drop_ty_immediate(bcx, self.val, self.ty, debug_loc, self.skip_dtor)
} else {
glue::drop_ty_core(bcx, self.val, self.ty, debug_loc, self.skip_dtor)
glue::drop_ty_core(bcx, self.val, self.ty, debug_loc, self.skip_dtor, self.drop_hint)
};
if self.fill_on_drop {
base::drop_done_fill_mem(bcx, self.val, self.ty);
@ -1128,11 +1163,13 @@ fn schedule_lifetime_end(&self,
fn schedule_drop_mem(&self,
cleanup_scope: ScopeId,
val: ValueRef,
ty: Ty<'tcx>);
ty: Ty<'tcx>,
drop_hint: Option<DropHintDatum<'tcx>>);
fn schedule_drop_and_fill_mem(&self,
cleanup_scope: ScopeId,
val: ValueRef,
ty: Ty<'tcx>);
ty: Ty<'tcx>,
drop_hint: Option<DropHintDatum<'tcx>>);
fn schedule_drop_adt_contents(&self,
cleanup_scope: ScopeId,
val: ValueRef,

----

@ -82,9 +82,11 @@ fn load_closure_environment<'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
bcx.fcx.llupvars.borrow_mut().insert(def_id.node, upvar_ptr);
if kind == ty::FnOnceClosureKind && !captured_by_ref {
let hint = bcx.fcx.lldropflag_hints.borrow().hint_datum(upvar_id.var_id);
bcx.fcx.schedule_drop_mem(arg_scope_id,
upvar_ptr,
node_id_type(bcx, def_id.node))
node_id_type(bcx, def_id.node),
hint)
}
if let Some(env_pointer_alloca) = env_pointer_alloca {

----

@ -299,6 +299,33 @@ pub fn validate_substs(substs: &Substs) {
type RvalueDatum<'tcx> = datum::Datum<'tcx, datum::Rvalue>;
pub type LvalueDatum<'tcx> = datum::Datum<'tcx, datum::Lvalue>;
#[derive(Clone, Debug)]
struct HintEntry<'tcx> {
// The datum for the dropflag-hint itself; note that many
// source-level Lvalues will be associated with the same
// dropflag-hint datum.
datum: cleanup::DropHintDatum<'tcx>,
}
pub struct DropFlagHintsMap<'tcx> {
// Maps NodeId for expressions that read/write unfragmented state
// to that state's drop-flag "hint." (A stack-local hint
// indicates either (1.) that it is certain that no drop is
// needed, or (2.) that the inline drop-flag must be consulted.)
node_map: NodeMap<HintEntry<'tcx>>,
}
impl<'tcx> DropFlagHintsMap<'tcx> {
pub fn new() -> DropFlagHintsMap<'tcx> { DropFlagHintsMap { node_map: NodeMap() } }
pub fn has_hint(&self, id: ast::NodeId) -> bool { self.node_map.contains_key(&id) }
pub fn insert(&mut self, id: ast::NodeId, datum: cleanup::DropHintDatum<'tcx>) {
self.node_map.insert(id, HintEntry { datum: datum });
}
pub fn hint_datum(&self, id: ast::NodeId) -> Option<cleanup::DropHintDatum<'tcx>> {
self.node_map.get(&id).map(|t|t.datum)
}
}
// Function context. Every LLVM function we create will have one of
// these.
pub struct FunctionContext<'a, 'tcx: 'a> {
@ -349,6 +376,10 @@ pub struct FunctionContext<'a, 'tcx: 'a> {
// Same as above, but for closure upvars
pub llupvars: RefCell<NodeMap<ValueRef>>,
// Carries info about drop-flags for local bindings (longer term,
// paths) for the code being compiled.
pub lldropflag_hints: RefCell<DropFlagHintsMap<'tcx>>,
// The NodeId of the function, or -1 if it doesn't correspond to
// a user-defined function.
pub id: ast::NodeId,

----

@ -93,11 +93,12 @@
pub use self::RvalueMode::*;
use llvm::ValueRef;
use trans::adt;
use trans::base::*;
use trans::build::Load;
use trans::build::{Load, Store};
use trans::common::*;
use trans::cleanup;
use trans::cleanup::CleanupMethods;
use trans::cleanup::{CleanupMethods, DropHintDatum, DropHintMethods};
use trans::expr;
use trans::tvec;
use trans::type_of;
@ -111,7 +112,7 @@
/// describes where the value is stored, what Rust type the value has,
/// whether it is addressed by reference, and so forth. Please refer
/// the section on datums in `README.md` for more details.
#[derive(Clone, Copy)]
#[derive(Clone, Copy, Debug)]
pub struct Datum<'tcx, K> {
/// The llvm value. This is either a pointer to the Rust value or
/// the value itself, depending on `kind` below.
@ -138,17 +139,125 @@ pub enum Expr {
/// `val` is a pointer into memory for which a cleanup is scheduled
/// (and thus has type *T). If you move out of an Lvalue, you must
/// zero out the memory (FIXME #5016).
LvalueExpr,
LvalueExpr(Lvalue),
}
#[derive(Clone, Copy, Debug)]
pub struct Lvalue;
#[derive(Copy, Clone, PartialEq, Eq, Debug)]
pub enum DropFlagInfo {
DontZeroJustUse(ast::NodeId),
ZeroAndMaintain(ast::NodeId),
None,
}
impl DropFlagInfo {
pub fn must_zero(&self) -> bool {
match *self {
DropFlagInfo::DontZeroJustUse(..) => false,
DropFlagInfo::ZeroAndMaintain(..) => true,
DropFlagInfo::None => true,
}
}
pub fn hint_datum<'blk, 'tcx>(&self, bcx: Block<'blk, 'tcx>)
-> Option<DropHintDatum<'tcx>> {
let id = match *self {
DropFlagInfo::None => return None,
DropFlagInfo::DontZeroJustUse(id) |
DropFlagInfo::ZeroAndMaintain(id) => id,
};
let hints = bcx.fcx.lldropflag_hints.borrow();
let retval = hints.hint_datum(id);
assert!(retval.is_some(), "An id (={}) means there must be a hint", id);
retval
}
}
// FIXME: having Lvalue be `Copy` is a bit of a footgun, since clients
// may not realize that subparts of an Lvalue can have a subset of
// drop-flags associated with them, while this as written will just
// memcpy the drop_flag_info. But, it is an easier way to get `_match`
// off the ground to just let this be `Copy` for now.
#[derive(Copy, Clone, Debug)]
pub struct Lvalue {
pub source: &'static str,
pub drop_flag_info: DropFlagInfo
}
#[derive(Debug)]
pub struct Rvalue {
pub mode: RvalueMode
}
/// Classifies what action we should take when a value is moved away
/// with respect to its drop-flag.
///
/// Long term there will be no need for this classification: all flags
/// (which will be stored on the stack frame) will have the same
/// interpretation and maintenance code associated with them.
#[derive(Copy, Clone, Debug)]
pub enum HintKind {
/// When the value is moved, set the drop-flag to "dropped"
/// (i.e. "zero the flag", even when the specific representation
/// is not literally 0) and when it is reinitialized, set the
/// drop-flag back to "initialized".
ZeroAndMaintain,
/// When the value is moved, do not set the drop-flag to "dropped"
/// However, continue to read the drop-flag in deciding whether to
/// drop. (In essence, the path/fragment in question will never
/// need to be dropped at the points where it is moved away by
/// this code, but we are defending against the scenario where
/// some *other* code could move away (or drop) the value and thus
/// zero the flag, which is why we will still read from it.)
DontZeroJustUse,
}
impl Lvalue { // Constructors for various Lvalues.
pub fn new<'blk, 'tcx>(source: &'static str) -> Lvalue {
debug!("Lvalue at {} no drop flag info", source);
Lvalue { source: source, drop_flag_info: DropFlagInfo::None }
}
pub fn new_dropflag_hint(source: &'static str) -> Lvalue {
debug!("Lvalue at {} is drop flag hint", source);
Lvalue { source: source, drop_flag_info: DropFlagInfo::None }
}
pub fn new_with_hint<'blk, 'tcx>(source: &'static str,
bcx: Block<'blk, 'tcx>,
id: ast::NodeId,
k: HintKind) -> Lvalue {
let (opt_id, info) = {
let hint_available = Lvalue::has_dropflag_hint(bcx, id) &&
bcx.tcx().sess.nonzeroing_move_hints();
let info = match k {
HintKind::ZeroAndMaintain if hint_available =>
DropFlagInfo::ZeroAndMaintain(id),
HintKind::DontZeroJustUse if hint_available =>
DropFlagInfo::DontZeroJustUse(id),
_ =>
DropFlagInfo::None,
};
(Some(id), info)
};
debug!("Lvalue at {}, id: {:?} info: {:?}", source, opt_id, info);
Lvalue { source: source, drop_flag_info: info }
}
} // end Lvalue constructor methods.
impl Lvalue {
fn has_dropflag_hint<'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
id: ast::NodeId) -> bool {
let hints = bcx.fcx.lldropflag_hints.borrow();
hints.has_hint(id)
}
pub fn dropflag_hint<'blk, 'tcx>(&self, bcx: Block<'blk, 'tcx>)
-> Option<DropHintDatum<'tcx>> {
self.drop_flag_info.hint_datum(bcx)
}
}
impl Rvalue {
pub fn new(m: RvalueMode) -> Rvalue {
Rvalue { mode: m }
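A hedged source-level reading of when each `HintKind` applies (mirroring the tuple example in the `_match` comments above):

```rust
fn main() {
    // DontZeroJustUse: `a` is an unfragmented local whose moves are fully
    // tracked by its own hint, so the moved-from slot need not be
    // drop-filled when `a` moves away here.
    let a = String::from("a");
    let _a2 = a;

    // ZeroAndMaintain: `x` moves out of a parent tuple that trans still
    // drops wholesale at scope end, so the moved-from slot must also be
    // drop-filled to prevent a double drop of the first component.
    let pair = (String::from("p0"), String::from("p1"));
    match pair {
        (x, _) => drop(x),
    }
}
```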
@ -199,9 +308,9 @@ pub fn lvalue_scratch_datum<'blk, 'tcx, A, F>(bcx: Block<'blk, 'tcx>,
// Subtle. Populate the scratch memory *before* scheduling cleanup.
let bcx = populate(arg, bcx, scratch);
bcx.fcx.schedule_lifetime_end(scope, scratch);
bcx.fcx.schedule_drop_mem(scope, scratch, ty);
bcx.fcx.schedule_drop_mem(scope, scratch, ty, None);
DatumBlock::new(bcx, Datum::new(scratch, ty, Lvalue))
DatumBlock::new(bcx, Datum::new(scratch, ty, Lvalue::new("datum::lvalue_scratch_datum")))
}
/// Allocates temporary space on the stack using alloca() and returns a by-ref Datum pointing to
@ -238,7 +347,7 @@ fn add_rvalue_clean<'a, 'tcx>(mode: RvalueMode,
ByValue => { fcx.schedule_drop_immediate(scope, val, ty); }
ByRef => {
fcx.schedule_lifetime_end(scope, val);
fcx.schedule_drop_mem(scope, val, ty);
fcx.schedule_drop_mem(scope, val, ty, None);
}
}
}
@ -295,10 +404,32 @@ fn post_store<'blk, 'tcx>(&self,
-> Block<'blk, 'tcx> {
let _icx = push_ctxt("<Lvalue as KindOps>::post_store");
if bcx.fcx.type_needs_drop(ty) {
// cancel cleanup of affine values by drop-filling the memory
let () = drop_done_fill_mem(bcx, val, ty);
// cancel cleanup of affine values:
// 1. if it has drop-hint, mark as moved; then code
// aware of drop-hint won't bother calling the
// drop-glue itself.
if let Some(hint_datum) = self.drop_flag_info.hint_datum(bcx) {
let moved_hint_byte = adt::DTOR_MOVED_HINT as usize;
let hint_llval = hint_datum.to_value().value();
Store(bcx, C_u8(bcx.fcx.ccx, moved_hint_byte), hint_llval);
}
// 2. if the drop info says its necessary, drop-fill the memory.
if self.drop_flag_info.must_zero() {
let () = drop_done_fill_mem(bcx, val, ty);
}
bcx
} else {
// FIXME (#5016) would be nice to assert this, but we have
// to allow for e.g. DontZeroJustUse flags, for now.
//
// (The dropflag hint construction should be taking
// !type_needs_drop into account; earlier analysis phases
// may not have all the info they need to include such
// information properly, I think; in particular the
// fragments analysis works on a non-monomorphized view of
// the code.)
//
// assert_eq!(self.drop_flag_info, DropFlagInfo::None);
bcx
}
}
@ -308,7 +439,7 @@ fn is_by_ref(&self) -> bool {
}
fn to_expr_kind(self) -> Expr {
LvalueExpr
LvalueExpr(self)
}
}
@ -319,14 +450,14 @@ fn post_store<'blk, 'tcx>(&self,
ty: Ty<'tcx>)
-> Block<'blk, 'tcx> {
match *self {
LvalueExpr => Lvalue.post_store(bcx, val, ty),
LvalueExpr(ref l) => l.post_store(bcx, val, ty),
RvalueExpr(ref r) => r.post_store(bcx, val, ty),
}
}
fn is_by_ref(&self) -> bool {
match *self {
LvalueExpr => Lvalue.is_by_ref(),
LvalueExpr(ref l) => l.is_by_ref(),
RvalueExpr(ref r) => r.is_by_ref()
}
}
@ -360,7 +491,10 @@ pub fn to_lvalue_datum_in_scope<'blk>(self,
match self.kind.mode {
ByRef => {
add_rvalue_clean(ByRef, fcx, scope, self.val, self.ty);
DatumBlock::new(bcx, Datum::new(self.val, self.ty, Lvalue))
DatumBlock::new(bcx, Datum::new(
self.val,
self.ty,
Lvalue::new("datum::to_lvalue_datum_in_scope")))
}
ByValue => {
@ -417,7 +551,7 @@ fn match_kind<R, F, G>(self, if_lvalue: F, if_rvalue: G) -> R where
{
let Datum { val, ty, kind } = self;
match kind {
LvalueExpr => if_lvalue(Datum::new(val, ty, Lvalue)),
LvalueExpr(l) => if_lvalue(Datum::new(val, ty, l)),
RvalueExpr(r) => if_rvalue(Datum::new(val, ty, r)),
}
}
@ -528,7 +662,7 @@ pub fn get_element<'blk, F>(&self, bcx: Block<'blk, 'tcx>, ty: Ty<'tcx>,
};
Datum {
val: val,
kind: Lvalue,
kind: Lvalue::new("Datum::get_element"),
ty: ty,
}
}

----

@ -30,7 +30,7 @@
use rustc::ast_map;
use trans::{type_of, adt, machine, monomorphize};
use trans::common::{self, CrateContext, FunctionContext, Block};
use trans::_match::{BindingInfo, TrByCopy, TrByMove, TrByRef};
use trans::_match::{BindingInfo, TransBindingMode};
use trans::type_::Type;
use middle::ty::{self, Ty};
use session::config::{self, FullDebugInfo};
@ -2082,14 +2082,15 @@ pub fn create_match_binding_metadata<'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
// dereference once more. For ByCopy we just use the stack slot we created
// for the binding.
let var_access = match binding.trmode {
TrByCopy(llbinding) => VariableAccess::DirectVariable {
TransBindingMode::TrByCopy(llbinding) |
TransBindingMode::TrByMoveIntoCopy(llbinding) => VariableAccess::DirectVariable {
alloca: llbinding
},
TrByMove => VariableAccess::IndirectVariable {
TransBindingMode::TrByMoveRef => VariableAccess::IndirectVariable {
alloca: binding.llmatch,
address_operations: &aops
},
TrByRef => VariableAccess::DirectVariable {
TransBindingMode::TrByRef => VariableAccess::DirectVariable {
alloca: binding.llmatch
}
};

----

@ -61,7 +61,7 @@
use trans::{_match, adt, asm, base, callee, closure, consts, controlflow};
use trans::base::*;
use trans::build::*;
use trans::cleanup::{self, CleanupMethods};
use trans::cleanup::{self, CleanupMethods, DropHintMethods};
use trans::common::*;
use trans::datum::*;
use trans::debuginfo::{self, DebugLoc, ToDebugLoc};
@ -227,7 +227,7 @@ pub fn trans<'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
let const_ty = expr_ty_adjusted(bcx, expr);
let llty = type_of::type_of(bcx.ccx(), const_ty);
let global = PointerCast(bcx, global, llty.ptr_to());
let datum = Datum::new(global, const_ty, Lvalue);
let datum = Datum::new(global, const_ty, Lvalue::new("expr::trans"));
return DatumBlock::new(bcx, datum.to_expr_datum());
}
@ -733,7 +733,7 @@ fn trans_field<'blk, 'tcx, F>(bcx: Block<'blk, 'tcx>,
// Always generate an lvalue datum, because this pointer doesn't own
// the data and cleanup is scheduled elsewhere.
DatumBlock::new(bcx, Datum::new(scratch.val, scratch.ty, LvalueExpr))
DatumBlock::new(bcx, Datum::new(scratch.val, scratch.ty, LvalueExpr(d.kind)))
}
})
@ -810,10 +810,11 @@ fn trans_index<'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
Some(SaveIn(scratch.val)),
false));
let datum = scratch.to_expr_datum();
let lval = Lvalue::new("expr::trans_index overload");
if type_is_sized(bcx.tcx(), elt_ty) {
Datum::new(datum.to_llscalarish(bcx), elt_ty, LvalueExpr)
Datum::new(datum.to_llscalarish(bcx), elt_ty, LvalueExpr(lval))
} else {
Datum::new(datum.val, elt_ty, LvalueExpr)
Datum::new(datum.val, elt_ty, LvalueExpr(lval))
}
}
None => {
@ -867,7 +868,8 @@ fn trans_index<'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
});
let elt = InBoundsGEP(bcx, base, &[ix_val]);
let elt = PointerCast(bcx, elt, type_of::type_of(ccx, unit_ty).ptr_to());
Datum::new(elt, unit_ty, LvalueExpr)
let lval = Lvalue::new("expr::trans_index fallback");
Datum::new(elt, unit_ty, LvalueExpr(lval))
}
};
@ -912,7 +914,8 @@ fn trans_def<'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
// Case 2.
base::get_extern_const(bcx.ccx(), did, const_ty)
};
DatumBlock::new(bcx, Datum::new(val, const_ty, LvalueExpr))
let lval = Lvalue::new("expr::trans_def");
DatumBlock::new(bcx, Datum::new(val, const_ty, LvalueExpr(lval)))
}
def::DefConst(_) => {
bcx.sess().span_bug(ref_expr.span,
@ -1001,10 +1004,26 @@ fn trans_rvalue_stmt_unadjusted<'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
debuginfo::set_source_location(bcx.fcx, expr.id, expr.span);
let src_datum = unpack_datum!(
bcx, src_datum.to_rvalue_datum(bcx, "ExprAssign"));
bcx = glue::drop_ty(bcx,
dst_datum.val,
dst_datum.ty,
expr.debug_loc());
let opt_hint_datum = dst_datum.kind.drop_flag_info.hint_datum(bcx);
let opt_hint_val = opt_hint_datum.map(|d|d.to_value());
// 1. Drop the data at the destination, passing the
// drop-hint in case the lvalue has already been
// dropped or moved.
bcx = glue::drop_ty_core(bcx,
dst_datum.val,
dst_datum.ty,
expr.debug_loc(),
false,
opt_hint_val);
// 2. We are overwriting the destination; ensure that
// its drop-hint (if any) says "initialized."
if let Some(hint_val) = opt_hint_val {
let hint_llval = hint_val.value();
let drop_needed = C_u8(bcx.fcx.ccx, adt::DTOR_NEEDED_HINT as usize);
Store(bcx, drop_needed, hint_llval);
}
src_datum.store_to(bcx, dst_datum.val)
} else {
src_datum.store_to(bcx, dst_datum.val)
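Why step 2 matters, as a hedged source-level sketch: after an overwrite the destination is initialized again, so its hint must be flipped back to "needed" even if an earlier move had set it to "moved":

```rust
fn main() {
    let mut v = String::from("first");
    drop(v);                    // move: `v`'s hint may be set to "moved"
    v = String::from("second"); // step 1 skips dropping the moved-out slot;
                                // step 2 resets the hint to "needed"
}                               // so "second" is dropped here exactly once
```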
@ -1302,8 +1321,10 @@ pub fn trans_local_var<'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
def::DefUpvar(nid, _) => {
// Can't move upvars, so this is never a ZeroMemLastUse.
let local_ty = node_id_type(bcx, nid);
let lval = Lvalue::new_with_hint("expr::trans_local_var (upvar)",
bcx, nid, HintKind::ZeroAndMaintain);
match bcx.fcx.llupvars.borrow().get(&nid) {
Some(&val) => Datum::new(val, local_ty, Lvalue),
Some(&val) => Datum::new(val, local_ty, lval),
None => {
bcx.sess().bug(&format!(
"trans_local_var: no llval for upvar {} found",
@ -1574,7 +1595,8 @@ pub fn trans_adt<'a, 'blk, 'tcx>(mut bcx: Block<'blk, 'tcx>,
bcx = trans_into(bcx, &**e, SaveIn(dest));
let scope = cleanup::CustomScope(custom_cleanup_scope);
fcx.schedule_lifetime_end(scope, dest);
fcx.schedule_drop_mem(scope, dest, e_ty);
// FIXME: nonzeroing move should generalize to fields
fcx.schedule_drop_mem(scope, dest, e_ty, None);
}
}
@ -2267,7 +2289,7 @@ fn deref_once<'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
if type_is_sized(bcx.tcx(), content_ty) {
let ptr = load_ty(bcx, datum.val, datum.ty);
DatumBlock::new(bcx, Datum::new(ptr, content_ty, LvalueExpr))
DatumBlock::new(bcx, Datum::new(ptr, content_ty, LvalueExpr(datum.kind)))
} else {
// A fat pointer and a DST lvalue have the same representation
// just different types. Since there is no temporary for `*e`
@ -2275,13 +2297,15 @@ fn deref_once<'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
// object code path for running drop glue and free. Instead,
// we schedule cleanup for `e`, turning it into an lvalue.
let datum = Datum::new(datum.val, content_ty, LvalueExpr);
let lval = Lvalue::new("expr::deref_once ty_uniq");
let datum = Datum::new(datum.val, content_ty, LvalueExpr(lval));
DatumBlock::new(bcx, datum)
}
}
ty::TyRawPtr(ty::TypeAndMut { ty: content_ty, .. }) |
ty::TyRef(_, ty::TypeAndMut { ty: content_ty, .. }) => {
let lval = Lvalue::new("expr::deref_once ptr");
if type_is_sized(bcx.tcx(), content_ty) {
let ptr = datum.to_llscalarish(bcx);
@ -2290,11 +2314,11 @@ fn deref_once<'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
// rvalue for non-owning pointers like &T or *T, in which
// case cleanup *is* scheduled elsewhere, by the true
// owner (or, in the case of *T, by the user).
DatumBlock::new(bcx, Datum::new(ptr, content_ty, LvalueExpr))
DatumBlock::new(bcx, Datum::new(ptr, content_ty, LvalueExpr(lval)))
} else {
// A fat pointer and a DST lvalue have the same representation
// just different types.
DatumBlock::new(bcx, Datum::new(datum.val, content_ty, LvalueExpr))
DatumBlock::new(bcx, Datum::new(datum.val, content_ty, LvalueExpr(lval)))
}
}

----

@ -130,17 +130,20 @@ pub fn drop_ty<'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
v: ValueRef,
t: Ty<'tcx>,
debug_loc: DebugLoc) -> Block<'blk, 'tcx> {
drop_ty_core(bcx, v, t, debug_loc, false)
drop_ty_core(bcx, v, t, debug_loc, false, None)
}
pub fn drop_ty_core<'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
v: ValueRef,
t: Ty<'tcx>,
debug_loc: DebugLoc,
skip_dtor: bool) -> Block<'blk, 'tcx> {
skip_dtor: bool,
drop_hint: Option<cleanup::DropHintValue>)
-> Block<'blk, 'tcx> {
// NB: v is an *alias* of type t here, not a direct value.
debug!("drop_ty_core(t={:?}, skip_dtor={})", t, skip_dtor);
debug!("drop_ty_core(t={:?}, skip_dtor={} drop_hint={:?})", t, skip_dtor, drop_hint);
let _icx = push_ctxt("drop_ty");
let mut bcx = bcx;
if bcx.fcx.type_needs_drop(t) {
let ccx = bcx.ccx();
let g = if skip_dtor {
@ -156,7 +159,23 @@ pub fn drop_ty_core<'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
v
};
Call(bcx, glue, &[ptr], None, debug_loc);
match drop_hint {
Some(drop_hint) => {
let hint_val = load_ty(bcx, drop_hint.value(), bcx.tcx().types.u8);
let moved_val =
C_integral(Type::i8(bcx.ccx()), adt::DTOR_MOVED_HINT as u64, false);
let may_need_drop =
ICmp(bcx, llvm::IntNE, hint_val, moved_val, DebugLoc::None);
bcx = with_cond(bcx, may_need_drop, |cx| {
Call(cx, glue, &[ptr], None, debug_loc);
cx
})
}
None => {
// No drop-hint ==> call standard drop glue
Call(bcx, glue, &[ptr], None, debug_loc);
}
}
}
bcx
}
@ -170,7 +189,7 @@ pub fn drop_ty_immediate<'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
let _icx = push_ctxt("drop_ty_immediate");
let vp = alloca(bcx, type_of(bcx.ccx(), t), "");
store_ty(bcx, v, vp, t);
drop_ty_core(bcx, vp, t, debug_loc, skip_dtor)
drop_ty_core(bcx, vp, t, debug_loc, skip_dtor, None)
}
pub fn get_drop_glue<'a, 'tcx>(ccx: &CrateContext<'a, 'tcx>, t: Ty<'tcx>) -> ValueRef {

----

@ -117,7 +117,7 @@ pub fn trans_slice_vec<'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
// Arrange for the backing array to be cleaned up.
let cleanup_scope = cleanup::temporary_scope(bcx.tcx(), content_expr.id);
fcx.schedule_lifetime_end(cleanup_scope, llfixed);
fcx.schedule_drop_mem(cleanup_scope, llfixed, fixed_ty);
fcx.schedule_drop_mem(cleanup_scope, llfixed, fixed_ty, None);
// Generate the content into the backing array.
// llfixed has type *[T x N], but we want the type *T,
@ -212,7 +212,7 @@ fn write_content<'blk, 'tcx>(bcx: Block<'blk, 'tcx>,
SaveIn(lleltptr));
let scope = cleanup::CustomScope(temp_scope);
fcx.schedule_lifetime_end(scope, lleltptr);
fcx.schedule_drop_mem(scope, lleltptr, vt.unit_ty);
fcx.schedule_drop_mem(scope, lleltptr, vt.unit_ty, None);
}
fcx.pop_custom_cleanup_scope(temp_scope);
}

----

@ -9,13 +9,8 @@
// except according to those terms.
#![allow(unknown_features)]
#![feature(box_syntax)]
#![feature(intrinsics)]
// needed to check for drop fill word.
#![feature(filling_drop)]
use std::mem::{self, transmute};
mod rusti {
extern "rust-intrinsic" {
@ -26,12 +21,80 @@ mod rusti {
pub fn main() {
unsafe {
let x: Box<_> = box 1;
let mut y = rusti::init();
let mut z: *const usize = transmute(&x);
// sanity check
check_drops_state(0, None);
let mut x: Box<D> = box D(1);
assert_eq!(x.0, 1);
// A normal overwrite, to demonstrate `check_drops_state`.
x = box D(2);
// At this point, one destructor has run, because the
// overwrite of `x` drops its initial value.
check_drops_state(1, Some(1));
let mut y: Box<D> = rusti::init();
// An initial binding does not overwrite anything.
check_drops_state(1, Some(1));
// Since `y` has been initialized via the `init` intrinsic, it
// would be unsound to directly overwrite its value via normal
// assignment.
//
// The code currently generated by the compiler is overly
// accepting, however, in that it will check if `y` is itself
// null and thus avoid the unsound action of attempting to
// free null. In other words, if we were to do a normal
// assignment like `y = box D(4);` here, it probably would not
// crash today. But the plan is that it may well crash in the
// future (I believe).
// `x` is moved here; the manner in which this is tracked by the
// compiler is hidden.
rusti::move_val_init(&mut y, x);
assert_eq!(*y, 1);
// `x` is nulled out, not directly visible
assert_eq!(*z, mem::POST_DROP_USIZE);
// In particular, it may be tracked via a drop-flag embedded
// in the value, or via a null pointer, or via
// mem::POST_DROP_USIZE, or (most preferably) via a
// stack-local drop flag.
//
// (This test used to build-in knowledge of how it was
// tracked, and check that the underlying stack slot had been
// set to `mem::POST_DROP_USIZE`.)
// But what we *can* observe is how many times the destructor
// for `D` is invoked, and what the last value we saw was
// during such a destructor call. We do so after the end of
// this scope.
assert_eq!(y.0, 2);
y.0 = 3;
assert_eq!(y.0, 3);
check_drops_state(1, Some(1));
}
check_drops_state(2, Some(3));
}
static mut NUM_DROPS: i32 = 0;
static mut LAST_DROPPED: Option<i32> = None;
fn check_drops_state(num_drops: i32, last_dropped: Option<i32>) {
unsafe {
assert_eq!(NUM_DROPS, num_drops);
assert_eq!(LAST_DROPPED, last_dropped);
}
}
struct D(i32);
impl Drop for D {
fn drop(&mut self) {
unsafe {
NUM_DROPS += 1;
LAST_DROPPED = Some(self.0);
}
}
}