[vm/ffi] Support passing structs by value

This CL adds support for passing structs by value in FFI trampolines.
Nested structs and inline arrays are future work.
C defines passing empty structs as undefined behavior, so that is not
supported in this CL.

Suggested review order:
1) commit message
2) ffi/marshaller (decisions for what is done in IL and what in MC)
3) frontend/kernel_to_il (IL construction)
4) backend/il (MC generation from IL)
5) rest in VM

The overall architecture is that structs are split into word-size chunks
in IL whenever possible: one definition in IL per chunk, one Location in
IL per chunk, and one NativeLocation per chunk for the backend.
In some cases splitting into chunks is not possible, or is less
convenient. In those cases, TypedDataBase objects are stored to and
loaded from directly in machine code.
The various cases:
- FFI call arguments that are not passed as pointers: pass the
  individual chunks, which already have the right location, to the FFI
  call.
- FFI call arguments that are passed as pointers: pass a TypedDataBase
  to the FFI call, allocate space on the stack, copy the struct's
  contents to that stack space, and pass the copy's address to the
  callee.
- FFI call return value: pass a TypedData into the FFI call, and copy
  the result into it in machine code.
- FFI callback arguments that are not passed as pointers: one IL
  definition per chunk, and populate a new TypedData with those chunks.
- FFI callback arguments that are passed as pointers: one IL definition
  for the pointer, and copy the contents in IL.
- FFI callback return value when its location is a pointer: copy the
  data to the callee result location in IL.
- FFI callback return value when its location is not a pointer: copy the
  data to the right registers in machine code.

Some other notes about the implementation:
- Because Store/LoadIndexed load doubles from float arrays, we use an
  int32 instead and convert with BitCastInstr.
- Linux ia32 uses `ret 4` when returning structs by value. This requires
  special casing in the FFI callback trampolines to either use `ret` or
  `ret 4` when returning.
- The one-IL-definition, one-Location, one-NativeLocation approach does
  not remove the need for special-casing PairLocations in the machine
  code generation, because a PairLocation is a single Location belonging
  to a single definition.
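
The word-by-word copies that the machine code performs for the pointer
cases can be sketched in plain C. This is illustrative only, with a
hypothetical CopyChunks helper (the real copies are emitted as
Load/StoreMemoryValue pairs); the key invariant is that struct sizes are
rounded up to a multiple of the word size on both sides of the copy:

```c
#include <stddef.h>
#include <stdint.h>

/* Round a struct size up to a whole number of target words, mirroring
 * the rounding the VM applies when it allocates the stack space and the
 * TypedData that back a struct. */
static size_t RoundUpToWords(size_t size_in_bytes) {
  const size_t word = sizeof(uintptr_t);
  return (size_in_bytes + word - 1) / word * word;
}

/* Copy a struct word by word. Both buffers must be allocated with the
 * rounded-up size, so reading the trailing padding word is safe. */
static void CopyChunks(void* dst, const void* src, size_t size_in_bytes) {
  const size_t num_words = RoundUpToWords(size_in_bytes) / sizeof(uintptr_t);
  uintptr_t* d = (uintptr_t*)dst;
  const uintptr_t* s = (const uintptr_t*)src;
  for (size_t i = 0; i < num_words; i++) {
    d[i] = s[i]; /* One word-size chunk per move. */
  }
}
```

Reading past the struct's true size would be out of bounds in general;
it is only safe here because allocation and copy use the same rounding.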

Because of the number of corner cases in the calling conventions that
need to be covered, the tests are generated rather than hand-written.

ABIs tested on CQ: x64 (Linux, MacOS, Windows), ia32 (Linux, Windows),
arm (Android softFP, Linux hardFP), arm64 Android.
ABIs tested locally through Flutter: ia32 Android (emulator), x64 iOS
(simulator), arm64 iOS.
ABIs not tested: arm iOS.
TEST=runtime/bin/ffi_test/ffi_test_functions_generated.cc
TEST=runtime/bin/ffi_test/ffi_test_functions.cc
TEST=tests/{ffi,ffi_2}/function_structs_by_value_generated_test.dart
TEST=tests/{ffi,ffi_2}/function_callbacks_structs_by_value_generated_tes
TEST=tests/{ffi,ffi_2}/function_callbacks_structs_by_value_test.dart
TEST=tests/{ffi,ffi_2}/vmspecific_static_checks_test.dart

Closes https://github.com/dart-lang/sdk/issues/36730.

Change-Id: I474d3a4ee1faadbe767ddadd1b696e24d8dc364c
Cq-Include-Trybots: luci.dart.try:dart-sdk-linux-try,dart-sdk-mac-try,dart-sdk-win-try,vm-ffi-android-debug-arm-try,vm-ffi-android-debug-arm64-try,vm-kernel-asan-linux-release-x64-try,vm-kernel-mac-debug-x64-try,vm-kernel-linux-debug-ia32-try,vm-kernel-linux-debug-x64-try,vm-kernel-nnbd-linux-debug-x64-try,vm-kernel-nnbd-linux-debug-ia32-try,vm-kernel-nnbd-mac-release-x64-try,vm-kernel-nnbd-win-debug-x64-try,vm-kernel-precomp-linux-debug-x64-try,vm-kernel-precomp-linux-debug-simarm_x64-try,vm-kernel-precomp-nnbd-linux-debug-x64-try,vm-kernel-precomp-win-release-x64-try,vm-kernel-reload-linux-debug-x64-try,vm-kernel-reload-rollback-linux-debug-x64-try,vm-kernel-win-debug-x64-try,vm-kernel-win-debug-ia32-try,vm-precomp-ffi-qemu-linux-release-arm-try,vm-kernel-precomp-obfuscate-linux-release-x64-try,vm-kernel-msan-linux-release-x64-try,vm-kernel-precomp-msan-linux-release-x64-try,vm-kernel-precomp-android-release-arm_x64-try,analyzer-analysis-server-linux-try
Reviewed-on: https://dart-review.googlesource.com/c/sdk/+/140290
Commit-Queue: Daco Harkes <dacoharkes@google.com>
Reviewed-by: Martin Kustermann <kustermann@google.com>
Daco Harkes 2020-12-14 16:22:48 +00:00 committed by commit-bot@chromium.org
parent f6c6ea5373
commit 3e7cda8a4e
48 changed files with 1938 additions and 271 deletions

@@ -3789,6 +3789,31 @@ const MessageCode messageFastaUsageShort =
-o <file> Generate the output into <file>.
-h Display this message (add -v for information about all options).""");
// DO NOT EDIT. THIS FILE IS GENERATED. SEE TOP OF FILE.
const Template<
Message Function(String name)> templateFfiEmptyStruct = const Template<
Message Function(String name)>(
messageTemplate:
r"""Struct '#name' is empty. Empty structs are undefined behavior.""",
withArguments: _withArgumentsFfiEmptyStruct);
// DO NOT EDIT. THIS FILE IS GENERATED. SEE TOP OF FILE.
const Code<Message Function(String name)> codeFfiEmptyStruct =
const Code<Message Function(String name)>(
"FfiEmptyStruct",
templateFfiEmptyStruct,
);
// DO NOT EDIT. THIS FILE IS GENERATED. SEE TOP OF FILE.
Message _withArgumentsFfiEmptyStruct(String name) {
if (name.isEmpty) throw 'No name provided';
name = demangleMixinApplicationName(name);
return new Message(codeFfiEmptyStruct,
message:
"""Struct '${name}' is empty. Empty structs are undefined behavior.""",
arguments: {'name': name});
}
// DO NOT EDIT. THIS FILE IS GENERATED. SEE TOP OF FILE.
const Code<Null> codeFfiExceptionalReturnNull = messageFfiExceptionalReturnNull;

@@ -163,6 +163,22 @@ class FfiVerifier extends RecursiveAstVisitor<void> {
element.name == 'DynamicLibraryExtension' &&
element.library.name == 'dart.ffi';
bool _isEmptyStruct(ClassElement classElement) {
final fields = classElement.fields;
var structFieldCount = 0;
for (final field in fields) {
final declaredType = field.type;
if (declaredType.isDartCoreInt) {
structFieldCount++;
} else if (declaredType.isDartCoreDouble) {
structFieldCount++;
} else if (_isPointer(declaredType.element)) {
structFieldCount++;
}
}
return structFieldCount == 0;
}
bool _isHandle(Element element) =>
element.name == 'Handle' && element.library.name == 'dart.ffi';
@@ -246,12 +262,12 @@ class FfiVerifier extends RecursiveAstVisitor<void> {
nativeType.optionalParameterTypes.isNotEmpty) {
return false;
}
if (!_isValidFfiNativeType(nativeType.returnType, true)) {
if (!_isValidFfiNativeType(nativeType.returnType, true, false)) {
return false;
}
for (final DartType typeArg in nativeType.typeArguments) {
if (!_isValidFfiNativeType(typeArg, false)) {
for (final DartType typeArg in nativeType.normalParameterTypes) {
if (!_isValidFfiNativeType(typeArg, false, false)) {
return false;
}
}
@@ -261,7 +277,8 @@ class FfiVerifier extends RecursiveAstVisitor<void> {
}
/// Validates that the given [nativeType] is a valid dart:ffi native type.
bool _isValidFfiNativeType(DartType nativeType, bool allowVoid) {
bool _isValidFfiNativeType(
DartType nativeType, bool allowVoid, bool allowEmptyStruct) {
if (nativeType is InterfaceType) {
// Is it a primitive integer/double type (or ffi.Void if we allow it).
final primitiveType = _primitiveNativeType(nativeType);
@@ -274,10 +291,20 @@ class FfiVerifier extends RecursiveAstVisitor<void> {
}
if (_isPointerInterfaceType(nativeType)) {
final nativeArgumentType = nativeType.typeArguments.single;
return _isValidFfiNativeType(nativeArgumentType, true) ||
return _isValidFfiNativeType(nativeArgumentType, true, true) ||
_isStructClass(nativeArgumentType) ||
_isNativeTypeInterfaceType(nativeArgumentType);
}
if (_isStructClass(nativeType)) {
if (!allowEmptyStruct) {
if (_isEmptyStruct(nativeType.element)) {
// TODO(dartbug.com/36780): This results in an error message not
// mentioning empty structs at all.
return false;
}
}
return true;
}
} else if (nativeType is FunctionType) {
return _isValidFfiNativeFunctionType(nativeType);
}
@@ -533,7 +560,8 @@ class FfiVerifier extends RecursiveAstVisitor<void> {
final DartType R = (T as FunctionType).returnType;
if ((FT as FunctionType).returnType.isVoid ||
_isPointer(R.element) ||
_isHandle(R.element)) {
_isHandle(R.element) ||
_isStructClass(R)) {
if (argCount != 1) {
_errorReporter.reportErrorForNode(
FfiCode.INVALID_EXCEPTION_VALUE, node.argumentList.arguments[1]);

@@ -47,6 +47,7 @@ export '../fasta/fasta_codes.dart'
messageFfiExpectedConstant,
noLength,
templateFfiDartTypeMismatch,
templateFfiEmptyStruct,
templateFfiExpectedExceptionalReturn,
templateFfiExpectedNoExceptionalReturn,
templateFfiExtendsOrImplementsSealedClass,

@@ -315,6 +315,7 @@ FastaUsageLong/example: Fail
FastaUsageShort/analyzerCode: Fail
FastaUsageShort/example: Fail
FfiDartTypeMismatch/analyzerCode: Fail
FfiEmptyStruct/analyzerCode: Fail
FfiExceptionalReturnNull/analyzerCode: Fail
FfiExpectedConstant/analyzerCode: Fail
FfiExpectedExceptionalReturn/analyzerCode: Fail

@@ -1914,7 +1914,7 @@ InternalProblemUnsupportedNullability:
IncrementalCompilerIllegalParameter:
template: "Illegal parameter name '#string' found during expression compilation."
IncrementalCompilerIllegalTypeParameter:
template: "Illegal type parameter name '#string' found during expression compilation."
@@ -4233,6 +4233,11 @@ FfiTypeMismatch:
template: "Expected type '#type' to be '#type2', which is the Dart type corresponding to '#type3'."
external: test/ffi_test.dart
FfiEmptyStruct:
# Used by dart:ffi
template: "Struct '#name' is empty. Empty structs are undefined behavior."
external: test/ffi_test.dart
FfiTypeInvalid:
# Used by dart:ffi
template: "Expected type '#type' to be a valid and instantiated subtype of 'NativeType'."

@@ -22,7 +22,6 @@ import 'package:kernel/vm/constants_native_effects.dart'
import '../transformations/call_site_annotator.dart' as callSiteAnnotator;
import '../transformations/lowering.dart' as lowering show transformLibraries;
import '../transformations/ffi.dart' as transformFfi show ReplacedMembers;
import '../transformations/ffi_definitions.dart' as transformFfiDefinitions
show transformLibraries;
import '../transformations/ffi_use_sites.dart' as transformFfiUseSites
@@ -155,17 +154,16 @@ class VmTarget extends Target {
this, coreTypes, hierarchy, libraries, referenceFromIndex);
logger?.call("Transformed mixin applications");
transformFfi.ReplacedMembers replacedFields =
transformFfiDefinitions.transformLibraries(
component,
coreTypes,
hierarchy,
libraries,
diagnosticReporter,
referenceFromIndex,
changedStructureNotifier);
final ffiTransformerData = transformFfiDefinitions.transformLibraries(
component,
coreTypes,
hierarchy,
libraries,
diagnosticReporter,
referenceFromIndex,
changedStructureNotifier);
transformFfiUseSites.transformLibraries(component, coreTypes, hierarchy,
libraries, diagnosticReporter, replacedFields, referenceFromIndex);
libraries, diagnosticReporter, ffiTransformerData, referenceFromIndex);
logger?.call("Transformed ffi annotations");
// TODO(kmillikin): Make this run on a per-method basis.

@@ -310,8 +310,8 @@ class FfiTransformer extends Transformer {
/// [Handle] -> [Object]
/// [NativeFunction]<T1 Function(T2, T3) -> S1 Function(S2, S3)
/// where DartRepresentationOf(Tn) -> Sn
DartType convertNativeTypeToDartType(
DartType nativeType, bool allowStructs, bool allowHandle) {
DartType convertNativeTypeToDartType(DartType nativeType,
{bool allowStructs = false, bool allowHandle = false}) {
if (nativeType is! InterfaceType) {
return null;
}
@@ -352,13 +352,13 @@ return null;
return null;
}
if (fun.typeParameters.length != 0) return null;
// TODO(36730): Structs cannot appear in native function signatures.
final DartType returnType = convertNativeTypeToDartType(
fun.returnType, /*allowStructs=*/ false, /*allowHandle=*/ true);
final DartType returnType = convertNativeTypeToDartType(fun.returnType,
allowStructs: allowStructs, allowHandle: true);
if (returnType == null) return null;
final List<DartType> argumentTypes = fun.positionalParameters
.map((t) => convertNativeTypeToDartType(
t, /*allowStructs=*/ false, /*allowHandle=*/ true))
.map((t) => convertNativeTypeToDartType(t,
allowStructs: allowStructs, allowHandle: true))
.toList();
if (argumentTypes.contains(null)) return null;
return FunctionType(argumentTypes, returnType, Nullability.legacy);
@@ -373,12 +373,12 @@ }
}
}
/// Contains replaced members, of which all the call sites need to be replaced.
///
/// [ReplacedMembers] is populated by _FfiDefinitionTransformer and consumed by
/// _FfiUseSiteTransformer.
class ReplacedMembers {
/// Contains all information collected by _FfiDefinitionTransformer that is
/// needed in _FfiUseSiteTransformer.
class FfiTransformerData {
final Map<Field, Procedure> replacedGetters;
final Map<Field, Procedure> replacedSetters;
ReplacedMembers(this.replacedGetters, this.replacedSetters);
final Set<Class> emptyStructs;
FfiTransformerData(
this.replacedGetters, this.replacedSetters, this.emptyStructs);
}

@@ -56,7 +56,7 @@ import 'ffi.dart';
///
/// static final int #sizeOf = 24;
/// }
ReplacedMembers transformLibraries(
FfiTransformerData transformLibraries(
Component component,
CoreTypes coreTypes,
ClassHierarchy hierarchy,
@@ -70,17 +70,17 @@ ReplacedMembers transformLibraries(
// TODO: This check doesn't make sense: "dart:ffi" is always loaded/created
// for the VM target.
// If dart:ffi is not loaded, do not do the transformation.
return ReplacedMembers({}, {});
return FfiTransformerData({}, {}, {});
}
if (index.tryGetClass('dart:ffi', 'NativeFunction') == null) {
// If dart:ffi is not loaded (for real): do not do the transformation.
return ReplacedMembers({}, {});
return FfiTransformerData({}, {}, {});
}
final transformer = new _FfiDefinitionTransformer(index, coreTypes, hierarchy,
diagnosticReporter, referenceFromIndex, changedStructureNotifier);
libraries.forEach(transformer.visitLibrary);
return ReplacedMembers(
transformer.replacedGetters, transformer.replacedSetters);
return FfiTransformerData(transformer.replacedGetters,
transformer.replacedSetters, transformer.emptyStructs);
}
/// Checks and elaborates the dart:ffi structs and fields.
@@ -89,6 +89,7 @@ class _FfiDefinitionTransformer extends FfiTransformer {
Map<Field, Procedure> replacedGetters = {};
Map<Field, Procedure> replacedSetters = {};
Set<Class> emptyStructs = {};
ChangedStructureNotifier changedStructureNotifier;
@@ -231,7 +232,9 @@
Nullability.legacy);
// TODO(dartbug.com/37271): Support structs inside structs.
final DartType shouldBeDartType = convertNativeTypeToDartType(
nativeType, /*allowStructs=*/ false, /*allowHandle=*/ false);
nativeType,
allowStructs: false,
allowHandle: false);
if (shouldBeDartType == null ||
!env.isSubtypeOf(type, shouldBeDartType,
SubtypeCheckMode.ignoringNullabilities)) {
@@ -338,6 +341,9 @@
}
_annoteStructWithFields(node, classes);
if (classes.isEmpty) {
emptyStructs.add(node);
}
final sizeAndOffsets = <Abi, SizeAndOffsets>{};
for (final Abi abi in Abi.values) {
@@ -565,6 +571,9 @@
return offset;
}
// Keep consistent with runtime/vm/compiler/ffi/native_type.cc
// NativeCompoundType::FromNativeTypes.
//
// TODO(37271): Support nested structs.
SizeAndOffsets _calculateSizeAndOffsets(List<NativeType> types, Abi abi) {
int offset = 0;

@@ -9,6 +9,7 @@ import 'package:front_end/src/api_unstable/vm.dart'
messageFfiExceptionalReturnNull,
messageFfiExpectedConstant,
templateFfiDartTypeMismatch,
templateFfiEmptyStruct,
templateFfiExpectedExceptionalReturn,
templateFfiExpectedNoExceptionalReturn,
templateFfiExtendsOrImplementsSealedClass,
@@ -25,7 +26,7 @@ import 'package:kernel/target/targets.dart' show DiagnosticReporter;
import 'package:kernel/type_environment.dart';
import 'ffi.dart'
show ReplacedMembers, NativeType, FfiTransformer, optimizedTypes;
show FfiTransformerData, NativeType, FfiTransformer, optimizedTypes;
/// Checks and replaces calls to dart:ffi struct fields and methods.
void transformLibraries(
@@ -34,7 +35,7 @@ void transformLibraries(
ClassHierarchy hierarchy,
List<Library> libraries,
DiagnosticReporter diagnosticReporter,
ReplacedMembers replacedFields,
FfiTransformerData ffiTransformerData,
ReferenceFromIndex referenceFromIndex) {
final index = new LibraryIndex(component, ["dart:ffi"]);
if (!index.containsLibrary("dart:ffi")) {
@@ -53,8 +54,9 @@ void transformLibraries(
hierarchy,
diagnosticReporter,
referenceFromIndex,
replacedFields.replacedGetters,
replacedFields.replacedSetters);
ffiTransformerData.replacedGetters,
ffiTransformerData.replacedSetters,
ffiTransformerData.emptyStructs);
libraries.forEach(transformer.visitLibrary);
}
@@ -62,6 +64,7 @@
class _FfiUseSiteTransformer extends FfiTransformer {
final Map<Field, Procedure> replacedGetters;
final Map<Field, Procedure> replacedSetters;
final Set<Class> emptyStructs;
StaticTypeContext _staticTypeContext;
Library currentLibrary;
@@ -79,7 +82,8 @@ class _FfiUseSiteTransformer extends FfiTransformer {
DiagnosticReporter diagnosticReporter,
ReferenceFromIndex referenceFromIndex,
this.replacedGetters,
this.replacedSetters)
this.replacedSetters,
this.emptyStructs)
: super(index, coreTypes, hierarchy, diagnosticReporter,
referenceFromIndex) {}
@@ -171,18 +175,18 @@ class _FfiUseSiteTransformer extends FfiTransformer {
nativeFunctionClass, Nullability.legacy, [node.arguments.types[0]]);
final DartType dartType = node.arguments.types[1];
_ensureNativeTypeValid(nativeType, node, allowStructs: false);
_ensureNativeTypeToDartType(nativeType, dartType, node,
allowStructs: false);
_ensureNativeTypeValid(nativeType, node);
_ensureNativeTypeToDartType(nativeType, dartType, node);
_ensureNoEmptyStructs(dartType, node);
return _replaceLookupFunction(node);
} else if (target == asFunctionMethod) {
final DartType dartType = node.arguments.types[1];
final DartType nativeType = InterfaceType(
nativeFunctionClass, Nullability.legacy, [node.arguments.types[0]]);
_ensureNativeTypeValid(nativeType, node, allowStructs: false);
_ensureNativeTypeToDartType(nativeType, dartType, node,
allowStructs: false);
_ensureNativeTypeValid(nativeType, node);
_ensureNativeTypeToDartType(nativeType, dartType, node);
_ensureNoEmptyStructs(dartType, node);
final DartType nativeSignature =
(nativeType as InterfaceType).typeArguments[0];
@@ -199,9 +203,9 @@ class _FfiUseSiteTransformer extends FfiTransformer {
_ensureIsStaticFunction(func);
_ensureNativeTypeValid(nativeType, node, allowStructs: false);
_ensureNativeTypeToDartType(nativeType, dartType, node,
allowStructs: false);
_ensureNativeTypeValid(nativeType, node);
_ensureNativeTypeToDartType(nativeType, dartType, node);
_ensureNoEmptyStructs(dartType, node);
// Check `exceptionalReturn`'s type.
final FunctionType funcType = dartType;
@@ -394,9 +398,11 @@ class _FfiUseSiteTransformer extends FfiTransformer {
void _ensureNativeTypeToDartType(
DartType nativeType, DartType dartType, Expression node,
{bool allowStructs: false, bool allowHandle: false}) {
final DartType correspondingDartType =
convertNativeTypeToDartType(nativeType, allowStructs, allowHandle);
{bool allowHandle: false}) {
final DartType correspondingDartType = convertNativeTypeToDartType(
nativeType,
allowStructs: true,
allowHandle: allowHandle);
if (dartType == correspondingDartType) return;
if (env.isSubtypeOf(correspondingDartType, dartType,
SubtypeCheckMode.ignoringNullabilities)) {
@@ -412,9 +418,9 @@ }
}
void _ensureNativeTypeValid(DartType nativeType, Expression node,
{bool allowStructs: false, bool allowHandle: false}) {
{bool allowHandle: false}) {
if (!_nativeTypeValid(nativeType,
allowStructs: allowStructs, allowHandle: allowHandle)) {
allowStructs: true, allowHandle: allowHandle)) {
diagnosticReporter.report(
templateFfiTypeInvalid.withArguments(
nativeType, currentLibrary.isNonNullableByDefault),
@@ -425,11 +431,35 @@
}
}
void _ensureNoEmptyStructs(DartType nativeType, Expression node) {
// Error on structs with no fields.
if (nativeType is InterfaceType) {
final Class nativeClass = nativeType.classNode;
if (hierarchy.isSubclassOf(nativeClass, structClass)) {
if (emptyStructs.contains(nativeClass)) {
diagnosticReporter.report(
templateFfiEmptyStruct.withArguments(nativeClass.name),
node.fileOffset,
1,
node.location.file);
}
}
}
// Recurse when seeing a function type.
if (nativeType is FunctionType) {
nativeType.positionalParameters
.forEach((e) => _ensureNoEmptyStructs(e, node));
_ensureNoEmptyStructs(nativeType.returnType, node);
}
}
/// The Dart type system does not enforce that NativeFunction return and
/// parameter types are only NativeTypes, so we need to check this.
bool _nativeTypeValid(DartType nativeType,
{bool allowStructs: false, allowHandle: false}) {
return convertNativeTypeToDartType(nativeType, allowStructs, allowHandle) !=
return convertNativeTypeToDartType(nativeType,
allowStructs: allowStructs, allowHandle: allowHandle) !=
null;
}

@@ -299,6 +299,16 @@ DEFINE_NATIVE_ENTRY(Ffi_pointerFromFunction, 1, 1) {
ASSERT(!code.IsNull());
thread->SetFfiCallbackCode(function.FfiCallbackId(), code);
#ifdef TARGET_ARCH_IA32
// On ia32, store the stack delta that we need to use when returning.
const intptr_t stack_return_delta =
function.FfiCSignatureReturnsStruct() && CallingConventions::kUsesRet4
? compiler::target::kWordSize
: 0;
thread->SetFfiCallbackStackReturn(function.FfiCallbackId(),
stack_return_delta);
#endif
uword entry_point = code.EntryPoint();
#if !defined(DART_PRECOMPILED_RUNTIME)
if (NativeCallbackTrampolines::Enabled()) {

@@ -1673,6 +1673,7 @@ void FlowGraphCompiler::AllocateRegistersLocally(Instruction* instr) {
result_location = locs->in(0);
break;
case Location::kRequiresFpuRegister:
case Location::kRequiresStackSlot:
UNREACHABLE();
break;
}
@@ -3393,8 +3394,6 @@
EmitNativeMoveArchitecture(destination, source);
}
// TODO(dartbug.com/36730): Remove this if PairLocations can be converted
// into NativeLocations.
void FlowGraphCompiler::EmitMoveToNative(
const compiler::ffi::NativeLocation& dst,
Location src_loc,
@@ -3413,8 +3412,6 @@
}
}
// TODO(dartbug.com/36730): Remove this if PairLocations can be converted
// into NativeLocations.
void FlowGraphCompiler::EmitMoveFromNative(
Location dst_loc,
Representation dst_type,

@@ -4187,6 +4187,10 @@ void NativeEntryInstr::SaveArguments(FlowGraphCompiler* compiler) const {
__ Comment("SaveArguments");
// Save the argument registers, in reverse order.
const auto& return_loc = marshaller_.Location(compiler::ffi::kResultIndex);
if (return_loc.IsPointerToMemory()) {
SaveArgument(compiler, return_loc.AsPointerToMemory().pointer_location());
}
for (intptr_t i = marshaller_.num_args(); i-- > 0;) {
SaveArgument(compiler, marshaller_.Location(i));
}
@@ -4214,8 +4218,24 @@
const auto& dst = compiler::ffi::NativeStackLocation(
nloc.payload_type(), nloc.payload_type(), SPREG, 0);
compiler->EmitNativeMove(dst, nloc, &temp_alloc);
} else if (nloc.IsPointerToMemory()) {
const auto& pointer_loc = nloc.AsPointerToMemory().pointer_location();
if (pointer_loc.IsRegisters()) {
const auto& regs_loc = pointer_loc.AsRegisters();
ASSERT(regs_loc.num_regs() == 1);
__ PushRegister(regs_loc.reg_at(0));
} else {
ASSERT(pointer_loc.IsStack());
// It's already on the stack, so we don't have to save it.
}
} else {
UNREACHABLE();
ASSERT(nloc.IsMultiple());
const auto& multiple = nloc.AsMultiple();
const intptr_t num = multiple.locations().length();
// Save the argument registers, in reverse order.
for (intptr_t i = num; i-- > 0;) {
SaveArgument(compiler, *multiple.locations().At(i));
}
}
}
@@ -4522,8 +4542,12 @@
/*old_base=*/SPREG, /*new_base=*/FPREG,
(-kExitLinkSlotFromEntryFp + kEntryFramePadding) *
compiler::target::kWordSize);
const auto& location =
marshaller_.NativeLocationOfNativeParameter(def_index_);
const auto& src =
rebase.Rebase(marshaller_.NativeLocationOfNativeParameter(index_));
rebase.Rebase(location.IsPointerToMemory()
? location.AsPointerToMemory().pointer_location()
: location);
NoTemporaryAllocator no_temp;
const Location out_loc = locs()->out(0);
const Representation out_rep = representation();
@@ -6187,7 +6211,7 @@
set_native_c_function(native_function);
}
#if !defined(TARGET_ARCH_ARM)
#if !defined(TARGET_ARCH_ARM) && !defined(TARGET_ARCH_ARM64)
LocationSummary* BitCastInstr::MakeLocationSummary(Zone* zone, bool opt) const {
UNREACHABLE();
@@ -6197,13 +6221,16 @@
UNREACHABLE();
}
#endif // defined(TARGET_ARCH_ARM)
#endif // !defined(TARGET_ARCH_ARM) && !defined(TARGET_ARCH_ARM64)
Representation FfiCallInstr::RequiredInputRepresentation(intptr_t idx) const {
if (idx == TargetAddressIndex()) {
if (idx < TargetAddressIndex()) {
return marshaller_.RepInFfiCall(idx);
} else if (idx == TargetAddressIndex()) {
return kUnboxedFfiIntPtr;
} else {
return marshaller_.RepInFfiCall(idx);
ASSERT(idx == TypedDataIndex());
return kTagged;
}
}
@@ -6222,51 +6249,216 @@
LocationSummary(zone, /*num_inputs=*/InputCount(),
/*num_temps=*/kNumTemps, LocationSummary::kCall);
const Register temp0 = CallingConventions::kSecondNonArgumentRegister;
const Register temp1 = CallingConventions::kFfiAnyNonAbiRegister;
ASSERT(temp0 != temp1);
summary->set_temp(0, Location::RegisterLocation(temp0));
summary->set_temp(1, Location::RegisterLocation(temp1));
summary->set_in(TargetAddressIndex(),
Location::RegisterLocation(
CallingConventions::kFirstNonArgumentRegister));
summary->set_temp(0, Location::RegisterLocation(
CallingConventions::kSecondNonArgumentRegister));
summary->set_temp(
1, Location::RegisterLocation(CallingConventions::kFfiAnyNonAbiRegister));
summary->set_out(0, marshaller_.LocInFfiCall(compiler::ffi::kResultIndex));
for (intptr_t i = 0, n = marshaller_.num_args(); i < n; ++i) {
for (intptr_t i = 0, n = marshaller_.NumDefinitions(); i < n; ++i) {
summary->set_in(i, marshaller_.LocInFfiCall(i));
}
if (marshaller_.PassTypedData()) {
// The register allocator already preserves this value across the call on
// a stack slot, so we'll use the spilled value directly.
summary->set_in(TypedDataIndex(), Location::RequiresStackSlot());
// We don't care about return location, but we need to pass a register.
summary->set_out(
0, Location::RegisterLocation(CallingConventions::kReturnReg));
} else {
summary->set_out(0, marshaller_.LocInFfiCall(compiler::ffi::kResultIndex));
}
return summary;
}
void FfiCallInstr::EmitParamMoves(FlowGraphCompiler* compiler) {
if (compiler::Assembler::EmittingComments()) {
__ Comment("EmitParamMoves");
}
const Register saved_fp = locs()->temp(0).reg();
const Register temp = locs()->temp(1).reg();
// Moves for return pointer.
const auto& return_location =
marshaller_.Location(compiler::ffi::kResultIndex);
if (return_location.IsPointerToMemory()) {
const auto& pointer_location =
return_location.AsPointerToMemory().pointer_location();
const auto& pointer_register =
pointer_location.IsRegisters()
? pointer_location.AsRegisters().reg_at(0)
: temp;
__ MoveRegister(pointer_register, SPREG);
__ AddImmediate(pointer_register, marshaller_.PassByPointerStackOffset(
compiler::ffi::kResultIndex));
if (pointer_location.IsStack()) {
const auto& pointer_stack = pointer_location.AsStack();
__ StoreMemoryValue(pointer_register, pointer_stack.base_register(),
pointer_stack.offset_in_bytes());
}
}
// Moves for arguments.
compiler::ffi::FrameRebase rebase(zone_, /*old_base=*/FPREG,
/*new_base=*/saved_fp,
/*stack_delta=*/0);
for (intptr_t i = 0, n = NativeArgCount(); i < n; ++i) {
const Location origin = rebase.Rebase(locs()->in(i));
const Representation origin_rep = RequiredInputRepresentation(i);
const auto& target = marshaller_.Location(i);
ConstantTemporaryAllocator temp_alloc(temp);
if (origin.IsConstant()) {
compiler->EmitMoveConst(target, origin, origin_rep, &temp_alloc);
} else {
compiler->EmitMoveToNative(target, origin, origin_rep, &temp_alloc);
intptr_t def_index = 0;
for (intptr_t arg_index = 0; arg_index < marshaller_.num_args();
arg_index++) {
const intptr_t num_defs = marshaller_.NumDefinitions(arg_index);
const auto& arg_target = marshaller_.Location(arg_index);
// First deal with moving all individual definitions passed in to the
// FfiCall to the right native location based on calling convention.
for (intptr_t i = 0; i < num_defs; i++) {
const Location origin = rebase.Rebase(locs()->in(def_index));
const Representation origin_rep =
RequiredInputRepresentation(def_index) == kTagged
? kUnboxedFfiIntPtr // When arg_target.IsPointerToMemory().
: RequiredInputRepresentation(def_index);
// Find the native location where this individual definition should be
// moved to.
const auto& def_target =
arg_target.payload_type().IsPrimitive()
? arg_target
: arg_target.IsMultiple()
? *arg_target.AsMultiple().locations()[i]
: arg_target.IsPointerToMemory()
? arg_target.AsPointerToMemory().pointer_location()
: /*arg_target.IsStack()*/ arg_target.Split(
zone_, num_defs, i);
ConstantTemporaryAllocator temp_alloc(temp);
if (origin.IsConstant()) {
compiler->EmitMoveConst(def_target, origin, origin_rep, &temp_alloc);
} else {
compiler->EmitMoveToNative(def_target, origin, origin_rep, &temp_alloc);
}
def_index++;
}
// Then make sure that any pointers passed through the calling convention
// actually have a copy of the struct.
// Note that the step above has already moved the pointer into the expected
// native location.
if (arg_target.IsPointerToMemory()) {
NoTemporaryAllocator temp_alloc;
const auto& pointer_loc =
arg_target.AsPointerToMemory().pointer_location();
// TypedData/Pointer data pointed to in temp.
const auto& dst = compiler::ffi::NativeRegistersLocation(
zone_, pointer_loc.payload_type(), pointer_loc.container_type(),
temp);
compiler->EmitNativeMove(dst, pointer_loc, &temp_alloc);
__ LoadField(
temp,
compiler::FieldAddress(
temp, compiler::target::TypedDataBase::data_field_offset()));
// Copy chunks.
const intptr_t sp_offset =
marshaller_.PassByPointerStackOffset(arg_index);
// Struct size is rounded up to a multiple of target::kWordSize.
// This is safe because we do the same rounding when we allocate the
// space on the stack.
for (intptr_t i = 0; i < arg_target.payload_type().SizeInBytes();
i += compiler::target::kWordSize) {
__ LoadMemoryValue(TMP, temp, i);
__ StoreMemoryValue(TMP, SPREG, i + sp_offset);
}
// Store the stack address in the argument location.
__ MoveRegister(temp, SPREG);
__ AddImmediate(temp, sp_offset);
const auto& src = compiler::ffi::NativeRegistersLocation(
zone_, pointer_loc.payload_type(), pointer_loc.container_type(),
temp);
compiler->EmitNativeMove(pointer_loc, src, &temp_alloc);
}
}
if (compiler::Assembler::EmittingComments()) {
__ Comment("EmitParamMovesEnd");
}
}
void FfiCallInstr::EmitReturnMoves(FlowGraphCompiler* compiler) {
const auto& src = marshaller_.Location(compiler::ffi::kResultIndex);
if (src.payload_type().IsVoid()) {
__ Comment("EmitReturnMoves");
const auto& returnLocation =
marshaller_.Location(compiler::ffi::kResultIndex);
if (returnLocation.payload_type().IsVoid()) {
return;
}
const Location dst_loc = locs()->out(0);
const Representation dst_type = representation();
NoTemporaryAllocator no_temp;
compiler->EmitMoveFromNative(dst_loc, dst_type, src, &no_temp);
if (returnLocation.IsRegisters() || returnLocation.IsFpuRegisters()) {
const auto& src = returnLocation;
const Location dst_loc = locs()->out(0);
const Representation dst_type = representation();
compiler->EmitMoveFromNative(dst_loc, dst_type, src, &no_temp);
} else if (returnLocation.IsPointerToMemory() ||
returnLocation.IsMultiple()) {
ASSERT(returnLocation.payload_type().IsCompound());
ASSERT(marshaller_.PassTypedData());
const Register temp0 = TMP != kNoRegister ? TMP : locs()->temp(0).reg();
const Register temp1 = locs()->temp(1).reg();
ASSERT(temp0 != temp1);
// Get the typed data pointer which we have pinned to a stack slot.
const Location typed_data_loc = locs()->in(TypedDataIndex());
ASSERT(typed_data_loc.IsStackSlot());
ASSERT(typed_data_loc.base_reg() == FPREG);
__ LoadMemoryValue(temp0, FPREG, 0);
__ LoadMemoryValue(temp0, temp0, typed_data_loc.ToStackSlotOffset());
__ LoadField(
temp0,
compiler::FieldAddress(
temp0, compiler::target::TypedDataBase::data_field_offset()));
if (returnLocation.IsPointerToMemory()) {
// Copy blocks from the stack location to TypedData.
// Struct size is rounded up to a multiple of target::kWordSize.
// This is safe because we do the same rounding when we allocate the
// TypedData in IL.
const intptr_t sp_offset =
marshaller_.PassByPointerStackOffset(compiler::ffi::kResultIndex);
for (intptr_t i = 0; i < marshaller_.TypedDataSizeInBytes();
i += compiler::target::kWordSize) {
__ LoadMemoryValue(temp1, SPREG, i + sp_offset);
__ StoreMemoryValue(temp1, temp0, i);
}
} else {
ASSERT(returnLocation.IsMultiple());
// Copy to the struct from the native locations.
const auto& multiple =
marshaller_.Location(compiler::ffi::kResultIndex).AsMultiple();
int offset_in_bytes = 0;
for (int i = 0; i < multiple.locations().length(); i++) {
const auto& src = *multiple.locations().At(i);
const auto& dst = compiler::ffi::NativeStackLocation(
src.payload_type(), src.container_type(), temp0, offset_in_bytes);
compiler->EmitNativeMove(dst, src, &no_temp);
offset_in_bytes += src.payload_type().SizeInBytes();
}
}
} else {
UNREACHABLE();
}
__ Comment("EmitReturnMovesEnd");
}
static Location FirstArgumentLocation() {
@ -6397,10 +6589,37 @@ void RawStoreFieldInstr::EmitNativeCode(FlowGraphCompiler* compiler) {
}
void NativeReturnInstr::EmitReturnMoves(FlowGraphCompiler* compiler) {
const auto& dst1 = marshaller_.Location(compiler::ffi::kResultIndex);
if (dst1.payload_type().IsVoid()) {
return;
}
if (dst1.IsMultiple()) {
Register typed_data_reg = locs()->in(0).reg();
// Load the data pointer out of the TypedData/Pointer.
__ LoadField(typed_data_reg,
compiler::FieldAddress(
typed_data_reg,
compiler::target::TypedDataBase::data_field_offset()));
const auto& multiple = dst1.AsMultiple();
int offset_in_bytes = 0;
for (intptr_t i = 0; i < multiple.locations().length(); i++) {
const auto& dst = *multiple.locations().At(i);
ASSERT(!dst.IsRegisters() ||
dst.AsRegisters().reg_at(0) != typed_data_reg);
const auto& src = compiler::ffi::NativeStackLocation(
dst.payload_type(), dst.container_type(), typed_data_reg,
offset_in_bytes);
NoTemporaryAllocator no_temp;
compiler->EmitNativeMove(dst, src, &no_temp);
offset_in_bytes += dst.payload_type().SizeInBytes();
}
return;
}
const auto& dst = dst1.IsPointerToMemory()
? dst1.AsPointerToMemory().pointer_return_location()
: dst1;
const Location src_loc = locs()->in(0);
const Representation src_type = RequiredInputRepresentation(0);
NoTemporaryAllocator no_temp;
@ -6413,14 +6632,32 @@ LocationSummary* NativeReturnInstr::MakeLocationSummary(Zone* zone,
const intptr_t kNumTemps = 0;
LocationSummary* locs = new (zone)
LocationSummary(zone, kNumInputs, kNumTemps, LocationSummary::kNoCall);
ASSERT(marshaller_.NumReturnDefinitions() == 1);
const auto& native_loc = marshaller_.Location(compiler::ffi::kResultIndex);
const auto& native_return_loc =
native_loc.IsPointerToMemory()
? native_loc.AsPointerToMemory().pointer_return_location()
: native_loc;
if (native_loc.IsMultiple()) {
// We pass in a typed data for easy copying in machine code.
// Can be any register which does not conflict with return registers.
Register typed_data_reg = CallingConventions::kSecondNonArgumentRegister;
ASSERT(typed_data_reg != CallingConventions::kReturnReg);
ASSERT(typed_data_reg != CallingConventions::kSecondReturnReg);
locs->set_in(0, Location::RegisterLocation(typed_data_reg));
} else {
locs->set_in(0, native_return_loc.AsLocation());
}
return locs;
}
#undef Z
Representation FfiCallInstr::representation() const {
if (marshaller_.PassTypedData()) {
// Don't care, we're discarding the value.
return kTagged;
}
return marshaller_.RepInFfiCall(compiler::ffi::kResultIndex);
}


@ -2622,16 +2622,13 @@ class ParameterInstr : public Definition {
class NativeParameterInstr : public Definition {
public:
NativeParameterInstr(const compiler::ffi::CallbackMarshaller& marshaller,
intptr_t def_index)
: marshaller_(marshaller), def_index_(def_index) {}
DECLARE_INSTRUCTION(NativeParameter)
virtual Representation representation() const {
return marshaller_.RepInFfiCall(def_index_);
}
intptr_t InputCount() const { return 0; }
@ -2655,7 +2652,7 @@ class NativeParameterInstr : public Definition {
virtual void RawSetInputAt(intptr_t i, Value* value) { UNREACHABLE(); }
const compiler::ffi::CallbackMarshaller& marshaller_;
const intptr_t def_index_;
DISALLOW_COPY_AND_ASSIGN(NativeParameterInstr);
};
@ -5089,7 +5086,11 @@ class NativeCallInstr : public TemplateDartCall<0> {
// are unboxed and passed through the native calling convention. However, not
// all dart objects can be passed as arguments. Please see the FFI documentation
// for more details.
// TODO(35775): Add link to the documentation when it's written.
//
// Arguments to FfiCallInstr:
// - The arguments to the native call, marshalled in IL as far as possible.
// - The argument address.
// - A TypedData for the return value to populate in machine code (optional).
class FfiCallInstr : public Definition {
public:
FfiCallInstr(Zone* zone,
@ -5098,17 +5099,23 @@ class FfiCallInstr : public Definition {
: Definition(deopt_id),
zone_(zone),
marshaller_(marshaller),
inputs_(marshaller.NumDefinitions() + 1 +
(marshaller.PassTypedData() ? 1 : 0)) {
inputs_.FillWith(
nullptr, 0,
marshaller.NumDefinitions() + 1 + (marshaller.PassTypedData() ? 1 : 0));
}
DECLARE_INSTRUCTION(FfiCall)
// Input index of the function pointer to invoke.
intptr_t TargetAddressIndex() const { return marshaller_.NumDefinitions(); }
// Input index of the typed data to populate if return value is struct.
intptr_t TypedDataIndex() const {
ASSERT(marshaller_.PassTypedData());
return marshaller_.NumDefinitions() + 1;
}
virtual intptr_t InputCount() const { return inputs_.length(); }
virtual Value* InputAt(intptr_t i) const { return inputs_[i]; }


@ -1298,7 +1298,7 @@ void FfiCallInstr::EmitNativeCode(FlowGraphCompiler* compiler) {
__ EnterDartFrame(0, /*load_pool_pointer=*/false);
// Reserve space for arguments and align frame before entering C++ world.
__ ReserveAlignedFrameSpace(marshaller_.RequiredStackSpaceInBytes());
EmitParamMoves(compiler);


@ -1129,7 +1129,7 @@ void FfiCallInstr::EmitNativeCode(FlowGraphCompiler* compiler) {
__ EnterDartFrame(0, PP);
// Make space for arguments and align the frame.
__ ReserveAlignedFrameSpace(marshaller_.RequiredStackSpaceInBytes());
EmitParamMoves(compiler);
@ -1206,11 +1206,11 @@ void NativeReturnInstr::EmitNativeCode(FlowGraphCompiler* compiler) {
// The dummy return address is in LR, no need to pop it as on Intel.
// These can be anything besides the return registers (R0, R1) and THR (R26).
const Register vm_tag_reg = R2;
const Register old_exit_frame_reg = R3;
const Register old_exit_through_ffi_reg = R4;
const Register tmp = R5;
__ PopPair(old_exit_frame_reg, old_exit_through_ffi_reg);
@ -6468,6 +6468,74 @@ void IntConverterInstr::EmitNativeCode(FlowGraphCompiler* compiler) {
}
}
LocationSummary* BitCastInstr::MakeLocationSummary(Zone* zone, bool opt) const {
LocationSummary* summary =
new (zone) LocationSummary(zone, /*num_inputs=*/InputCount(),
/*num_temps=*/0, LocationSummary::kNoCall);
switch (from()) {
case kUnboxedInt32:
case kUnboxedInt64:
summary->set_in(0, Location::RequiresRegister());
break;
case kUnboxedFloat:
case kUnboxedDouble:
summary->set_in(0, Location::RequiresFpuRegister());
break;
default:
UNREACHABLE();
}
switch (to()) {
case kUnboxedInt32:
case kUnboxedInt64:
summary->set_out(0, Location::RequiresRegister());
break;
case kUnboxedFloat:
case kUnboxedDouble:
summary->set_out(0, Location::RequiresFpuRegister());
break;
default:
UNREACHABLE();
}
return summary;
}
void BitCastInstr::EmitNativeCode(FlowGraphCompiler* compiler) {
switch (from()) {
case kUnboxedInt32: {
ASSERT(to() == kUnboxedFloat);
const Register from_reg = locs()->in(0).reg();
const FpuRegister to_reg = locs()->out(0).fpu_reg();
__ fmovsr(to_reg, from_reg);
break;
}
case kUnboxedFloat: {
ASSERT(to() == kUnboxedInt32);
const FpuRegister from_reg = locs()->in(0).fpu_reg();
const Register to_reg = locs()->out(0).reg();
__ fmovrs(to_reg, from_reg);
break;
}
case kUnboxedInt64: {
ASSERT(to() == kUnboxedDouble);
const Register from_reg = locs()->in(0).reg();
const FpuRegister to_reg = locs()->out(0).fpu_reg();
__ fmovdr(to_reg, from_reg);
break;
}
case kUnboxedDouble: {
ASSERT(to() == kUnboxedInt64);
const FpuRegister from_reg = locs()->in(0).fpu_reg();
const Register to_reg = locs()->out(0).reg();
__ fmovrd(to_reg, from_reg);
break;
}
default:
UNREACHABLE();
}
}
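The `fmov` pairs above reinterpret register bits between integer and floating-point registers without any numeric conversion. A portable sketch of the same bit cast using `memcpy` (illustrative helper names, not VM code):

```cpp
#include <cstdint>
#include <cstring>

// Reinterpret the bits of a float as a 32-bit integer, like fmovrs.
uint32_t FloatBits(float f) {
  uint32_t bits;
  std::memcpy(&bits, &f, sizeof(bits));
  return bits;
}

// Reinterpret 32 bits as a float, like fmovsr.
float FloatFromBits(uint32_t bits) {
  float f;
  std::memcpy(&f, &bits, sizeof(f));
  return f;
}
```

This is why Store/LoadIndexed on float arrays can be replaced by int32 loads plus a `BitCastInstr`: the bytes are identical, only the register class changes.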
LocationSummary* StopInstr::MakeLocationSummary(Zone* zone, bool opt) const {
return new (zone) LocationSummary(zone, 0, 0, LocationSummary::kNoCall);
}


@ -338,6 +338,7 @@ void NativeReturnInstr::EmitNativeCode(FlowGraphCompiler* compiler) {
// Leave the entry frame.
__ LeaveFrame();
// We deal with `ret 4` for structs in the JIT callback trampolines.
__ ret();
}
@ -998,7 +999,7 @@ void FfiCallInstr::EmitNativeCode(FlowGraphCompiler* compiler) {
// We need to create a dummy "exit frame". It will have a null code object.
__ LoadObject(CODE_REG, Object::null_object());
__ EnterDartFrame(marshaller_.RequiredStackSpaceInBytes());
// Align frame before entering C++ world.
if (OS::ActivationFrameAlignment() > 1) {
@ -1035,6 +1036,16 @@ void FfiCallInstr::EmitNativeCode(FlowGraphCompiler* compiler) {
ASSERT(temp == EBX && branch == EAX);
__ call(temp);
// Restore the stack when a struct by value is returned into memory pointed
// to by a pointer that is passed into the function.
if (CallingConventions::kUsesRet4 &&
marshaller_.Location(compiler::ffi::kResultIndex).IsPointerToMemory()) {
// Callee uses `ret 4` instead of `ret` to return.
// See: https://c9x.me/x86/html/file_module_x86_id_280.html
// Caller does `sub esp, 4` immediately after return to balance stack.
__ subl(SPREG, compiler::Immediate(compiler::target::kWordSize));
}
// The x86 calling convention requires floating point values to be returned on
// the "floating-point stack" (aka. register ST0). We don't use the
// floating-point stack in Dart, so we need to move the return value back into


@ -1067,11 +1067,26 @@ void ReturnInstr::PrintOperandsTo(BaseTextBuffer* f) const {
void FfiCallInstr::PrintOperandsTo(BaseTextBuffer* f) const {
f->AddString(" pointer=");
InputAt(TargetAddressIndex())->PrintTo(f);
if (marshaller_.PassTypedData()) {
f->AddString(", typed_data=");
InputAt(TypedDataIndex())->PrintTo(f);
}
intptr_t def_index = 0;
for (intptr_t arg_index = 0; arg_index < marshaller_.num_args();
arg_index++) {
const auto& arg_location = marshaller_.Location(arg_index);
const bool is_compound = arg_location.container_type().IsCompound();
const intptr_t num_defs = marshaller_.NumDefinitions(arg_index);
f->AddString(", ");
if (is_compound) f->AddString("(");
for (intptr_t i = 0; i < num_defs; i++) {
InputAt(def_index)->PrintTo(f);
if ((i + 1) < num_defs) f->AddString(", ");
def_index++;
}
if (is_compound) f->AddString(")");
f->AddString(" (@");
arg_location.PrintTo(f);
f->AddString(")");
}
}
@ -1093,10 +1108,10 @@ void NativeReturnInstr::PrintOperandsTo(BaseTextBuffer* f) const {
void NativeParameterInstr::PrintOperandsTo(BaseTextBuffer* f) const {
// Where the calling convention puts it.
marshaller_.Location(marshaller_.ArgumentIndex(def_index_)).PrintTo(f);
f->AddString(" at ");
// Where the arguments are when pushed on the stack.
marshaller_.NativeLocationOfNativeParameter(def_index_).PrintTo(f);
}
void CatchBlockEntryInstr::PrintTo(BaseTextBuffer* f) const {


@ -1060,7 +1060,7 @@ void FfiCallInstr::EmitNativeCode(FlowGraphCompiler* compiler) {
// but have a null code object.
__ LoadObject(CODE_REG, Object::null_object());
__ set_constant_pool_allowed(false);
__ EnterDartFrame(marshaller_.StackTopInBytes(), PP);
__ EnterDartFrame(marshaller_.RequiredStackSpaceInBytes(), PP);
// Align frame before entering C++ world.
if (OS::ActivationFrameAlignment() > 1) {


@ -1370,7 +1370,7 @@ void FlowGraphAllocator::ProcessOneInstruction(BlockEntryInstr* block,
}
}
// Block all allocatable registers for calls.
if (locs->always_calls() && !locs->callee_safe_call()) {
// Expected shape of live range:
//
@ -1407,7 +1407,8 @@ void FlowGraphAllocator::ProcessOneInstruction(BlockEntryInstr* block,
pair->At(1).policy() == Location::kAny);
} else {
ASSERT(!locs->in(j).IsUnallocated() ||
locs->in(j).policy() == Location::kAny ||
locs->in(j).policy() == Location::kRequiresStackSlot);
}
}


@ -155,7 +155,8 @@ void LocationSummary::set_in(intptr_t index, Location loc) {
// restrictions.
if (always_calls()) {
if (loc.IsUnallocated()) {
ASSERT(loc.policy() == Location::kAny ||
loc.policy() == Location::kRequiresStackSlot);
} else if (loc.IsPairLocation()) {
ASSERT(!loc.AsPairLocation()->At(0).IsUnallocated() ||
loc.AsPairLocation()->At(0).policy() == Location::kAny);
@ -280,6 +281,8 @@ const char* Location::Name() const {
return "R";
case kRequiresFpuRegister:
return "DR";
case kRequiresStackSlot:
return "RS";
case kWritableRegister:
return "WR";
case kSameAsFirstInput:


@ -238,6 +238,7 @@ class Location : public ValueObject {
kPrefersRegister,
kRequiresRegister,
kRequiresFpuRegister,
kRequiresStackSlot,
kWritableRegister,
kSameAsFirstInput,
};
@ -265,6 +266,10 @@ class Location : public ValueObject {
return UnallocatedLocation(kRequiresFpuRegister);
}
static Location RequiresStackSlot() {
return UnallocatedLocation(kRequiresStackSlot);
}
static Location WritableRegister() {
return UnallocatedLocation(kWritableRegister);
}


@ -50,9 +50,6 @@ FunctionPtr NativeCallbackFunction(const Function& c_signature,
//
// Exceptional return values currently cannot be pointers because we don't
// have constant pointers.
//
// TODO(36730): We'll need to extend this when we support passing/returning
// structs by value.
ASSERT(exceptional_return.IsNull() || exceptional_return.IsNumber());
if (!exceptional_return.IsSmi() && exceptional_return.IsNew()) {
function.SetFfiCallbackExceptionalReturn(Instance::Handle(


@ -75,11 +75,224 @@ bool BaseMarshaller::ContainsHandles() const {
return dart_signature_.FfiCSignatureContainsHandles();
}
intptr_t BaseMarshaller::NumDefinitions() const {
intptr_t total = 0;
for (intptr_t i = 0; i < num_args(); i++) {
total += NumDefinitions(i);
}
return total;
}
intptr_t BaseMarshaller::NumDefinitions(intptr_t arg_index) const {
if (ArgumentIndexIsReturn(arg_index)) {
return NumReturnDefinitions();
}
const auto& loc = Location(arg_index);
const auto& type = loc.payload_type();
if (type.IsPrimitive()) {
// All non-struct arguments are 1 definition in IL. Even 64 bit values
// on 32 bit architectures.
return 1;
}
ASSERT(type.IsCompound());
if (loc.IsMultiple()) {
// One IL definition for every nested location.
const auto& multiple = loc.AsMultiple();
return multiple.locations().length();
}
if (loc.IsPointerToMemory()) {
// For FFI calls, pass in TypedDataBase (1 IL definition) in IL, and copy
// contents to stack and pass pointer in right location in MC.
// For FFI callbacks, get the pointer in a NativeParameter and construct
// the TypedDataBase in IL.
return 1;
}
ASSERT(loc.IsStack());
// For stack, word size definitions in IL. In FFI calls passed in to the
// native call, in FFI callbacks read in separate NativeParams.
const intptr_t size_in_bytes = type.SizeInBytes();
const intptr_t num_defs =
Utils::RoundUp(size_in_bytes, compiler::target::kWordSize) /
compiler::target::kWordSize;
return num_defs;
}
intptr_t BaseMarshaller::NumReturnDefinitions() const {
// For FFI calls we always have 1 definition, because the IL instruction can
// only be 1 definition. We pass in a TypedDataBase in IL and fill it in
// machine code.
//
// For FFI callbacks we always have 1 definition. If it's a struct and the
// native ABI is passing a pointer, we copy to it in IL. If it's a multiple
// locations return value we copy the value in machine code because some
// native locations cannot be expressed in IL in Location.
return 1;
}
bool BaseMarshaller::ArgumentIndexIsReturn(intptr_t arg_index) const {
ASSERT(arg_index == kResultIndex || arg_index >= 0);
return arg_index == kResultIndex;
}
// Definitions in return value count down.
bool BaseMarshaller::DefinitionIndexIsReturn(intptr_t def_index_global) const {
return def_index_global <= kResultIndex;
}
intptr_t BaseMarshaller::ArgumentIndex(intptr_t def_index_global) const {
if (DefinitionIndexIsReturn(def_index_global)) {
const intptr_t def = DefinitionInArgument(def_index_global, kResultIndex);
ASSERT(def < NumReturnDefinitions());
return kResultIndex;
}
ASSERT(def_index_global < NumDefinitions());
intptr_t defs = 0;
intptr_t arg_index = 0;
for (; arg_index < num_args(); arg_index++) {
defs += NumDefinitions(arg_index);
if (defs > def_index_global) {
return arg_index;
}
}
UNREACHABLE();
}
intptr_t BaseMarshaller::FirstDefinitionIndex(intptr_t arg_index) const {
if (arg_index <= kResultIndex) {
return kResultIndex;
}
ASSERT(arg_index < num_args());
intptr_t num_defs = 0;
for (intptr_t i = 0; i < arg_index; i++) {
num_defs += NumDefinitions(i);
}
return num_defs;
}
intptr_t BaseMarshaller::DefinitionInArgument(intptr_t def_index_global,
intptr_t arg_index) const {
if (ArgumentIndexIsReturn(arg_index)) {
// Counting down for return definitions.
const intptr_t def = kResultIndex - def_index_global;
ASSERT(def < NumReturnDefinitions());
return def;
} else {
// Counting up for arguments in consecutive order.
const intptr_t def = def_index_global - FirstDefinitionIndex(arg_index);
ASSERT(def < NumDefinitions());
return def;
}
}
intptr_t BaseMarshaller::DefinitionIndex(intptr_t def_index_in_arg,
intptr_t arg_index) const {
ASSERT(def_index_in_arg < NumDefinitions(arg_index));
if (ArgumentIndexIsReturn(arg_index)) {
return kResultIndex - def_index_in_arg;
} else {
return FirstDefinitionIndex(arg_index) + def_index_in_arg;
}
}
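The indexing scheme above can be modeled in miniature: argument definitions are numbered upward from 0 in argument order, while return-value definitions count downward from kResultIndex (-1). A toy model of the global-index-to-argument mapping (`ToyMarshaller` is hypothetical, with plain per-argument definition counts standing in for `NumDefinitions(arg_index)`):

```cpp
#include <vector>

// Toy model of the marshaller's definition indexing. Argument definitions
// are numbered upward from 0; the return value uses kResultIndex.
constexpr int kResultIndex = -1;

struct ToyMarshaller {
  std::vector<int> defs_per_arg;  // Models NumDefinitions(arg_index).

  // Global index of the first definition belonging to `arg_index`.
  int FirstDefinitionIndex(int arg_index) const {
    int n = 0;
    for (int i = 0; i < arg_index; i++) n += defs_per_arg[i];
    return n;
  }

  // Maps a global definition index back to its argument index.
  int ArgumentIndex(int def_index_global) const {
    if (def_index_global <= kResultIndex) return kResultIndex;
    int defs = 0;
    for (int arg = 0; arg < static_cast<int>(defs_per_arg.size()); arg++) {
      defs += defs_per_arg[arg];
      if (defs > def_index_global) return arg;
    }
    return kResultIndex;  // Unreachable for valid input.
  }
};
```

For example, with two arguments of 2 and 1 definitions, global indices 0 and 1 belong to argument 0, index 2 to argument 1, and any index at or below -1 to the return value.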
static Representation SelectRepresentationInIL(Zone* zone,
const NativeLocation& location) {
if (location.container_type().IsInt() && location.payload_type().IsFloat()) {
// IL can only pass integers to integer Locations, so pass as integer if
// the Location requires it to be an integer.
return location.container_type().AsRepresentationOverApprox(zone);
}
// Representations do not support 8 or 16 bit ints, over approximate to 32
// bits.
return location.payload_type().AsRepresentationOverApprox(zone);
}
// Implemented partially in BaseMarshaller because most Representations are
// the same in Calls and Callbacks.
Representation BaseMarshaller::RepInFfiCall(intptr_t def_index_global) const {
intptr_t arg_index = ArgumentIndex(def_index_global);
const auto& location = Location(arg_index);
if (location.container_type().IsPrimitive()) {
return SelectRepresentationInIL(zone_, location);
}
ASSERT(location.container_type().IsCompound());
if (location.IsStack()) {
// Split the struct in architecture size chunks.
return compiler::target::kWordSize == 8 ? Representation::kUnboxedInt64
: Representation::kUnboxedInt32;
}
if (location.IsMultiple()) {
const intptr_t def_index_in_arg =
DefinitionInArgument(def_index_global, arg_index);
const auto& def_loc =
*(location.AsMultiple().locations()[def_index_in_arg]);
return SelectRepresentationInIL(zone_, def_loc);
}
ASSERT(location.IsPointerToMemory());
UNREACHABLE(); // Implemented in subclasses.
}
Representation CallMarshaller::RepInFfiCall(intptr_t def_index_global) const {
intptr_t arg_index = ArgumentIndex(def_index_global);
const auto& location = Location(arg_index);
if (location.IsPointerToMemory()) {
if (ArgumentIndexIsReturn(arg_index)) {
// The IL type is the unboxed pointer.
const auto& pointer_location = location.AsPointerToMemory();
const auto& rep = pointer_location.pointer_location().payload_type();
ASSERT(rep.Equals(
pointer_location.pointer_return_location().payload_type()));
return rep.AsRepresentation();
} else {
// We're passing Pointer/TypedData object, the GC might move TypedData so
// we can't load the address from it eagerly.
return kTagged;
}
}
return BaseMarshaller::RepInFfiCall(def_index_global);
}
Representation CallbackMarshaller::RepInFfiCall(
intptr_t def_index_global) const {
intptr_t arg_index = ArgumentIndex(def_index_global);
const auto& location = Location(arg_index);
if (location.IsPointerToMemory()) {
// The IL type is the unboxed pointer, and FFI callback return. In the
// latter we've already copied the data into the result location in IL.
const auto& pointer_location = location.AsPointerToMemory();
const auto& rep = pointer_location.pointer_location().payload_type();
ASSERT(
rep.Equals(pointer_location.pointer_return_location().payload_type()));
return rep.AsRepresentation();
}
if (ArgumentIndexIsReturn(arg_index) && location.IsMultiple()) {
// We're passing a TypedData.
return Representation::kTagged;
}
return BaseMarshaller::RepInFfiCall(def_index_global);
}
void BaseMarshaller::RepsInFfiCall(intptr_t arg_index,
GrowableArray<Representation>* out) const {
const intptr_t num_definitions = NumDefinitions(arg_index);
const intptr_t first_def = FirstDefinitionIndex(arg_index);
for (int i = 0; i < num_definitions; i++) {
out->Add(RepInFfiCall(first_def + i));
}
}
// Helper method for `LocInFfiCall` to turn a stack location into either any
// location or a pair of two any locations.
static Location ConvertToAnyLocation(const NativeStackLocation& loc,
Representation rep_in_ffi_call) {
// Floating point values are never split: they are either in a single "FPU"
// register or a contiguous 64-bit slot on the stack. Unboxed 64-bit integer
// values, in contrast, can be split between any two registers on a 32-bit
@ -90,33 +303,149 @@ Location CallMarshaller::LocInFfiCall(intptr_t arg_index) const {
// convention is concerned. However, the representation of these arguments
// are set to kUnboxedInt32 or kUnboxedInt64 already, so we don't have to
// account for that here.
const bool is_atomic =
rep_in_ffi_call == kUnboxedDouble || rep_in_ffi_call == kUnboxedFloat;
if (loc.payload_type().IsPrimitive() &&
loc.payload_type().SizeInBytes() == 2 * compiler::target::kWordSize &&
!is_atomic) {
return Location::Pair(Location::Any(), Location::Any());
}
return Location::Any();
}
static Location SelectFpuLocationInIL(Zone* zone,
const NativeLocation& location) {
ASSERT((location.IsFpuRegisters()));
#if defined(TARGET_ARCH_ARM)
// Only pin FPU register if it is the lowest bits.
const auto& fpu_loc = location.AsFpuRegisters();
if (fpu_loc.IsLowestBits()) {
return fpu_loc.WidenToQFpuRegister(zone).AsLocation();
}
return Location::Any();
#endif // defined(TARGET_ARCH_ARM)
return location.AsLocation();
}
Location CallMarshaller::LocInFfiCall(intptr_t def_index_global) const {
const intptr_t arg_index = ArgumentIndex(def_index_global);
const NativeLocation& loc = this->Location(arg_index);
if (ArgumentIndexIsReturn(arg_index)) {
const intptr_t def = kResultIndex - def_index_global;
if (loc.IsMultiple()) {
ASSERT(loc.AsMultiple().locations()[def]->IsExpressibleAsLocation());
return loc.AsMultiple().locations()[def]->AsLocation();
}
if (loc.IsPointerToMemory()) {
// No location at all, because we store into TypedData passed to the
// FfiCall instruction. But we have to supply a location.
return Location::RegisterLocation(CallingConventions::kReturnReg);
}
return loc.AsLocation();
}
if (loc.IsMultiple()) {
const intptr_t def_index_in_arg =
def_index_global - FirstDefinitionIndex(arg_index);
const auto& def_loc = *(loc.AsMultiple().locations()[def_index_in_arg]);
if (def_loc.IsStack()) {
// Don't pin stack locations, they need to be moved anyway.
return ConvertToAnyLocation(def_loc.AsStack(),
RepInFfiCall(def_index_global));
}
if (def_loc.IsFpuRegisters()) {
return SelectFpuLocationInIL(zone_, def_loc);
}
return def_loc.AsLocation();
}
if (loc.IsPointerToMemory()) {
const auto& pointer_location = loc.AsPointerToMemory().pointer_location();
if (pointer_location.IsStack()) {
// Don't pin stack locations, they need to be moved anyway.
return ConvertToAnyLocation(pointer_location.AsStack(),
RepInFfiCall(def_index_global));
}
return pointer_location.AsLocation();
}
if (loc.IsStack()) {
return ConvertToAnyLocation(loc.AsStack(), RepInFfiCall(def_index_global));
}
if (loc.IsFpuRegisters()) {
return SelectFpuLocationInIL(zone_, loc);
}
ASSERT(loc.IsRegisters());
return loc.AsLocation();
}
bool CallMarshaller::PassTypedData() const {
return IsStruct(compiler::ffi::kResultIndex);
}
intptr_t CallMarshaller::TypedDataSizeInBytes() const {
ASSERT(PassTypedData());
return Utils::RoundUp(
Location(compiler::ffi::kResultIndex).payload_type().SizeInBytes(),
compiler::target::kWordSize);
}
// Const to be able to look up the `RequiredStackSpaceInBytes` in
// `PassByPointerStackOffset`.
const intptr_t kAfterLastArgumentIndex = kIntptrMax;
intptr_t CallMarshaller::PassByPointerStackOffset(intptr_t arg_index) const {
ASSERT(arg_index == kResultIndex ||
(arg_index >= 0 && arg_index < num_args()) ||
arg_index == kAfterLastArgumentIndex);
intptr_t stack_offset = 0;
// First the native arguments are on the stack.
// This is governed by the native ABI; the layout of the rest we can
// choose freely.
stack_offset += native_calling_convention_.StackTopInBytes();
stack_offset = Utils::RoundUp(stack_offset, compiler::target::kWordSize);
if (arg_index == kResultIndex) {
return stack_offset;
}
// Then save space for the result.
const auto& result_location = Location(compiler::ffi::kResultIndex);
if (result_location.IsPointerToMemory()) {
stack_offset += result_location.payload_type().SizeInBytes();
stack_offset = Utils::RoundUp(stack_offset, compiler::target::kWordSize);
}
// And finally put the arguments on the stack that are passed by pointer.
for (int i = 0; i < num_args(); i++) {
if (arg_index == i) {
return stack_offset;
}
const auto& arg_location = Location(i);
if (arg_location.IsPointerToMemory()) {
stack_offset += arg_location.payload_type().SizeInBytes();
stack_offset = Utils::RoundUp(stack_offset, compiler::target::kWordSize);
}
}
// The total stack space we need.
ASSERT(arg_index == kAfterLastArgumentIndex);
return stack_offset;
}
intptr_t CallMarshaller::RequiredStackSpaceInBytes() const {
return PassByPointerStackOffset(kAfterLastArgumentIndex);
}
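The layout computed above places three regions on the stack: the native-ABI argument space first, then the by-pointer result copy, then the by-pointer argument copies, each rounded up to a word multiple. A toy version of that running-offset computation (hypothetical names, plain byte sizes standing in for NativeLocations):

```cpp
#include <cstdint>
#include <vector>

constexpr intptr_t kWord = sizeof(uintptr_t);

constexpr intptr_t RoundUpWord(intptr_t v) {
  return (v + kWord - 1) & ~(kWord - 1);
}

// Toy layout. Returns the offset of the result copy, then one offset per
// by-pointer argument copy, then the total required stack space as the
// last element. `native_args_bytes` models StackTopInBytes().
std::vector<intptr_t> LayoutByPointerCopies(
    intptr_t native_args_bytes,
    intptr_t result_bytes,
    const std::vector<intptr_t>& by_pointer_arg_bytes) {
  std::vector<intptr_t> offsets;
  intptr_t offset = RoundUpWord(native_args_bytes);
  offsets.push_back(offset);  // Result copy comes right after ABI space.
  offset += RoundUpWord(result_bytes);
  for (intptr_t size : by_pointer_arg_bytes) {
    offsets.push_back(offset);
    offset += RoundUpWord(size);
  }
  offsets.push_back(offset);  // Total stack space required.
  return offsets;
}
```

The sentinel-index trick in the real code corresponds to the last element here: asking for the offset "after the last argument" yields the total space, which is what `RequiredStackSpaceInBytes` returns.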
// This class translates the ABI locations of arguments into the locations they
// will inhabit after entry-frame setup in the invocation of a native callback.
//
@ -133,15 +462,26 @@ class CallbackArgumentTranslator : public ValueObject {
public:
static NativeLocations& TranslateArgumentLocations(
Zone* zone,
const NativeLocations& arg_locs) {
auto& pushed_locs = *(new (zone) NativeLocations(arg_locs.length()));
const NativeLocations& argument_locations,
const NativeLocation& return_loc) {
const bool treat_return_loc = return_loc.IsPointerToMemory();
auto& pushed_locs = *(new (zone) NativeLocations(
argument_locations.length() + (treat_return_loc ? 1 : 0)));
CallbackArgumentTranslator translator;
for (intptr_t i = 0, n = arg_locs.length(); i < n; i++) {
translator.AllocateArgument(*arg_locs[i]);
for (intptr_t i = 0, n = argument_locations.length(); i < n; i++) {
translator.AllocateArgument(*argument_locations[i]);
}
for (intptr_t i = 0, n = arg_locs.length(); i < n; ++i) {
pushed_locs.Add(&translator.TranslateArgument(zone, *arg_locs[i]));
if (treat_return_loc) {
translator.AllocateArgument(return_loc);
}
for (intptr_t i = 0, n = argument_locations.length(); i < n; ++i) {
pushed_locs.Add(
&translator.TranslateArgument(zone, *argument_locations[i]));
}
if (treat_return_loc) {
pushed_locs.Add(&translator.TranslateArgument(zone, return_loc));
}
return pushed_locs;
@ -155,8 +495,16 @@ class CallbackArgumentTranslator : public ValueObject {
argument_slots_required_ += arg.AsRegisters().num_regs();
} else if (arg.IsFpuRegisters()) {
argument_slots_required_ += 8 / target::kWordSize;
} else if (arg.IsPointerToMemory()) {
if (arg.AsPointerToMemory().pointer_location().IsRegisters()) {
argument_slots_required_ += 1;
}
} else {
UNREACHABLE();
ASSERT(arg.IsMultiple());
const auto& multiple = arg.AsMultiple();
for (intptr_t i = 0; i < multiple.locations().length(); i++) {
AllocateArgument(*multiple.locations().At(i));
}
}
}
@ -192,12 +540,33 @@ class CallbackArgumentTranslator : public ValueObject {
return result;
}
ASSERT(arg.IsFpuRegisters());
const auto& result = *new (zone) NativeStackLocation(
arg.payload_type(), arg.container_type(), SPREG,
argument_slots_used_ * compiler::target::kWordSize);
argument_slots_used_ += 8 / target::kWordSize;
return result;
if (arg.IsFpuRegisters()) {
const auto& result = *new (zone) NativeStackLocation(
arg.payload_type(), arg.container_type(), SPREG,
argument_slots_used_ * compiler::target::kWordSize);
argument_slots_used_ += 8 / target::kWordSize;
return result;
}
if (arg.IsPointerToMemory()) {
const auto& pointer_loc = arg.AsPointerToMemory().pointer_location();
const auto& pointer_ret_loc =
arg.AsPointerToMemory().pointer_return_location();
const auto& pointer_translated = TranslateArgument(zone, pointer_loc);
return *new (zone) PointerToMemoryLocation(
pointer_translated, pointer_ret_loc, arg.payload_type().AsCompound());
}
ASSERT(arg.IsMultiple());
const auto& multiple = arg.AsMultiple();
NativeLocations& multiple_locations =
*new (zone) NativeLocations(multiple.locations().length());
for (intptr_t i = 0; i < multiple.locations().length(); i++) {
multiple_locations.Add(
&TranslateArgument(zone, *multiple.locations().At(i)));
}
return *new (zone) MultipleNativeLocations(
multiple.payload_type().AsCompound(), multiple_locations);
}
intptr_t argument_slots_used_ = 0;
@ -209,7 +578,46 @@ CallbackMarshaller::CallbackMarshaller(Zone* zone,
: BaseMarshaller(zone, dart_signature),
callback_locs_(CallbackArgumentTranslator::TranslateArgumentLocations(
zone_,
native_calling_convention_.argument_locations())) {}
native_calling_convention_.argument_locations(),
native_calling_convention_.return_location())) {}
const NativeLocation& CallbackMarshaller::NativeLocationOfNativeParameter(
intptr_t def_index) const {
const intptr_t arg_index = ArgumentIndex(def_index);
if (arg_index == kResultIndex) {
const auto& result_loc = Location(arg_index);
if (result_loc.IsPointerToMemory()) {
// If the result is passed as a pointer, it occupies the last
// translated callback location.
return *callback_locs_.At(callback_locs_.length() - 1);
}
// The other return types are not translated.
return result_loc;
}
// All callback arguments are passed on the stack, either directly, as a
// pointer to memory, or as multiple stack locations.
const auto& loc = *callback_locs_.At(arg_index);
ASSERT(loc.IsStack() || loc.IsPointerToMemory() || loc.IsMultiple());
if (loc.IsStack()) {
ASSERT(loc.AsStack().base_register() == SPREG);
if (loc.payload_type().IsPrimitive()) {
return loc;
}
const intptr_t index = DefinitionInArgument(def_index, arg_index);
const intptr_t count = NumDefinitions(arg_index);
return loc.Split(zone_, count, index);
} else if (loc.IsPointerToMemory()) {
const auto& pointer_loc = loc.AsPointerToMemory().pointer_location();
ASSERT(pointer_loc.IsStack() &&
pointer_loc.AsStack().base_register() == SPREG);
return loc;
}
const auto& multiple = loc.AsMultiple();
const intptr_t index = DefinitionInArgument(def_index, arg_index);
const auto& multi_loc = *multiple.locations().At(index);
ASSERT(multi_loc.IsStack() && multi_loc.AsStack().base_register() == SPREG);
return multi_loc;
}
} // namespace ffi
} // namespace compiler


@ -39,9 +39,27 @@ class BaseMarshaller : public ZoneAllocated {
return native_calling_convention_.argument_locations().length();
}
intptr_t StackTopInBytes() const {
return native_calling_convention_.StackTopInBytes();
}
// Number of definitions passed to FfiCall, number of NativeParams, or number
// of definitions passed to NativeReturn in IL.
//
// All non-struct values have 1 definition, struct values can have either 1
// or multiple definitions. If a struct has multiple definitions, they either
// correspond to the number of native locations in the native ABI or to word-
// sized chunks.
//
// `arg_index` is the index of an argument.
// `def_index_in_argument` is the index of a definition within one argument.
// `def_index_global` is the index of the definition across all arguments.
intptr_t NumDefinitions() const;
intptr_t NumDefinitions(intptr_t arg_index) const;
intptr_t NumReturnDefinitions() const;
bool ArgumentIndexIsReturn(intptr_t arg_index) const;
bool DefinitionIndexIsReturn(intptr_t def_index_global) const;
intptr_t ArgumentIndex(intptr_t def_index_global) const;
intptr_t FirstDefinitionIndex(intptr_t arg_index) const;
intptr_t DefinitionInArgument(intptr_t def_index_global,
intptr_t arg_index) const;
intptr_t DefinitionIndex(intptr_t def_index_in_arg, intptr_t arg_index) const;
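As a host-side sketch (not part of this CL), the index bookkeeping these accessors describe can be modeled as follows; `ArgumentOfDefinition` and `defs_per_arg` are hypothetical stand-ins for the marshaller's internal state:

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Maps a global definition index back to (arg_index, def_index_in_argument),
// given the number of IL definitions each argument expands to. A primitive
// argument contributes one definition; a struct split into chunks
// contributes one definition per chunk.
std::pair<int, int> ArgumentOfDefinition(const std::vector<int>& defs_per_arg,
                                         int def_index_global) {
  int arg_index = 0;
  while (def_index_global >= defs_per_arg[arg_index]) {
    def_index_global -= defs_per_arg[arg_index];
    ++arg_index;
  }
  return {arg_index, def_index_global};
}
```

With arguments expanding to {1, 2, 1} definitions, global definition 2 is the second chunk of argument 1, and global definition 3 is argument 2.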
// The location of the argument at `arg_index`.
const NativeLocation& Location(intptr_t arg_index) const {
@ -53,20 +71,18 @@ class BaseMarshaller : public ZoneAllocated {
// Unboxed representation on how the value is passed or received from regular
// Dart code.
//
// Implemented in BaseMarshaller because most Representations are the same
// in Calls and Callbacks.
Representation RepInDart(intptr_t arg_index) const {
return Location(arg_index).payload_type().AsRepresentationOverApprox(zone_);
}
// Representation of how the value is passed to or received from the FfiCall
// instruction or StaticCall, NativeParameter, and NativeReturn instructions.
Representation RepInFfiCall(intptr_t arg_index) const {
if (Location(arg_index).container_type().IsInt() &&
Location(arg_index).payload_type().IsFloat()) {
return Location(arg_index).container_type().AsRepresentationOverApprox(
zone_);
}
return Location(arg_index).payload_type().AsRepresentationOverApprox(zone_);
}
virtual Representation RepInFfiCall(intptr_t def_index_global) const;
void RepsInFfiCall(intptr_t arg_index,
GrowableArray<Representation>* out) const;
// Bitcasting floats to ints, only required in SoftFP.
bool RequiresBitCast(intptr_t index) const {
@ -94,6 +110,12 @@ class BaseMarshaller : public ZoneAllocated {
kFfiHandleCid;
}
bool IsStruct(intptr_t arg_index) const {
const auto& type = AbstractType::Handle(zone_, CType(arg_index));
const bool predefined = IsFfiTypeClassId(type.type_class_id());
return !predefined;
}
// Treated as a null constant in Dart.
bool IsVoid(intptr_t arg_index) const {
return AbstractType::Handle(zone_, CType(arg_index)).type_class_id() ==
@ -122,7 +144,24 @@ class CallMarshaller : public BaseMarshaller {
CallMarshaller(Zone* zone, const Function& dart_signature)
: BaseMarshaller(zone, dart_signature) {}
dart::Location LocInFfiCall(intptr_t arg_index) const;
virtual Representation RepInFfiCall(intptr_t def_index_global) const;
// The location of the inputs to the IL FfiCall instruction.
dart::Location LocInFfiCall(intptr_t def_index_global) const;
// Allocate a TypedData before the FfiCall and pass it in to the FfiCall so
// that it can be populated in assembly.
bool PassTypedData() const;
intptr_t TypedDataSizeInBytes() const;
// We allocate space for PointerToMemory arguments and PointerToMemory return
// locations on the stack. This is faster than allocating ExternalTypedData.
// Normal TypedData is not an option, as these might be relocated by GC
// during FFI calls.
intptr_t PassByPointerStackOffset(intptr_t arg_index) const;
// The total amount of stack space required for FFI trampolines.
intptr_t RequiredStackSpaceInBytes() const;
protected:
~CallMarshaller() {}
@ -132,19 +171,19 @@ class CallbackMarshaller : public BaseMarshaller {
public:
CallbackMarshaller(Zone* zone, const Function& dart_signature);
// All parameters are saved on stack to do safe-point transition.
const NativeLocation& NativeLocationOfNativeParameter(
intptr_t arg_index) const {
if (arg_index == kResultIndex) {
// No moving around of result.
return Location(arg_index);
}
return *callback_locs_.At(arg_index);
}
virtual Representation RepInFfiCall(intptr_t def_index_global) const;
// All parameters are saved on stack to do safe-point transition.
dart::Location LocationOfNativeParameter(intptr_t arg_index) const {
return NativeLocationOfNativeParameter(arg_index).AsLocation();
const NativeLocation& NativeLocationOfNativeParameter(
intptr_t def_index) const;
// All parameters are saved on stack to do safe-point transition.
dart::Location LocationOfNativeParameter(intptr_t def_index) const {
const auto& native_loc = NativeLocationOfNativeParameter(def_index);
if (native_loc.IsPointerToMemory()) {
return native_loc.AsPointerToMemory().pointer_location().AsLocation();
}
return native_loc.AsLocation();
}
protected:


@ -557,7 +557,7 @@ Fragment StreamingFlowGraphBuilder::CompleteBodyWithYieldContinuations(
dispatch += Constant(offsets);
dispatch += LoadLocal(scopes()->switch_variable);
// Ideally this would just be LoadIndexedTypedData(kTypedDataInt32ArrayCid),
// Ideally this would just be LoadIndexed(kTypedDataInt32ArrayCid),
// but that doesn't work in unoptimised code.
// The optimiser will turn this into that in any case.
dispatch += InstanceCall(TokenPosition::kNoSource, Symbols::IndexToken(),


@ -5,6 +5,8 @@
#include "vm/compiler/frontend/kernel_to_il.h"
#include "platform/assert.h"
#include "platform/globals.h"
#include "vm/class_id.h"
#include "vm/compiler/aot/precompiler.h"
#include "vm/compiler/backend/il.h"
#include "vm/compiler/backend/il_printer.h"
@ -12,13 +14,17 @@
#include "vm/compiler/backend/range_analysis.h"
#include "vm/compiler/ffi/abi.h"
#include "vm/compiler/ffi/marshaller.h"
#include "vm/compiler/ffi/native_calling_convention.h"
#include "vm/compiler/ffi/native_type.h"
#include "vm/compiler/ffi/recognized_method.h"
#include "vm/compiler/frontend/kernel_binary_flowgraph.h"
#include "vm/compiler/frontend/kernel_translation_helper.h"
#include "vm/compiler/frontend/prologue_builder.h"
#include "vm/compiler/jit/compiler.h"
#include "vm/compiler/runtime_api.h"
#include "vm/kernel_isolate.h"
#include "vm/kernel_loader.h"
#include "vm/log.h"
#include "vm/longjump.h"
#include "vm/native_entry.h"
#include "vm/object_store.h"
@ -26,6 +32,7 @@
#include "vm/resolver.h"
#include "vm/scopes.h"
#include "vm/stack_frame.h"
#include "vm/symbols.h"
namespace dart {
namespace kernel {
@ -3558,6 +3565,67 @@ void FlowGraphBuilder::SetConstantRangeOfCurrentDefinition(
fragment.current->AsDefinition()->set_range(range);
}
static classid_t TypedDataCidUnboxed(Representation unboxed_representation) {
switch (unboxed_representation) {
case kUnboxedFloat:
// Note kTypedDataFloat32ArrayCid loads kUnboxedDouble.
UNREACHABLE();
return kTypedDataFloat32ArrayCid;
case kUnboxedInt32:
return kTypedDataInt32ArrayCid;
case kUnboxedUint32:
return kTypedDataUint32ArrayCid;
case kUnboxedInt64:
return kTypedDataInt64ArrayCid;
case kUnboxedDouble:
return kTypedDataFloat64ArrayCid;
default:
UNREACHABLE();
}
UNREACHABLE();
}
Fragment FlowGraphBuilder::StoreIndexedTypedDataUnboxed(
Representation unboxed_representation,
intptr_t index_scale,
bool index_unboxed) {
ASSERT(unboxed_representation == kUnboxedInt32 ||
unboxed_representation == kUnboxedUint32 ||
unboxed_representation == kUnboxedInt64 ||
unboxed_representation == kUnboxedFloat ||
unboxed_representation == kUnboxedDouble);
Fragment fragment;
if (unboxed_representation == kUnboxedFloat) {
fragment += BitCast(kUnboxedFloat, kUnboxedInt32);
unboxed_representation = kUnboxedInt32;
}
fragment += StoreIndexedTypedData(TypedDataCidUnboxed(unboxed_representation),
index_scale, index_unboxed);
return fragment;
}
Fragment FlowGraphBuilder::LoadIndexedTypedDataUnboxed(
Representation unboxed_representation,
intptr_t index_scale,
bool index_unboxed) {
ASSERT(unboxed_representation == kUnboxedInt32 ||
unboxed_representation == kUnboxedUint32 ||
unboxed_representation == kUnboxedInt64 ||
unboxed_representation == kUnboxedFloat ||
unboxed_representation == kUnboxedDouble);
Representation representation_for_load = unboxed_representation;
if (unboxed_representation == kUnboxedFloat) {
representation_for_load = kUnboxedInt32;
}
Fragment fragment;
fragment += LoadIndexed(TypedDataCidUnboxed(representation_for_load),
index_scale, index_unboxed);
if (unboxed_representation == kUnboxedFloat) {
fragment += BitCast(kUnboxedInt32, kUnboxedFloat);
}
return fragment;
}
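The float workaround above can be sketched on the host: since the typed-data load/store path has no kUnboxedFloat mode, the 32-bit value is moved as an int32 bit pattern and bit-cast back, which round-trips the value exactly. `FloatToBits` and `BitsToFloat` are hypothetical helpers modeling what BitCastInstr does:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Reinterprets the bits of a float as a 32-bit integer (the int32 "chunk"
// that actually travels through the typed-data store).
uint32_t FloatToBits(float f) {
  uint32_t bits;
  std::memcpy(&bits, &f, sizeof(bits));
  return bits;
}

// Reinterprets a 32-bit integer chunk back as a float after the load.
float BitsToFloat(uint32_t bits) {
  float f;
  std::memcpy(&f, &bits, sizeof(f));
  return f;
}
```

Because only the bit pattern is moved, no rounding or widening to double occurs along the way.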
Fragment FlowGraphBuilder::EnterHandleScope() {
auto* instr = new (Z)
EnterHandleScopeInstr(EnterHandleScopeInstr::Kind::kEnterHandleScope);
@ -3661,7 +3729,7 @@ Fragment FlowGraphBuilder::NativeReturn(
const compiler::ffi::CallbackMarshaller& marshaller) {
auto* instr = new (Z) NativeReturnInstr(TokenPosition::kNoSource, Pop(),
marshaller, DeoptId::kNone);
return Fragment(instr);
return Fragment(instr).closed();
}
Fragment FlowGraphBuilder::FfiPointerFromAddress(const Type& result_type) {
@ -3703,9 +3771,296 @@ Fragment FlowGraphBuilder::BitCast(Representation from, Representation to) {
return Fragment(instr);
}
Fragment FlowGraphBuilder::FfiConvertArgumentToDart(
Fragment FlowGraphBuilder::WrapTypedDataBaseInStruct(
const AbstractType& struct_type) {
const auto& struct_sub_class = Class::ZoneHandle(Z, struct_type.type_class());
struct_sub_class.EnsureIsFinalized(thread_);
const auto& lib_ffi = Library::Handle(Z, Library::FfiLibrary());
const auto& struct_class =
Class::Handle(Z, lib_ffi.LookupClass(Symbols::Struct()));
const auto& struct_addressof = Field::ZoneHandle(
Z, struct_class.LookupInstanceFieldAllowPrivate(Symbols::_addressOf()));
ASSERT(!struct_addressof.IsNull());
Fragment body;
LocalVariable* typed_data = MakeTemporary("typed_data_base");
body += AllocateObject(TokenPosition::kNoSource, struct_sub_class, 0);
body += LoadLocal(MakeTemporary("struct")); // Duplicate Struct.
body += LoadLocal(typed_data);
body += StoreInstanceField(struct_addressof,
StoreInstanceFieldInstr::Kind::kInitializing);
body += DropTempsPreserveTop(1); // Drop TypedData.
return body;
}
Fragment FlowGraphBuilder::LoadTypedDataBaseFromStruct() {
const Library& lib_ffi = Library::Handle(zone_, Library::FfiLibrary());
const Class& struct_class =
Class::Handle(zone_, lib_ffi.LookupClass(Symbols::Struct()));
const Field& struct_addressof = Field::ZoneHandle(
zone_,
struct_class.LookupInstanceFieldAllowPrivate(Symbols::_addressOf()));
ASSERT(!struct_addressof.IsNull());
Fragment body;
body += LoadField(struct_addressof, /*calls_initializer=*/false);
return body;
}
Fragment FlowGraphBuilder::CopyFromStructToStack(
LocalVariable* variable,
const GrowableArray<Representation>& representations) {
Fragment body;
const intptr_t num_defs = representations.length();
int offset_in_bytes = 0;
for (intptr_t i = 0; i < num_defs; i++) {
body += LoadLocal(variable);
body += LoadTypedDataBaseFromStruct();
body += LoadUntagged(compiler::target::Pointer::data_field_offset());
body += IntConstant(offset_in_bytes);
const Representation representation = representations[i];
offset_in_bytes += RepresentationUtils::ValueSize(representation);
body += LoadIndexedTypedDataUnboxed(representation, /*index_scale=*/1,
/*index_unboxed=*/false);
}
return body;
}
Fragment FlowGraphBuilder::PopFromStackToTypedDataBase(
ZoneGrowableArray<LocalVariable*>* definitions,
const GrowableArray<Representation>& representations) {
Fragment body;
const intptr_t num_defs = representations.length();
ASSERT(definitions->length() == num_defs);
LocalVariable* uint8_list = MakeTemporary("uint8_list");
int offset_in_bytes = 0;
for (intptr_t i = 0; i < num_defs; i++) {
const Representation representation = representations[i];
body += LoadLocal(uint8_list);
body += LoadUntagged(compiler::target::TypedDataBase::data_field_offset());
body += IntConstant(offset_in_bytes);
body += LoadLocal(definitions->At(i));
body += StoreIndexedTypedDataUnboxed(representation, /*index_scale=*/1,
/*index_unboxed=*/false);
offset_in_bytes += RepresentationUtils::ValueSize(representation);
}
body += DropTempsPreserveTop(num_defs); // Drop chunk defs; keep TypedData.
return body;
}
static intptr_t chunk_size(intptr_t bytes_left) {
ASSERT(bytes_left >= 1);
if (bytes_left >= 8 && compiler::target::kWordSize == 8) {
return 8;
}
if (bytes_left >= 4) {
return 4;
}
if (bytes_left >= 2) {
return 2;
}
return 1;
}
static classid_t typed_data_cid(intptr_t chunk_size) {
switch (chunk_size) {
case 8:
return kTypedDataInt64ArrayCid;
case 4:
return kTypedDataInt32ArrayCid;
case 2:
return kTypedDataInt16ArrayCid;
case 1:
return kTypedDataInt8ArrayCid;
}
UNREACHABLE();
}
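As a host-side sketch (not part of this CL), the chunking above greedily splits a struct of N bytes into word-sized-or-smaller pieces; `ChunkSize`, `SplitIntoChunks`, and the explicit `word_size` parameter are hypothetical stand-ins for `chunk_size` and `compiler::target::kWordSize`:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Mirrors chunk_size above, with the target word size passed explicitly.
static intptr_t ChunkSize(intptr_t bytes_left, intptr_t word_size) {
  assert(bytes_left >= 1);
  if (bytes_left >= 8 && word_size == 8) return 8;
  if (bytes_left >= 4) return 4;
  if (bytes_left >= 2) return 2;
  return 1;
}

// Returns the sequence of chunk sizes used to cover `length` bytes.
std::vector<intptr_t> SplitIntoChunks(intptr_t length, intptr_t word_size) {
  std::vector<intptr_t> chunks;
  intptr_t offset = 0;
  while (offset < length) {
    const intptr_t c = ChunkSize(length - offset, word_size);
    chunks.push_back(c);
    offset += c;
  }
  return chunks;
}
```

An 11-byte struct thus becomes three chunks (8, 2, 1) on a 64-bit target and four chunks (4, 4, 2, 1) on a 32-bit target.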
Fragment FlowGraphBuilder::CopyFromTypedDataBaseToUnboxedAddress(
intptr_t length_in_bytes) {
Fragment body;
Value* unboxed_address_value = Pop();
LocalVariable* typed_data_base = MakeTemporary("typed_data_base");
Push(unboxed_address_value->definition());
LocalVariable* unboxed_address = MakeTemporary("unboxed_address");
intptr_t offset_in_bytes = 0;
while (offset_in_bytes < length_in_bytes) {
const intptr_t bytes_left = length_in_bytes - offset_in_bytes;
const intptr_t chunk_sizee = chunk_size(bytes_left);
const classid_t typed_data_cidd = typed_data_cid(chunk_sizee);
body += LoadLocal(typed_data_base);
body += LoadUntagged(compiler::target::TypedDataBase::data_field_offset());
body += IntConstant(offset_in_bytes);
body += LoadIndexed(typed_data_cidd, /*index_scale=*/1,
/*index_unboxed=*/false);
LocalVariable* chunk_value = MakeTemporary("chunk_value");
body += LoadLocal(unboxed_address);
body += ConvertUnboxedToUntagged(kUnboxedFfiIntPtr);
body += IntConstant(offset_in_bytes);
body += LoadLocal(chunk_value);
body += StoreIndexedTypedData(typed_data_cidd, /*index_scale=*/1,
/*index_unboxed=*/false);
body += DropTemporary(&chunk_value);
offset_in_bytes += chunk_sizee;
}
ASSERT(offset_in_bytes == length_in_bytes);
body += DropTemporary(&unboxed_address);
body += DropTemporary(&typed_data_base);
return body;
}
Fragment FlowGraphBuilder::CopyFromUnboxedAddressToTypedDataBase(
intptr_t length_in_bytes) {
Fragment body;
Value* typed_data_base_value = Pop();
LocalVariable* unboxed_address = MakeTemporary("unboxed_address");
Push(typed_data_base_value->definition());
LocalVariable* typed_data_base = MakeTemporary("typed_data_base");
intptr_t offset_in_bytes = 0;
while (offset_in_bytes < length_in_bytes) {
const intptr_t bytes_left = length_in_bytes - offset_in_bytes;
const intptr_t chunk_sizee = chunk_size(bytes_left);
const classid_t typed_data_cidd = typed_data_cid(chunk_sizee);
body += LoadLocal(unboxed_address);
body += ConvertUnboxedToUntagged(kUnboxedFfiIntPtr);
body += IntConstant(offset_in_bytes);
body += LoadIndexed(typed_data_cidd, /*index_scale=*/1,
/*index_unboxed=*/false);
LocalVariable* chunk_value = MakeTemporary("chunk_value");
body += LoadLocal(typed_data_base);
body += LoadUntagged(compiler::target::TypedDataBase::data_field_offset());
body += IntConstant(offset_in_bytes);
body += LoadLocal(chunk_value);
body += StoreIndexedTypedData(typed_data_cidd, /*index_scale=*/1,
/*index_unboxed=*/false);
body += DropTemporary(&chunk_value);
offset_in_bytes += chunk_sizee;
}
ASSERT(offset_in_bytes == length_in_bytes);
body += DropTemporary(&typed_data_base);
body += DropTemporary(&unboxed_address);
return body;
}
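The two copy loops above can be modeled on the host: copying in the same greedy chunk sizes must leave exactly the same bytes as a plain byte-for-byte copy. `ChunkedCopy` is a hypothetical helper and `word_size` stands in for `compiler::target::kWordSize`:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Copies `length` bytes using the greedy chunk sizes of the IL loops above:
// one load and one store per chunk instead of per byte.
void ChunkedCopy(uint8_t* dst, const uint8_t* src, int length, int word_size) {
  int offset = 0;
  while (offset < length) {
    const int bytes_left = length - offset;
    int chunk = 1;
    if (bytes_left >= 8 && word_size == 8) {
      chunk = 8;
    } else if (bytes_left >= 4) {
      chunk = 4;
    } else if (bytes_left >= 2) {
      chunk = 2;
    }
    std::memcpy(dst + offset, src + offset, chunk);  // one chunk move
    offset += chunk;
  }
}
```

The same loop structure serves both directions; only which side is the typed data and which is the raw address differs.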
Fragment FlowGraphBuilder::FfiCallConvertStructArgumentToNative(
LocalVariable* variable,
const compiler::ffi::BaseMarshaller& marshaller,
intptr_t arg_index) {
Fragment body;
const auto& native_loc = marshaller.Location(arg_index);
if (native_loc.IsStack() || native_loc.IsMultiple()) {
// Break struct in pieces to separate IL definitions to pass those
// separate definitions into the FFI call.
GrowableArray<Representation> representations;
marshaller.RepsInFfiCall(arg_index, &representations);
body += CopyFromStructToStack(variable, representations);
} else {
ASSERT(native_loc.IsPointerToMemory());
// Only load the typed data, do copying in the FFI call machine code.
body += LoadLocal(variable); // User-defined struct.
body += LoadTypedDataBaseFromStruct();
}
return body;
}
Fragment FlowGraphBuilder::FfiCallConvertStructReturnToDart(
const compiler::ffi::BaseMarshaller& marshaller,
intptr_t arg_index) {
Fragment body;
// The typed data is allocated before the FFI call, and is populated in
// machine code. So, here, it only has to be wrapped in the struct class.
const auto& struct_type =
AbstractType::Handle(Z, marshaller.CType(arg_index));
body += WrapTypedDataBaseInStruct(struct_type);
return body;
}
Fragment FlowGraphBuilder::FfiCallbackConvertStructArgumentToDart(
const compiler::ffi::BaseMarshaller& marshaller,
intptr_t arg_index,
ZoneGrowableArray<LocalVariable*>* definitions) {
const intptr_t length_in_bytes =
marshaller.Location(arg_index).payload_type().SizeInBytes();
Fragment body;
if ((marshaller.Location(arg_index).IsMultiple() ||
marshaller.Location(arg_index).IsStack())) {
// Allocate and populate a TypedData from the individual NativeParameters.
body += IntConstant(length_in_bytes);
body +=
AllocateTypedData(TokenPosition::kNoSource, kTypedDataUint8ArrayCid);
GrowableArray<Representation> representations;
marshaller.RepsInFfiCall(arg_index, &representations);
body += PopFromStackToTypedDataBase(definitions, representations);
} else {
ASSERT(marshaller.Location(arg_index).IsPointerToMemory());
// Allocate a TypedData and copy contents pointed to by an address into it.
LocalVariable* address_of_struct = MakeTemporary("address_of_struct");
body += IntConstant(length_in_bytes);
body +=
AllocateTypedData(TokenPosition::kNoSource, kTypedDataUint8ArrayCid);
LocalVariable* typed_data_base = MakeTemporary("typed_data_base");
body += LoadLocal(address_of_struct);
body += LoadLocal(typed_data_base);
body += CopyFromUnboxedAddressToTypedDataBase(length_in_bytes);
body += DropTempsPreserveTop(1); // address_of_struct.
}
// Wrap typed data in struct class.
const auto& struct_type =
AbstractType::Handle(Z, marshaller.CType(arg_index));
body += WrapTypedDataBaseInStruct(struct_type);
return body;
}
Fragment FlowGraphBuilder::FfiCallbackConvertStructReturnToNative(
const compiler::ffi::CallbackMarshaller& marshaller,
intptr_t arg_index) {
Fragment body;
const auto& native_loc = marshaller.Location(arg_index);
if (native_loc.IsMultiple()) {
// We pass the typed data to the native return instruction, and do the
// copying in machine code.
body += LoadTypedDataBaseFromStruct();
} else {
ASSERT(native_loc.IsPointerToMemory());
// We copy the data into the right location in IL.
const intptr_t length_in_bytes =
marshaller.Location(arg_index).payload_type().SizeInBytes();
body += LoadTypedDataBaseFromStruct();
LocalVariable* typed_data_base = MakeTemporary("typed_data_base");
auto* pointer_to_return =
new (Z) NativeParameterInstr(marshaller, compiler::ffi::kResultIndex);
Push(pointer_to_return); // Address where return value should be stored.
body <<= pointer_to_return;
body += UnboxTruncate(kUnboxedFfiIntPtr);
LocalVariable* unboxed_address = MakeTemporary("unboxed_address");
body += LoadLocal(typed_data_base);
body += LoadLocal(unboxed_address);
body += CopyFromTypedDataBaseToUnboxedAddress(length_in_bytes);
body += DropTempsPreserveTop(1); // Keep address, drop typed_data_base.
}
return body;
}
Fragment FlowGraphBuilder::FfiConvertPrimitiveToDart(
const compiler::ffi::BaseMarshaller& marshaller,
intptr_t arg_index) {
ASSERT(!marshaller.IsStruct(arg_index));
Fragment body;
if (marshaller.IsPointer(arg_index)) {
body += Box(kUnboxedFfiIntPtr);
@ -3718,8 +4073,9 @@ Fragment FlowGraphBuilder::FfiConvertArgumentToDart(
body += NullConstant();
} else {
if (marshaller.RequiresBitCast(arg_index)) {
body += BitCast(marshaller.RepInFfiCall(arg_index),
marshaller.RepInDart(arg_index));
body += BitCast(
marshaller.RepInFfiCall(marshaller.FirstDefinitionIndex(arg_index)),
marshaller.RepInDart(arg_index));
}
body += Box(marshaller.RepInDart(arg_index));
@ -3727,12 +4083,13 @@ Fragment FlowGraphBuilder::FfiConvertArgumentToDart(
return body;
}
Fragment FlowGraphBuilder::FfiConvertArgumentToNative(
Fragment FlowGraphBuilder::FfiConvertPrimitiveToNative(
const compiler::ffi::BaseMarshaller& marshaller,
intptr_t arg_index,
LocalVariable* api_local_scope) {
Fragment body;
ASSERT(!marshaller.IsStruct(arg_index));
Fragment body;
if (marshaller.IsPointer(arg_index)) {
// This can only be Pointer, so it is always safe to LoadUntagged.
body += LoadUntagged(compiler::target::Pointer::data_field_offset());
@ -3744,8 +4101,9 @@ Fragment FlowGraphBuilder::FfiConvertArgumentToNative(
}
if (marshaller.RequiresBitCast(arg_index)) {
body += BitCast(marshaller.RepInDart(arg_index),
marshaller.RepInFfiCall(arg_index));
body += BitCast(
marshaller.RepInDart(arg_index),
marshaller.RepInFfiCall(marshaller.FirstDefinitionIndex(arg_index)));
}
return body;
@ -3817,14 +4175,30 @@ FlowGraph* FlowGraphBuilder::BuildGraphOfFfiNative(const Function& function) {
++try_depth_;
body += EnterHandleScope();
api_local_scope = MakeTemporary();
api_local_scope = MakeTemporary("api_local_scope");
}
// Allocate typed data before FfiCall and pass it in to ffi call if needed.
LocalVariable* typed_data = nullptr;
if (marshaller.PassTypedData()) {
body += IntConstant(marshaller.TypedDataSizeInBytes());
body +=
AllocateTypedData(TokenPosition::kNoSource, kTypedDataUint8ArrayCid);
typed_data = MakeTemporary();
}
// Unbox and push the arguments.
for (intptr_t i = 0; i < marshaller.num_args(); i++) {
body += LoadLocal(
parsed_function_->ParameterVariable(kFirstArgumentParameterOffset + i));
body += FfiConvertArgumentToNative(marshaller, i, api_local_scope);
if (marshaller.IsStruct(i)) {
body += FfiCallConvertStructArgumentToNative(
parsed_function_->ParameterVariable(kFirstArgumentParameterOffset +
i),
marshaller, i);
} else {
body += LoadLocal(parsed_function_->ParameterVariable(
kFirstArgumentParameterOffset + i));
body += FfiConvertPrimitiveToNative(marshaller, i, api_local_scope);
}
}
// Push the function pointer, which is stored (as Pointer object) in the
@ -3840,6 +4214,11 @@ FlowGraph* FlowGraphBuilder::BuildGraphOfFfiNative(const Function& function) {
// This can only be Pointer, so it is always safe to LoadUntagged.
body += LoadUntagged(compiler::target::Pointer::data_field_offset());
body += ConvertUntaggedToUnboxed(kUnboxedFfiIntPtr);
if (marshaller.PassTypedData()) {
body += LoadLocal(typed_data);
}
body += FfiCall(marshaller);
for (intptr_t i = 0; i < marshaller.num_args(); i++) {
@ -3850,7 +4229,23 @@ FlowGraph* FlowGraphBuilder::BuildGraphOfFfiNative(const Function& function) {
}
}
body += FfiConvertArgumentToDart(marshaller, compiler::ffi::kResultIndex);
const intptr_t num_defs = marshaller.NumReturnDefinitions();
ASSERT(num_defs >= 1);
auto defs = new (Z) ZoneGrowableArray<LocalVariable*>(Z, num_defs);
LocalVariable* def = MakeTemporary();
defs->Add(def);
if (marshaller.PassTypedData()) {
// Drop call result, typed data with contents is already on the stack.
body += Drop();
}
if (marshaller.IsStruct(compiler::ffi::kResultIndex)) {
body += FfiCallConvertStructReturnToDart(marshaller,
compiler::ffi::kResultIndex);
} else {
body += FfiConvertPrimitiveToDart(marshaller, compiler::ffi::kResultIndex);
}
if (signature_contains_handles) {
body += DropTempsPreserveTop(1); // Drop api_local_scope.
@ -3909,10 +4304,23 @@ FlowGraph* FlowGraphBuilder::BuildGraphOfFfiCallback(const Function& function) {
// Box and push the arguments.
for (intptr_t i = 0; i < marshaller.num_args(); i++) {
auto* parameter = new (Z) NativeParameterInstr(marshaller, i);
Push(parameter);
body <<= parameter;
body += FfiConvertArgumentToDart(marshaller, i);
const intptr_t num_defs = marshaller.NumDefinitions(i);
auto defs = new (Z) ZoneGrowableArray<LocalVariable*>(Z, num_defs);
for (intptr_t j = 0; j < num_defs; j++) {
const intptr_t def_index = marshaller.DefinitionIndex(j, i);
auto* parameter = new (Z) NativeParameterInstr(marshaller, def_index);
Push(parameter);
body <<= parameter;
LocalVariable* def = MakeTemporary();
defs->Add(def);
}
if (marshaller.IsStruct(i)) {
body += FfiCallbackConvertStructArgumentToDart(marshaller, i, defs);
} else {
body += FfiConvertPrimitiveToDart(marshaller, i);
}
}
// Call the target.
@ -3923,7 +4331,6 @@ FlowGraph* FlowGraphBuilder::BuildGraphOfFfiCallback(const Function& function) {
Function::ZoneHandle(Z, function.FfiCallbackTarget()),
marshaller.num_args(), Array::empty_array(),
ICData::kNoRebind);
if (marshaller.IsVoid(compiler::ffi::kResultIndex)) {
body += Drop();
body += IntConstant(0);
@ -3932,8 +4339,15 @@ FlowGraph* FlowGraphBuilder::BuildGraphOfFfiCallback(const Function& function) {
CheckNullOptimized(TokenPosition::kNoSource,
String::ZoneHandle(Z, marshaller.function_name()));
}
body += FfiConvertArgumentToNative(marshaller, compiler::ffi::kResultIndex,
/*api_local_scope=*/nullptr);
if (marshaller.IsStruct(compiler::ffi::kResultIndex)) {
body += FfiCallbackConvertStructReturnToNative(marshaller,
compiler::ffi::kResultIndex);
} else {
body += FfiConvertPrimitiveToNative(marshaller, compiler::ffi::kResultIndex,
/*api_local_scope=*/nullptr);
}
body += NativeReturn(marshaller);
--try_depth_;
@ -3955,13 +4369,32 @@ FlowGraph* FlowGraphBuilder::BuildGraphOfFfiCallback(const Function& function) {
catch_body += UnboxTruncate(kUnboxedFfiIntPtr);
} else if (marshaller.IsHandle(compiler::ffi::kResultIndex)) {
catch_body += UnhandledException();
catch_body += FfiConvertArgumentToNative(
marshaller, compiler::ffi::kResultIndex, /*api_local_scope=*/nullptr);
catch_body +=
FfiConvertPrimitiveToNative(marshaller, compiler::ffi::kResultIndex,
/*api_local_scope=*/nullptr);
} else if (marshaller.IsStruct(compiler::ffi::kResultIndex)) {
ASSERT(function.FfiCallbackExceptionalReturn() == Object::null());
// Manufacture empty result.
const intptr_t size =
Utils::RoundUp(marshaller.Location(compiler::ffi::kResultIndex)
.payload_type()
.SizeInBytes(),
compiler::target::kWordSize);
catch_body += IntConstant(size);
catch_body +=
AllocateTypedData(TokenPosition::kNoSource, kTypedDataUint8ArrayCid);
catch_body += WrapTypedDataBaseInStruct(
AbstractType::Handle(Z, marshaller.CType(compiler::ffi::kResultIndex)));
catch_body += FfiCallbackConvertStructReturnToNative(
marshaller, compiler::ffi::kResultIndex);
} else {
catch_body += Constant(
Instance::ZoneHandle(Z, function.FfiCallbackExceptionalReturn()));
catch_body += FfiConvertArgumentToNative(
marshaller, compiler::ffi::kResultIndex, /*api_local_scope=*/nullptr);
catch_body +=
FfiConvertPrimitiveToNative(marshaller, compiler::ffi::kResultIndex,
/*api_local_scope=*/nullptr);
}
catch_body += NativeReturn(marshaller);


@ -252,6 +252,17 @@ class FlowGraphBuilder : public BaseFlowGraphBuilder {
bool NeedsDebugStepCheck(const Function& function, TokenPosition position);
bool NeedsDebugStepCheck(Value* value, TokenPosition position);
// Deals with StoreIndexed not working with kUnboxedFloat.
// TODO(dartbug.com/43448): Remove this workaround.
Fragment StoreIndexedTypedDataUnboxed(Representation unboxed_representation,
intptr_t index_scale,
bool index_unboxed);
// Deals with LoadIndexed not working with kUnboxedFloat.
// TODO(dartbug.com/43448): Remove this workaround.
Fragment LoadIndexedTypedDataUnboxed(Representation unboxed_representation,
intptr_t index_scale,
bool index_unboxed);
// Truncates (instead of deoptimizing) if the origin does not fit into the
// target representation.
Fragment UnboxTruncate(Representation to);
@ -266,16 +277,81 @@ class FlowGraphBuilder : public BaseFlowGraphBuilder {
// Pops a Dart object and push the unboxed native version, according to the
// semantics of FFI argument translation.
Fragment FfiConvertArgumentToNative(
//
// Works for FFI call arguments, and FFI callback return values.
Fragment FfiConvertPrimitiveToNative(
const compiler::ffi::BaseMarshaller& marshaller,
intptr_t arg_index,
LocalVariable* api_local_scope);
// Reverse of 'FfiConvertArgumentToNative'.
Fragment FfiConvertArgumentToDart(
// Pops an unboxed native value, and pushes a Dart object, according to the
// semantics of FFI argument translation.
//
// Works for FFI call return values, and FFI callback arguments.
Fragment FfiConvertPrimitiveToDart(
const compiler::ffi::BaseMarshaller& marshaller,
intptr_t arg_index);
// We pass in `variable` instead of a value on top of the stack, so that
// multiple consecutive calls can leave only the resulting chunks on the
// stack, with no Struct objects in between.
Fragment FfiCallConvertStructArgumentToNative(
LocalVariable* variable,
const compiler::ffi::BaseMarshaller& marshaller,
intptr_t arg_index);
Fragment FfiCallConvertStructReturnToDart(
const compiler::ffi::BaseMarshaller& marshaller,
intptr_t arg_index);
// We pass in multiple `definitions`, which are also expected to be the top
// of the stack. This eases storing each definition in the resulting struct.
Fragment FfiCallbackConvertStructArgumentToDart(
const compiler::ffi::BaseMarshaller& marshaller,
intptr_t arg_index,
ZoneGrowableArray<LocalVariable*>* definitions);
Fragment FfiCallbackConvertStructReturnToNative(
const compiler::ffi::CallbackMarshaller& marshaller,
intptr_t arg_index);
// Pops a TypedDataBase from the stack and wraps it in a subclass of Struct.
Fragment WrapTypedDataBaseInStruct(const AbstractType& struct_type);
// Loads the addressOf field from a subclass of Struct.
Fragment LoadTypedDataBaseFromStruct();
// Breaks up a subclass of Struct in multiple definitions and puts them on
// the stack.
//
// Takes in the Struct as a local `variable` so that it can be anywhere on
// the stack, and so that this function can be called multiple times,
// leaving only the results on the stack without any Structs in between.
//
// The struct contents are heterogeneous, so pass in `representations` to
// know what representation to load.
Fragment CopyFromStructToStack(
LocalVariable* variable,
const GrowableArray<Representation>& representations);
// Copies `definitions` into a TypedData.
//
// Expects the TypedData on top of the stack and `definitions` right under it.
//
// Leaves TypedData on stack.
//
// The struct contents are heterogeneous, so pass in `representations` to
// know what representation to store.
Fragment PopFromStackToTypedDataBase(
ZoneGrowableArray<LocalVariable*>* definitions,
const GrowableArray<Representation>& representations);
// Copies bytes from a TypedDataBase to the address of a kUnboxedFfiIntPtr.
Fragment CopyFromTypedDataBaseToUnboxedAddress(intptr_t length_in_bytes);
// Copies bytes from the address of a kUnboxedFfiIntPtr to a TypedDataBase.
Fragment CopyFromUnboxedAddressToTypedDataBase(intptr_t length_in_bytes);
// Generates a call to `Thread::EnterApiScope`.
Fragment EnterHandleScope();

View file

@ -1044,6 +1044,7 @@ class Thread : public AllStatic {
static word unboxed_int64_runtime_arg_offset();
static word callback_code_offset();
static word callback_stack_return_offset();
static word AllocateArray_entry_point_offset();
static word write_barrier_code_offset();

View file

@ -248,7 +248,7 @@ static constexpr dart::compiler::target::word
Thread_allocate_object_slow_entry_point_offset = 288;
static constexpr dart::compiler::target::word
Thread_allocate_object_slow_stub_offset = 196;
static constexpr dart::compiler::target::word Thread_api_top_scope_offset = 728;
static constexpr dart::compiler::target::word Thread_api_top_scope_offset = 732;
static constexpr dart::compiler::target::word
Thread_auto_scope_native_wrapper_entry_point_offset = 332;
static constexpr dart::compiler::target::word Thread_bool_false_offset = 112;
@ -259,7 +259,7 @@ static constexpr dart::compiler::target::word
Thread_call_to_runtime_entry_point_offset = 268;
static constexpr dart::compiler::target::word
Thread_call_to_runtime_stub_offset = 136;
static constexpr dart::compiler::target::word Thread_dart_stream_offset = 736;
static constexpr dart::compiler::target::word Thread_dart_stream_offset = 740;
static constexpr dart::compiler::target::word
Thread_dispatch_table_array_offset = 44;
static constexpr dart::compiler::target::word Thread_optimize_entry_offset =
@ -301,7 +301,7 @@ static constexpr dart::compiler::target::word Thread_global_object_pool_offset =
static constexpr dart::compiler::target::word
Thread_invoke_dart_code_stub_offset = 132;
static constexpr dart::compiler::target::word Thread_exit_through_ffi_offset =
724;
728;
static constexpr dart::compiler::target::word Thread_isolate_offset = 40;
static constexpr dart::compiler::target::word Thread_field_table_values_offset =
64;
@ -382,6 +382,8 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word Thread_write_barrier_mask_offset =
36;
static constexpr dart::compiler::target::word Thread_callback_code_offset = 720;
static constexpr dart::compiler::target::word
Thread_callback_stack_return_offset = 724;
static constexpr dart::compiler::target::word TimelineStream_enabled_offset = 8;
static constexpr dart::compiler::target::word TwoByteString_data_offset = 12;
static constexpr dart::compiler::target::word Type_arguments_offset = 16;
@ -768,7 +770,7 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
Thread_allocate_object_slow_stub_offset = 376;
static constexpr dart::compiler::target::word Thread_api_top_scope_offset =
1464;
1472;
static constexpr dart::compiler::target::word
Thread_auto_scope_native_wrapper_entry_point_offset = 648;
static constexpr dart::compiler::target::word Thread_bool_false_offset = 208;
@ -779,7 +781,7 @@ static constexpr dart::compiler::target::word
Thread_call_to_runtime_entry_point_offset = 520;
static constexpr dart::compiler::target::word
Thread_call_to_runtime_stub_offset = 256;
static constexpr dart::compiler::target::word Thread_dart_stream_offset = 1480;
static constexpr dart::compiler::target::word Thread_dart_stream_offset = 1488;
static constexpr dart::compiler::target::word
Thread_dispatch_table_array_offset = 88;
static constexpr dart::compiler::target::word Thread_optimize_entry_offset =
@ -821,7 +823,7 @@ static constexpr dart::compiler::target::word Thread_global_object_pool_offset =
static constexpr dart::compiler::target::word
Thread_invoke_dart_code_stub_offset = 248;
static constexpr dart::compiler::target::word Thread_exit_through_ffi_offset =
1456;
1464;
static constexpr dart::compiler::target::word Thread_isolate_offset = 80;
static constexpr dart::compiler::target::word Thread_field_table_values_offset =
128;
@ -903,6 +905,8 @@ static constexpr dart::compiler::target::word Thread_write_barrier_mask_offset =
72;
static constexpr dart::compiler::target::word Thread_callback_code_offset =
1448;
static constexpr dart::compiler::target::word
Thread_callback_stack_return_offset = 1456;
static constexpr dart::compiler::target::word TimelineStream_enabled_offset =
16;
static constexpr dart::compiler::target::word TwoByteString_data_offset = 16;
@ -1288,7 +1292,7 @@ static constexpr dart::compiler::target::word
Thread_allocate_object_slow_entry_point_offset = 288;
static constexpr dart::compiler::target::word
Thread_allocate_object_slow_stub_offset = 196;
static constexpr dart::compiler::target::word Thread_api_top_scope_offset = 696;
static constexpr dart::compiler::target::word Thread_api_top_scope_offset = 700;
static constexpr dart::compiler::target::word
Thread_auto_scope_native_wrapper_entry_point_offset = 332;
static constexpr dart::compiler::target::word Thread_bool_false_offset = 112;
@ -1299,7 +1303,7 @@ static constexpr dart::compiler::target::word
Thread_call_to_runtime_entry_point_offset = 268;
static constexpr dart::compiler::target::word
Thread_call_to_runtime_stub_offset = 136;
static constexpr dart::compiler::target::word Thread_dart_stream_offset = 704;
static constexpr dart::compiler::target::word Thread_dart_stream_offset = 708;
static constexpr dart::compiler::target::word
Thread_dispatch_table_array_offset = 44;
static constexpr dart::compiler::target::word Thread_optimize_entry_offset =
@ -1341,7 +1345,7 @@ static constexpr dart::compiler::target::word Thread_global_object_pool_offset =
static constexpr dart::compiler::target::word
Thread_invoke_dart_code_stub_offset = 132;
static constexpr dart::compiler::target::word Thread_exit_through_ffi_offset =
692;
696;
static constexpr dart::compiler::target::word Thread_isolate_offset = 40;
static constexpr dart::compiler::target::word Thread_field_table_values_offset =
64;
@ -1422,6 +1426,8 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word Thread_write_barrier_mask_offset =
36;
static constexpr dart::compiler::target::word Thread_callback_code_offset = 688;
static constexpr dart::compiler::target::word
Thread_callback_stack_return_offset = 692;
static constexpr dart::compiler::target::word TimelineStream_enabled_offset = 8;
static constexpr dart::compiler::target::word TwoByteString_data_offset = 12;
static constexpr dart::compiler::target::word Type_arguments_offset = 16;
@ -1805,7 +1811,7 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
Thread_allocate_object_slow_stub_offset = 376;
static constexpr dart::compiler::target::word Thread_api_top_scope_offset =
1536;
1544;
static constexpr dart::compiler::target::word
Thread_auto_scope_native_wrapper_entry_point_offset = 648;
static constexpr dart::compiler::target::word Thread_bool_false_offset = 208;
@ -1816,7 +1822,7 @@ static constexpr dart::compiler::target::word
Thread_call_to_runtime_entry_point_offset = 520;
static constexpr dart::compiler::target::word
Thread_call_to_runtime_stub_offset = 256;
static constexpr dart::compiler::target::word Thread_dart_stream_offset = 1552;
static constexpr dart::compiler::target::word Thread_dart_stream_offset = 1560;
static constexpr dart::compiler::target::word
Thread_dispatch_table_array_offset = 88;
static constexpr dart::compiler::target::word Thread_optimize_entry_offset =
@ -1858,7 +1864,7 @@ static constexpr dart::compiler::target::word Thread_global_object_pool_offset =
static constexpr dart::compiler::target::word
Thread_invoke_dart_code_stub_offset = 248;
static constexpr dart::compiler::target::word Thread_exit_through_ffi_offset =
1528;
1536;
static constexpr dart::compiler::target::word Thread_isolate_offset = 80;
static constexpr dart::compiler::target::word Thread_field_table_values_offset =
128;
@ -1940,6 +1946,8 @@ static constexpr dart::compiler::target::word Thread_write_barrier_mask_offset =
72;
static constexpr dart::compiler::target::word Thread_callback_code_offset =
1520;
static constexpr dart::compiler::target::word
Thread_callback_stack_return_offset = 1528;
static constexpr dart::compiler::target::word TimelineStream_enabled_offset =
16;
static constexpr dart::compiler::target::word TwoByteString_data_offset = 16;
@ -2325,7 +2333,7 @@ static constexpr dart::compiler::target::word
Thread_allocate_object_slow_entry_point_offset = 288;
static constexpr dart::compiler::target::word
Thread_allocate_object_slow_stub_offset = 196;
static constexpr dart::compiler::target::word Thread_api_top_scope_offset = 728;
static constexpr dart::compiler::target::word Thread_api_top_scope_offset = 732;
static constexpr dart::compiler::target::word
Thread_auto_scope_native_wrapper_entry_point_offset = 332;
static constexpr dart::compiler::target::word Thread_bool_false_offset = 112;
@ -2336,7 +2344,7 @@ static constexpr dart::compiler::target::word
Thread_call_to_runtime_entry_point_offset = 268;
static constexpr dart::compiler::target::word
Thread_call_to_runtime_stub_offset = 136;
static constexpr dart::compiler::target::word Thread_dart_stream_offset = 736;
static constexpr dart::compiler::target::word Thread_dart_stream_offset = 740;
static constexpr dart::compiler::target::word
Thread_dispatch_table_array_offset = 44;
static constexpr dart::compiler::target::word Thread_optimize_entry_offset =
@ -2378,7 +2386,7 @@ static constexpr dart::compiler::target::word Thread_global_object_pool_offset =
static constexpr dart::compiler::target::word
Thread_invoke_dart_code_stub_offset = 132;
static constexpr dart::compiler::target::word Thread_exit_through_ffi_offset =
724;
728;
static constexpr dart::compiler::target::word Thread_isolate_offset = 40;
static constexpr dart::compiler::target::word Thread_field_table_values_offset =
64;
@ -2459,6 +2467,8 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word Thread_write_barrier_mask_offset =
36;
static constexpr dart::compiler::target::word Thread_callback_code_offset = 720;
static constexpr dart::compiler::target::word
Thread_callback_stack_return_offset = 724;
static constexpr dart::compiler::target::word TimelineStream_enabled_offset = 8;
static constexpr dart::compiler::target::word TwoByteString_data_offset = 12;
static constexpr dart::compiler::target::word Type_arguments_offset = 16;
@ -2839,7 +2849,7 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
Thread_allocate_object_slow_stub_offset = 376;
static constexpr dart::compiler::target::word Thread_api_top_scope_offset =
1464;
1472;
static constexpr dart::compiler::target::word
Thread_auto_scope_native_wrapper_entry_point_offset = 648;
static constexpr dart::compiler::target::word Thread_bool_false_offset = 208;
@ -2850,7 +2860,7 @@ static constexpr dart::compiler::target::word
Thread_call_to_runtime_entry_point_offset = 520;
static constexpr dart::compiler::target::word
Thread_call_to_runtime_stub_offset = 256;
static constexpr dart::compiler::target::word Thread_dart_stream_offset = 1480;
static constexpr dart::compiler::target::word Thread_dart_stream_offset = 1488;
static constexpr dart::compiler::target::word
Thread_dispatch_table_array_offset = 88;
static constexpr dart::compiler::target::word Thread_optimize_entry_offset =
@ -2892,7 +2902,7 @@ static constexpr dart::compiler::target::word Thread_global_object_pool_offset =
static constexpr dart::compiler::target::word
Thread_invoke_dart_code_stub_offset = 248;
static constexpr dart::compiler::target::word Thread_exit_through_ffi_offset =
1456;
1464;
static constexpr dart::compiler::target::word Thread_isolate_offset = 80;
static constexpr dart::compiler::target::word Thread_field_table_values_offset =
128;
@ -2974,6 +2984,8 @@ static constexpr dart::compiler::target::word Thread_write_barrier_mask_offset =
72;
static constexpr dart::compiler::target::word Thread_callback_code_offset =
1448;
static constexpr dart::compiler::target::word
Thread_callback_stack_return_offset = 1456;
static constexpr dart::compiler::target::word TimelineStream_enabled_offset =
16;
static constexpr dart::compiler::target::word TwoByteString_data_offset = 16;
@ -3353,7 +3365,7 @@ static constexpr dart::compiler::target::word
Thread_allocate_object_slow_entry_point_offset = 288;
static constexpr dart::compiler::target::word
Thread_allocate_object_slow_stub_offset = 196;
static constexpr dart::compiler::target::word Thread_api_top_scope_offset = 696;
static constexpr dart::compiler::target::word Thread_api_top_scope_offset = 700;
static constexpr dart::compiler::target::word
Thread_auto_scope_native_wrapper_entry_point_offset = 332;
static constexpr dart::compiler::target::word Thread_bool_false_offset = 112;
@ -3364,7 +3376,7 @@ static constexpr dart::compiler::target::word
Thread_call_to_runtime_entry_point_offset = 268;
static constexpr dart::compiler::target::word
Thread_call_to_runtime_stub_offset = 136;
static constexpr dart::compiler::target::word Thread_dart_stream_offset = 704;
static constexpr dart::compiler::target::word Thread_dart_stream_offset = 708;
static constexpr dart::compiler::target::word
Thread_dispatch_table_array_offset = 44;
static constexpr dart::compiler::target::word Thread_optimize_entry_offset =
@ -3406,7 +3418,7 @@ static constexpr dart::compiler::target::word Thread_global_object_pool_offset =
static constexpr dart::compiler::target::word
Thread_invoke_dart_code_stub_offset = 132;
static constexpr dart::compiler::target::word Thread_exit_through_ffi_offset =
692;
696;
static constexpr dart::compiler::target::word Thread_isolate_offset = 40;
static constexpr dart::compiler::target::word Thread_field_table_values_offset =
64;
@ -3487,6 +3499,8 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word Thread_write_barrier_mask_offset =
36;
static constexpr dart::compiler::target::word Thread_callback_code_offset = 688;
static constexpr dart::compiler::target::word
Thread_callback_stack_return_offset = 692;
static constexpr dart::compiler::target::word TimelineStream_enabled_offset = 8;
static constexpr dart::compiler::target::word TwoByteString_data_offset = 12;
static constexpr dart::compiler::target::word Type_arguments_offset = 16;
@ -3864,7 +3878,7 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
Thread_allocate_object_slow_stub_offset = 376;
static constexpr dart::compiler::target::word Thread_api_top_scope_offset =
1536;
1544;
static constexpr dart::compiler::target::word
Thread_auto_scope_native_wrapper_entry_point_offset = 648;
static constexpr dart::compiler::target::word Thread_bool_false_offset = 208;
@ -3875,7 +3889,7 @@ static constexpr dart::compiler::target::word
Thread_call_to_runtime_entry_point_offset = 520;
static constexpr dart::compiler::target::word
Thread_call_to_runtime_stub_offset = 256;
static constexpr dart::compiler::target::word Thread_dart_stream_offset = 1552;
static constexpr dart::compiler::target::word Thread_dart_stream_offset = 1560;
static constexpr dart::compiler::target::word
Thread_dispatch_table_array_offset = 88;
static constexpr dart::compiler::target::word Thread_optimize_entry_offset =
@ -3917,7 +3931,7 @@ static constexpr dart::compiler::target::word Thread_global_object_pool_offset =
static constexpr dart::compiler::target::word
Thread_invoke_dart_code_stub_offset = 248;
static constexpr dart::compiler::target::word Thread_exit_through_ffi_offset =
1528;
1536;
static constexpr dart::compiler::target::word Thread_isolate_offset = 80;
static constexpr dart::compiler::target::word Thread_field_table_values_offset =
128;
@ -3999,6 +4013,8 @@ static constexpr dart::compiler::target::word Thread_write_barrier_mask_offset =
72;
static constexpr dart::compiler::target::word Thread_callback_code_offset =
1520;
static constexpr dart::compiler::target::word
Thread_callback_stack_return_offset = 1528;
static constexpr dart::compiler::target::word TimelineStream_enabled_offset =
16;
static constexpr dart::compiler::target::word TwoByteString_data_offset = 16;
@ -4405,7 +4421,7 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
AOT_Thread_allocate_object_slow_stub_offset = 196;
static constexpr dart::compiler::target::word AOT_Thread_api_top_scope_offset =
728;
732;
static constexpr dart::compiler::target::word
AOT_Thread_auto_scope_native_wrapper_entry_point_offset = 332;
static constexpr dart::compiler::target::word AOT_Thread_bool_false_offset =
@ -4418,7 +4434,7 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
AOT_Thread_call_to_runtime_stub_offset = 136;
static constexpr dart::compiler::target::word AOT_Thread_dart_stream_offset =
736;
740;
static constexpr dart::compiler::target::word
AOT_Thread_dispatch_table_array_offset = 44;
static constexpr dart::compiler::target::word AOT_Thread_optimize_entry_offset =
@ -4461,7 +4477,7 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
AOT_Thread_invoke_dart_code_stub_offset = 132;
static constexpr dart::compiler::target::word
AOT_Thread_exit_through_ffi_offset = 724;
AOT_Thread_exit_through_ffi_offset = 728;
static constexpr dart::compiler::target::word AOT_Thread_isolate_offset = 40;
static constexpr dart::compiler::target::word
AOT_Thread_field_table_values_offset = 64;
@ -4547,6 +4563,8 @@ static constexpr dart::compiler::target::word
AOT_Thread_write_barrier_mask_offset = 36;
static constexpr dart::compiler::target::word AOT_Thread_callback_code_offset =
720;
static constexpr dart::compiler::target::word
AOT_Thread_callback_stack_return_offset = 724;
static constexpr dart::compiler::target::word
AOT_TimelineStream_enabled_offset = 8;
static constexpr dart::compiler::target::word AOT_TwoByteString_data_offset =
@ -4979,7 +4997,7 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
AOT_Thread_allocate_object_slow_stub_offset = 376;
static constexpr dart::compiler::target::word AOT_Thread_api_top_scope_offset =
1464;
1472;
static constexpr dart::compiler::target::word
AOT_Thread_auto_scope_native_wrapper_entry_point_offset = 648;
static constexpr dart::compiler::target::word AOT_Thread_bool_false_offset =
@ -4992,7 +5010,7 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
AOT_Thread_call_to_runtime_stub_offset = 256;
static constexpr dart::compiler::target::word AOT_Thread_dart_stream_offset =
1480;
1488;
static constexpr dart::compiler::target::word
AOT_Thread_dispatch_table_array_offset = 88;
static constexpr dart::compiler::target::word AOT_Thread_optimize_entry_offset =
@ -5035,7 +5053,7 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
AOT_Thread_invoke_dart_code_stub_offset = 248;
static constexpr dart::compiler::target::word
AOT_Thread_exit_through_ffi_offset = 1456;
AOT_Thread_exit_through_ffi_offset = 1464;
static constexpr dart::compiler::target::word AOT_Thread_isolate_offset = 80;
static constexpr dart::compiler::target::word
AOT_Thread_field_table_values_offset = 128;
@ -5122,6 +5140,8 @@ static constexpr dart::compiler::target::word
AOT_Thread_write_barrier_mask_offset = 72;
static constexpr dart::compiler::target::word AOT_Thread_callback_code_offset =
1448;
static constexpr dart::compiler::target::word
AOT_Thread_callback_stack_return_offset = 1456;
static constexpr dart::compiler::target::word
AOT_TimelineStream_enabled_offset = 16;
static constexpr dart::compiler::target::word AOT_TwoByteString_data_offset =
@ -5559,7 +5579,7 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
AOT_Thread_allocate_object_slow_stub_offset = 376;
static constexpr dart::compiler::target::word AOT_Thread_api_top_scope_offset =
1536;
1544;
static constexpr dart::compiler::target::word
AOT_Thread_auto_scope_native_wrapper_entry_point_offset = 648;
static constexpr dart::compiler::target::word AOT_Thread_bool_false_offset =
@ -5572,7 +5592,7 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
AOT_Thread_call_to_runtime_stub_offset = 256;
static constexpr dart::compiler::target::word AOT_Thread_dart_stream_offset =
1552;
1560;
static constexpr dart::compiler::target::word
AOT_Thread_dispatch_table_array_offset = 88;
static constexpr dart::compiler::target::word AOT_Thread_optimize_entry_offset =
@ -5615,7 +5635,7 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
AOT_Thread_invoke_dart_code_stub_offset = 248;
static constexpr dart::compiler::target::word
AOT_Thread_exit_through_ffi_offset = 1528;
AOT_Thread_exit_through_ffi_offset = 1536;
static constexpr dart::compiler::target::word AOT_Thread_isolate_offset = 80;
static constexpr dart::compiler::target::word
AOT_Thread_field_table_values_offset = 128;
@ -5702,6 +5722,8 @@ static constexpr dart::compiler::target::word
AOT_Thread_write_barrier_mask_offset = 72;
static constexpr dart::compiler::target::word AOT_Thread_callback_code_offset =
1520;
static constexpr dart::compiler::target::word
AOT_Thread_callback_stack_return_offset = 1528;
static constexpr dart::compiler::target::word
AOT_TimelineStream_enabled_offset = 16;
static constexpr dart::compiler::target::word AOT_TwoByteString_data_offset =
@ -6134,7 +6156,7 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
AOT_Thread_allocate_object_slow_stub_offset = 196;
static constexpr dart::compiler::target::word AOT_Thread_api_top_scope_offset =
728;
732;
static constexpr dart::compiler::target::word
AOT_Thread_auto_scope_native_wrapper_entry_point_offset = 332;
static constexpr dart::compiler::target::word AOT_Thread_bool_false_offset =
@ -6147,7 +6169,7 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
AOT_Thread_call_to_runtime_stub_offset = 136;
static constexpr dart::compiler::target::word AOT_Thread_dart_stream_offset =
736;
740;
static constexpr dart::compiler::target::word
AOT_Thread_dispatch_table_array_offset = 44;
static constexpr dart::compiler::target::word AOT_Thread_optimize_entry_offset =
@ -6190,7 +6212,7 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
AOT_Thread_invoke_dart_code_stub_offset = 132;
static constexpr dart::compiler::target::word
AOT_Thread_exit_through_ffi_offset = 724;
AOT_Thread_exit_through_ffi_offset = 728;
static constexpr dart::compiler::target::word AOT_Thread_isolate_offset = 40;
static constexpr dart::compiler::target::word
AOT_Thread_field_table_values_offset = 64;
@ -6276,6 +6298,8 @@ static constexpr dart::compiler::target::word
AOT_Thread_write_barrier_mask_offset = 36;
static constexpr dart::compiler::target::word AOT_Thread_callback_code_offset =
720;
static constexpr dart::compiler::target::word
AOT_Thread_callback_stack_return_offset = 724;
static constexpr dart::compiler::target::word
AOT_TimelineStream_enabled_offset = 8;
static constexpr dart::compiler::target::word AOT_TwoByteString_data_offset =
@ -6701,7 +6725,7 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
AOT_Thread_allocate_object_slow_stub_offset = 376;
static constexpr dart::compiler::target::word AOT_Thread_api_top_scope_offset =
1464;
1472;
static constexpr dart::compiler::target::word
AOT_Thread_auto_scope_native_wrapper_entry_point_offset = 648;
static constexpr dart::compiler::target::word AOT_Thread_bool_false_offset =
@ -6714,7 +6738,7 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
AOT_Thread_call_to_runtime_stub_offset = 256;
static constexpr dart::compiler::target::word AOT_Thread_dart_stream_offset =
1480;
1488;
static constexpr dart::compiler::target::word
AOT_Thread_dispatch_table_array_offset = 88;
static constexpr dart::compiler::target::word AOT_Thread_optimize_entry_offset =
@ -6757,7 +6781,7 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
AOT_Thread_invoke_dart_code_stub_offset = 248;
static constexpr dart::compiler::target::word
AOT_Thread_exit_through_ffi_offset = 1456;
AOT_Thread_exit_through_ffi_offset = 1464;
static constexpr dart::compiler::target::word AOT_Thread_isolate_offset = 80;
static constexpr dart::compiler::target::word
AOT_Thread_field_table_values_offset = 128;
@ -6844,6 +6868,8 @@ static constexpr dart::compiler::target::word
AOT_Thread_write_barrier_mask_offset = 72;
static constexpr dart::compiler::target::word AOT_Thread_callback_code_offset =
1448;
static constexpr dart::compiler::target::word
AOT_Thread_callback_stack_return_offset = 1456;
static constexpr dart::compiler::target::word
AOT_TimelineStream_enabled_offset = 16;
static constexpr dart::compiler::target::word AOT_TwoByteString_data_offset =
@ -7274,7 +7300,7 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
AOT_Thread_allocate_object_slow_stub_offset = 376;
static constexpr dart::compiler::target::word AOT_Thread_api_top_scope_offset =
1536;
1544;
static constexpr dart::compiler::target::word
AOT_Thread_auto_scope_native_wrapper_entry_point_offset = 648;
static constexpr dart::compiler::target::word AOT_Thread_bool_false_offset =
@ -7287,7 +7313,7 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
AOT_Thread_call_to_runtime_stub_offset = 256;
static constexpr dart::compiler::target::word AOT_Thread_dart_stream_offset =
1552;
1560;
static constexpr dart::compiler::target::word
AOT_Thread_dispatch_table_array_offset = 88;
static constexpr dart::compiler::target::word AOT_Thread_optimize_entry_offset =
@ -7330,7 +7356,7 @@ static constexpr dart::compiler::target::word
static constexpr dart::compiler::target::word
AOT_Thread_invoke_dart_code_stub_offset = 248;
static constexpr dart::compiler::target::word
AOT_Thread_exit_through_ffi_offset = 1528;
AOT_Thread_exit_through_ffi_offset = 1536;
static constexpr dart::compiler::target::word AOT_Thread_isolate_offset = 80;
static constexpr dart::compiler::target::word
AOT_Thread_field_table_values_offset = 128;
@ -7417,6 +7443,8 @@ static constexpr dart::compiler::target::word
AOT_Thread_write_barrier_mask_offset = 72;
static constexpr dart::compiler::target::word AOT_Thread_callback_code_offset =
1520;
static constexpr dart::compiler::target::word
AOT_Thread_callback_stack_return_offset = 1528;
static constexpr dart::compiler::target::word
AOT_TimelineStream_enabled_offset = 16;
static constexpr dart::compiler::target::word AOT_TwoByteString_data_offset =

View file

@ -256,6 +256,7 @@
FIELD(Thread, write_barrier_entry_point_offset) \
FIELD(Thread, write_barrier_mask_offset) \
FIELD(Thread, callback_code_offset) \
FIELD(Thread, callback_stack_return_offset) \
FIELD(TimelineStream, enabled_offset) \
FIELD(TwoByteString, data_offset) \
FIELD(Type, arguments_offset) \

View file

@ -112,8 +112,8 @@ class StubCodeCompiler : public AllStatic {
static constexpr intptr_t kNativeCallbackTrampolineStackDelta = 2;
#elif defined(TARGET_ARCH_IA32)
static constexpr intptr_t kNativeCallbackTrampolineSize = 10;
static constexpr intptr_t kNativeCallbackSharedStubSize = 90;
static constexpr intptr_t kNativeCallbackTrampolineStackDelta = 2;
static constexpr intptr_t kNativeCallbackSharedStubSize = 134;
static constexpr intptr_t kNativeCallbackTrampolineStackDelta = 4;
#elif defined(TARGET_ARCH_ARM)
static constexpr intptr_t kNativeCallbackTrampolineSize = 12;
static constexpr intptr_t kNativeCallbackSharedStubSize = 140;

View file

@ -375,6 +375,8 @@ void StubCodeCompiler::GenerateJITCallbackTrampolines(
RegisterSet all_registers;
all_registers.AddAllArgumentRegisters();
all_registers.Add(Location::RegisterLocation(
CallingConventions::kPointerToReturnStructRegisterCall));
// The call below might clobber R9 (volatile, holding callback_id).
all_registers.Add(Location::RegisterLocation(R9));

View file

@ -213,7 +213,7 @@ void StubCodeCompiler::GenerateCallNativeThroughSafepointStub(
void StubCodeCompiler::GenerateJITCallbackTrampolines(
Assembler* assembler,
intptr_t next_callback_id) {
Label done;
Label done, ret_4;
// EAX is volatile and doesn't hold any arguments.
COMPILE_ASSERT(!IsArgumentRegister(EAX) && !IsCalleeSavedRegister(EAX));
@ -232,16 +232,20 @@ void StubCodeCompiler::GenerateJITCallbackTrampolines(
const intptr_t shared_stub_start = __ CodeSize();
// Save THR which is callee-saved.
// Save THR and EBX which are callee-saved.
__ pushl(THR);
__ pushl(EBX);
// We need the callback ID after the call to look up the return stack delta.
__ pushl(EAX);
// THR, EBX, EAX & return address
COMPILE_ASSERT(StubCodeCompiler::kNativeCallbackTrampolineStackDelta == 2);
COMPILE_ASSERT(StubCodeCompiler::kNativeCallbackTrampolineStackDelta == 4);
// Load the thread, verify the callback ID and exit the safepoint.
//
// We exit the safepoint inside DLRT_GetThreadForNativeCallbackTrampoline
// in order to safe code size on this shared stub.
// in order to save code size on this shared stub.
{
__ EnterFrame(0);
__ ReserveAlignedFrameSpace(compiler::target::kWordSize);
@ -278,14 +282,54 @@ void StubCodeCompiler::GenerateJITCallbackTrampolines(
// the saved THR and the return address. The target will know to skip them.
__ call(ECX);
// Register state:
// - callee saved registers (should be restored)
// - EBX available as scratch because we restore it later.
// - ESI(THR) contains thread
// - EDI
// - return registers (should not be touched)
// - EAX
// - EDX
// - available scratch registers
// - ECX free
// Load the return stack delta from the thread.
__ movl(ECX,
compiler::Address(
THR, compiler::target::Thread::callback_stack_return_offset()));
__ popl(EBX); // Callback id.
__ movzxb(EBX, __ ElementAddressForRegIndex(
/*external=*/false,
/*array_cid=*/kTypedDataUint8ArrayCid,
/*index_scale=*/1,
/*index_unboxed=*/false,
/*array=*/ECX,
/*index=*/EBX));
#if defined(DEBUG)
// Stack delta should be either 0 or 4.
Label check_done;
__ BranchIfZero(EBX, &check_done);
__ CompareImmediate(EBX, compiler::target::kWordSize);
__ BranchIf(EQUAL, &check_done);
__ Breakpoint();
__ Bind(&check_done);
#endif
// EnterSafepoint takes care to not clobber *any* registers (besides scratch).
__ EnterSafepoint(/*scratch=*/ECX);
// Restore THR (callee-saved).
// Restore callee-saved registers.
__ movl(ECX, EBX);
__ popl(EBX);
__ popl(THR);
__ cmpl(ECX, compiler::Immediate(Smi::RawValue(0)));
__ j(NOT_EQUAL, &ret_4, compiler::Assembler::kNearJump);
__ ret();
__ Bind(&ret_4);
__ ret(Immediate(4));
// 'kNativeCallbackSharedStubSize' is an upper bound because the exact
// instruction size can vary slightly based on OS calling conventions.
ASSERT((__ CodeSize() - shared_stub_start) <= kNativeCallbackSharedStubSize);


@ -266,6 +266,15 @@ class CallingConventions {
static constexpr Register kSecondReturnReg = EDX;
static constexpr Register kPointerToReturnStructRegisterReturn = kReturnReg;
// Whether the callee uses `ret 4` instead of plain `ret` when returning a
// struct by value.
// See: https://c9x.me/x86/html/file_module_x86_id_280.html
#if defined(_WIN32)
static const bool kUsesRet4 = false;
#else
static const bool kUsesRet4 = true;
#endif
// Floating point values are returned on the "FPU stack" (in "ST" registers).
// However, we use XMM0 in our compiler pipeline as the location.
// The move from and to ST is done in FfiCallInstr::EmitNativeCode and


@ -461,7 +461,7 @@ class CallingConventions {
COMPILE_ASSERT((kArgumentRegisters & kReservedCpuRegisters) == 0);
static constexpr Register kFfiAnyNonAbiRegister = R12;
static constexpr Register kFirstNonArgumentRegister = RAX;
static constexpr Register kSecondNonArgumentRegister = RBX;
static constexpr Register kStackPointerRegister = SPREG;


@ -7145,6 +7145,14 @@ bool Function::FfiCSignatureContainsHandles() const {
kFfiHandleCid;
}
bool Function::FfiCSignatureReturnsStruct() const {
ASSERT(IsFfiTrampoline());
const Function& c_signature = Function::Handle(FfiCSignature());
const auto& return_type = AbstractType::Handle(c_signature.result_type());
const bool predefined = IsFfiTypeClassId(return_type.type_class_id());
return !predefined;
}
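The classification in `FfiCSignatureReturnsStruct` reduces to one test: the return type is a struct exactly when its class id is not one of the predefined FFI type classes. A self-contained sketch (the class-id values and names below are invented for illustration; the VM's real ids differ):

```cpp
#include <cassert>

// Hypothetical class-id layout: predefined FFI type classes (Int8, ...,
// Pointer, Handle) get low ids; user-defined Struct subclasses come after.
enum : int {
  kFfiInt8Cid = 0,
  kFfiDoubleCid,
  kFfiPointerCid,
  kFfiHandleCid,
  kFirstUserDefinedCid
};

bool IsFfiTypeClassId(int cid) {
  return cid < kFirstUserDefinedCid;
}

// A C signature returns a struct by value exactly when the return type is
// not one of the predefined FFI type classes.
bool ReturnsStruct(int return_type_cid) {
  return !IsFfiTypeClassId(return_type_cid);
}
```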
int32_t Function::FfiCallbackId() const {
ASSERT(IsFfiTrampoline());
const Object& obj = Object::Handle(raw_ptr()->data());


@ -2477,6 +2477,7 @@ class Function : public Object {
FunctionPtr FfiCSignature() const;
bool FfiCSignatureContainsHandles() const;
bool FfiCSignatureReturnsStruct() const;
// Can only be called on FFI trampolines.
// -1 for Dart -> native calls.


@ -68,6 +68,15 @@ COMPILE_ASSERT(kAbiPreservedFpuRegCount == 4);
// kNativeCallbackTrampolineStackDelta must be added as well.
constexpr intptr_t kCallbackSlotsBeforeSavedArguments = 2;
// For FFI calls passing in TypedData, we save it on the stack before entering
// a Dart frame. This denotes how to get to the backed up typed data.
//
// Note: This is not kCallerSpSlotFromFp on arm.
//
// [fp] holds caller's fp, [fp+4] holds caller's lr, [fp+8] is space for
// return address, [fp+12] is our pushed TypedData pointer.
static const int kFfiCallerTypedDataSlotFromFp = kCallerSpSlotFromFp + 1;
} // namespace dart
#endif // RUNTIME_VM_STACK_FRAME_ARM_H_
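The slot arithmetic in the ARM comment can be checked at compile time. In this sketch the value of `kCallerSpSlotFromFp` is an assumption chosen so the result matches the layout described above ([fp+12] holds the pushed TypedData pointer); the real constant lives in the VM's stack-frame headers.

```cpp
#include <cassert>

constexpr int kWordSize = 4;            // ARM32 word size
constexpr int kCallerSpSlotFromFp = 2;  // assumed value for this sketch
constexpr int kFfiCallerTypedDataSlotFromFp = kCallerSpSlotFromFp + 1;

// Per the comment: the saved TypedData pointer lives at [fp+12].
static_assert(kFfiCallerTypedDataSlotFromFp * kWordSize == 12,
              "TypedData pointer is at [fp+12]");
```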


@ -67,6 +67,10 @@ COMPILE_ASSERT(kAbiPreservedFpuRegCount == 8);
// kNativeCallbackTrampolineStackDelta must be added as well.
constexpr intptr_t kCallbackSlotsBeforeSavedArguments = 2;
// For FFI calls passing in TypedData, we save it on the stack before entering
// a Dart frame. This denotes how to get to the backed up typed data.
static const int kFfiCallerTypedDataSlotFromFp = kCallerSpSlotFromFp;
} // namespace dart
#endif // RUNTIME_VM_STACK_FRAME_ARM64_H_


@ -57,6 +57,10 @@ static const int kExitLinkSlotFromEntryFp = -8;
// kNativeCallbackTrampolineStackDelta must be added as well.
constexpr intptr_t kCallbackSlotsBeforeSavedArguments = 0;
// For FFI calls passing in TypedData, we save it on the stack before entering
// a Dart frame. This denotes how to get to the backed up typed data.
static const int kFfiCallerTypedDataSlotFromFp = kCallerSpSlotFromFp;
} // namespace dart
#endif // RUNTIME_VM_STACK_FRAME_IA32_H_


@ -68,6 +68,10 @@ static const int kExitLinkSlotFromEntryFp = -11;
constexpr intptr_t kCallbackSlotsBeforeSavedArguments =
2 + CallingConventions::kShadowSpaceBytes / kWordSize;
// For FFI calls passing in TypedData, we save it on the stack before entering
// a Dart frame. This denotes how to get to the backed up typed data.
static const int kFfiCallerTypedDataSlotFromFp = kCallerSpSlotFromFp;
} // namespace dart
#endif // RUNTIME_VM_STACK_FRAME_X64_H_


@ -425,6 +425,7 @@ class ObjectPointerVisitor;
V(_Utf8Decoder, "_Utf8Decoder") \
V(_VariableMirror, "_VariableMirror") \
V(_WeakProperty, "_WeakProperty") \
V(_addressOf, "_addressOf") \
V(_classRangeCheck, "_classRangeCheck") \
V(_current, "_current") \
V(_ensureScheduleImmediate, "_ensureScheduleImmediate") \


@ -79,6 +79,7 @@ Thread::Thread(bool is_vm_isolate)
execution_state_(kThreadInNative),
safepoint_state_(0),
ffi_callback_code_(GrowableObjectArray::null()),
ffi_callback_stack_return_(TypedData::null()),
api_top_scope_(NULL),
task_kind_(kUnknownTask),
dart_stream_(NULL),
@ -641,6 +642,8 @@ void Thread::VisitObjectPointers(ObjectPointerVisitor* visitor,
visitor->VisitPointer(reinterpret_cast<ObjectPtr*>(&active_stacktrace_));
visitor->VisitPointer(reinterpret_cast<ObjectPtr*>(&sticky_error_));
visitor->VisitPointer(reinterpret_cast<ObjectPtr*>(&ffi_callback_code_));
visitor->VisitPointer(
reinterpret_cast<ObjectPtr*>(&ffi_callback_stack_return_));
// Visit the api local scope as it has all the api local handles.
ApiLocalScope* scope = api_top_scope_;
@ -1078,6 +1081,47 @@ void Thread::SetFfiCallbackCode(int32_t callback_id, const Code& code) {
array.SetAt(callback_id, code);
}
void Thread::SetFfiCallbackStackReturn(int32_t callback_id,
intptr_t stack_return_delta) {
#if !defined(TARGET_ARCH_IA32)
UNREACHABLE();
#endif
Zone* Z = Thread::Current()->zone();
// In AOT the callback ID might have been allocated during compilation but
// 'ffi_callback_stack_return_' is initialized to empty again when the
// program starts. Therefore we may need to initialize or expand it to
// accommodate the callback ID.
if (ffi_callback_stack_return_ == TypedData::null()) {
ffi_callback_stack_return_ = TypedData::New(
kTypedDataUint8ArrayCid, kInitialCallbackIdsReserved, Heap::kOld);
}
auto& array = TypedData::Handle(Z, ffi_callback_stack_return_);
if (callback_id >= array.Length()) {
const int32_t capacity = array.Length();
// Ensure both that we grow enough and an exponential growth strategy.
const int32_t new_capacity =
Utils::Maximum(callback_id + 1, capacity * 2);
const auto& new_array = TypedData::Handle(
Z, TypedData::New(kTypedDataUint8ArrayCid, new_capacity, Heap::kOld));
for (intptr_t i = 0; i < capacity; i++) {
new_array.SetUint8(i, array.GetUint8(i));
}
array ^= new_array.raw();
ffi_callback_stack_return_ = new_array.raw();
}
ASSERT(callback_id < array.Length());
array.SetUint8(callback_id, stack_return_delta);
}
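The growth strategy above (grow at least to `callback_id + 1`, and at least double, zero-filling new slots) can be sketched with a plain byte vector; this is an illustration, not the VM's TypedData API:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Grow the per-callback byte table so `callback_id` fits, keeping amortized
// O(1) growth by at least doubling. New slots are zero-filled, matching a
// freshly allocated TypedData.
void EnsureFits(std::vector<uint8_t>& table, int32_t callback_id) {
  const int32_t capacity = static_cast<int32_t>(table.size());
  if (callback_id >= capacity) {
    // Ensure both that we grow enough and an exponential growth strategy.
    const int32_t new_capacity = std::max(callback_id + 1, capacity * 2);
    table.resize(new_capacity, 0);
  }
}
```

Starting from a table of 4 entries, `EnsureFits(table, 5)` doubles to 8, while `EnsureFits(table, 20)` grows straight to 21 because doubling alone would not fit the id.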
void Thread::VerifyCallbackIsolate(int32_t callback_id, uword entry) {
NoSafepointScope _;


@ -351,6 +351,10 @@ class Thread : public ThreadState {
return OFFSET_OF(Thread, ffi_callback_code_);
}
static intptr_t callback_stack_return_offset() {
return OFFSET_OF(Thread, ffi_callback_stack_return_);
}
// Tag state is maintained on transitions.
enum {
// Always true in generated state.
@ -829,9 +833,17 @@ class Thread : public ThreadState {
// Store 'code' for the native callback identified by 'callback_id'.
//
// Expands the callback code array as necessary to accommodate the callback
// ID.
void SetFfiCallbackCode(int32_t callback_id, const Code& code);
// Store 'stack_return' for the native callback identified by 'callback_id'.
//
// Expands the callback stack return array as necessary to accommodate the
// callback ID.
void SetFfiCallbackStackReturn(int32_t callback_id,
intptr_t stack_return_delta);
// Ensure that 'callback_id' refers to a valid callback in this isolate.
//
// If "entry != 0", additionally checks that entry is inside the instructions
@ -949,6 +961,7 @@ class Thread : public ThreadState {
uword execution_state_;
std::atomic<uword> safepoint_state_;
GrowableObjectArrayPtr ffi_callback_code_;
TypedDataPtr ffi_callback_stack_return_;
uword exit_through_ffi_ = 0;
ApiLocalScope* api_top_scope_;


@ -2,11 +2,6 @@
# for details. All rights reserved. Use of this source code is governed by a
# BSD-style license that can be found in the LICENSE file.
# TODO(dartbug.com/36730): Implement structs by value.
function_callbacks_structs_by_value_generated_test: Skip
function_callbacks_structs_by_value_test: Skip
function_structs_by_value_generated_test: Skip
[ $builder_tag == msan ]
vmspecific_handle_test: Skip # https://dartbug.com/42314


@ -45,6 +45,12 @@ void main() {
testNativeFunctionSignatureInvalidOptionalNamed();
testNativeFunctionSignatureInvalidOptionalPositional();
testHandleVariance();
testEmptyStructLookupFunctionArgument();
testEmptyStructLookupFunctionReturn();
testEmptyStructAsFunctionArgument();
testEmptyStructAsFunctionReturn();
testEmptyStructFromFunctionArgument();
testEmptyStructFromFunctionReturn();
}
typedef Int8UnOp = Int8 Function(Int8);
@ -476,3 +482,47 @@ class TestStruct1002 extends Struct {
@Handle() //# 1002: compile-time error
Object handle; //# 1002: compile-time error
}
class EmptyStruct extends Struct {}
void testEmptyStructLookupFunctionArgument() {
testLibrary.lookupFunction< //# 1100: compile-time error
Void Function(EmptyStruct), //# 1100: compile-time error
void Function(EmptyStruct)>("DoesNotExist"); //# 1100: compile-time error
}
void testEmptyStructLookupFunctionReturn() {
testLibrary.lookupFunction< //# 1101: compile-time error
EmptyStruct Function(), //# 1101: compile-time error
EmptyStruct Function()>("DoesNotExist"); //# 1101: compile-time error
}
void testEmptyStructAsFunctionArgument() {
final pointer =
Pointer<NativeFunction<Void Function(EmptyStruct)>>.fromAddress(1234);
pointer.asFunction<void Function(EmptyStruct)>(); //# 1102: compile-time error
}
void testEmptyStructAsFunctionReturn() {
final pointer =
Pointer<NativeFunction<EmptyStruct Function()>>.fromAddress(1234);
pointer.asFunction<EmptyStruct Function()>(); //# 1103: compile-time error
}
void _consumeEmptyStruct(EmptyStruct e) {
print(e);
}
void testEmptyStructFromFunctionArgument() {
Pointer.fromFunction<Void Function(EmptyStruct)>(//# 1104: compile-time error
_consumeEmptyStruct); //# 1104: compile-time error
}
EmptyStruct _returnEmptyStruct() {
return EmptyStruct();
}
void testEmptyStructFromFunctionReturn() {
Pointer.fromFunction<EmptyStruct Function()>(//# 1105: compile-time error
_returnEmptyStruct); //# 1105: compile-time error
}


@ -2,11 +2,6 @@
# for details. All rights reserved. Use of this source code is governed by a
# BSD-style license that can be found in the LICENSE file.
# TODO(dartbug.com/36730): Implement structs by value.
function_callbacks_structs_by_value_generated_test: Skip
function_callbacks_structs_by_value_test: Skip
function_structs_by_value_generated_test: Skip
[ $builder_tag == msan ]
vmspecific_handle_test: Skip # https://dartbug.com/42314


@ -45,6 +45,12 @@ void main() {
testNativeFunctionSignatureInvalidOptionalNamed();
testNativeFunctionSignatureInvalidOptionalPositional();
testHandleVariance();
testEmptyStructLookupFunctionArgument();
testEmptyStructLookupFunctionReturn();
testEmptyStructAsFunctionArgument();
testEmptyStructAsFunctionReturn();
testEmptyStructFromFunctionArgument();
testEmptyStructFromFunctionReturn();
}
typedef Int8UnOp = Int8 Function(Int8);
@ -476,3 +482,47 @@ class TestStruct1002 extends Struct {
@Handle() //# 1002: compile-time error
Object handle; //# 1002: compile-time error
}
class EmptyStruct extends Struct {}
void testEmptyStructLookupFunctionArgument() {
testLibrary.lookupFunction< //# 1100: compile-time error
Void Function(EmptyStruct), //# 1100: compile-time error
void Function(EmptyStruct)>("DoesNotExist"); //# 1100: compile-time error
}
void testEmptyStructLookupFunctionReturn() {
testLibrary.lookupFunction< //# 1101: compile-time error
EmptyStruct Function(), //# 1101: compile-time error
EmptyStruct Function()>("DoesNotExist"); //# 1101: compile-time error
}
void testEmptyStructAsFunctionArgument() {
final pointer =
Pointer<NativeFunction<Void Function(EmptyStruct)>>.fromAddress(1234);
pointer.asFunction<void Function(EmptyStruct)>(); //# 1102: compile-time error
}
void testEmptyStructAsFunctionReturn() {
final pointer =
Pointer<NativeFunction<EmptyStruct Function()>>.fromAddress(1234);
pointer.asFunction<EmptyStruct Function()>(); //# 1103: compile-time error
}
void _consumeEmptyStruct(EmptyStruct e) {
print(e);
}
void testEmptyStructFromFunctionArgument() {
Pointer.fromFunction<Void Function(EmptyStruct)>(//# 1104: compile-time error
_consumeEmptyStruct); //# 1104: compile-time error
}
EmptyStruct _returnEmptyStruct() {
return EmptyStruct();
}
void testEmptyStructFromFunctionReturn() {
Pointer.fromFunction<EmptyStruct Function()>(//# 1105: compile-time error
_returnEmptyStruct); //# 1105: compile-time error
}