Module core::arch::aarch64

🔬 This is a nightly-only experimental API. (stdsimd #27731)
This is supported on AArch64 only.

Platform-specific intrinsics for the aarch64 platform.

See the module documentation for more details.
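
Everything below is an index entry only; as orientation, here is a minimal sketch of how one of these intrinsics is typically called. It assumes a nightly toolchain with the experimental feature gate named in the banner above, and the helper name add_pairs is ours, not part of the module.

```rust
#![feature(stdsimd)] // assumption: the feature gate named in the banner above

#[cfg(target_arch = "aarch64")]
fn add_pairs(a: [f32; 2], b: [f32; 2]) -> [f32; 2] {
    use core::arch::aarch64::{float32x2_t, vadd_f32};
    unsafe {
        // float32x2_t has the same size as [f32; 2], so transmuting is a
        // common way to move plain data in and out of the vector types.
        let va: float32x2_t = core::mem::transmute(a);
        let vb: float32x2_t = core::mem::transmute(b);
        core::mem::transmute(vadd_f32(va, vb)) // lane-wise addition
    }
}

#[cfg(target_arch = "aarch64")]
fn main() {
    assert_eq!(add_pairs([1.0, 2.0], [3.0, 4.0]), [4.0, 6.0]);
}

#[cfg(not(target_arch = "aarch64"))]
fn main() {}
```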

Structs

APSR (Experimental, AArch64): Application Program Status Register
ISH (Experimental, AArch64): Inner Shareable is the required shareability domain; reads and writes are the required access types
ISHST (Experimental, AArch64): Inner Shareable is the required shareability domain; writes are the required access type
NSH (Experimental, AArch64): Non-shareable is the required shareability domain; reads and writes are the required access types
NSHST (Experimental, AArch64): Non-shareable is the required shareability domain; writes are the required access type
OSH (Experimental, AArch64): Outer Shareable is the required shareability domain; reads and writes are the required access types
OSHST (Experimental, AArch64): Outer Shareable is the required shareability domain; writes are the required access type
ST (Experimental, AArch64): Full system is the required shareability domain; writes are the required access type
SY (Experimental, AArch64): Full system is the required shareability domain; reads and writes are the required access types

float32x2_t (Experimental, AArch64): ARM-specific 64-bit wide vector of two packed f32.
float32x4_t (Experimental, AArch64): ARM-specific 128-bit wide vector of four packed f32.
float64x1_t (Experimental, AArch64): ARM-specific 64-bit wide vector of one packed f64.
float64x2_t (Experimental, AArch64): ARM-specific 128-bit wide vector of two packed f64.
int16x2_t (Experimental, AArch64): ARM-specific 32-bit wide vector of two packed i16.
int16x4_t (Experimental, AArch64): ARM-specific 64-bit wide vector of four packed i16.
int16x8_t (Experimental, AArch64): ARM-specific 128-bit wide vector of eight packed i16.
int32x2_t (Experimental, AArch64): ARM-specific 64-bit wide vector of two packed i32.
int32x4_t (Experimental, AArch64): ARM-specific 128-bit wide vector of four packed i32.
int64x1_t (Experimental, AArch64): ARM-specific 64-bit wide vector of one packed i64.
int64x2_t (Experimental, AArch64): ARM-specific 128-bit wide vector of two packed i64.
int8x4_t (Experimental, AArch64): ARM-specific 32-bit wide vector of four packed i8.
int8x8_t (Experimental, AArch64): ARM-specific 64-bit wide vector of eight packed i8.
int8x16_t (Experimental, AArch64): ARM-specific 128-bit wide vector of sixteen packed i8.
int8x16x2_t (Experimental, AArch64): ARM-specific type containing two int8x16_t vectors.
int8x16x3_t (Experimental, AArch64): ARM-specific type containing three int8x16_t vectors.
int8x16x4_t (Experimental, AArch64): ARM-specific type containing four int8x16_t vectors.
int8x8x2_t (Experimental, AArch64): ARM-specific type containing two int8x8_t vectors.
int8x8x3_t (Experimental, AArch64): ARM-specific type containing three int8x8_t vectors.
int8x8x4_t (Experimental, AArch64): ARM-specific type containing four int8x8_t vectors.
poly16x4_t (Experimental, AArch64): ARM-specific 64-bit wide vector of four packed u16.
poly16x8_t (Experimental, AArch64): ARM-specific 128-bit wide vector of eight packed u16.
poly64x1_t (Experimental, AArch64): ARM-specific 64-bit wide vector of one packed p64.

poly64x2_t (Experimental, AArch64): ARM-specific 128-bit wide vector of two packed p64.

poly8x8_t (Experimental, AArch64): ARM-specific 64-bit wide polynomial vector of eight packed u8.
poly8x16_t (Experimental, AArch64): ARM-specific 128-bit wide vector of sixteen packed u8.
poly8x16x2_t (Experimental, AArch64): ARM-specific type containing two poly8x16_t vectors.
poly8x16x3_t (Experimental, AArch64): ARM-specific type containing three poly8x16_t vectors.
poly8x16x4_t (Experimental, AArch64): ARM-specific type containing four poly8x16_t vectors.
poly8x8x2_t (Experimental, AArch64): ARM-specific type containing two poly8x8_t vectors.
poly8x8x3_t (Experimental, AArch64): ARM-specific type containing three poly8x8_t vectors.
poly8x8x4_t (Experimental, AArch64): ARM-specific type containing four poly8x8_t vectors.
uint16x2_t (Experimental, AArch64): ARM-specific 32-bit wide vector of two packed u16.
uint16x4_t (Experimental, AArch64): ARM-specific 64-bit wide vector of four packed u16.
uint16x8_t (Experimental, AArch64): ARM-specific 128-bit wide vector of eight packed u16.
uint32x2_t (Experimental, AArch64): ARM-specific 64-bit wide vector of two packed u32.
uint32x4_t (Experimental, AArch64): ARM-specific 128-bit wide vector of four packed u32.
uint64x1_t (Experimental, AArch64): ARM-specific 64-bit wide vector of one packed u64.
uint64x2_t (Experimental, AArch64): ARM-specific 128-bit wide vector of two packed u64.
uint8x4_t (Experimental, AArch64): ARM-specific 32-bit wide vector of four packed u8.
uint8x8_t (Experimental, AArch64): ARM-specific 64-bit wide vector of eight packed u8.
uint8x16_t (Experimental, AArch64): ARM-specific 128-bit wide vector of sixteen packed u8.
uint8x16x2_t (Experimental, AArch64): ARM-specific type containing two uint8x16_t vectors.
uint8x16x3_t (Experimental, AArch64): ARM-specific type containing three uint8x16_t vectors.
uint8x16x4_t (Experimental, AArch64): ARM-specific type containing four uint8x16_t vectors.
uint8x8x2_t (Experimental, AArch64): ARM-specific type containing two uint8x8_t vectors.
uint8x8x3_t (Experimental, AArch64): ARM-specific type containing three uint8x8_t vectors.
uint8x8x4_t (Experimental, AArch64): ARM-specific type containing four uint8x8_t vectors.
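
The x2_t, x3_t and x4_t entries are aggregates of two, three or four of the corresponding single vectors, as consumed by the multi-table look-up functions further down. The single vectors themselves are opaque SIMD wrappers of a fixed width (64-bit for the plain types, 128-bit for the q-sized ones); a hedged sketch of building one by hand through a size-matched transmute:

```rust
// Sketch only: relies on uint8x16_t and [u8; 16] both being 16 bytes wide.
#[cfg(target_arch = "aarch64")]
unsafe fn bytes_to_vector(bytes: [u8; 16]) -> core::arch::aarch64::uint8x16_t {
    core::mem::transmute(bytes)
}
```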

Functions

__breakpoint (Experimental, AArch64): Inserts a breakpoint instruction.
__clrex (Experimental, AArch64): Removes the exclusive lock created by LDREX

__crc32b (Experimental, AArch64 and crc): CRC32 single round checksum for bytes (8 bits).
__crc32h (Experimental, AArch64 and crc): CRC32 single round checksum for half words (16 bits).
__crc32w (Experimental, AArch64 and crc): CRC32 single round checksum for words (32 bits).
__crc32d (Experimental, AArch64 and crc): CRC32 single round checksum for quad words (64 bits).
__crc32cb (Experimental, AArch64 and crc): CRC32-C single round checksum for bytes (8 bits).
__crc32ch (Experimental, AArch64 and crc): CRC32-C single round checksum for half words (16 bits).
__crc32cw (Experimental, AArch64 and crc): CRC32-C single round checksum for words (32 bits).
__crc32cd (Experimental, AArch64 and crc): CRC32-C single round checksum for quad words (64 bits).
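
Each of these intrinsics folds one more unit of input into a running CRC32 (or CRC32-C, the Castagnoli polynomial) value. A hedged sketch of checksumming a byte slice with the byte-wide variant; the helper name is ours and the crc target feature must be available:

```rust
#[cfg(target_arch = "aarch64")]
#[target_feature(enable = "crc")]
unsafe fn crc32_of(mut crc: u32, data: &[u8]) -> u32 {
    use core::arch::aarch64::__crc32b;
    for &byte in data {
        crc = __crc32b(crc, byte); // one single-round update per byte
    }
    crc
}
```

For long inputs the usual refinement is to feed eight bytes at a time through __crc32d and finish the tail with the narrower variants.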

__dbg (Experimental, AArch64): Generates a DBG instruction.
__dmb (Experimental, AArch64): Generates a DMB (data memory barrier) instruction or equivalent CP15 instruction.
__dsb (Experimental, AArch64): Generates a DSB (data synchronization barrier) instruction or equivalent CP15 instruction.
__isb (Experimental, AArch64): Generates an ISB (instruction synchronization barrier) instruction or equivalent CP15 instruction.

__ldrex (Experimental, AArch64): Executes an exclusive LDR instruction for a 32-bit value.
__ldrexb (Experimental, AArch64): Executes an exclusive LDR instruction for an 8-bit value.
__ldrexh (Experimental, AArch64): Executes an exclusive LDR instruction for a 16-bit value.

__nop (Experimental, AArch64): Generates an unspecified no-op instruction.
__qadd (Experimental, AArch64): Signed saturating addition
__qadd8 (Experimental, AArch64): Saturating four 8-bit integer additions
__qadd16 (Experimental, AArch64): Saturating two 16-bit integer additions
__qasx (Experimental, AArch64): Returns the 16-bit signed saturated equivalent of
__qdbl (Experimental, AArch64): Insert a QADD instruction
__qsax (Experimental, AArch64): Returns the 16-bit signed saturated equivalent of
__qsub (Experimental, AArch64): Signed saturating subtraction

__qsub8 (Experimental, AArch64): Saturating four 8-bit integer subtractions
__qsub16 (Experimental, AArch64): Saturating two 16-bit integer subtractions
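
The __q* family saturates instead of wrapping: on overflow the result clamps to the minimum or maximum of the type. A hedged one-liner, assuming the ACLE-style signature __qadd(i32, i32) -> i32:

```rust
#[cfg(target_arch = "aarch64")]
unsafe fn clamped_sum(a: i32, b: i32) -> i32 {
    core::arch::aarch64::__qadd(a, b) // i32::MAX / i32::MIN on overflow, never wraps
}
```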

__rsr (Experimental, AArch64): Reads a 32-bit system register
__rsrp (Experimental, AArch64): Reads a system register containing an address
__sadd8 (Experimental, AArch64): Returns the 8-bit signed saturated equivalent of
__sadd16 (Experimental, AArch64): Returns the 16-bit signed saturated equivalent of
__sasx (Experimental, AArch64): Returns the 16-bit signed equivalent of
__sel (Experimental, AArch64): Select bytes from each operand according to APSR GE flags
__sev (Experimental, AArch64): Generates a SEV (send a global event) hint instruction.
__shadd8 (Experimental, AArch64): Signed halving parallel byte-wise addition.
__shadd16 (Experimental, AArch64): Signed halving parallel halfword-wise addition.
__shsub8 (Experimental, AArch64): Signed halving parallel byte-wise subtraction.
__shsub16 (Experimental, AArch64): Signed halving parallel halfword-wise subtraction.
__smlabb (Experimental, AArch64): Insert a SMLABB instruction
__smlabt (Experimental, AArch64): Insert a SMLABT instruction
__smlad (Experimental, AArch64): Dual 16-bit Signed Multiply with Addition of products and 32-bit accumulation.
__smlatb (Experimental, AArch64): Insert a SMLATB instruction
__smlatt (Experimental, AArch64): Insert a SMLATT instruction
__smlawb (Experimental, AArch64): Insert a SMLAWB instruction
__smlawt (Experimental, AArch64): Insert a SMLAWT instruction
__smlsd (Experimental, AArch64): Dual 16-bit Signed Multiply with Subtraction of products and 32-bit accumulation and overflow detection.
__smuad (Experimental, AArch64): Signed Dual Multiply Add.
__smuadx (Experimental, AArch64): Signed Dual Multiply Add Reversed.
__smulbb (Experimental, AArch64): Insert a SMULBB instruction

__smulbt (Experimental, AArch64): Insert a SMULBT instruction

__smultb (Experimental, AArch64): Insert a SMULTB instruction
__smultt (Experimental, AArch64): Insert a SMULTT instruction
__smulwb (Experimental, AArch64): Insert a SMULWB instruction
__smulwt (Experimental, AArch64): Insert a SMULWT instruction
__smusd (Experimental, AArch64): Signed Dual Multiply Subtract.
__smusdx (Experimental, AArch64): Signed Dual Multiply Subtract Reversed.
__ssub8 (Experimental, AArch64): Inserts a SSUB8 instruction.

__strex (Experimental, AArch64): Executes an exclusive STR instruction for 32-bit values
__strexb (Experimental, AArch64): Executes an exclusive STR instruction for 8-bit values
__strexh (Experimental, AArch64): Executes an exclusive STR instruction for 16-bit values

__usad8 (Experimental, AArch64): Sum of 8-bit absolute differences.
__usada8 (Experimental, AArch64): Sum of 8-bit absolute differences and constant.
__usub8 (Experimental, AArch64): Inserts a USUB8 instruction.
__wfe (Experimental, AArch64): Generates a WFE (wait for event) hint instruction, or nothing.
__wfi (Experimental, AArch64): Generates a WFI (wait for interrupt) hint instruction, or nothing.
__wsr (Experimental, AArch64): Writes a 32-bit system register
__wsrp (Experimental, AArch64): Writes a system register containing an address
__yield (Experimental, AArch64): Generates a YIELD hint instruction.
_cls_u32 (Experimental, AArch64): Counts the leading most significant bits set.
_cls_u64 (Experimental, AArch64): Counts the leading most significant bits set.
_clz_u8 (Experimental, AArch64 and v7): Count Leading Zeros.
_clz_u16 (Experimental, AArch64 and v7): Count Leading Zeros.
_clz_u32 (Experimental, AArch64 and v7): Count Leading Zeros.
_clz_u64 (Experimental, AArch64): Count Leading Zeros.
_rbit_u32 (Experimental, AArch64 and v7): Reverse the bit order.
_rbit_u64 (Experimental, AArch64): Reverse the bit order.
_rev_u16 (Experimental, AArch64): Reverse the order of the bytes.
_rev_u32 (Experimental, AArch64): Reverse the order of the bytes.
_rev_u64 (Experimental, AArch64): Reverse the order of the bytes.
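
A classic combination of these bit-manipulation primitives is a trailing-zero count: reverse the bit order, then count leading zeros. A hedged sketch, assuming both intrinsics take and return u64:

```rust
#[cfg(target_arch = "aarch64")]
unsafe fn trailing_zeros(x: u64) -> u64 {
    use core::arch::aarch64::{_clz_u64, _rbit_u64};
    _clz_u64(_rbit_u64(x)) // RBIT turns trailing zeros into leading zeros
}
```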

brk (Experimental, AArch64): Generates the trap instruction BRK 1
udf (Experimental, AArch64): Generates the trap instruction UDF

vadd_f32 (Experimental, neon and v7 and AArch64): Vector add.
vadd_f64 (Experimental, AArch64 and neon): Vector add.
vadd_s8 (Experimental, neon and v7 and AArch64): Vector add.
vadd_s16 (Experimental, neon and v7 and AArch64): Vector add.
vadd_s32 (Experimental, neon and v7 and AArch64): Vector add.
vadd_u8 (Experimental, neon and v7 and AArch64): Vector add.
vadd_u16 (Experimental, neon and v7 and AArch64): Vector add.
vadd_u32 (Experimental, neon and v7 and AArch64): Vector add.
vaddd_s64 (Experimental, AArch64 and neon): Vector add.
vaddd_u64 (Experimental, AArch64 and neon): Vector add.
vaddl_s8 (Experimental, neon and v7 and AArch64): Vector long add.
vaddl_s16 (Experimental, neon and v7 and AArch64): Vector long add.
vaddl_s32 (Experimental, neon and v7 and AArch64): Vector long add.
vaddl_u8 (Experimental, neon and v7 and AArch64): Vector long add.
vaddl_u16 (Experimental, neon and v7 and AArch64): Vector long add.
vaddl_u32 (Experimental, neon and v7 and AArch64): Vector long add.
vaddq_f32 (Experimental, neon and v7 and AArch64): Vector add.
vaddq_f64 (Experimental, AArch64 and neon): Vector add.
vaddq_s8 (Experimental, neon and v7 and AArch64): Vector add.
vaddq_s16 (Experimental, neon and v7 and AArch64): Vector add.
vaddq_s32 (Experimental, neon and v7 and AArch64): Vector add.
vaddq_s64 (Experimental, neon and v7 and AArch64): Vector add.
vaddq_u8 (Experimental, neon and v7 and AArch64): Vector add.
vaddq_u16 (Experimental, neon and v7 and AArch64): Vector add.
vaddq_u32 (Experimental, neon and v7 and AArch64): Vector add.
vaddq_u64 (Experimental, neon and v7 and AArch64): Vector add.
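
The vadd_*/vaddq_* forms add lane-wise at the original width, while the vaddl_* forms widen the result, so eight u8 lanes can be summed into u16 lanes without overflow. A hedged sketch using vaddl_u8:

```rust
#[cfg(target_arch = "aarch64")]
unsafe fn widening_add(a: [u8; 8], b: [u8; 8]) -> [u16; 8] {
    use core::arch::aarch64::vaddl_u8;
    // uint8x8_t + uint8x8_t -> uint16x8_t, so 255 + 255 = 510 still fits.
    core::mem::transmute(vaddl_u8(core::mem::transmute(a), core::mem::transmute(b)))
}
```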

vaesdq_u8 (Experimental, AArch64 and crypto): AES single round decryption.
vaeseq_u8 (Experimental, AArch64 and crypto): AES single round encryption.
vaesimcq_u8 (Experimental, AArch64 and crypto): AES inverse mix columns.
vaesmcq_u8 (Experimental, AArch64 and crypto): AES mix columns.
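
One full AES encryption round is conventionally composed from these pieces: vaeseq_u8 performs AddRoundKey, SubBytes and ShiftRows, and vaesmcq_u8 then applies MixColumns. A hedged sketch (the crypto target feature is assumed; key expansion and the final round are not shown):

```rust
#[cfg(target_arch = "aarch64")]
#[target_feature(enable = "crypto")]
unsafe fn aes_encrypt_round(
    state: core::arch::aarch64::uint8x16_t,
    round_key: core::arch::aarch64::uint8x16_t,
) -> core::arch::aarch64::uint8x16_t {
    use core::arch::aarch64::{vaeseq_u8, vaesmcq_u8};
    vaesmcq_u8(vaeseq_u8(state, round_key))
}
```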

vcombine_f32 (Experimental, AArch64 and neon): Vector combine
vcombine_f64 (Experimental, AArch64 and neon): Vector combine
vcombine_p8 (Experimental, AArch64 and neon): Vector combine
vcombine_p16 (Experimental, AArch64 and neon): Vector combine
vcombine_p64 (Experimental, AArch64 and neon): Vector combine
vcombine_s8 (Experimental, AArch64 and neon): Vector combine
vcombine_s16 (Experimental, AArch64 and neon): Vector combine
vcombine_s32 (Experimental, AArch64 and neon): Vector combine
vcombine_s64 (Experimental, AArch64 and neon): Vector combine
vcombine_u8 (Experimental, AArch64 and neon): Vector combine
vcombine_u16 (Experimental, AArch64 and neon): Vector combine
vcombine_u32 (Experimental, AArch64 and neon): Vector combine
vcombine_u64 (Experimental, AArch64 and neon): Vector combine
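
vcombine_* concatenates two 64-bit vectors into one 128-bit vector, with the first argument becoming the low half. A hedged sketch:

```rust
#[cfg(target_arch = "aarch64")]
unsafe fn join_halves(lo: [u8; 8], hi: [u8; 8]) -> [u8; 16] {
    use core::arch::aarch64::vcombine_u8;
    // The first operand fills lanes 0..7, the second lanes 8..15.
    core::mem::transmute(vcombine_u8(core::mem::transmute(lo), core::mem::transmute(hi)))
}
```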

vmaxv_f32 (Experimental, AArch64 and neon): Horizontal vector max.
vmaxv_s8 (Experimental, AArch64 and neon): Horizontal vector max.
vmaxv_s16 (Experimental, AArch64 and neon): Horizontal vector max.
vmaxv_s32 (Experimental, AArch64 and neon): Horizontal vector max.
vmaxv_u8 (Experimental, AArch64 and neon): Horizontal vector max.
vmaxv_u16 (Experimental, AArch64 and neon): Horizontal vector max.
vmaxv_u32 (Experimental, AArch64 and neon): Horizontal vector max.
vmaxvq_f32 (Experimental, AArch64 and neon): Horizontal vector max.
vmaxvq_f64 (Experimental, AArch64 and neon): Horizontal vector max.
vmaxvq_s8 (Experimental, AArch64 and neon): Horizontal vector max.
vmaxvq_s16 (Experimental, AArch64 and neon): Horizontal vector max.
vmaxvq_s32 (Experimental, AArch64 and neon): Horizontal vector max.
vmaxvq_u8 (Experimental, AArch64 and neon): Horizontal vector max.
vmaxvq_u16 (Experimental, AArch64 and neon): Horizontal vector max.
vmaxvq_u32 (Experimental, AArch64 and neon): Horizontal vector max.
vminv_f32 (Experimental, AArch64 and neon): Horizontal vector min.
vminv_s8 (Experimental, AArch64 and neon): Horizontal vector min.
vminv_s16 (Experimental, AArch64 and neon): Horizontal vector min.
vminv_s32 (Experimental, AArch64 and neon): Horizontal vector min.
vminv_u8 (Experimental, AArch64 and neon): Horizontal vector min.
vminv_u16 (Experimental, AArch64 and neon): Horizontal vector min.
vminv_u32 (Experimental, AArch64 and neon): Horizontal vector min.
vminvq_f32 (Experimental, AArch64 and neon): Horizontal vector min.
vminvq_f64 (Experimental, AArch64 and neon): Horizontal vector min.
vminvq_s8 (Experimental, AArch64 and neon): Horizontal vector min.
vminvq_s16 (Experimental, AArch64 and neon): Horizontal vector min.
vminvq_s32 (Experimental, AArch64 and neon): Horizontal vector min.
vminvq_u8 (Experimental, AArch64 and neon): Horizontal vector min.
vminvq_u16 (Experimental, AArch64 and neon): Horizontal vector min.
vminvq_u32 (Experimental, AArch64 and neon): Horizontal vector min.
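
These horizontal reductions collapse a whole vector to its single largest or smallest lane, for example the maximum byte of a 16-byte block. A hedged sketch:

```rust
#[cfg(target_arch = "aarch64")]
unsafe fn max_byte(block: [u8; 16]) -> u8 {
    use core::arch::aarch64::vmaxvq_u8;
    vmaxvq_u8(core::mem::transmute(block)) // max across all 16 lanes
}
```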

vmovl_s8 (Experimental, neon and v7 and AArch64): Vector long move.
vmovl_s16 (Experimental, neon and v7 and AArch64): Vector long move.
vmovl_s32 (Experimental, neon and v7 and AArch64): Vector long move.
vmovl_u8 (Experimental, neon and v7 and AArch64): Vector long move.
vmovl_u16 (Experimental, neon and v7 and AArch64): Vector long move.
vmovl_u32 (Experimental, neon and v7 and AArch64): Vector long move.
vmovn_s16 (Experimental, neon and v7 and AArch64): Vector narrow integer.
vmovn_s32 (Experimental, neon and v7 and AArch64): Vector narrow integer.
vmovn_s64 (Experimental, neon and v7 and AArch64): Vector narrow integer.
vmovn_u16 (Experimental, neon and v7 and AArch64): Vector narrow integer.
vmovn_u32 (Experimental, neon and v7 and AArch64): Vector narrow integer.
vmovn_u64 (Experimental, neon and v7 and AArch64): Vector narrow integer.
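
vmovl_* widens each lane (zero- or sign-extending as appropriate) and vmovn_* narrows by keeping the low half of each lane. A hedged round-trip sketch; in real code the arithmetic would happen in the widened form between the two calls:

```rust
#[cfg(target_arch = "aarch64")]
unsafe fn widen_then_narrow(a: [u8; 8]) -> [u8; 8] {
    use core::arch::aarch64::{vmovl_u8, vmovn_u16};
    // uint8x8_t -> uint16x8_t -> uint8x8_t
    core::mem::transmute(vmovn_u16(vmovl_u8(core::mem::transmute(a))))
}
```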

vpmax_f32 (Experimental, neon and v7 and AArch64): Folding maximum of adjacent pairs
vpmax_s8 (Experimental, neon and v7 and AArch64): Folding maximum of adjacent pairs
vpmax_s16 (Experimental, neon and v7 and AArch64): Folding maximum of adjacent pairs
vpmax_s32 (Experimental, neon and v7 and AArch64): Folding maximum of adjacent pairs
vpmax_u8 (Experimental, neon and v7 and AArch64): Folding maximum of adjacent pairs
vpmax_u16 (Experimental, neon and v7 and AArch64): Folding maximum of adjacent pairs
vpmax_u32 (Experimental, neon and v7 and AArch64): Folding maximum of adjacent pairs
vpmaxq_f32 (Experimental, AArch64 and neon): Folding maximum of adjacent pairs
vpmaxq_f64 (Experimental, AArch64 and neon): Folding maximum of adjacent pairs
vpmaxq_s8 (Experimental, AArch64 and neon): Folding maximum of adjacent pairs
vpmaxq_s16 (Experimental, AArch64 and neon): Folding maximum of adjacent pairs
vpmaxq_s32 (Experimental, AArch64 and neon): Folding maximum of adjacent pairs
vpmaxq_u8 (Experimental, AArch64 and neon): Folding maximum of adjacent pairs
vpmaxq_u16 (Experimental, AArch64 and neon): Folding maximum of adjacent pairs
vpmaxq_u32 (Experimental, AArch64 and neon): Folding maximum of adjacent pairs
vpmin_f32 (Experimental, neon and v7 and AArch64): Folding minimum of adjacent pairs
vpmin_s8 (Experimental, neon and v7 and AArch64): Folding minimum of adjacent pairs
vpmin_s16 (Experimental, neon and v7 and AArch64): Folding minimum of adjacent pairs
vpmin_s32 (Experimental, neon and v7 and AArch64): Folding minimum of adjacent pairs
vpmin_u8 (Experimental, neon and v7 and AArch64): Folding minimum of adjacent pairs
vpmin_u16 (Experimental, neon and v7 and AArch64): Folding minimum of adjacent pairs
vpmin_u32 (Experimental, neon and v7 and AArch64): Folding minimum of adjacent pairs
vpminq_f32 (Experimental, AArch64 and neon): Folding minimum of adjacent pairs
vpminq_f64 (Experimental, AArch64 and neon): Folding minimum of adjacent pairs
vpminq_s8 (Experimental, AArch64 and neon): Folding minimum of adjacent pairs
vpminq_s16 (Experimental, AArch64 and neon): Folding minimum of adjacent pairs
vpminq_s32 (Experimental, AArch64 and neon): Folding minimum of adjacent pairs
vpminq_u8 (Experimental, AArch64 and neon): Folding minimum of adjacent pairs
vpminq_u16 (Experimental, AArch64 and neon): Folding minimum of adjacent pairs
vpminq_u32 (Experimental, AArch64 and neon): Folding minimum of adjacent pairs
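
The folding (pairwise) forms reduce adjacent lanes rather than whole vectors: each output lane is the maximum or minimum of one neighbouring input pair, taken first from the first operand and then from the second. A hedged sketch:

```rust
#[cfg(target_arch = "aarch64")]
unsafe fn pairwise_max(a: [u8; 8], b: [u8; 8]) -> [u8; 8] {
    use core::arch::aarch64::vpmax_u8;
    // Result lanes 0..3 come from pairs of `a`, lanes 4..7 from pairs of `b`.
    core::mem::transmute(vpmax_u8(core::mem::transmute(a), core::mem::transmute(b)))
}
```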

vqtbl1_p8 (Experimental, AArch64 and neon): Table look-up
vqtbl1_s8 (Experimental, AArch64 and neon): Table look-up
vqtbl1_u8 (Experimental, AArch64 and neon): Table look-up
vqtbl1q_p8 (Experimental, AArch64 and neon): Table look-up
vqtbl1q_s8 (Experimental, AArch64 and neon): Table look-up
vqtbl1q_u8 (Experimental, AArch64 and neon): Table look-up
vqtbl2_p8 (Experimental, AArch64 and neon): Table look-up
vqtbl2_s8 (Experimental, AArch64 and neon): Table look-up
vqtbl2_u8 (Experimental, AArch64 and neon): Table look-up
vqtbl2q_p8 (Experimental, AArch64 and neon): Table look-up
vqtbl2q_s8 (Experimental, AArch64 and neon): Table look-up
vqtbl2q_u8 (Experimental, AArch64 and neon): Table look-up
vqtbl3_p8 (Experimental, AArch64 and neon): Table look-up
vqtbl3_s8 (Experimental, AArch64 and neon): Table look-up
vqtbl3_u8 (Experimental, AArch64 and neon): Table look-up
vqtbl3q_p8 (Experimental, AArch64 and neon): Table look-up
vqtbl3q_s8 (Experimental, AArch64 and neon): Table look-up
vqtbl3q_u8 (Experimental, AArch64 and neon): Table look-up
vqtbl4_p8 (Experimental, AArch64 and neon): Table look-up
vqtbl4_s8 (Experimental, AArch64 and neon): Table look-up
vqtbl4_u8 (Experimental, AArch64 and neon): Table look-up
vqtbl4q_p8 (Experimental, AArch64 and neon): Table look-up
vqtbl4q_s8 (Experimental, AArch64 and neon): Table look-up
vqtbl4q_u8 (Experimental, AArch64 and neon): Table look-up
vqtbx1_p8 (Experimental, AArch64 and neon): Extended table look-up
vqtbx1_s8 (Experimental, AArch64 and neon): Extended table look-up
vqtbx1_u8 (Experimental, AArch64 and neon): Extended table look-up
vqtbx1q_p8 (Experimental, AArch64 and neon): Extended table look-up
vqtbx1q_s8 (Experimental, AArch64 and neon): Extended table look-up
vqtbx1q_u8 (Experimental, AArch64 and neon): Extended table look-up
vqtbx2_p8 (Experimental, AArch64 and neon): Extended table look-up
vqtbx2_s8 (Experimental, AArch64 and neon): Extended table look-up
vqtbx2_u8 (Experimental, AArch64 and neon): Extended table look-up
vqtbx2q_p8 (Experimental, AArch64 and neon): Extended table look-up
vqtbx2q_s8 (Experimental, AArch64 and neon): Extended table look-up
vqtbx2q_u8 (Experimental, AArch64 and neon): Extended table look-up
vqtbx3_p8 (Experimental, AArch64 and neon): Extended table look-up
vqtbx3_s8 (Experimental, AArch64 and neon): Extended table look-up
vqtbx3_u8 (Experimental, AArch64 and neon): Extended table look-up
vqtbx3q_p8 (Experimental, AArch64 and neon): Extended table look-up
vqtbx3q_s8 (Experimental, AArch64 and neon): Extended table look-up
vqtbx3q_u8 (Experimental, AArch64 and neon): Extended table look-up
vqtbx4_p8 (Experimental, AArch64 and neon): Extended table look-up
vqtbx4_s8 (Experimental, AArch64 and neon): Extended table look-up
vqtbx4_u8 (Experimental, AArch64 and neon): Extended table look-up
vqtbx4q_p8 (Experimental, AArch64 and neon): Extended table look-up
vqtbx4q_s8 (Experimental, AArch64 and neon): Extended table look-up
vqtbx4q_u8 (Experimental, AArch64 and neon): Extended table look-up
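
vqtbl1q_u8 looks every index byte up in a 16-byte table and returns zero for out-of-range indices, which makes it a general byte shuffle; the 2/3/4-table forms take the x2/x3/x4 aggregate types from the Structs list for larger tables. A hedged sketch that reverses a 16-byte block:

```rust
#[cfg(target_arch = "aarch64")]
unsafe fn reverse_block(block: [u8; 16]) -> [u8; 16] {
    use core::arch::aarch64::vqtbl1q_u8;
    let idx: [u8; 16] = [15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0];
    core::mem::transmute(vqtbl1q_u8(
        core::mem::transmute(block), // table
        core::mem::transmute(idx),   // per-lane indices into the table
    ))
}
```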

vrsqrte_f32 (Experimental, AArch64 and neon): Reciprocal square-root estimate.

vsha1cq_u32 (Experimental, AArch64 and crypto): SHA1 hash update accelerator, choose.
vsha1h_u32 (Experimental, AArch64 and crypto): SHA1 fixed rotate.
vsha1mq_u32 (Experimental, AArch64 and crypto): SHA1 hash update accelerator, majority.
vsha1pq_u32 (Experimental, AArch64 and crypto): SHA1 hash update accelerator, parity.
vsha1su0q_u32 (Experimental, AArch64 and crypto): SHA1 schedule update accelerator, first part.
vsha1su1q_u32 (Experimental, AArch64 and crypto): SHA1 schedule update accelerator, second part.
vsha256h2q_u32 (Experimental, AArch64 and crypto): SHA256 hash update accelerator, upper part.
vsha256hq_u32 (Experimental, AArch64 and crypto): SHA256 hash update accelerator.
vsha256su0q_u32 (Experimental, AArch64 and crypto): SHA256 schedule update accelerator, first part.
vsha256su1q_u32 (Experimental, AArch64 and crypto): SHA256 schedule update accelerator, second part.

vtbl1_p8 (Experimental, AArch64 and neon,v7): Table look-up
vtbl1_s8 (Experimental, AArch64 and neon,v7): Table look-up
vtbl1_u8 (Experimental, AArch64 and neon,v7): Table look-up
vtbl2_p8 (Experimental, AArch64 and neon,v7): Table look-up
vtbl2_s8 (Experimental, AArch64 and neon,v7): Table look-up
vtbl2_u8 (Experimental, AArch64 and neon,v7): Table look-up
vtbl3_p8 (Experimental, AArch64 and neon,v7): Table look-up
vtbl3_s8 (Experimental, AArch64 and neon,v7): Table look-up
vtbl3_u8 (Experimental, AArch64 and neon,v7): Table look-up
vtbl4_p8 (Experimental, AArch64 and neon,v7): Table look-up
vtbl4_s8 (Experimental, AArch64 and neon,v7): Table look-up
vtbl4_u8 (Experimental, AArch64 and neon,v7): Table look-up
vtbx1_p8 (Experimental, AArch64 and neon,v7): Extended table look-up
vtbx1_s8 (Experimental, AArch64 and neon,v7): Extended table look-up
vtbx1_u8 (Experimental, AArch64 and neon,v7): Extended table look-up
vtbx2_p8 (Experimental, AArch64 and neon,v7): Extended table look-up
vtbx2_s8 (Experimental, AArch64 and neon,v7): Extended table look-up
vtbx2_u8 (Experimental, AArch64 and neon,v7): Extended table look-up
vtbx3_p8 (Experimental, AArch64 and neon,v7): Extended table look-up
vtbx3_s8 (Experimental, AArch64 and neon,v7): Extended table look-up
vtbx3_u8 (Experimental, AArch64 and neon,v7): Extended table look-up
vtbx4_p8 (Experimental, AArch64 and neon,v7): Extended table look-up
vtbx4_s8 (Experimental, AArch64 and neon,v7): Extended table look-up
vtbx4_u8 (Experimental, AArch64 and neon,v7): Extended table look-up
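
The vtbl*/vtbx* entries are the ARMv7-compatible 64-bit table look-ups; the extended (vtbx) forms differ in that an out-of-range index leaves the corresponding lane of the first operand unchanged instead of zeroing it. A hedged sketch, assuming the argument order (fallback, table, indices):

```rust
#[cfg(target_arch = "aarch64")]
unsafe fn lookup_or_keep(fallback: [u8; 8], table: [u8; 8], idx: [u8; 8]) -> [u8; 8] {
    use core::arch::aarch64::vtbx1_u8;
    // Lanes whose index is >= 8 keep the value from `fallback`.
    core::mem::transmute(vtbx1_u8(
        core::mem::transmute(fallback),
        core::mem::transmute(table),
        core::mem::transmute(idx),
    ))
}
```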