path: root/sysdeps/aarch64/fpu/erfcf_advsimd.c
Age  Commit message  Author  Files  Lines
2025-01-01  Update copyright dates with scripts/update-copyrights  Paul Eggert  1  -1/+1
2024-11-01  AArch64: Remove SVE erf and erfc tables  Joe Ramsay  1  -4/+4
By using a mask-and-add combination instead of the shift-based index calculation, the routines can share the same table as other variants with no performance degradation. The tables change name because of other changes in downstream AOR.

Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
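Below is a minimal, hypothetical C sketch of the two indexing styles the message above contrasts. The constants, helper names, and table layout are invented for illustration only and do not come from the glibc sources; the sketch just shows the general shape of a shift-based index versus a mask-and-add index derived from a float's bit pattern.

```c
/* Illustrative only: not the glibc erfc table lookup.  */
#include <stdint.h>
#include <string.h>

static inline uint32_t
as_uint (float x)
{
  uint32_t u;
  memcpy (&u, &x, sizeof (u));
  return u;
}

/* Shift-based: the index comes from shifting the raw bits down,
   which ties the index spacing to one particular table layout.
   'off' and 'shift' are made-up parameters for this sketch.  */
static inline uint32_t
index_shift (float x, uint32_t off, int shift)
{
  return (as_uint (x) - off) >> shift;
}

/* Mask-and-add: a mask selects the bits of interest and an additive
   bias relocates them, which makes it easier to reuse a table whose
   entries were laid out for a different variant.  'mask' and 'bias'
   are likewise made-up parameters.  */
static inline uint32_t
index_mask_add (float x, uint32_t mask, uint32_t bias)
{
  return (as_uint (x) & mask) + bias;
}
```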
2024-05-14  aarch64: Fix AdvSIMD libmvec routines for big-endian  Joe Ramsay  1  -11/+17
Previously many routines used * to load from vector types stored in the data table. This is emitted as ldr, which byte-swaps the entire vector register and causes bugs on big-endian when not all lanes contain the same value. When a vector is to be used this way, it has been replaced with an array, and the load with an explicit ld1 intrinsic, which byte-swaps only within lanes.

Additionally, many routines previously used non-standard GCC syntax for vector operations, such as indexing into vector types with [] and assembling vectors using {}. This syntax should not be mixed with ACLE, as the former does not respect endianness whereas the latter does. Such examples have been replaced with, for instance, vcombine_* and vgetq_lane* intrinsics.

Helpers which only use the GCC syntax, such as the v_call helpers, do not need changing as they do not use intrinsics.

Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
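The following is a small C sketch, not taken from the glibc routines, that illustrates the patterns the commit message describes: loading per-lane constants from a plain array with an explicit ld1 intrinsic rather than dereferencing a pointer to a vector type, and using ACLE intrinsics such as vcombine_f32 and vgetq_lane_f32 in place of the GCC [] and {} vector syntax. The struct and function names are invented for illustration.

```c
/* Illustrative only: assumes AArch64 with <arm_neon.h>.  */
#include <arm_neon.h>

struct data
{
  /* Per-lane constants kept as a plain array rather than a vector
     type, so the load order is not tied to register endianness.  */
  float coeffs[4];
};

static inline float32x4_t
load_coeffs (const struct data *d)
{
  /* Explicit ld1 via vld1q_f32: byte-swaps within each lane only,
     so lane 0 is coeffs[0] on both little- and big-endian.  */
  return vld1q_f32 (d->coeffs);
}

static inline float32x4_t
build_vector (float32x2_t lo, float32x2_t hi)
{
  /* ACLE intrinsic instead of the GCC-extension initialiser
     (float32x4_t){...}, which does not respect endianness when
     mixed with ACLE code.  */
  return vcombine_f32 (lo, hi);
}

static inline float
lane0 (float32x4_t v)
{
  /* ACLE lane access instead of indexing the vector with v[0].  */
  return vgetq_lane_f32 (v, 0);
}
```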
2024-04-04  aarch64/fpu: Add vector variants of erfc  Joe Ramsay  1  -0/+170
Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>