path: root/sysdeps/x86_64

2025-04-22  elf: tst-audit10: split AVX512F code into dedicated functions [BZ #32882]  (Aurelien Jarno, 2 files, -55/+55)

"Recent" GCC versions (since commit fc62716fe8d1, backported to stable
branches) emit a vzeroupper instruction at the end of functions
containing AVX instructions. This causes the tst-audit10 test to fail
on CPUs lacking AVX instructions, despite the AVX512F check. The crash
occurs in the pltenter function of tst-auditmod10b.c.

Fix that by moving the code guarded by the check_avx512 function into
specific functions using the target ("avx512f") attribute. Note that
since commit 5359c3bc91cc ("x86-64: Remove compiler -mavx512f check")
it is safe to assume that the compiler has AVX512F support, thus the
__AVX512F__ checks can be dropped.

Tested on non-AVX, AVX2 and AVX512F machines.

Reviewed-by: Florian Weimer <fweimer@redhat.com>

2025-03-29  x86: Use separate variable for TLSDESC XSAVE/XSAVEC state size (bug 32810)  (Florian Weimer, 1 file, -1/+1)

Previously, the initialization code reused the xsave_state_full_size
member of struct cpu_features for the TLSDESC state size. However, the
tunable processing code assumes that this member has the original XSAVE
(non-compact) state size, so that it can use its value if XSAVEC is
disabled via tunable.

This change uses a separate variable and not a struct member because
the value is only needed in ld.so and the static libc, but not in
libc.so. As a result, struct cpu_features layout does not change,
helping a future backport of this change.

Fixes commit 9b7091415af47082664717210ac49d51551456ab ("x86-64: Update
_dl_tlsdesc_dynamic to preserve AMX registers").

Reviewed-by: H.J. Lu <hjl.tools@gmail.com>

2025-03-13  x86_64: Add atanh with FMA  (Sunil K Pandey, 3 files, -0/+42)

On SPR, it improves atanh bench performance by:

                        Before    After     Improvement
reciprocal-throughput   15.1715   14.8628   2%
latency                 57.1941   56.1883   2%

Reviewed-by: H.J. Lu <hjl.tools@gmail.com>

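[Editor's note] FMA variants like this one are selected at run time by
glibc's multiarch machinery. A hedged sketch of the underlying GNU
IFUNC mechanism (glibc uses its own internal macros and builds the FMA
variant with -mfma in sysdeps/x86_64/fpu/multiarch; the names and
stand-in bodies below are illustrative only):

    #include <math.h>

    /* Two build-time variants; stand-in bodies for the sketch.  */
    static double atanh_fma (double x)  { return atanh (x); }
    static double atanh_sse2 (double x) { return atanh (x); }

    /* Load-time resolver: picks one implementation per process.  */
    static void *
    atanh_resolver (void)
    {
      __builtin_cpu_init ();
      if (__builtin_cpu_supports ("fma"))
        return (void *) atanh_fma;
      return (void *) atanh_sse2;
    }

    /* Callers bind to a single symbol; the dynamic linker invokes the
       resolver above to choose its implementation.  */
    double my_atanh (double x) __attribute__ ((ifunc ("atanh_resolver")));
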
2025-03-13  x86_64: Add sinh with FMA  (Sunil K Pandey, 3 files, -0/+49)

On SPR, it improves sinh bench performance by:

                        Before    After     Improvement
reciprocal-throughput   14.2017   11.815    17%
latency                 36.4917   35.2114   4%

Reviewed-by: H.J. Lu <hjl.tools@gmail.com>

2025-03-13  x86_64: Add tanh with FMA  (Sunil K Pandey, 3 files, -0/+44)

On Skylake, it improves tanh bench performance by:

       Before    After     Improvement
max    110.89    95.826    14%
min    20.966    20.157    4%
mean   30.9601   29.8431   4%

Reviewed-by: H.J. Lu <hjl.tools@gmail.com>

2025-03-12  math: Refactor how to use libm-test-ulps  (Adhemerval Zanella, 2 files, -2430/+0)

The current approach tracks the maximum supported math errors by
explicitly setting them per function and architecture. On newer
implementations or new compiler versions, the file is updated with
newer values if they show higher results. The idea is to track the
maximum known error, to update the manual with the obtained values.

The constant libm-test-ulps churn shows little value: it is usually a
mechanical change done by the maintainer; for past releases it is
usually not checked whether an ulp change resulted from a compiler
regression; and the math tests already have a maximum ulp error that
triggers a regression. A recent update after the new acosf [1]
implementation, which is correctly rounded, showed that the
libm-test-ulps entry was indeed due to a compiler issue.

This patch removes all arch-specific libm-test-ulps files, adds
system-generic libm-test-ulps files where applicable, and changes their
semantics. The generic files now track specific implementation
constraints, such as whether the implementation is expected to be
correctly rounded, or whether the system-specific version has different
error expectations. Multiple libm-test-ulps files can now be defined,
with system-specific files overriding the generic one. This covers the
case where an arch-specific implementation might show worse precision
than the generic one, for instance cbrtf on i686.

Regressions are only reported if the implementation shows larger errors
than 9 ulps (13 for IBM long double), unless overridden by
libm-test-ulps, and the maximum error is no longer printed at the end
of the tests. The regen-ulps rule is also removed since it no longer
makes sense to update libm-test-ulps automatically.

The manual error table is also removed; Paul Zimmermann and others have
been tracking libm precision with a more comprehensive analysis for
some releases, so link to his work instead.

[1] https://sourceware.org/git/?p=glibc.git;a=commit;h=9cc9f8e11e8fb8f54f1e84d9f024917634a78201

2025-03-07  Implement C23 rsqrt  (Joseph Myers, 1 file, -0/+24)

C23 adds various <math.h> function families originally defined in TS
18661-4. Add the rsqrt functions (1/sqrt(x)). The test inputs are taken
from those for sqrt. Tested for x86_64 and x86, and with
build-many-glibcs.py.

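[Editor's note] A naive reference for the rsqrt semantics, assuming the
TS 18661-4 special cases (domain error for negative arguments, pole
error at signed zero); this is not glibc's implementation, which must
also deliver a correctly computed result, and the function name is
illustrative:

    #include <errno.h>
    #include <math.h>

    static double
    rsqrt_ref (double x)
    {
      if (isnan (x))
        return x + x;            /* propagate NaN */
      if (x < 0.0)
        {
          errno = EDOM;          /* domain error */
          return NAN;
        }
      if (x == 0.0)
        {
          errno = ERANGE;        /* pole error */
          return 1.0 / x;        /* +-inf, sign follows the zero */
        }
      /* Two roundings (sqrt, then divide), so not correctly rounded.  */
      return 1.0 / sqrt (x);
    }
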
2025-02-12  math: Use tanpif from CORE-MATH  (Adhemerval Zanella, 1 file, -4/+0)

The CORE-MATH implementation is correctly rounded (for any rounding
mode) and shows better performance than the generic tanpif. The code
was adapted to glibc style and to use the definitions from
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

latency                 master    patched   improvement
x86_64                  85.1683   47.7990   43.88%
x86_64v2                76.8219   41.4679   46.02%
x86_64v3                73.7775   37.7734   48.80%
aarch64 (Neoverse)      35.4514   18.0742   49.02%
power8                  22.7604   10.1054   55.60%
power10                 22.1358   9.9553    55.03%

reciprocal-throughput   master    patched   improvement
x86_64                  41.0174   19.4718   52.53%
x86_64v2                34.8565   11.3761   67.36%
x86_64v3                34.0325   9.6989    71.50%
aarch64 (Neoverse)      25.4349   9.2017    63.82%
power8                  13.8626   3.8486    72.24%
power10                 11.7933   3.6420    69.12%

Reviewed-by: DJ Delorie <dj@redhat.com>

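[Editor's note] For reference, tanpi(x) = tan(pi*x), with period 1 and
exact poles at half-integers. A naive sketch of those semantics only
(the CORE-MATH code is correctly rounded and never multiplies by an
inexact pi like this; the function name is illustrative):

    #include <math.h>

    static float
    tanpif_ref (float x)
    {
      /* Reduce modulo the period 1 first, so pi*r stays accurate even
         for large x.  Round-half-to-even in nearbyintf happens to give
         the pole signs required at half-integers: +inf at n + 0.5 for
         even n, -inf for odd n.  */
      float r = x - nearbyintf (x);        /* r in [-0.5, 0.5] */
      if (fabsf (r) == 0.5f)
        return copysignf (INFINITY, r);    /* exact pole */
      return tanf ((float) M_PI * r);      /* not correctly rounded */
    }
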
2025-02-12  math: Use sinpif from CORE-MATH  (Adhemerval Zanella, 1 file, -4/+0)

The CORE-MATH implementation is correctly rounded (for any rounding
mode) and shows better performance than the generic sinpif. The code
was adapted to glibc style and to use the definitions from
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

latency                 master    patched   improvement
x86_64                  47.5710   38.4455   19.18%
x86_64v2                46.8828   40.7563   13.07%
x86_64v3                44.0034   34.1497   22.39%
aarch64 (Neoverse)      19.2493   14.1968   26.25%
power8                  23.5312   16.3854   30.37%
power10                 22.6485   10.2888   54.57%

reciprocal-throughput   master    patched   improvement
x86_64                  21.8858   11.6717   46.67%
x86_64v2                22.0620   11.9853   45.67%
x86_64v3                21.5653   11.3291   47.47%
aarch64 (Neoverse)      13.0615   6.5499    49.85%
power8                  16.2030   6.9580    57.06%
power10                 12.8911   4.2858    66.75%

Reviewed-by: DJ Delorie <dj@redhat.com>

2025-02-12  math: Use cospif from CORE-MATH  (Adhemerval Zanella, 1 file, -4/+0)

The CORE-MATH implementation is correctly rounded (for any rounding
mode) and shows better performance than the generic cospif. The code
was adapted to glibc style and to use the definitions from
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

latency                 master    patched   improvement
x86_64                  47.4679   38.4157   19.07%
x86_64v2                46.9686   38.3329   18.39%
x86_64v3                43.8929   31.8510   27.43%
aarch64 (Neoverse)      18.8867   13.2089   30.06%
power8                  22.9435   7.8023    65.99%
power10                 15.4472   7.77505   49.67%

reciprocal-throughput   master    patched   improvement
x86_64                  20.9518   11.4991   45.12%
x86_64v2                19.8699   10.5921   46.69%
x86_64v3                19.3475   9.3998    51.42%
aarch64 (Neoverse)      12.5767   6.2158    50.58%
power8                  15.0566   3.2654    78.31%
power10                 9.2866    3.1147    66.46%

Reviewed-by: DJ Delorie <dj@redhat.com>

2025-02-12  math: Use atanpif from CORE-MATH  (Adhemerval Zanella, 1 file, -4/+0)

The CORE-MATH implementation is correctly rounded (for any rounding
mode) and shows better performance than the generic atanpif. The code
was adapted to glibc style and to use the definitions from
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

latency                 master    patched   improvement
x86_64                  66.3296   52.7558   20.46%
x86_64v2                66.0429   51.4007   22.17%
x86_64v3                60.6294   48.7876   19.53%
aarch64 (Neoverse)      24.3163   20.9110   14.00%
power8                  16.5766   13.3620   19.39%
power10                 16.5115   13.4072   18.80%

reciprocal-throughput   master    patched   improvement
x86_64                  30.8599   16.0866   47.87%
x86_64v2                29.2286   15.4688   47.08%
x86_64v3                23.0960   12.8510   44.36%
aarch64 (Neoverse)      15.4619   10.6752   30.96%
power8                  7.9200    5.2483    33.73%
power10                 6.8539    4.6262    32.50%

Reviewed-by: DJ Delorie <dj@redhat.com>

2025-02-12  math: Use atan2pif from CORE-MATH  (Adhemerval Zanella, 1 file, -4/+0)

The CORE-MATH implementation is correctly rounded (for any rounding
mode) and shows better performance than the generic atan2pif. The code
was adapted to glibc style and to use the definitions from
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

latency                 master    patched   improvement
x86_64                  79.4006   70.8726   10.74%
x86_64v2                77.5136   69.1424   10.80%
x86_64v3                71.8050   68.1637   5.07%
aarch64 (Neoverse)      27.8363   24.7700   11.02%
power8                  39.3893   17.2929   56.10%
power10                 19.7200   16.8187   14.71%

reciprocal-throughput   master    patched   improvement
x86_64                  38.3457   30.9471   19.29%
x86_64v2                37.4023   30.3112   18.96%
x86_64v3                33.0713   24.4891   25.95%
aarch64 (Neoverse)      19.3683   15.3259   20.87%
power8                  19.5507   8.27165   57.69%
power10                 9.05331   7.63775   15.64%

Reviewed-by: DJ Delorie <dj@redhat.com>

2025-02-12  math: Use asinpif from CORE-MATH  (Adhemerval Zanella, 1 file, -4/+0)

The CORE-MATH implementation is correctly rounded (for any rounding
mode) and shows better performance than the generic asinpif. The code
was adapted to glibc style and to use the definitions from
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

latency                 master    patched   improvement
x86_64                  46.4996   41.6126   10.51%
x86_64v2                46.7551   38.8235   16.96%
x86_64v3                42.6235   33.7603   20.79%
aarch64 (Neoverse)      17.4161   14.3604   17.55%
power8                  10.7347   9.0193    15.98%
power10                 10.6420   9.0362    15.09%

reciprocal-throughput   master    patched   improvement
x86_64                  24.7208   16.5544   33.03%
x86_64v2                24.2177   14.8938   38.50%
x86_64v3                20.5617   10.5452   48.71%
aarch64 (Neoverse)      13.4827   7.17613   46.78%
power8                  6.46134   3.56089   44.89%
power10                 5.79007   3.49544   39.63%

Reviewed-by: DJ Delorie <dj@redhat.com>

2025-02-12  math: Use acospif from CORE-MATH  (Adhemerval Zanella, 1 file, -4/+0)

The CORE-MATH implementation is correctly rounded (for any rounding
mode) and shows better performance than the generic acospif. The code
was adapted to glibc style and to use the definitions from
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

latency                 master    patched   improvement
x86_64                  54.8281   42.9070   21.74%
x86_64v2                54.1717   42.7497   21.08%
x86_64v3                49.3552   34.1512   30.81%
aarch64 (Neoverse)      17.9395   14.3733   19.88%
power8                  20.3110   8.8609    56.37%
power10                 11.3113   8.84067   21.84%

reciprocal-throughput   master    patched   improvement
x86_64                  21.2301   14.4803   31.79%
x86_64v2                20.6858   13.9506   32.56%
x86_64v3                16.1944   11.3377   29.99%
aarch64 (Neoverse)      11.4474   7.13282   37.69%
power8                  10.6916   3.57547   66.56%
power10                 4.64269   3.54145   23.72%

Reviewed-by: DJ Delorie <dj@redhat.com>

2025-01-12  x86-64: Cast __rseq_offset to long long int [BZ #32543]  (H.J. Lu, 1 file, -6/+6)

commit 494d65129ed5ae1154b75cc189bbdde5e9ecf1df
Author: Michael Jeanson <mjeanson@efficios.com>
Date:   Thu Aug 1 10:35:34 2024 -0400

    nptl: Introduce <rseq-access.h> for RSEQ_* accessors

added things like

    asm volatile ("movl %%fs:%P1(%q2),%0"                        \
                  : "=r" (__value)                               \
                  : "i" (offsetof (struct rseq_area, member)),   \
                    "r" (__rseq_offset));                        \

But this doesn't work for x32 when __rseq_offset is negative, since the
address is computed as

    FS + 32-bit to 64-bit zero extension of __rseq_offset
       + offsetof (struct rseq_area, member)

Cast __rseq_offset to long long int,

    "r" ((long long int) __rseq_offset));                        \

to sign-extend the 32-bit __rseq_offset to 64 bits. This is a no-op for
x86-64 since x86-64 __rseq_offset is 64-bit. This fixes BZ #32543.

Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
Reviewed-by: Florian Weimer <fweimer@redhat.com>

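[Editor's note] A standalone illustration of the extension pitfall in
plain C (not the glibc asm): adding a negative 32-bit offset to a
64-bit base must sign-extend it; an unsigned 32-bit intermediate
zero-extends and lands far above the base.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int
    main (void)
    {
      int32_t off = -64;                     /* like a negative __rseq_offset */
      uint64_t base = 0x00007f0000001000u;   /* stand-in for the FS base */

      uint64_t zext = base + (uint32_t) off; /* zero extension: wrong */
      uint64_t sext = base + (int64_t) off;  /* sign extension: right */

      printf ("zero-extended: %#" PRIx64 "\n", zext);
      printf ("sign-extended: %#" PRIx64 "\n", sext);
      return 0;
    }
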
2025-01-10  nptl: Introduce <rseq-access.h> for RSEQ_* accessors  (Michael Jeanson, 1 file, -0/+77)

In preparation to move the rseq area to the 'extra TLS' block, we need
accessors based on the thread pointer and the rseq offset. The ONCE
variant of the accessors ensures single-copy atomicity for loads and
stores, which is required for all fields once the registration is
active.

A separate header is required to allow including <atomic.h>, which
results in an include loop when added to <tcb-access.h>.

Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Reviewed-by: Florian Weimer <fweimer@redhat.com>

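[Editor's note] A portable approximation of what an ONCE-style accessor
has to guarantee: one single-copy-atomic load of an rseq field,
addressed as thread pointer plus rseq offset. glibc's real
RSEQ_GETMEM_ONCE macros use per-architecture asm rather than C11
atomics; the function name, the use of __builtin_thread_pointer (a GCC
built-in not available on all targets/versions), and <linux/rseq.h>
here are assumptions of this sketch.

    #include <linux/rseq.h>      /* struct rseq */
    #include <stdatomic.h>
    #include <stddef.h>
    #include <stdint.h>

    extern ptrdiff_t __rseq_offset;   /* exported by glibc */

    static inline uint32_t
    rseq_load_cpu_id_once (void)
    {
      char *tp = (char *) __builtin_thread_pointer ();
      _Atomic uint32_t *cpu_id =
        (_Atomic uint32_t *) (tp + __rseq_offset
                              + offsetof (struct rseq, cpu_id));
      /* Relaxed load: single-copy atomicity, no ordering implied.  */
      return atomic_load_explicit (cpu_id, memory_order_relaxed);
    }
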
2025-01-09  elf: Always define TLS_TP_OFFSET  (Florian Weimer, 1 file, -0/+3)

This will be needed to compute __rseq_offset outside of the TLS
relocation machinery.

Reviewed-by: Michael Jeanson <mjeanson@efficios.com>

2025-01-09  math: Fix acosf when building with gcc <= 11  (Adhemerval Zanella, 1 file, -2/+0)

GCC <= 11 wrongly assumes that the rounding mode is to-nearest and
performs constant folding where it should defer evaluation to run time,
since the result is not exact [1].

[1] https://gcc.gnu.org/bugzilla/show_bug.cgi?id=57245

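[Editor's note] A generic demonstration of this class of bug (not the
actual acosf expression): an inexact operation folded at compile time
bakes in round-to-nearest, which is wrong once the program selects
another rounding mode.

    #include <fenv.h>
    #include <stdio.h>

    int
    main (void)
    {
      /* GCC does not implement #pragma STDC FENV_ACCESS, so inexact
         constant expressions may be folded assuming to-nearest.  */
      fesetround (FE_DOWNWARD);

      volatile float three = 3.0f;  /* volatile forces a runtime divide */
      float runtime = 1.0f / three; /* honors FE_DOWNWARD: 0x1.555554p-2 */
      float folded = 1.0f / 3.0f;   /* may be pre-folded: 0x1.555556p-2 */

      printf ("runtime: %a\nfolded:  %a\n",
              (double) runtime, (double) folded);
      return 0;
    }
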
2025-01-07Revert "x86_64: Remove unused padding from tcbhead_t"Florian Weimer1-0/+12
This reverts commit 30d3fd7f4f4bc8f767d73ad4e4b005c1bd234310. The padding is required by Chromium's MaybeUpdateGlibcTidCache in sandbox/linux/services/namespace_sandbox.cc. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2025-01-02  new inputs with large errors for [a]cospi, [a]sinpi, [a]tanpi, atan2pi  (Paul Zimmermann, 1 file, -33/+33)

These inputs were generated with the programs from
https://gitlab.inria.fr/zimmerma/math_accuracy, with rounding to
nearest:

* for univariate binary32 functions, by exhaustive search
* for other functions, with the "threshold" parameter up to 10^6

2025-01-02  elf: Introduce generic <dl-tls.h>  (Florian Weimer, 1 file, -0/+4)

On arc, the definition of TLS_DTV_UNALLOCATED now comes from
<dl-dtv.h>. For x86-64 x32, a separate version is needed because
unsigned long int is 32 bits on this target.

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>

2025-01-01  Update copyright in generated files by running "make"  (Paul Eggert, 2 files, -5/+7)

2025-01-01  Update copyright dates with scripts/update-copyrights  (Paul Eggert, 1234 files, -1234/+1234)

2024-12-27  elf: Remove the GET_ADDR_ARGS and related macros from the TLS code  (Florian Weimer, 1 file, -4/+4)

This was used to manage an IA-64 ABI divergence and is no longer needed
after the IA-64 removal.

(It should be possible to encode all the required information in one
machine word, so the pointer indirection is really unnecessary.
Technically, none of this is part of the ABI, so perhaps it's possible
to do this retroactively. See bug 27404.)

Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>

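[Editor's note] For context, the calling convention these macros
papered over: most targets pass __tls_get_addr one pointer to a
module/offset pair, while IA-64 passed the two words as separate
arguments. A sketch of the common convention (field names follow
glibc's generic tls_index; treat details as illustrative):

    /* The generic TLS access entry point takes a pointer to a
       {module, offset} pair.  GET_ADDR_ARGS abstracted the IA-64
       variant, which passed the two words directly.  */
    typedef struct
    {
      unsigned long int ti_module;   /* ID of the defining module */
      unsigned long int ti_offset;   /* offset in that module's TLS block */
    } tls_index;

    extern void *__tls_get_addr (tls_index *ti);
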
2024-12-23  include/sys/cdefs.h: Add __attribute_optimization_barrier__  (Adhemerval Zanella, 1 file, -1/+1)

Add __attribute_optimization_barrier__ to disable inlining and cloning
on a function. For Clang, expand it to

    __attribute__ ((optnone))

Otherwise, expand it to

    __attribute__ ((noinline, noclone))

Co-Authored-By: H.J. Lu <hjl.tools@gmail.com>
Reviewed-by: Sam James <sam@gentoo.org>

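[Editor's note] A minimal sketch of such a definition, following the
commit's description (the exact glibc conditionals may differ):

    /* Prevent the compiler from inlining or cloning a function.
       Clang has no noclone attribute but provides optnone.  */
    #if defined __clang__
    # define __attribute_optimization_barrier__ \
        __attribute__ ((optnone))
    #else
    # define __attribute_optimization_barrier__ \
        __attribute__ ((noinline, noclone))
    #endif

    /* Usage: the function keeps one out-of-line, unspecialized copy.  */
    int some_test_helper (int x) __attribute_optimization_barrier__;
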
2024-12-22  elf: Compile test modules with -fsemantic-interposition  (H.J. Lu, 1 file, -0/+3)

Compilers may default to -fno-semantic-interposition, but some elf test
modules must be compiled with -fsemantic-interposition to function
properly. Add a TEST_CC check for -fsemantic-interposition and use it
on elf test modules. This fixed

FAIL: elf/tst-dlclose-lazy
FAIL: elf/tst-pie1
FAIL: elf/tst-plt-rewrite1
FAIL: elf/unload4

when Clang 19 is used to test glibc.

Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
Reviewed-by: Sam James <sam@gentoo.org>

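[Editor's note] A small illustration of what the flag controls (file
and function names are illustrative): under -fno-semantic-interposition
the compiler may inline the local definition of an exported function
into intra-module callers, so a test that interposes that function
never sees its definition called.

    /* mod.c, built into a shared object.  With -fsemantic-interposition
       the call below must go through the PLT, so an interposing foo
       (e.g. via LD_PRELOAD or link order) wins; with
       -fno-semantic-interposition the compiler may inline foo here.  */
    int
    foo (void)
    {
      return 1;
    }

    int
    bar (void)
    {
      return foo () + 1;   /* interposable only if semantic
                              interposition is honored */
    }
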
2024-12-22  x86-64: Disable libmvec ABI test for Clang  (H.J. Lu, 1 file, -0/+4)

Unlike GCC, libmvec support in Clang is hard-coded. Clang doesn't use
the macros defined in <bits/libm-simd-decl-stubs.h> to support new
libmvec functions added to glibc, and it can't vectorize all of the
loops used to test the libmvec ABI:

https://github.com/llvm/llvm-project/issues/120868

Disable the libmvec ABI test for Clang.

Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
Reviewed-by: Sam James <sam@gentoo.org>

2024-12-22  Check if -mamx-tile works for testing  (H.J. Lu, 2 files, -63/+32)

Since -mamx-tile is used only for testing, use LIBC_TRY_TEST_CC_COMMAND
instead of LIBC_TRY_CC_AND_TEST_CC_COMMAND to check it, and don't check
__builtin_ia32_ldtilecfg for Clang.

Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
Reviewed-by: Sam James <sam@gentoo.org>

2024-12-21  cet: Drop '#pragma GCC target' in tst-cet-legacy-10a[-static].c  (Adhemerval Zanella, 2 files, -2/+0)

After

commit 215447f5cbcf1a494cded57734f68d7f9c2b0dc0
Author: H.J. Lu <hjl.tools@gmail.com>
Date:   Tue Dec 17 06:18:55 2024 +0800

    cet: Pass -mshstk to compiler for tst-cet-legacy-10a[-static].c

we can remove '#pragma GCC target' in tst-cet-legacy-10a[-static].c.

Co-Authored-By: H.J. Lu <hjl.tools@gmail.com>

2024-12-21  Fix "elf: Introduce is_rtld_link_map" [BZ #32488]  (H.J. Lu, 1 file, -2/+2)

Also use is_rtld_link_map in dl-cet.c. This fixes BZ #32488.

Signed-off-by: H.J. Lu <hjl.tools@gmail.com>

2024-12-20  x86_64: Regenerate ulps  (Florian Weimer, 1 file, -0/+2)

As seen with an AMD 7950X CPU, on a glibc built with GCC 11.5.

2024-12-19  x86_64: Remove unused padding from tcbhead_t  (Florian Weimer, 1 file, -12/+0)

This padding is difficult to use for preserving the internal
GLIBC_PRIVATE ABI, and the comment is misleading: current Address
Sanitizer uses heuristics to determine the struct pthread size, does
not depend on its precise layout, and merely scans for pointers
allocated using malloc. Due to the removal of the padding, the assert
for its start is no longer required.

Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>

2024-12-18  math: Use tanhf from CORE-MATH  (Adhemerval Zanella, 1 file, -4/+0)

The CORE-MATH implementation is correctly rounded (for any rounding
mode) and shows slightly better performance than the generic tanhf. The
code was adapted to glibc style and to use the definitions from
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

Latency                 master     patched   improvement
x86_64                  51.5273    41.0951   20.25%
x86_64v2                47.7021    39.1526   17.92%
x86_64v3                45.0373    34.2737   23.90%
i686                    133.9970   83.8596   37.42%
aarch64 (Neoverse)      21.5439    14.7961   31.32%
power10                 13.3301    8.4406    36.68%

reciprocal-throughput   master     patched   improvement
x86_64                  24.9493    12.8547   48.48%
x86_64v2                20.7051    12.7761   38.29%
x86_64v3                19.2492    11.0851   42.41%
i686                    78.6498    29.8211   62.08%
aarch64 (Neoverse)      11.6026    7.11487   38.68%
power10                 6.3328     2.8746    54.61%

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: DJ Delorie <dj@redhat.com>

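[Editor's note] These ports repeatedly mention math_config.h's
errno/overflow/underflow handling. A hedged sketch of how such helpers
typically work (glibc's real ones live in math_config.h and use
internal barriers; the names here only mirror that convention):

    #include <errno.h>
    #include <stdint.h>

    /* Force a real overflow/underflow at run time so the FP exception
       is raised, then set errno as POSIX requires.  */
    static float
    xflowf (uint32_t sign, float y)
    {
      volatile float v = sign ? -y : y;   /* defeats constant folding */
      float r = v * y;                    /* +-inf or +-0 */
      errno = ERANGE;
      return r;
    }

    static float
    math_oflowf (uint32_t sign)
    {
      return xflowf (sign, 0x1p97f);      /* 0x1p194 overflows binary32 */
    }

    static float
    math_uflowf (uint32_t sign)
    {
      return xflowf (sign, 0x1p-95f);     /* 0x1p-190 underflows binary32 */
    }
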
2024-12-18  math: Use sinhf from CORE-MATH  (Adhemerval Zanella, 1 file, -4/+0)

The CORE-MATH implementation is correctly rounded (for any rounding
mode) and shows slightly better performance than the generic sinhf. The
code was adapted to glibc style and to use the definitions from
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

Latency                 master     patched    improvement
x86_64                  52.6819    49.1489    6.71%
x86_64v2                49.1162    42.9447    12.57%
x86_64v3                46.9732    39.9157    15.02%
i686                    141.1470   129.6410   8.15%
aarch64 (Neoverse)      20.8539    17.1288    17.86%
power10                 14.5258    9.1906     36.73%

reciprocal-throughput   master     patched    improvement
x86_64                  27.5553    23.9395    13.12%
x86_64v2                21.6423    20.3219    6.10%
x86_64v3                21.4842    16.0224    25.42%
i686                    87.9709    86.1626    2.06%
aarch64 (Neoverse)      15.1919    12.2744    19.20%
power10                 7.2188     5.2611     27.12%

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: DJ Delorie <dj@redhat.com>

2024-12-18  math: Use coshf from CORE-MATH  (Adhemerval Zanella, 1 file, -5/+1)

The CORE-MATH implementation is correctly rounded (for any rounding
mode), although it shows worse performance than the current one. The
current implementation's performance comes mainly from its internal use
of the optimized expf implementation, and it has a maximum error of 2
ULPs for FE_TONEAREST and 3 ULPs for other rounding modes. The code was
adapted to glibc style and to use the definitions from math_config.h
(to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

Latency                 master     patched    improvement
x86_64                  40.6995    49.0737    -20.58%
x86_64v2                40.5841    44.3604    -9.30%
x86_64v3                39.3879    39.7502    -0.92%
i686                    112.3380   129.8570   -15.59%
aarch64 (Neoverse)      18.6914    17.0946    8.54%
power10                 11.1343    9.3245     16.25%

reciprocal-throughput   master     patched    improvement
x86_64                  18.6471    24.1077    -29.28%
x86_64v2                17.7501    20.2946    -14.34%
x86_64v3                17.8262    17.1877    3.58%
i686                    64.1454    86.5645    -34.95%
aarch64 (Neoverse)      9.77226    12.2314    -25.16%
power10                 4.0200     5.3316     -32.63%

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: DJ Delorie <dj@redhat.com>

2024-12-18  math: Use atanhf from CORE-MATH  (Adhemerval Zanella, 1 file, -4/+0)

The CORE-MATH implementation is correctly rounded (for any rounding
mode) and shows slightly better performance than the generic atanhf.
The code was adapted to glibc style and to use the definitions from
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

Latency                 master     patched    improvement
x86_64                  59.4930    45.8568    22.92%
x86_64v2                59.5705    45.5804    23.48%
x86_64v3                53.1838    37.7155    29.08%
i686                    169.354    133.5940   21.12%
aarch64 (Neoverse)      26.0781    16.9829    34.88%
power10                 15.6591    10.7623    31.27%

reciprocal-throughput   master     patched    improvement
x86_64                  23.5903    18.5766    21.25%
x86_64v2                22.6489    18.2683    19.34%
x86_64v3                19.0401    13.9474    26.75%
i686                    97.6034    107.3260   -9.96%
aarch64 (Neoverse)      15.3664    9.57846    37.67%
power10                 6.8877     4.6242     32.86%

Signed-off-by: Alexei Sibidanov <sibid@uvic.ca>
Signed-off-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
Signed-off-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Reviewed-by: DJ Delorie <dj@redhat.com>

2024-12-18  math: Use atan2f from CORE-MATH  (Adhemerval Zanella, 1 file, -8/+0)

The CORE-MATH implementation is correctly rounded (for any rounding
mode) and shows slightly better performance than the generic atan2f.
The code was adapted to glibc style and to use the definitions from
math_config.h (to handle errno, overflow, and underflow).

Benchtest on x86_64 (Ryzen 9 5900X, gcc 14.2.1), aarch64 (Neoverse-N1,
gcc 13.3.1), and powerpc (POWER10, gcc 13.2.1):

Latency                 master     patched    improvement
x86_64                  68.1175    69.2014    -1.59%
x86_64v2                66.9884    66.0081    1.46%
x86_64v3                57.7034    61.6407    -6.82%
i686                    189.8690   152.7560   19.55%
aarch64 (Neoverse)      32.6151    24.5382    24.76%
power10                 21.7282    17.1896    20.89%

reciprocal-throughput   master     patched    improvement
x86_64                  34.5202    31.6155    8.41%
x86_64v2                32.6379    30.3112    7.05%
x86_64v3                34.3677    23.6455    31.20%
i686                    157.7290   75.8308    51.92%
aarch64 (Neoverse)      27.7788    16.2671    41.4