path: root/ChangeLog
author	H.J. Lu <hjl.tools@gmail.com>	2016-03-28 04:39:48 -0700
committer	H.J. Lu <hjl.tools@gmail.com>	2016-03-28 04:40:03 -0700
commit	e41b395523040fcb58c7d378475720c2836d280c (patch)
tree	7a4271638219c8d5b141039178105b0f1564ea16 /ChangeLog
parent	b66d837bb5398795c6b0f651bd5a5d66091d8577 (diff)
[x86] Add a feature bit: Fast_Unaligned_Copy
On AMD processors, memcpy optimized with unaligned SSE load is slower than memcpy optimized with aligned SSSE3, while other string functions are faster with unaligned SSE load. A feature bit, Fast_Unaligned_Copy, is added to select memcpy optimized with unaligned SSE load.

	[BZ #19583]
	* sysdeps/x86/cpu-features.c (init_cpu_features): Set
	Fast_Unaligned_Copy with Fast_Unaligned_Load for Intel
	processors.  Set Fast_Copy_Backward for AMD Excavator
	processors.
	* sysdeps/x86/cpu-features.h (bit_arch_Fast_Unaligned_Copy):
	New.
	(index_arch_Fast_Unaligned_Copy): Likewise.
	* sysdeps/x86_64/multiarch/memcpy.S (__new_memcpy): Check
	Fast_Unaligned_Copy instead of Fast_Unaligned_Load.
Diffstat (limited to 'ChangeLog')
-rw-r--r--	ChangeLog	14
1 file changed, 14 insertions, 0 deletions
diff --git a/ChangeLog b/ChangeLog
index 7f629acced..5375f3b508 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,3 +1,17 @@
+2016-03-28  H.J. Lu  <hongjiu.lu@intel.com>
+	    Amit Pawar  <Amit.Pawar@amd.com>
+
+	[BZ #19583]
+	* sysdeps/x86/cpu-features.c (init_cpu_features): Set
+	Fast_Unaligned_Copy with Fast_Unaligned_Load for Intel
+	processors.  Set Fast_Copy_Backward for AMD Excavator
+	processors.
+	* sysdeps/x86/cpu-features.h (bit_arch_Fast_Unaligned_Copy):
+	New.
+	(index_arch_Fast_Unaligned_Copy): Likewise.
+	* sysdeps/x86_64/multiarch/memcpy.S (__new_memcpy): Check
+	Fast_Unaligned_Copy instead of Fast_Unaligned_Load.
+
 2016-03-25  Florian Weimer  <fweimer@redhat.com>

	[BZ #19791]