path: root/malloc/malloc.c
2021-10-29  Handle NULL input to malloc_usable_size [BZ #28506]  (Siddhesh Poyarekar, 1 file changed, -16/+9)
Hoist the NULL check for malloc_usable_size into its entry points in malloc-debug and malloc and assume non-NULL in all callees. This fixes BZ #28506 Signed-off-by: Siddhesh Poyarekar <siddhesh@sourceware.org> Reviewed-by: Florian Weimer <fweimer@redhat.com> Reviewed-by: Richard W.M. Jones <rjones@redhat.com> (cherry picked from commit 88e316b06414ee7c944cd6f8b30b07a972b78499)
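A minimal sketch of that hoisting, assuming a hypothetical internal helper in place of the real chunk-size logic:

    #include <stddef.h>

    /* Hypothetical internal helper: callers guarantee mem != NULL.  */
    static size_t
    usable_size_nonnull (const void *mem)
    {
      /* The real allocator reads the chunk header here; a fixed value
         keeps the sketch self-contained.  */
      (void) mem;
      return 32;
    }

    /* Entry point: the NULL check lives here and nowhere else.  */
    size_t
    my_malloc_usable_size (void *mem)
    {
      if (mem == NULL)
        return 0;          /* NULL must yield 0, not a crash (BZ #28506) */
      return usable_size_nonnull (mem);
    }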
2021-07-23  Fix build and tests with --disable-tunables  (Siddhesh Poyarekar, 1 file changed, -25/+26)
Remove unused code and declare __libc_mallopt when !IS_IN (libc) to allow the debug hook to build with --disable-tunables. Also, run tst-ifunc-isa-2* tests only when tunables are enabled since the result depends on it. Tested on x86_64. Reported-by: Matheus Castanho <msc@linux.ibm.com> Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2021-07-22  Move malloc_{g,s}et_state to libc_malloc_debug  (Siddhesh Poyarekar, 1 file changed, -50/+5)
These deprecated functions are only safe to call from __malloc_initialize_hook and, as a result, are not useful in the general case. Move the implementations to libc_malloc_debug so that existing binaries that need them will now have to preload the debug DSO to work correctly. This also allows simplification of the core malloc implementation by dropping all the undumping support code that was added to make malloc_set_state work. One known breakage is that of ancient emacs binaries that depend on this. They will now crash when running with this libc. With LD_BIND_NOW=1 they will terminate immediately because malloc_set_state cannot be found, but with lazy binding they will crash in unpredictable ways. Such binaries will need a preloaded libc_malloc_debug.so so that their initialization hook is executed to allow their malloc implementation to work properly. Reviewed-by: Carlos O'Donell <carlos@redhat.com> Tested-by: Carlos O'Donell <carlos@redhat.com>
2021-07-22  glibc.malloc.check: Wean away from malloc hooks  (Siddhesh Poyarekar, 1 file changed, -15/+20)
The malloc-check debugging feature is tightly integrated into glibc malloc, so thanks to an idea from Florian Weimer, much of the malloc implementation has been moved into libc_malloc_debug.so to support malloc-check. Due to this, glibc malloc and malloc-check can no longer work together; they use altogether different (but identical) structures for heap management. This should not make a difference though since the malloc check hook is not disabled anywhere. malloc_set_state does, but it does so early enough that it shouldn't cause any problems. The malloc check tunable is now in the debug DSO and has no effect when the DSO is not preloaded. Reviewed-by: Carlos O'Donell <carlos@redhat.com> Tested-by: Carlos O'Donell <carlos@redhat.com>
2021-07-22  Simplify __malloc_initialized  (Siddhesh Poyarekar, 1 file changed, -12/+12)
Now that mcheck no longer needs to check __malloc_initialized (and no other third party hook can since the symbol is not exported), make the variable boolean and static so that it is used strictly within malloc. Reviewed-by: Carlos O'Donell <carlos@redhat.com> Tested-by: Carlos O'Donell <carlos@redhat.com>
2021-07-22  Move malloc hooks into a compat DSO  (Siddhesh Poyarekar, 1 file changed, -73/+12)
Remove all malloc hook uses from core malloc functions and move it into a new library libc_malloc_debug.so. With this, the hooks now no longer have any effect on the core library. libc_malloc_debug.so is a malloc interposer that needs to be preloaded to get hooks functionality back so that the debugging features that depend on the hooks, i.e. malloc-check, mcheck and mtrace work again. Without the preloaded DSO these debugging features will be nops. These features will be ported away from hooks in subsequent patches. Similarly, legacy applications that need hooks functionality need to preload libc_malloc_debug.so. The symbols exported by libc_malloc_debug.so are maintained at exactly the same version as libc.so. Finally, static binaries will no longer be able to use malloc debugging features since they cannot preload the debugging DSO. Reviewed-by: Carlos O'Donell <carlos@redhat.com> Tested-by: Carlos O'Donell <carlos@redhat.com>
2021-07-22  Remove __morecore and __default_morecore  (Siddhesh Poyarekar, 1 file changed, -4/+3)
Make the __morecore and __default_morecore symbols compat-only and remove their declarations from the API. Also, include morecore.c directly into malloc.c; this should ideally get merged into malloc in a future cleanup. Reviewed-by: Carlos O'Donell <carlos@redhat.com> Tested-by: Carlos O'Donell <carlos@redhat.com>
2021-07-22  Remove __after_morecore_hook  (Siddhesh Poyarekar, 1 file changed, -21/+1)
Remove __after_morecore_hook from the API and finalize the symbol so that it can no longer be used in new applications. Old applications using __after_morecore_hook will find that their hook is no longer called. Reviewed-by: Carlos O'Donell <carlos@redhat.com> Tested-by: Carlos O'Donell <carlos@redhat.com>
2021-07-09  Force building with -fno-common  (Florian Weimer, 1 file changed, -1/+1)
As a result, it is not necessary to specify __attribute__ ((nocommon)) on individual definitions. GCC 10 defaults to -fno-common on all architectures except ARC, but this change is compatible with older GCC versions and with ARC, too. Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2021-07-08  _int_realloc is static  (Siddhesh Poyarekar, 1 file changed, -2/+2)
_int_realloc is correctly declared at the top to be static, but incorrectly defined without the static keyword. Fix that. The generated binaries have identical code.
2021-07-08  Harden tcache double-free check  (Siddhesh Poyarekar, 1 file changed, -4/+33)
The tcache allocator layer uses the tcache pointer as a key to identify a block that may be freed twice. Since this is in the application data area, an attacker exploiting a use-after-free could potentially get access to the entire tcache structure through this key. A detailed write-up was provided by Awarau here: https://awaraucom.wordpress.com/2020/07/19/house-of-io-remastered/ Replace this static pointer use for key checking with one that is generated at malloc initialization. The first attempt is through getrandom with a fallback to random_bits(), which is a simple pseudo-random number generator based on the clock. The fallback ought to be sufficient since the goal of the randomness is only to make the key arbitrary enough that it is very unlikely to collide with user data. Co-authored-by: Eyal Itkin <eyalit@checkpoint.com>
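A sketch of the key initialization described above; tcache_key_initialize and fallback_bits are illustrative names, and the fallback only mimics the clock-based idea rather than glibc's internal random_bits():

    #include <stdint.h>
    #include <sys/random.h>
    #include <time.h>

    static uintptr_t tcache_key;

    /* Clock-based fallback, standing in for glibc's internal random_bits().  */
    static uintptr_t
    fallback_bits (void)
    {
      struct timespec ts;
      clock_gettime (CLOCK_MONOTONIC, &ts);
      return (uintptr_t) ts.tv_nsec ^ ((uintptr_t) ts.tv_sec << 16);
    }

    /* Called once during allocator initialization.  */
    static void
    tcache_key_initialize (void)
    {
      if (getrandom (&tcache_key, sizeof (tcache_key), GRND_NONBLOCK)
          != sizeof (tcache_key))
        tcache_key = fallback_bits ();
    }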
2021-07-02  malloc: Initiate tcache shutdown even without allocations [BZ #28028]  (JeffyChen, 1 file changed, -1/+2)
After commit 1e26d35193efbb29239c710a4c46a64708643320 ("malloc: Fix tcache leak after thread destruction [BZ #22111]"), tcache_shutting_down is still not early enough. When we detach a thread with no tcache allocated, tcache_shutting_down would still be false. Reviewed-by: DJ Delorie <dj@redhat.com>
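The shape of the fix can be sketched as moving the flag update ahead of the early return; the variable names follow the commit text but the body is illustrative:

    #include <stdbool.h>
    #include <stddef.h>

    static __thread void *tcache;        /* per-thread cache; may stay NULL */
    static bool tcache_shutting_down;

    static void
    tcache_thread_shutdown (void)
    {
      /* Set the flag unconditionally: a thread that never allocated a
         tcache must still record that shutdown has started (the pre-fix
         code returned before reaching this line when tcache == NULL).  */
      tcache_shutting_down = true;

      if (tcache == NULL)
        return;

      /* ... walk the per-thread bins and free their entries here ... */
      tcache = NULL;
    }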
2021-06-02  fix typo  (Xeonacid, 1 file changed, -1/+1)
"accomodate" should be "accommodate" Reviewed-by: Paul Zimmermann <Paul.Zimmermann@inria.fr>
2021-04-12  Further fixes for REALLOC_ZERO_BYTES_FREES comment  (Paul Eggert, 1 file changed, -7/+8)
* malloc/malloc.c (REALLOC_ZERO_BYTES_FREES): Improve comment further.
2021-04-11  Fix REALLOC_ZERO_BYTES_FREES comment to match C17  (Paul Eggert, 1 file changed, -4/+7)
* malloc/malloc.c (REALLOC_ZERO_BYTES_FREES): Update comment to match current C standard.
2021-03-26  malloc: Ensure mtag code path in checked_request2size is cold  (Szabolcs Nagy, 1 file changed, -2/+7)
This is a workaround (hack) for a gcc optimization issue (PR 99551). Without this the generated code may evaluate the expression in the cold path which causes performance regression for small allocations in the memory tagging disabled (common) case. Reviewed-by: DJ Delorie <dj@redhat.com>
2021-03-26  malloc: Remove unnecessary tagging around _mid_memalign  (Szabolcs Nagy, 1 file changed, -8/+2)
The internal _mid_memalign already returns newly tagged memory. (__libc_memalign and posix_memalign already relied on this; this patch fixes the other call sites.) Reviewed-by: DJ Delorie <dj@redhat.com>
2021-03-26  malloc: Rename chunk2rawmem  (Szabolcs Nagy, 1 file changed, -41/+41)
The previous patch ensured that all chunk to mem computations use chunk2rawmem, so now we can rename it to chunk2mem, and in the few cases where the tag of mem is relevant chunk2mem_tag can be used. Replaced tag_at (chunk2rawmem (x)) with chunk2mem_tag (x). Renamed chunk2rawmem to chunk2mem. Reviewed-by: DJ Delorie <dj@redhat.com>
2021-03-26  malloc: Use chunk2rawmem throughout  (Szabolcs Nagy, 1 file changed, -25/+33)
The difference between chunk2mem and chunk2rawmem is that the latter does not get the memory tag for the returned pointer. It turns out chunk2rawmem almost always works: the input of chunk2mem is a chunk pointer that is untagged so it can access the chunk header. All memory that is not user-allocated heap memory is untagged, which in the current implementation means that it has the 0 tag, but this patch does not rely on the tag value. The patch relies on the fact that chunk operations are either done on untagged chunks or without memory access to the user-owned part. Internal interface contracts:
  sysmalloc: returns untagged memory.
  _int_malloc: returns untagged memory.
  _int_free: takes untagged memory.
  _int_memalign: returns untagged memory.
  _int_realloc: takes and returns tagged memory.
So only _int_realloc and functions outside this list need care. Alignment checks do not need the right tag, and tcache works with untagged memory. tag_at was kept in realloc after an mremap, which is not strictly necessary, since the pointer is only used to retag the memory, but this way the tag is guaranteed to be different from the old tag. Reviewed-by: DJ Delorie <dj@redhat.com>
2021-03-26  malloc: Use different tag after mremap  (Szabolcs Nagy, 1 file changed, -1/+1)
The comment explained why a different tag is used after mremap, but for that a correctly tagged pointer must be passed to tag_new_usable. Use chunk2mem to get the tag. Reviewed-by: DJ Delorie <dj@redhat.com>
2021-03-26  malloc: Use memsize instead of CHUNK_AVAILABLE_SIZE  (Szabolcs Nagy, 1 file changed, -20/+19)
This is a pure refactoring change that does not affect behaviour. The CHUNK_AVAILABLE_SIZE name was unclear; the memsize name tries to follow the existing convention of mem denoting the allocation that is handed out to the user, while chunk is its internally used container. The user owned memory for a given chunk starts at chunk2mem(p) and the size is memsize(p). It is not valid to use on dumped heap chunks. Moved the definition next to other chunk and mem related macros. Reviewed-by: DJ Delorie <dj@redhat.com>
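A sketch of the mem/chunk naming convention under a deliberately simplified chunk layout (not the real glibc header, which carries flag bits and lets the user area overlap the next chunk's prev_size field):

    #include <stddef.h>

    /* Simplified chunk layout, for illustration only.  */
    struct chunk
    {
      size_t prev_size;
      size_t size;              /* includes the header */
    };

    #define HDR_SZ (2 * sizeof (size_t))

    /* mem: what the caller sees; chunk: the internal container.  */
    static inline void *
    chunk2mem (struct chunk *p)
    {
      return (char *) p + HDR_SZ;
    }

    static inline size_t
    memsize (const struct chunk *p)
    {
      return p->size - HDR_SZ;  /* bytes usable by the caller */
    }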
2021-03-26  malloc: Use mtag_enabled instead of USE_MTAG  (Szabolcs Nagy, 1 file changed, -6/+4)
Use the runtime check where possible: it should not cause a slowdown in the !USE_MTAG case since then mtag_enabled is constant false, but it allows compiling the tagging logic so it's less likely to break or diverge when developers only test the !USE_MTAG case. Reviewed-by: DJ Delorie <dj@redhat.com>
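A sketch of why the runtime check is free in the !USE_MTAG build: mtag_enabled degenerates to a compile-time false, so the tagging branch is dead code; tag_new_usable_sketch is an illustrative stand-in, not the actual glibc function:

    #include <stdbool.h>
    #include <stddef.h>

    #ifdef USE_MTAG
    static bool mtag_enabled;            /* set at startup from the tunable */
    #else
    /* Constant false: every "if (mtag_enabled)" branch is removed by the
       compiler, so the !USE_MTAG build pays nothing.  */
    # define mtag_enabled false
    #endif

    static void *
    tag_new_usable_sketch (void *ptr)
    {
      if (mtag_enabled && ptr != NULL)
        {
          /* Retag the granule-aligned user area here.  */
        }
      return ptr;
    }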
2021-03-26  malloc: Use branches instead of mtag_granule_mask  (Szabolcs Nagy, 1 file changed, -20/+14)
The branches may be better optimized since mtag_enabled is widely used. A granule size larger than a chunk header is not supported, since then we cannot have both the chunk header and the user area granule aligned. To fix that for targets with a large granule, the chunk layout has to change. So the code that attempted to handle the granule mask generically was changed. This simplified CHUNK_AVAILABLE_SIZE and the logic in malloc_usable_size. Reviewed-by: DJ Delorie <dj@redhat.com>
2021-03-26  malloc: Change calloc when tagging is disabled  (Szabolcs Nagy, 1 file changed, -6/+4)
When glibc is built with memory tagging support (USE_MTAG) but it is not enabled at runtime (mtag_enabled) then unconditional memset was used even though that can be often avoided. This is for performance when tagging is supported but not enabled. The extra check should have no overhead: tag_new_zero_region already had a runtime check which the compiler can now optimize away. Reviewed-by: DJ Delorie <dj@redhat.com>
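A sketch of the resulting calloc clearing logic under assumed helper names; tag_new_zero_region here is a plain stub standing in for the real target hook:

    #include <stdbool.h>
    #include <stddef.h>
    #include <string.h>

    static bool mtag_enabled;   /* runtime flag; false when tagging is off */

    /* Stand-in for the real hook that zeroes (and would retag) a
       granule-aligned region in one pass.  */
    static void *
    tag_new_zero_region (void *mem, size_t n)
    {
      return memset (mem, 0, n);
    }

    static void *
    calloc_clear_sketch (void *mem, size_t usable, bool fresh_mmap)
    {
      if (mtag_enabled)
        /* Tagging touches the whole region anyway, so always go this way.  */
        return tag_new_zero_region (mem, usable);

      /* Tagging disabled: freshly mmapped pages are already zero, so the
         memset can often be skipped.  */
      if (!fresh_mmap)
        memset (mem, 0, usable);
      return mem;
    }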
2021-03-26  malloc: Only support zeroing and not arbitrary memset with mtag  (Szabolcs Nagy, 1 file changed, -9/+8)
The memset api is suboptimal and does not provide much benefit. Memory tagging only needs a zeroing memset (and only for memory that's sized and aligned to multiples of the tag granule), so change the internal api and the target hooks accordingly. This is to simplify the implementation of the target hook. Reviewed-by: DJ Delorie <dj@redhat.com>
2021-03-26  malloc: Use global flag instead of function pointer dispatch for mtag  (Szabolcs Nagy, 1 file changed, -20/+38)
A flag check can be faster than function pointers because of how branch prediction and speculation work, and it can also remove a layer of indirection when there is a mismatch between the malloc internal tag_* api and the __libc_mtag_* target hooks. Memory tagging wrapper functions are moved to malloc.c from arena.c and the logic now checks mtag_enabled. The definition of tag_new_usable is moved after chunk related definitions. This refactoring also allows using mtag_enabled checks instead of USE_MTAG ifdefs when memory tagging support only changes code logic when memory tagging is enabled at runtime. Note: an "if (false)" code block is optimized away even at -O0 by gcc. Reviewed-by: DJ Delorie <dj@redhat.com>
2021-03-26  malloc: Refactor TAG_ macros to avoid indirection  (Szabolcs Nagy, 1 file changed, -43/+38)
This does not change behaviour, just removes one layer of indirection in the internal memory tagging logic. Use tag_ and mtag_ prefixes instead of __tag_ and __mtag_ since these are all symbols with internal linkage, private to malloc.c, so there is no user namespace pollution issue. Reviewed-by: DJ Delorie <dj@redhat.com>
2021-03-26  malloc: Avoid tagging mmapped memory on free  (Szabolcs Nagy, 1 file changed, -3/+4)
Either the memory belongs to the dumped area, in which case we don't want to tag (the dumped area has the same tag as malloc internal data so tagging is unnecessary, but chunks there may not have the right alignment for the tag granule), or the memory will be unmapped immediately (and thus tagging is not useful). Reviewed-by: DJ Delorie <dj@redhat.com>
2021-03-26  malloc: Move MTAG_MMAP_FLAGS definition  (Szabolcs Nagy, 1 file changed, -0/+2)
This is only used internally in malloc.c, the extern declaration was wrong, __mtag_mmap_flags has internal linkage. Reviewed-by: DJ Delorie <dj@redhat.com>
2021-03-26  malloc: Fix a potential realloc issue with memory tagging  (Szabolcs Nagy, 1 file changed, -7/+7)
At an _int_free call site in realloc the wrong size was used for tag clearing: the chunk header of the next chunk was also cleared, which may work in practice but is logically wrong. The tag clearing is moved before the memcpy to save a tag computation; this avoids a chunk2mem. Another chunk2mem is removed because newmem does not have to be recomputed. Whitespace was fixed too. Reviewed-by: DJ Delorie <dj@redhat.com>
2021-03-26  malloc: Fix a realloc crash with heap tagging [BZ 27468]  (Szabolcs Nagy, 1 file changed, -1/+3)
_int_free must be called with a chunk that has its tag reset. This was missing in a rare case that could crash when heap tagging is enabled: when in a multi-threaded process the current arena runs out of memory during realloc, but another arena still has space to finish the realloc then _int_free was called without clearing the user allocation tags. Fixes bug 27468. Reviewed-by: DJ Delorie <dj@redhat.com>
2021-03-25  Support for multiple versions in versioned_symbol, compat_symbol  (Florian Weimer, 1 file changed, -1/+1)
This essentially folds compat_symbol_unique functionality into compat_symbol. This change eliminates the need for intermediate aliases for defining multiple symbol versions, for both compat_symbol and versioned_symbol. Some binutils versions do not support multiple versions per symbol on some targets, so aliases are automatically introduced, similar to what compat_symbol_unique did. To reduce symbol table sizes, a configure check is added to avoid these aliases if they are not needed. The new mechanism works with data symbols as well as function symbols, due to the way an assembler-level redirect is used. It is not compatible with weak symbols for old binutils versions, which is why the definition of __malloc_initialize_hook had to be changed. This is not a loss of functionality because weak symbols do not matter to dynamic linking. The placeholder symbol needs repeating in nptl/libpthread-compat.c now that compat_symbol is used, but that seems more obvious than introducing yet another macro. A subtle difference was that compat_symbol_unique made the symbol global automatically. compat_symbol does not do this, so static had to be removed from the definition of __libpthread_version_placeholder. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2021-01-02  Update copyright dates with scripts/update-copyrights  (Paul Eggert, 1 file changed, -1/+1)
I used these shell commands: ../glibc/scripts/update-copyrights $PWD/../gnulib/build-aux/update-copyright (cd ../glibc && git commit -am"[this commit message]") and then ignored the output, which consisted of lines saying "FOO: warning: copyright statement not found" for each of 6694 files FOO. I then removed trailing white space from benchtests/bench-pthread-locks.c and iconvdata/tst-iconv-big5-hkscs-to-2ucs4.c, to work around this diagnostic from Savannah: remote: *** pre-commit check failed ... remote: *** error: lines with trailing whitespace found remote: error: hook declined to update refs/heads/master
2020-12-29  free: preserve errno [BZ#17924]  (Paul Eggert, 1 file changed, -4/+9)
In the next release of POSIX, free must preserve errno <https://www.austingroupbugs.net/view.php?id=385>. Modify __libc_free to save and restore errno, so that any internal munmap etc. syscalls do not disturb the caller's errno. Add a test malloc/tst-free-errno.c (almost all by Bruno Haible), and document that free preserves errno. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
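The save/restore pattern, shown here as a wrapper around free rather than inside __libc_free itself:

    #include <errno.h>
    #include <stdlib.h>

    /* Whatever munmap or other syscalls run while releasing memory, the
       caller's errno is untouched afterwards.  */
    void
    free_preserving_errno (void *mem)
    {
      int saved_errno = errno;
      free (mem);               /* stand-in for the internal release path */
      errno = saved_errno;
    }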
2020-12-21  malloc: Basic support for memory tagging in the malloc() family  (Richard Earnshaw, 1 file changed, -67/+269)
This patch adds the basic support for memory tagging. Various flavours are supported, particularly being able to turn on tagged memory at run-time: this allows the same code to be used on systems where memory tagging support is not present without needing a separate build of glibc. Also, depending on whether the kernel supports it, the code will use mmap for the default arena if morecore does not, or cannot, support tagged memory (on AArch64 it is not available). All the hooks use function pointers to allow this to work without needing ifuncs. Reviewed-by: DJ Delorie <dj@redhat.com>
2020-12-16  malloc: Use __libc_initial to detect an inner libc  (Florian Weimer, 1 file changed, -0/+2)
The secondary/non-primary/inner libc (loaded via dlmopen, LD_AUDIT, static dlopen) must not use sbrk to allocate memory because that would interfere with allocations in the outer libc. On Linux, this does not matter because sbrk itself was changed to fail in secondary libcs. _dl_addr occasionally shows up in profiles, but had to be used before because __libc_multiple_libs was unreliable. So this change achieves a slight reduction in startup time. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2020-12-11  malloc: Detect infinite-loop in _int_free when freeing tcache [BZ#27052]  (W. Hashimoto, 1 file changed, -1/+4)
If the linked list of a tcache bin contains a loop, _int_free runs into an infinite loop when freeing the tcache. A PoC that triggers such an infinite loop is attached to the Bugzilla entry (#27052). The loop should terminate once the count exceeds mp_.tcache_count, and the program should abort. The affected glibc version is 2.29 or later. Reviewed-by: DJ Delorie <dj@redhat.com>
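A sketch of the bounded scan; the mp_.tcache_count bound and the diagnostics are modelled on the commit text, while the function and type names are illustrative:

    #include <stddef.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef struct tcache_entry
    {
      struct tcache_entry *next;
    } tcache_entry;

    /* Scan a bin for `e` (double-free detection) while refusing to visit
       more than `tcache_count` nodes, so a corrupted cyclic list aborts
       instead of spinning forever.  */
    static void
    tcache_double_free_check (tcache_entry *bin, tcache_entry *e,
                              size_t tcache_count)
    {
      size_t cnt = 0;
      for (tcache_entry *tmp = bin; tmp != NULL; tmp = tmp->next)
        {
          if (cnt++ >= tcache_count)
            {
              fputs ("free(): too many chunks detected in tcache\n", stderr);
              abort ();
            }
          if (tmp == e)
            {
              fputs ("free(): double free detected in tcache\n", stderr);
              abort ();
            }
        }
    }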
2020-10-06  Replace Minumum/minumum with Minimum/minimum  (H.J. Lu, 1 file changed, -1/+1)
Replace Minumum/minumum in comments with Minimum/minimum.
2020-09-17  Update mallinfo2 ABI, and test  (DJ Delorie, 1 file changed, -0/+2)
This patch adds the ABI-related bits to reflect the new mallinfo2 function, and adds a test case to verify basic functionality. Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
2020-08-31  Add mallinfo2 function that supports sizes >= 4GB.  (Martin Liska, 1 file changed, -5/+30)
The current int type can easily overflow for allocations of more than 4GB.
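A small usage example of the new interface; struct mallinfo2 and its size_t fields are the public API declared in <malloc.h> (available since glibc 2.33):

    #include <malloc.h>
    #include <stdio.h>
    #include <stdlib.h>

    int
    main (void)
    {
      void *p = malloc (1024);

      /* mallinfo2 mirrors the legacy mallinfo but uses size_t fields,
         so totals past 4GB no longer wrap.  */
      struct mallinfo2 mi = mallinfo2 ();
      printf ("in-use space: %zu bytes, free space: %zu bytes\n",
              mi.uordblks, mi.fordblks);

      free (p);
      return 0;
    }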
2020-04-06  malloc: ensure set_max_fast never stores zero [BZ #25733]  (DJ Delorie, 1 file changed, -1/+1)
The code for set_max_fast() stores an "impossibly small value" instead of zero, when the parameter is zero. However, for small values of the parameter (e.g. 1 or 2) the computation results in a zero being stored anyway. This patch checks for the parameter being small enough for the computation to result in zero instead, so that a zero is never stored. Key values which result in zero being stored:
  x86-64: 1..7 (or other 64-bit)
  i686: 1..11
  armhfp: 1..3 (or other 32-bit)
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
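A sketch of the fixed guard under illustrative constants; the threshold expression is an assumption chosen so the 64-bit (1..7) and armhfp (1..3) key ranges above fall out, not the exact glibc macro:

    #include <stddef.h>

    #define SIZE_SZ            (sizeof (size_t))
    #define MALLOC_ALIGN_MASK  (2 * SIZE_SZ - 1)   /* illustrative value */
    #define IMPOSSIBLY_SMALL   (SIZE_SZ)           /* non-zero sentinel  */

    static size_t global_max_fast;

    /* Any request small enough that rounding would produce 0 gets the
       non-zero sentinel, so global_max_fast can never end up as 0.  */
    static void
    set_max_fast_sketch (size_t s)
    {
      if (s <= MALLOC_ALIGN_MASK - SIZE_SZ)
        global_max_fast = IMPOSSIBLY_SMALL;
      else
        global_max_fast = (s + SIZE_SZ) & ~MALLOC_ALIGN_MASK;
    }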
2020-03-31  Fix alignment bug in Safe-Linking  (Eyal Itkin, 1 file changed, -11/+11)
Alignment checks should be performed on the user's buffer and NOT on the mchunkptr as was done before. This caused bugs in 32-bit versions, because 2*sizeof(size_t) != MALLOC_ALIGNMENT there. As the tcache works on users' buffers it uses the aligned_OK() check, and the rest work on mchunkptr and therefore check using misaligned_chunk(). Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2020-03-31  Typo fixes and CR cleanup in Safe-Linking  (Eyal Itkin, 1 file changed, -15/+15)
Removed unneeded '\' chars from end of lines and fixed some indentation issues that were introduced in the original Safe-Linking patch. Reviewed-by: Carlos O'Donell <carlos@redhat.com>
2020-03-29  Add Safe-Linking to fastbins and tcache  (Eyal Itkin, 1 file changed, -13/+58)
Safe-Linking is a security mechanism that protects single-linked lists (such as the fastbin and tcache) from being tampered with by attackers. The mechanism makes use of randomness from ASLR (mmap_base), and when combined with chunk alignment integrity checks, it protects the "next" pointers from being hijacked by an attacker. While Safe-Unlinking protects double-linked lists (such as the small bins), there wasn't any similar protection for attacks against single-linked lists. This solution protects against 3 common attacks:
  * Partial pointer override: modifies the lower bytes (Little Endian)
  * Full pointer override: hijacks the pointer to an attacker's location
  * Unaligned chunks: pointing the list to an unaligned address
The design assumes an attacker doesn't know where the heap is located, and uses the ASLR randomness to "sign" the single-linked pointers. We mark the pointer as P and the location in which it is stored as L, and the calculation will be:
  * PROTECT(P) := (L >> PAGE_SHIFT) XOR (P)
  * *L = PROTECT(P)
This way, the random bits from the address L (which start at the bit in the PAGE_SHIFT position) will be merged with the LSB of the stored protected pointer. This protection layer prevents an attacker from modifying the pointer into a controlled value. An additional check that the chunks are MALLOC_ALIGNed adds an important layer:
  * Attackers can't point to illegal (unaligned) memory addresses
  * Attackers must guess correctly the alignment bits
On standard 32 bit Linux machines, an attack will directly fail 7 out of 8 times, and on 64 bit machines it will fail 15 out of 16 times. This proposed patch was benchmarked and its effect on the overall performance of the heap was negligible and couldn't be distinguished from the default variance between tests on the vanilla version. A similar protection was added to Chromium's version of TCMalloc in 2012, and according to their documentation it had an overhead of less than 2%. Reviewed-by: DJ Delorie <dj@redhat.com> Reviewed-by: Carlos O'Donell <carlos@redhat.com> Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
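A sketch that follows the PROTECT(P) formula above literally; PAGE_SHIFT of 12 (4 KiB pages) is an assumption, and the helpers are illustrative rather than glibc's actual macros:

    #include <stdint.h>

    #define PAGE_SHIFT 12   /* 4 KiB pages assumed for illustration */

    /* L is the address where the pointer is stored, P the pointer value.
       Mixing in the high (ASLR-randomized) bits of L masks the stored
       "next" pointer; applying the same operation on load reverses it.  */
    static inline uintptr_t
    protect_ptr (uintptr_t L, uintptr_t P)
    {
      return (L >> PAGE_SHIFT) ^ P;
    }

    /* Store: *loc = PROTECT(P).  */
    static inline void
    store_protected (uintptr_t *loc, uintptr_t P)
    {
      *loc = protect_ptr ((uintptr_t) loc, P);
    }

    /* Load: PROTECT applied again with the same location recovers P.  */
    static inline uintptr_t
    load_protected (uintptr_t *loc)
    {
      return protect_ptr ((uintptr_t) loc, *loc);
    }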
2020-01-01  Update copyright dates with scripts/update-copyrights.  (Joseph Myers, 1 file changed, -1/+1)
2019-12-05  Correct range checking in mallopt/mxfast/tcache [BZ #25194]  (DJ Delorie, 1 file changed, -12/+20)
do_set_tcache_max, do_set_mxfast: Fix two instances of comparing "size_t < 0". Both cases have an upper limit, so the "negative value" case is already handled via overflow semantics. do_set_tcache_max, do_set_tcache_count: Fix return value on error. Note: currently not