path: root/malloc/malloc.c
Age | Commit message | Author | Files | Lines
2016-11-08 | More merge-related tweaks | DJ Delorie | 1 | -4/+3
* add --enable-experimental-malloc/--disable-experimental-malloc (default: enabled)
* syntax errors related to new lock macros
* add some missing #if USE_TCACHE pairs
* Undo test tweak to environment variable scanner
2016-11-08 | Merge branch 'master' into dj/malloc | DJ Delorie | 1 | -153/+247
2016-10-28 | malloc: Update comments about chunk layout | Florian Weimer | 1 | -10/+30
2016-10-28 | sysmalloc: Initialize previous size field of mmaped chunks | Florian Weimer | 1 | -0/+1
With different encodings of the header, the previous zero initialization may be insufficient and produce an invalid encoding.
2016-10-28 | malloc: Use accessors for chunk metadata access | Florian Weimer | 1 | -63/+84
This change allows us to change the encoding of these struct members in a centralized fashion.
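For illustration, a simplified sketch of what such accessors look like; the flag names are the ones malloc.c uses, but this is a reduced stand-in, not the full macro set:

  #include <stddef.h>

  /* Simplified chunk header for illustration.  */
  struct malloc_chunk
  {
    size_t mchunk_prev_size;  /* Size of previous chunk, if it is free.  */
    size_t mchunk_size;       /* Size of this chunk, flag bits in the low bits.  */
  };

  #define SIZE_BITS 0x7       /* PREV_INUSE | IS_MMAPPED | NON_MAIN_ARENA */

  /* All metadata reads and writes go through accessors like these, so the
     encoding of the fields can later be changed in one place.  */
  #define chunksize_nomask(p)  ((p)->mchunk_size)
  #define chunksize(p)         (chunksize_nomask (p) & ~((size_t) SIZE_BITS))
  #define prev_size(p)         ((p)->mchunk_prev_size)
  #define set_prev_size(p, sz) ((p)->mchunk_prev_size = (sz))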
2016-10-27 | Static inline functions for mallopt helpers | Siddhesh Poyarekar | 1 | -34/+93
Make mallopt helper functions for each mallopt parameter so that they can be called consistently in other areas, like setting tunables.

* malloc/malloc.c (do_set_mallopt_check): New function.
(do_set_mmap_threshold): Likewise.
(do_set_mmaps_max): Likewise.
(do_set_top_pad): Likewise.
(do_set_perturb_byte): Likewise.
(do_set_trim_threshold): Likewise.
(do_set_arena_max): Likewise.
(do_set_arena_test): Likewise.
(__libc_mallopt): Use them.
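A hedged sketch of the pattern (the helper name matches the ChangeLog above, but the parameter block and body are simplified stand-ins, not the actual malloc.c code):

  #include <stddef.h>

  /* Minimal stand-in for the malloc parameter block.  */
  static struct { size_t mmap_threshold; int no_dyn_threshold; } mp_;

  /* One helper per mallopt parameter; __libc_mallopt and the tunables code
     can both call it, so the setting is applied consistently.  */
  static inline int
  do_set_mmap_threshold (size_t value)
  {
    mp_.mmap_threshold = value;
    mp_.no_dyn_threshold = 1;
    return 1;
  }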
2016-10-26 | malloc: Remove malloc_get_state, malloc_set_state [BZ #19473] | Florian Weimer | 1 | -2/+0
After the removal of __malloc_initialize_hook, newly compiled Emacs binaries are no longer able to use these interfaces. malloc_get_state is only used during the Emacs build process, so we provide a stub implementation only. Existing Emacs binaries will not call this stub function, but still reference the symbol. The rewritten tst-mallocstate test constructs a dumped heap which should approximate what existing Emacs binaries pass to glibc malloc.
2016-10-26 | Remove redundant definitions of M_ARENA_* macros | Siddhesh Poyarekar | 1 | -5/+0
The M_ARENA_MAX and M_ARENA_TEST macros are defined in malloc.c as well as malloc.h, and the former is unnecessary. This patch removes the duplicate. Tested on x86_64 to verify that the generated code remains unchanged barring changed line numbers to __malloc_assert.

* malloc/malloc.c (M_ARENA_TEST, M_ARENA_MAX): Remove.
2016-10-26 | Document the M_ARENA_* mallopt parameters | Siddhesh Poyarekar | 1 | -1/+0
The M_ARENA_* mallopt parameters are in wide use in production to control the number of arenas that a long-lived process creates, and hence there is no point in stating that this interface is non-public. Document this interface and remove the obsolete comment.

* manual/memory.texi (M_ARENA_TEST): Add documentation.
(M_ARENA_MAX): Likewise.
* malloc/malloc.c: Remove obsolete comment.
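For reference, a minimal usage sketch of the now-documented parameters through the public mallopt interface (the value 2 is only an example):

  #include <malloc.h>

  int
  main (void)
  {
    /* Cap the number of arenas this process will create; the programmatic
       equivalent of the MALLOC_ARENA_MAX environment variable.  */
    mallopt (M_ARENA_MAX, 2);
    return 0;
  }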
2016-09-21 | malloc: Manual part of conversion to __libc_lock | Florian Weimer | 1 | -1/+1
This removes the old mutex_t-related definitions from malloc-machine.h, too.
2016-09-06 | malloc: Automated part of conversion to __libc_lock | Florian Weimer | 1 | -20/+20
2016-08-11 | Merge branch 'master' into dj/malloc | DJ Delorie | 1 | -63/+0
2016-08-10 | Various namespace issues | DJ Delorie | 1 | -12/+12
2016-08-10 | Remove debugging; fix trace error handling | DJ Delorie | 1 | -9/+8
Comment out _m_printf until it's needed again. Properly unlock the trace mutex when we error out because of file errors; also disable tracing when that happens.
2016-08-09 | Various minor fixes | DJ Delorie | 1 | -25/+26
Replace "int" with "size_t" as appropriate. Appease gcc's array-bounds warning Process tcache after hooks to support MALLOC_CHECK_
2016-08-03 | elf: dl-minimal malloc needs to respect fundamental alignment | Florian Weimer | 1 | -63/+0
The dynamic linker currently uses __libc_memalign for TLS-related allocations. The goal is to switch to malloc instead. If the minimal malloc follows the ABI fundamental alignment, we can assume that malloc provides this alignment, and thus skip explicit alignment in a few cases as an optimization. It was requested on libc-alpha that MALLOC_ALIGNMENT should be used, although this results in wasted space if MALLOC_ALIGNMENT is larger than the fundamental alignment. (The dynamic linker cannot assume that the non-minimal malloc will provide an alignment of MALLOC_ALIGNMENT; the ABI provides _Alignof (max_align_t) only.)
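As a stand-alone illustration of the guarantee involved, a sketch of rounding an address up to the ABI's fundamental alignment (names here are illustrative, not the dl-minimal code):

  #include <stddef.h>
  #include <stdint.h>

  /* The only alignment guarantee the ABI gives a generic allocator.  */
  #define FUNDAMENTAL_ALIGNMENT (_Alignof (max_align_t))

  /* Round an address up to the fundamental alignment (a power of two).  */
  static inline uintptr_t
  align_up (uintptr_t addr)
  {
    return (addr + FUNDAMENTAL_ALIGNMENT - 1)
           & ~(uintptr_t) (FUNDAMENTAL_ALIGNMENT - 1);
  }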
2016-07-21 | Add various bin-related trace path flags | DJ Delorie | 1 | -0/+21
2016-07-20 | Add note about the timing of recording an mremap event. | DJ Delorie | 1 | -0/+7
2016-07-20 | Reschedule trace record commits to avoid inversion. | DJ Delorie | 1 | -17/+87
This change decouples "collecting trace data" from "allocating a trace record" so that the record can be inserted into the trace buffer in the correct sequence with respect to when it "owns" the pointers being recorded (i.e. malloc should record its event after it does its allocation, but free should record its event before it returns the memory to the arena). It splits starting a trace record (function entry) from committing it to the buffer (trace recording) so that path data can be accumulated easily. Trace inversion happens when one thread records a malloc, but before it can actually do the allocation, the kernel schedules a thread that frees a block which the malloc later returns. The actual events are free->malloc, but the trace records are malloc->free.
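A hedged sketch of the start/commit split described above; the type and function names are hypothetical stand-ins, not the dj/malloc code:

  #include <stddef.h>

  /* Hypothetical trace record; only what the illustration needs.  */
  struct trace_entry { int type; void *ptr; size_t size; unsigned path; };

  /* Function entry: start collecting data in a thread-local record without
     touching the shared buffer yet, so path bits can accumulate cheaply.  */
  static void
  trace_start (struct trace_entry *local, int type, size_t size)
  {
    local->type = type;
    local->size = size;
    local->ptr = NULL;
    local->path = 0;
  }

  /* Commit point: malloc calls this after obtaining its result, free calls
     it before giving memory back, so the buffer order matches the order in
     which ownership of the pointer actually changed hands.  */
  static void
  trace_commit (struct trace_entry *local, void *result,
                void (*publish) (const struct trace_entry *))
  {
    local->ptr = result;
    publish (local);   /* claim a slot in the shared buffer and copy *local */
  }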
2016-07-19 | Fix trace window unmapping bug | DJ Delorie | 1 | -1/+1
We were recording window number, not trace count, resulting in windows not getting unmapped.
2016-07-16 | Enhance the tracer with new data and fixes. | Carlos O'Donell | 1 | -34/+167
* Increase trace entry to 64 bytes.

The following patch increases the trace entry to 64 bytes, still a proper multiple of the shared memory window size. While we have doubled the entry size, the on-disk format is still smaller than the ASCII version. In the future we may wish to add variable sized records, but for now the simplicity of this method works well. With the extra bytes we are going to:

- Record internal size information for incoming (free) and outgoing chunks (malloc, calloc, realloc, etc). This simplifies accounting of RSS usage and provides an extra cross check between malloc<->free based on internal chunk sizes.
- Record alignment information for memalign and posix_memalign. This continues to extend the tracer to the full API.
- Leave 128 bits of padding for future path uses, useful for more path information.

Additionally, __MTB_TYPE_POSIX_MEMALIGN is added for the sole purpose of recording the trace, so that we can hard-fail in the workload converter when we see such an entry. Lastly C_MEMALIGN, C_VALLOC, C_PVALLOC, and C_POSIX_MEMALIGN are added as workload entries for the sake of completeness. Builds on x86_64; capture looks good and it works.

* Teach trace_dump about the new entries.

The following patch teaches trace_dump about the new posix_memalign entry. It also teaches trace_dump about the new size2 and size3 fields. Tested by tracing a program that uses malloc, free, and memalign and verifying that the extra fields show the expected chunk sizes and alignments dumped with trace_dump. Tested on x86_64 with no apparent problems.

* Teach trace2wl and trace_run about new entries.

(a) trace2wl changes: The following patch teaches trace2wl how to output entries for valloc and pvalloc; it does so exactly the same way it does for malloc, since from the perspective of the API they are identical. Additionally, trace2wl is taught how to output an event for memalign, storing alignment and size in the event record. Lastly, posix_memalign is detected and the converter aborted if it's seen. It is my opinion that we should not ignore this data during conversion. If we see a need for it we should implement it later.

(b) trace_run changes: Some cosmetic cleanup in printing 'pthread_t', which is always an address of the struct pthread structure in memory, so to make debugging easier we should print the value as a hex pointer. Teach the simulator how to run memalign. With the newly recorded alignment information we double check that the resulting memory is correctly aligned. We do not implement valloc and pvalloc; they will abort the simulator. This is incremental progress. Tested on x86_64 by converting and running a multithreaded test application that calls calloc, malloc, free, and memalign.

* Disable recursive traces and save new data.

(a) Adds support for disabling recursively recorded traces, e.g. realloc calling malloc no longer produces both a realloc and a malloc trace event. We solve this by using a per-thread variable to disable new trace creation, but allow path bits to be set. This lets us record the code paths taken, but only record one public API event.

(b) Save internal chunk size information into trace events for all APIs. The most important is free, where we record the freed size; this allows easier tooling to compute a running idea of RSS values.

Tested on x86_64 with some small applications and test programs.
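A hedged sketch of what a 64-byte trace entry could look like on an LP64 target; the size2, size3 and path fields follow the description above, but the exact layout and struct name are illustrative, not the tracer's actual definition:

  #include <stddef.h>
  #include <stdint.h>

  /* Illustrative 64-byte layout for an LP64 target.  */
  struct trace_entry
  {
    uint32_t thread;     /* Thread id of the caller.  */
    uint16_t type;       /* Event type (malloc, free, memalign, ...).  */
    uint16_t path;       /* Path bits recording which code paths ran.  */
    void *ptr1;          /* Incoming pointer (free, realloc).  */
    void *ptr2;          /* Outgoing pointer (malloc, realloc, ...).  */
    size_t size;         /* Requested size or alignment argument.  */
    size_t size2;        /* Internal chunk size of the incoming pointer.  */
    size_t size3;        /* Internal chunk size of the outgoing pointer.  */
    uint8_t pad[16];     /* 128 bits of padding for future use.  */
  };

  _Static_assert (sizeof (struct trace_entry) == 64,
                  "trace entry must stay 64 bytes on LP64");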
2016-07-15 | Add tunables for tcache count and max size | DJ Delorie | 1 | -35/+72
2016-07-15 | Fix mmap/munmap trace bits | DJ Delorie | 1 | -4/+2
2016-07-13 | Fix a 32-bit sign-extension bug. | Anton Blanchard | 1 | -1/+1
2016-07-13 | Fix double-padding bug | DJ Delorie | 1 | -5/+6
The tcache was calling request2size which resulted in double padding. Store tcache's copy in a separate variable to avoid this.
2016-07-12 | Update to new binary file-based trace file. | DJ Delorie | 1 | -96/+235
In order not to lose records, or have to guess ahead of time how many records are needed, this switches to an mmap'd file for the trace buffer and grows it as needed. The trace2dat Perl script is replaced with a trace2wl C++ program that runs a lot faster and can handle the binary format.
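A hedged sketch of growing an mmap'd trace file as records accumulate; the function and variable names are hypothetical, not the names used in this tree:

  #define _GNU_SOURCE          /* for mremap and MREMAP_MAYMOVE */
  #include <sys/mman.h>
  #include <unistd.h>

  /* Extend the backing file and remap the buffer; returns the (possibly
     moved) mapping, or NULL on failure.  */
  static void *
  grow_trace_buffer (int fd, void *buf, size_t old_size, size_t new_size)
  {
    if (ftruncate (fd, (off_t) new_size) != 0)
      return NULL;
    void *p = mremap (buf, old_size, new_size, MREMAP_MAYMOVE);
    return p == MAP_FAILED ? NULL : p;
  }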
2016-07-06 | Use __gettid() function for tracing. | Carlos O'Donell | 1 | -1/+21
Integrate with thread 'tid' cache and use the cached value if present, otherwise update the cache. This should be much faster than a syscall per trace event.
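A minimal sketch of the idea, assuming a __thread cache variable; glibc itself keeps the tid in the thread descriptor rather than in a TLS variable like this:

  #include <unistd.h>
  #include <sys/syscall.h>
  #include <sys/types.h>

  /* Cache the kernel tid per thread so tracing costs one syscall per
     thread rather than one syscall per trace event.  */
  static __thread pid_t cached_tid;

  static pid_t
  trace_gettid (void)
  {
    if (cached_tid == 0)
      cached_tid = (pid_t) syscall (SYS_gettid);
    return cached_tid;
  }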
2016-06-30 | Merge branch 'master' into dj/malloc | DJ Delorie | 1 | -24/+72
2016-06-30 | Build fixes for in-tree and 32/64-bit | DJ Delorie | 1 | -9/+8
Expand the comments in mtrace-ctl.c to better explain how to use this tracing controller. The new docs assume the .so is built and installed.

Build fixes for trace_run.c: additional build pedantry to let trace_run.c be built with more warnings/errors turned on.

Build/install trace_run and trace2dat: trace2dat takes dump files from mtrace-ctl.so and turns them into mmap'able data files for trace_run, which "plays back" the logged calls.

32-bit compatibility: redesign the tcache macros to account for differences between 64-bit and 32-bit systems.
2016-06-20 | Revert __malloc_initialize_hook symbol poisoning | Florian Weimer | 1 | -3/+3
It turns out the Emacs-internal malloc implementation uses __malloc_* symbols. If glibc poisons them in <stdc-predef.h>, Emacs will no longer compile.
2016-06-11 | malloc_usable_size: Use correct size for dumped fake mapped chunks | Florian Weimer | 1 | -1/+6
The adjustment for the size computation in commit 1e8a8875d69e36d2890b223ffe8853a8ff0c9512 is needed in malloc_usable_size, too.
2016-06-10 | malloc: Remove __malloc_initialize_hook from the API [BZ #19564] | Florian Weimer | 1 | -1/+15
__malloc_initialize_hook is interposed by application code, so the usual approach to define a compatibility symbol does not work. This commit adds a new mechanism based on #pragma GCC poison in <stdc-predef.h>.
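The mechanism itself is tiny; the sketch below shows the general shape (the exact wording glibc puts in <stdc-predef.h> may differ):

  /* In a header read before any user code (glibc uses <stdc-predef.h>):  */
  #pragma GCC poison __malloc_initialize_hook

  /* From this point on, any use or declaration of the poisoned identifier
     in the including translation unit is rejected at compile time.  */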
2016-06-08 | malloc: Correct size computation in realloc for dumped fake mmapped chunks | Florian Weimer | 1 | -4/+8
For regular mmapped chunks there are two size fields (hence a reduction by 2 * SIZE_SZ bytes), but for fake chunks, we only have one size field, so we need to subtract SIZE_SZ bytes. This was initially reported as Emacs bug 23726.
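A hedged sketch of the accounting only (SIZE_SZ is sizeof (size_t) in malloc.c; the chunk-classification parameter here is a simplification, not how malloc.c distinguishes the cases):

  #include <stddef.h>

  #define SIZE_SZ (sizeof (size_t))

  /* Usable bytes behind a user pointer whose chunk size is CHUNKSIZE.  */
  static size_t
  usable_size (size_t chunksize, int is_dumped_fake_chunk)
  {
    /* A real mmapped chunk carries two size fields of overhead; a fake
       chunk rewritten from a dumped heap carries only one.  */
    return chunksize - (is_dumped_fake_chunk ? SIZE_SZ : 2 * SIZE_SZ);
  }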
2016-05-24 | malloc: Correct malloc alignment on 32-bit architectures [BZ #6527] | Florian Weimer | 1 | -14/+2
After the heap rewriting added in commit 4cf6c72fd2a482e7499c29162349810029632c3f (malloc: Rewrite dumped heap for compatibility in __malloc_set_state), we can change malloc alignment for new allocations because the alignment of old allocations no longer matters. We need to increase the malloc state version number, so that binaries containing dumped heaps of the new layout will not try to run on previous versions of glibc, resulting in obscure crashes. This commit addresses a failure of tst-malloc-thread-fail on the affected architectures (32-bit ppc and mips) because the test checks pointer alignment.
2016-05-13 | malloc: Rewrite dumped heap for compatibility in __malloc_set_state | Florian Weimer | 1 | -9/+46
This will allow us to change many aspects of the malloc implementation while preserving compatibility with existing Emacs binaries. As a result, existing Emacs binaries will have a larger RSS, and Emacs needs a few more milliseconds to start. This overhead is specific to Emacs (and will go away once Emacs switches to its internal malloc). The new checks to make free and realloc compatible with the dumped heap are confined to the mmap paths, which are already quite slow due to the munmap overhead. This commit weakens some security checks, but only for heap pointers in the dumped main arena. By default, this area is empty, so those checks are as effective as before.
2016-04-29 | Merge branch 'master' into dj/malloc | DJ Delorie | 1 | -2/+1
Periodic sync
2016-04-29 | changes to per-thread cache algorithms | DJ Delorie | 1 | -22/+412
Core algorithm changes:
* Per-thread cache is refilled from existing fastbins and smallbins instead of always needing a bigger chunk.
* Caches are linked, and the cache is cleaned up when the thread exits (incomplete for now; needed framework for the chunk scanner).
* Fixes to mutex placement - needed to sync chunk headers across threads.

Enabling the per-thread cache (tcache) gives about a 20-30% speedup at a 20-30% memory cost (due to fragmentation). Still working on that :-)

Debugging helpers (temporary):
* __malloc_scan_chunks() calls back to the app for each chunk in each heap.
* _m_printf() helper for "safe" printing within malloc.
* Lots of calls to the above, commented out, in case you need them.
* trace_run scans leftover chunks too.
2016-04-14 | malloc: Remove malloc hooks from fork handler | Florian Weimer | 1 | -2/+0
The fork handler now runs so late that there is no risk anymore that other fork handlers in the same thread use malloc, so it is no longer necessary to install malloc hooks which made a subset of malloc functionality available to the thread that called fork.
2016-04-14 | malloc: Run fork handler as late as possible [BZ #19431] | Florian Weimer | 1 | -0/+1
Previously, a thread M invoking fork would acquire locks in this order:

  (M1) malloc arena locks (in the registered fork handler)
  (M2) libio list lock

A thread F invoking fflush (NULL) would acquire locks in this order:

  (F1) libio list lock
  (F2) individual _IO_FILE locks

A thread G running getdelim would use this order:

  (G1) _IO_FILE lock
  (G2) malloc arena lock

After executing (M1), (F1), (G1), none of the threads can make progress. This commit changes the fork lock order to:

  (M'1) libio list lock
  (M'2) malloc arena locks

It explicitly encodes the lock order in the implementations of fork, and does not rely on the registration order, thus avoiding the deadlock.
2016-03-18 | Merge branch 'master' into dj/malloc | DJ Delorie | 1 | -11/+3
2016-03-17 | Replace int with size_t as appropriate | DJ Delorie | 1 | -5/+5
2016-03-11 | Fix type of parameter passed by malloc_consolidate | Tulio Magno Quites Machado Filho | 1 | -1/+1
atomic_exchange_acq() expected a pointer, but was receiving an integer.
2016-02-19 | More trace hooks | DJ Delorie | 1 | -3/+20
Add hooks to pvalloc and calloc.
Add a path flag for when a call is handled via a hook function.
2016-02-19 | malloc: Remove NO_THREADS | Florian Weimer | 1 | -2/+0
No functional change. It was not possible to build without threading support before.
2016-02-19 | malloc: Remove max_total_mem member from struct malloc_par | Florian Weimer | 1 | -6/+2
Also note that usmblks in struct mallinfo is always 0. No functional change.
2016-02-19 | malloc: Remove arena_mem variable | Florian Weimer | 1 | -2/+0
The computed value is never used. The accesses were data races.
2016-02-11 | Update malloc tracing utility. | DJ Delorie | 1 | -8/+9
* Change the head pointer to be a total call count; adjust users to take the modulo after incrementing.
* Use mmap() instead of sbrk().
* Split environment variables so count and file can be specified.
* Export trace hooks so mtrace-ctl can be built against libc.so.
* Allow NULL to be passed to __mtrace_get_trace_buffer.
* Add some error handling to mtrace-ctl.
2016-02-09 | Initial tracing functionality | DJ Delorie | 1 | -2/+106
First attempt at a low-overhead tracing feature. To enable, you build mtrace-ctl.c into a .so and LD_PRELOAD it. That uses a private API to set up a trace buffer, and calls to malloc et al. fill in records in the trace buffer. At program exit, mtrace-ctl reads the buffer and stores the data on disk. Internally, the only contention point is the atomic update of the buffer head pointer. Once a slot is acquired, each thread fills in its record without needing locks.
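A hedged sketch of that single contention point, using C11 atomics; the buffer layout and names are hypothetical, not the private API this commit adds:

  #include <stdatomic.h>
  #include <stddef.h>

  /* Hypothetical trace record; real records carry more fields.  */
  struct trace_entry { int type; void *ptr; size_t size; };

  static struct trace_entry *trace_buf;   /* set up by the controller */
  static size_t trace_buf_entries;
  static _Atomic size_t trace_head;

  /* Claim the next slot with one atomic increment; everything else is
     thread-private, so no lock is needed to fill in the record.  */
  static struct trace_entry *
  claim_trace_slot (void)
  {
    size_t idx = atomic_fetch_add (&trace_head, 1);
    if (idx >= trace_buf_entries)
      return NULL;              /* buffer full: caller drops the event */
    return &trace_buf[idx];
  }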
2016-02-09 | Initial attempt at a per-thread cache | DJ Delorie | 1 | -0/+118
If a malloc of size MAX_TCACHE_SIZE or smaller is requested, a thread-local cache is used. An entry from the cache is returned if available; otherwise a chunk of size N*8 is requested from the arena and broken into 8 (TCACHE_FILL_COUNT+1) N-sized chunks. One chunk is returned and the rest are stored in the cache. free() can also fill the cache, as long as there are fewer than 7 items in the cache; otherwise the chunk is freed as usual. The cache is per-size, so no searching is required.
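A hedged, stand-alone sketch of the per-size cache described above; the constants and names mirror the description, but the code is illustrative, not the patch itself:

  #include <stddef.h>

  /* Values mirror the description above; illustrative only.  */
  #define MAX_TCACHE_SIZE   512
  #define TCACHE_FILL_COUNT 7
  #define TCACHE_BINS       (MAX_TCACHE_SIZE / 8)

  typedef struct tcache_entry { struct tcache_entry *next; } tcache_entry;

  /* One singly linked list per size class, so lookups need no searching.  */
  static __thread tcache_entry *tcache_bin[TCACHE_BINS];
  static __thread unsigned tcache_count[TCACHE_BINS];

  /* Map a request size (1..MAX_TCACHE_SIZE) to its 8-byte size class.  */
  static size_t
  tcache_bin_index (size_t size)
  {
    return (size - 1) / 8;
  }

  /* Return a cached chunk of this size class, or NULL if the bin is empty.  */
  static void *
  tcache_get (size_t size)
  {
    size_t bin = tcache_bin_index (size);
    tcache_entry *e = tcache_bin[bin];
    if (e != NULL)
      {
        tcache_bin[bin] = e->next;
        tcache_count[bin]--;
      }
    return e;
  }

  /* Stash a freed chunk unless the bin is full; returns nonzero if cached,
     zero if the caller should free the chunk normally.  */
  static int
  tcache_put (void *ptr, size_t size)
  {
    if (size == 0 || size > MAX_TCACHE_SIZE)
      return 0;
    size_t bin = tcache_bin_index (size);
    if (tcache_count[bin] >= TCACHE_FILL_COUNT)
      return 0;
    tcache_entry *e = ptr;
    e->next = tcache_bin[bin];
    tcache_bin[bin] = e;
    tcache_count[bin]++;
    return 1;
  }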
2016-01-04 | Update copyright dates with scripts/update-copyrights. | Joseph Myers | 1 | -1/+1