* mmap.c (backtrace_free): If freeing a large aligned block of
memory, call munmap rather than holding onto it.
(backtrace_vector_grow): When growing a vector, double the number
of pages requested.  When releasing the old version of a grown
vector, pass the correct size to backtrace_free.


git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/trunk@210256 138bc75d-0d04-0410-961f-82ee72b054a4
ian committed May 9, 2014
1 parent 80ede13 commit af436d5
Showing 2 changed files with 34 additions and 2 deletions.
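
Two details in the diff below are easier to follow with the vector layout in mind. This sketch mirrors struct backtrace_vector from libbacktrace/internal.h; the helper function is hypothetical, added here only to make the size accounting explicit. The live allocation spans vec->size bytes in use plus vec->alc bytes of headroom, which is why freeing only vec->alc bytes after a grow leaked the used portion of every old block.

#include <stddef.h>

/* Mirrors struct backtrace_vector from libbacktrace/internal.h.  */
struct backtrace_vector
{
  void *base;   /* Start of the allocated memory block.  */
  size_t size;  /* Bytes currently in use.  */
  size_t alc;   /* Bytes allocated beyond SIZE but not yet used.  */
};

/* Hypothetical helper: the block behind a vector covers both regions,
   so this is the size a grow must pass to backtrace_free when it
   discards the old block.  */
static size_t
vector_block_size (const struct backtrace_vector *vec)
{
  return vec->size + vec->alc;
}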
8 changes: 8 additions & 0 deletions libbacktrace/ChangeLog
@@ -1,3 +1,11 @@
+2014-05-08  Ian Lance Taylor  <[email protected]>
+
+	* mmap.c (backtrace_free): If freeing a large aligned block of
+	memory, call munmap rather than holding onto it.
+	(backtrace_vector_grow): When growing a vector, double the number
+	of pages requested.  When releasing the old version of a grown
+	vector, pass the correct size to backtrace_free.
+
 2014-03-07  Ian Lance Taylor  <[email protected]>
 
 	* sort.c (backtrace_qsort): Use middle element as pivot.
28 changes: 26 additions & 2 deletions libbacktrace/mmap.c
@@ -164,6 +164,26 @@ backtrace_free (struct backtrace_state *state, void *addr, size_t size,
 {
   int locked;
 
+  /* If we are freeing a large aligned block, just release it back to
+     the system.  This case arises when growing a vector for a large
+     binary with lots of debug info.  Calling munmap here may cause us
+     to call mmap again if there is also a large shared library; we
+     just live with that.  */
+  if (size >= 16 * 4096)
+    {
+      size_t pagesize;
+
+      pagesize = getpagesize ();
+      if (((uintptr_t) addr & (pagesize - 1)) == 0
+          && (size & (pagesize - 1)) == 0)
+        {
+          /* If munmap fails for some reason, just add the block to
+             the freelist.  */
+          if (munmap (addr, size) == 0)
+            return;
+        }
+    }
+
   /* If we can acquire the lock, add the new space to the free list.
      If we can't acquire the lock, just leak the memory.
      __sync_lock_test_and_set returns the old state of the lock, so we
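
A minimal standalone sketch (mine, not part of the commit) of why those two checks are sufficient: mmap hands out page-aligned regions in whole pages, so a large block that was obtained directly from mmap, as grown vectors are, passes both tests and can go straight back to the kernel.

#define _DEFAULT_SOURCE
#include <assert.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

int
main (void)
{
  size_t pagesize = getpagesize ();
  size_t size = 16 * 4096;  /* The threshold used in the hunk above.  */

  /* Stand-in for a large vector allocation: mmap returns memory that
     is page-aligned and a whole number of pages long.  */
  void *addr = mmap (NULL, size, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  assert (addr != MAP_FAILED);

  /* The same two predicates backtrace_free applies.  */
  assert (((uintptr_t) addr & (pagesize - 1)) == 0);
  assert ((size & (pagesize - 1)) == 0);

  /* Safe to release the block directly.  */
  if (munmap (addr, size) != 0)
    return 1;  /* Real code falls back to the freelist instead.  */
  return 0;
}

Blocks carved out of the middle of a freelist chunk generally fail the alignment test, so they keep taking the freelist path; the munmap fast path only fires for blocks that came from mmap whole.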
@@ -209,14 +229,18 @@ backtrace_vector_grow (struct backtrace_state *state, size_t size,
           alc = pagesize;
         }
       else
-        alc = (alc + pagesize - 1) & ~ (pagesize - 1);
+        {
+          alc *= 2;
+          alc = (alc + pagesize - 1) & ~ (pagesize - 1);
+        }
       base = backtrace_alloc (state, alc, error_callback, data);
       if (base == NULL)
         return NULL;
       if (vec->base != NULL)
         {
           memcpy (base, vec->base, vec->size);
-          backtrace_free (state, vec->base, vec->alc, error_callback, data);
+          backtrace_free (state, vec->base, vec->size + vec->alc,
+                          error_callback, data);
         }
       vec->base = base;
       vec->alc = alc - vec->size;
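
A back-of-the-envelope sketch (assumed figures, mine) of what the doubling buys: growing a vector to 4 MiB one page at a time copies the entire current contents on nearly every grow, roughly quadratic work in total, while doubling keeps the total copied data proportional to the final size.

#include <stdio.h>

/* Simulate total bytes memcpy'd while growing a full vector to 4 MiB,
   page by page versus doubling.  Illustrative only: the real grow also
   rounds to the caller's requested size.  */
int
main (void)
{
  const size_t pagesize = 4096;
  const size_t target = 4 * 1024 * 1024;
  size_t alc;
  unsigned long long copied;

  copied = 0;
  for (alc = pagesize; alc < target; alc += pagesize)
    copied += alc;  /* Every grow copies the whole current contents.  */
  printf ("page at a time: %llu MiB copied\n", copied >> 20);

  copied = 0;
  for (alc = pagesize; alc < target; alc *= 2)
    copied += alc;  /* Each byte is copied a bounded number of times.  */
  printf ("doubling:       %llu MiB copied\n", copied >> 20);

  return 0;
}

Under these assumptions the page-at-a-time strategy copies about 2 GiB of data to reach a 4 MiB vector, while doubling copies only a few MiB, which is what makes the one-line alc *= 2 change worthwhile for binaries with lots of debug info.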