Add size-bucketed free list to Hades
Summary:
Previously, Hades used a "first-fit" free list algorithm for OG.
Its performance suffered on benchmarks that do many allocations
surviving into the old gen (such as many Octane benchmarks).

Instead, use a bucket of free lists for each size class from 8 to 2048 bytes
(in multiples of 8), plus one general free list that can be split for anything larger.
Once the first object in a segment is freed, the segment transitions from
bump-alloc mode to free-list mode and populates the buckets.

This free list still does not implement coalescing; instead, a later diff
will implement compaction of particularly fragmented segments. That means
for now, the heap can become very fragmented.

Perf wins on Octane benchmarks like Typescript are in the 10x range, but some others
are unaffected.
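The size-class scheme maps an allocation size to a bucket by dividing by the heap alignment, so an exact-fit lookup is a single array index. A minimal sketch of that mapping, assuming 8-byte alignment and 256 buckets as in this diff (the exact indexing and the boundary handling at 2048 are illustrative, not necessarily what Hermes does):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Illustrative constants: 8-byte heap alignment and 256 size-class buckets,
// so exact-fit buckets cover sizes below 256 << 3 = 2048 bytes.
constexpr uint32_t kLogHeapAlign = 3;
constexpr size_t kNumFreelistBuckets = 256;
constexpr size_t kMinSizeForLargeBlock = kNumFreelistBuckets << kLogHeapAlign;

// Map an allocation size (a multiple of 8) to its free-list bucket.
// Returns kNumFreelistBuckets as a sentinel meaning "use the general
// large-block free list".
size_t bucketFor(uint32_t sz) {
  if (sz >= kMinSizeForLargeBlock)
    return kNumFreelistBuckets;
  return sz >> kLogHeapAlign;
}
```

Popping the head of the selected bucket replaces the old linear first-fit scan, which is where the speedup on allocation-heavy benchmarks comes from.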

Reviewed By: davedets

Differential Revision: D21460843

fbshipit-source-id: 132aa700428089bb60f6a6273213fa73c4d81a1d
Riley Dulin authored and facebook-github-bot committed Jul 16, 2020
1 parent dea85c3 commit 467d5d1
Showing 3 changed files with 248 additions and 220 deletions.
4 changes: 4 additions & 0 deletions include/hermes/VM/AlignedHeapSegment.h
@@ -256,6 +256,10 @@ class AlignedHeapSegment {
  const MarkBitArrayNC &cellHeads() const {
    return contents()->startOfCells_;
  }

  static MarkBitArrayNC &cellHeadsCovering(const void *ptr) {
    return contents(AlignedStorage::start(ptr))->startOfCells_;
  }
#endif

  explicit inline operator bool() const;
108 changes: 107 additions & 1 deletion include/hermes/VM/HadesGC.h
@@ -187,11 +187,57 @@ class HadesGC final : public GCBase {
  /// \}
#endif

  class HeapSegment;
  class CollectionSection;
  class EvacAcceptor;
  class MarkAcceptor;
  class MarkWeakRootsAcceptor;
  class OldGen;

  /// Similar to AlignedHeapSegment except it uses a free list.
  class HeapSegment final : public AlignedHeapSegment {
   public:
    explicit HeapSegment(AlignedStorage &&storage);
    ~HeapSegment() = default;

    /// Allocate space by bumping a level.
    /// \pre isBumpAllocMode() must be true.
    AllocResult bumpAlloc(uint32_t sz);

    /// YG has a much simpler alloc path, which shortcuts some steps the
    /// normal \p alloc takes.
    AllocResult youngGenBumpAlloc(uint32_t sz);

    /// Record the head of this cell so it can be found by the card scanner.
    static void setCellHead(const GCCell *cell);

    /// For a given address, find the head of the cell.
    /// \return A cell such that cell <= address < cell->nextCell().
    GCCell *getCellHead(const void *address);

    /// Call \p callback on every cell allocated in this segment.
    /// NOTE: Overridden to skip free list entries.
    template <typename CallbackFunction>
    void forAllObjs(CallbackFunction callback);
    template <typename CallbackFunction>
    void forAllObjs(CallbackFunction callback) const;

    bool isBumpAllocMode() const {
      return bumpAllocMode_;
    }

    /// Transitions this segment from bump-alloc mode to freelist mode.
    /// Can only be called once, when the segment is in bump-alloc mode. There
    /// is no transitioning from freelist mode back to bump-alloc mode.
    void transitionToFreelist(OldGen &og);

   private:
    /// If true, then allocations into this segment increment a level inside
    /// the segment. Once the level reaches the end of the segment, no more
    /// allocations can occur.
    /// All segments begin in bumpAllocMode. If an OG segment has this mode
    /// set, and sweeping frees an object, this mode will be unset.
    bool bumpAllocMode_{true};
  };

  class OldGen final {
   public:
@@ -218,18 +264,78 @@
    /// \post This function either successfully allocates, or reports OOM.
    GCCell *alloc(uint32_t sz);

    /// Adds the given cell to the free list for this segment.
    /// \pre this->contains(cell) is true.
    void addCellToFreelist(GCCell *cell);

    /// Version of addCellToFreelist when nothing is initialized at the
    /// address yet.
    /// \param alreadyFree If true, this location is not currently allocated.
    void addCellToFreelist(void *addr, uint32_t sz, bool alreadyFree);

    /// Transitions the given segment from bump-alloc mode to freelist mode.
    /// Can only be called once, when the segment is in bump-alloc mode. There
    /// is no transitioning from freelist mode back to bump-alloc mode.
    void transitionToFreelist(HeapSegment &seg);

    /// \return the total number of bytes that are in use by the OG section
    /// of the JS heap.
    uint64_t allocatedBytes() const;

    class FreelistCell final : public VariableSizeRuntimeCell {
     private:
      static const VTable vt;

     public:
      /// If null, this is the tail of the free list.
      FreelistCell *next_;

      explicit FreelistCell(uint32_t sz, FreelistCell *next)
          : VariableSizeRuntimeCell{&vt, sz}, next_{next} {}

      /// Split this cell into two FreelistCells. The first cell will be the
      /// requested size \p sz, but no guarantee is made about its next
      /// pointer. The second cell will have the remainder that was left from
      /// the original, and will be on the free list.
      /// \param og The OldGen that this FreelistCell resides in.
      /// \param sz The size that the newly-split cell should be.
      /// \pre getAllocatedSize() >= sz + minAllocationSize()
      /// \post this will now point to the first cell, but without modifying
      ///   this. this should no longer be used as a FreelistCell, and
      ///   something else should be constructed into it immediately.
      void split(OldGen &og, uint32_t sz);

      static bool classof(const GCCell *cell) {
        return cell->getKind() == CellKind::FreelistKind;
      }
    };
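The split operation documented on FreelistCell can be modeled with a toy layout: carve the requested prefix off a free block and leave the remainder as a new, independent free cell. Everything here (ToyFreeCell, kMinAllocationSize, the raw-byte layout) is illustrative, not Hermes's actual GCCell representation:

```cpp
#include <cassert>
#include <cstdint>

// Toy stand-in for FreelistCell: a size header plus an intrusive next link.
struct ToyFreeCell {
  uint32_t size;
  ToyFreeCell *next;
};

// Smallest block that can stand alone on a free list (assumed value).
constexpr uint32_t kMinAllocationSize = 16;

// Carve the first `sz` bytes off `cell` for reuse; the remainder becomes an
// independent free cell starting `sz` bytes in, which the caller would push
// back onto the appropriate free list.
ToyFreeCell *split(ToyFreeCell *cell, uint32_t sz) {
  assert(cell->size >= sz + kMinAllocationSize &&
         "remainder must be a valid cell");
  auto *rest =
      reinterpret_cast<ToyFreeCell *>(reinterpret_cast<uint8_t *>(cell) + sz);
  rest->size = cell->size - sz;
  rest->next = nullptr;
  cell->size = sz; // the first part is handed to the allocator
  return rest;
}
```

This mirrors the \pre above: a block is only split when the leftover piece is itself large enough to be a valid free cell; otherwise the whole block would be handed out as-is.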

   private:
    HadesGC *gc_;
    std::vector<std::unique_ptr<HeapSegment>> segments_;

    /// This is the sum of all bytes currently allocated in the heap,
    /// excluding bump-allocated segments. Use \c allocatedBytes() to include
    /// bump-allocated segments.
    uint64_t allocatedBytes_{0};

    /// There is one bucket for each size, in multiples of heapAlign.
    static constexpr size_t kNumFreelistBuckets = 256;
    static constexpr size_t kMinSizeForLargeBlock = kNumFreelistBuckets
        << LogHeapAlign;
    std::array<FreelistCell *, kNumFreelistBuckets> freelistBuckets_{};
    FreelistCell *largeBlockFreelistHead_ = nullptr;

    /// Searches the OG for a space to allocate memory into.
    /// \return A pointer to uninitialized memory that can be written into,
    ///   null if no such space exists.
    GCCell *search(uint32_t sz);

    /// Common path for when an allocation has succeeded.
    /// \param cell The free memory that will soon have an object allocated
    ///   into it.
    /// \param sz The number of bytes associated with the free memory.
    GCCell *finishAlloc(FreelistCell *cell, uint32_t sz);
  };
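Putting the declarations together, the allocation path presumably pops from the exact-size bucket in O(1) and falls back to a first-fit walk of the general large-block list. A simplified model of that search (ToyOldGen, its fallback order, and the omission of splitting are assumptions, not the Hermes implementation):

```cpp
#include <array>
#include <cassert>
#include <cstddef>
#include <cstdint>

struct Cell {
  uint32_t size;
  Cell *next;
};

constexpr uint32_t kLogAlign = 3; // 8-byte heap alignment
constexpr size_t kBuckets = 256;

// Toy model of OldGen's bucketed free lists.
struct ToyOldGen {
  std::array<Cell *, kBuckets> buckets{};
  Cell *large = nullptr; // general list for sizes >= kBuckets << kLogAlign

  void addFree(Cell *c) {
    size_t i = c->size >> kLogAlign;
    if (i < kBuckets) {
      c->next = buckets[i];
      buckets[i] = c;
    } else {
      c->next = large;
      large = c;
    }
  }

  // Exact-fit pop is O(1); the general list is scanned first-fit.
  // (Splitting of an oversized block is omitted for brevity.)
  Cell *search(uint32_t sz) {
    size_t i = sz >> kLogAlign;
    if (i < kBuckets && buckets[i]) {
      Cell *c = buckets[i];
      buckets[i] = c->next;
      return c;
    }
    for (Cell **p = &large; *p; p = &(*p)->next) {
      if ((*p)->size >= sz) {
        Cell *c = *p;
        *p = c->next;
        return c;
      }
    }
    return nullptr;
  }
};
```

The common case never scans a list at all, which is the contrast with the old first-fit algorithm that motivated this diff.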

 private: