Revert r314806 "[SLP] Vectorize jumbled memory loads."
All the buildbots are red, e.g.
http://lab.llvm.org:8011/builders/clang-cmake-aarch64-lld/builds/2436/

> Summary:
> This patch tries to vectorize loads of consecutive memory accesses that are
> accessed in a non-consecutive or jumbled way. An earlier attempt was made with
> patch D26905, which was reverted due to a basic issue with representing the
> 'use mask' of jumbled accesses.
>
> This patch fixes the mask representation by recording the 'use mask' in the usertree entry.
>
> Change-Id: I9fe7f5045f065d84c126fa307ef6ebe0787296df
>
> Reviewers: mkuper, loladiro, Ayal, zvi, danielcdh
>
> Reviewed By: Ayal
>
> Subscribers: hans, mzolotukhin
>
> Differential Revision: https://reviews.llvm.org/D36130

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@314824 91177308-0d34-0410-b5e6-96231b3b80d8
zmodem committed Oct 3, 2017
1 parent c67053f commit 771c4b9
Showing 7 changed files with 136 additions and 373 deletions.
15 changes: 0 additions & 15 deletions include/llvm/Analysis/LoopAccessAnalysis.h
@@ -667,21 +667,6 @@ int64_t getPtrStride(PredicatedScalarEvolution &PSE, Value *Ptr, const Loop *Lp,
const ValueToValueMap &StridesMap = ValueToValueMap(),
bool Assume = false, bool ShouldCheckWrap = true);

/// \brief Attempt to sort the 'loads' in \p VL and return the sorted values in
/// \p Sorted.
///
/// Returns 'false' if sorting is not legal or feasible; otherwise returns
/// 'true'. If \p Mask is not null, it also returns \p Mask, the shuffle mask
/// for the actual memory access order.
///
/// For example, given a VL of memory accesses in program order (a[i+2],
/// a[i+0], a[i+1], a[i+3]), this function sorts VL, saves the sorted values
/// in 'Sorted' as a[i+0], a[i+1], a[i+2], a[i+3], and saves the mask for the
/// accesses in program order in 'Mask' as <2,0,1,3>.
bool sortLoadAccesses(ArrayRef<Value *> VL, const DataLayout &DL,
ScalarEvolution &SE, SmallVectorImpl<Value *> &Sorted,
SmallVectorImpl<unsigned> *Mask = nullptr);

/// \brief Returns true if the memory operations \p A and \p B are consecutive.
/// This is a simple API that does not depend on the analysis pass.
bool isConsecutiveAccess(Value *A, Value *B, const DataLayout &DL,
71 changes: 0 additions & 71 deletions lib/Analysis/LoopAccessAnalysis.cpp
@@ -1107,77 +1107,6 @@ static unsigned getAddressSpaceOperand(Value *I) {
return -1;
}

// TODO: This API can be improved by using the permutation of given width as the
// accesses are entered into the map.
bool llvm::sortLoadAccesses(ArrayRef<Value *> VL, const DataLayout &DL,
ScalarEvolution &SE,
SmallVectorImpl<Value *> &Sorted,
SmallVectorImpl<unsigned> *Mask) {
SmallVector<std::pair<int64_t, Value *>, 4> OffValPairs;
OffValPairs.reserve(VL.size());
Sorted.reserve(VL.size());

// Walk over the pointers, and map each of them to an offset relative to
// first pointer in the array.
Value *Ptr0 = getPointerOperand(VL[0]);
const SCEV *Scev0 = SE.getSCEV(Ptr0);
Value *Obj0 = GetUnderlyingObject(Ptr0, DL);
PointerType *PtrTy = dyn_cast<PointerType>(Ptr0->getType());
uint64_t Size = DL.getTypeAllocSize(PtrTy->getElementType());

for (auto *Val : VL) {
// The only kind of access we care about here is load.
if (!isa<LoadInst>(Val))
return false;

Value *Ptr = getPointerOperand(Val);
assert(Ptr && "Expected value to have a pointer operand.");
// If a pointer refers to a different underlying object, bail - the
// pointers are by definition incomparable.
Value *CurrObj = GetUnderlyingObject(Ptr, DL);
if (CurrObj != Obj0)
return false;

const SCEVConstant *Diff =
dyn_cast<SCEVConstant>(SE.getMinusSCEV(SE.getSCEV(Ptr), Scev0));
// The pointers may not have a constant offset from each other, or SCEV
// may just not be smart enough to figure out they do. Regardless,
// there's nothing we can do.
if (!Diff || static_cast<unsigned>(Diff->getAPInt().abs().getSExtValue()) >
(VL.size() - 1) * Size)
return false;

OffValPairs.emplace_back(Diff->getAPInt().getSExtValue(), Val);
}
SmallVector<unsigned, 4> UseOrder(VL.size());
for (unsigned i = 0; i < VL.size(); i++) {
UseOrder[i] = i;
}

// Sort the memory accesses and keep the order of their uses in UseOrder.
std::sort(UseOrder.begin(), UseOrder.end(),
[&OffValPairs](unsigned Left, unsigned Right) {
return OffValPairs[Left].first < OffValPairs[Right].first;
});

for (unsigned i = 0; i < VL.size(); i++)
Sorted.emplace_back(OffValPairs[UseOrder[i]].second);

// Sort UseOrder to compute the Mask.
if (Mask) {
Mask->reserve(VL.size());
for (unsigned i = 0; i < VL.size(); i++)
Mask->emplace_back(i);
std::sort(Mask->begin(), Mask->end(),
[&UseOrder](unsigned Left, unsigned Right) {
return UseOrder[Left] < UseOrder[Right];
});
}

return true;
}


/// Returns true if the memory operations \p A and \p B are consecutive.
bool llvm::isConsecutiveAccess(Value *A, Value *B, const DataLayout &DL,
ScalarEvolution &SE, bool CheckType) {
Expand Down
