Merge tag 'xfs-5.15-merge-6' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux

Pull xfs updates from Darrick Wong:
 "There's a lot in this cycle.

  Starting with bug fixes: To avoid livelocks between the logging code
  and the quota code, we've disabled the ability of quotaoff to turn off
  quota accounting. (Admins can still disable quota enforcement, but
  truly turning off accounting requires a remount.) We've tried to do
  this in a careful enough way that there shouldn't be any user-visible
  effects aside from quotaoff no longer randomly hanging the system.

  We've also fixed some bugs in runtime log behavior that could trip up
  log recovery if (otherwise unrelated) transactions manage to start and
  commit concurrently; some bugs in the GETFSMAP ioctl where we would
  incorrectly restrict the range of records output if the two xfs
  devices are of different sizes; a bug that resulted in fallocate
  funshare failing unnecessarily; and broken behavior in the xfs inode
  cache when DONTCACHE is in play.

  As for new features: we now batch inode inactivations in percpu
  background threads, which sharply decreases frontend thread wait time
  when performing file deletions and should improve overall directory
  tree deletion times. This eliminates the problem where closing an
  unlinked file (especially on a frozen fs) can stall for a long time,
  and it should also ease complaints about direct reclaim bogging down
  on unlinked file cleanup.
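
A minimal sketch of the shape of this mechanism, assuming simplified stand-in names (my_inode, inodegc_*) rather than the exact symbols this series adds: frontend threads push unlinked inodes onto a lock-free per-CPU list, and a background worker drains it in batches.

#include <linux/kernel.h>
#include <linux/llist.h>
#include <linux/percpu.h>
#include <linux/workqueue.h>

struct my_inode {				/* hypothetical inode */
	struct llist_node	i_gclist;
	/* ... */
};

struct inodegc_percpu {
	struct llist_head	list;	/* lock-free list of queued inodes */
	struct work_struct	work;	/* drains the list in the background */
};

static DEFINE_PER_CPU(struct inodegc_percpu, inodegc);

/* Hypothetical teardown: free blocks, remove the inode, etc. */
static void my_inode_inactivate(struct my_inode *ip) { }

/* Backend: drain whatever has accumulated on this CPU's list. */
static void inodegc_worker(struct work_struct *work)
{
	struct inodegc_percpu *gc =
		container_of(work, struct inodegc_percpu, work);
	struct my_inode *ip, *next;

	llist_for_each_entry_safe(ip, next, llist_del_all(&gc->list), i_gclist)
		my_inode_inactivate(ip);
}

/* Frontend: called at last-close time instead of inactivating inline. */
static void inodegc_queue(struct my_inode *ip)
{
	struct inodegc_percpu *gc = get_cpu_ptr(&inodegc);

	llist_add(&ip->i_gclist, &gc->list);
	queue_work(system_unbound_wq, &gc->work);  /* INIT_WORK()ed at mount */
	put_cpu_ptr(&inodegc);
}

Closing an unlinked file then costs one list push and a workqueue kick; the heavyweight teardown runs later, batched on the CPU that queued it.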

  Starting with this release, we've enabled pipelining of the XFS log.
  On workloads with high rates of metadata updates to different shards
  of the filesystem, multiple threads can be used to format committed
  log updates into log checkpoints.
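
A hedged sketch of what pipelined checkpoint formatting can look like in general, using an unbound workqueue so several checkpoints may be formatted concurrently. The names here (ckpt, format_checkpoint, ckpt_wq) are illustrative, not the actual xlog/CIL code:

#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct ckpt {				/* hypothetical committed-items batch */
	struct list_head	items;
};

struct ckpt_work {
	struct work_struct	work;
	struct ckpt		*ckpt;
};

/* Created elsewhere with alloc_workqueue("ckpt", WQ_UNBOUND, 0). */
static struct workqueue_struct *ckpt_wq;

/* Hypothetical: turn the batch into log vectors and write them out. */
static void format_checkpoint(struct ckpt *ckpt) { }

static void ckpt_format_worker(struct work_struct *work)
{
	struct ckpt_work *cw = container_of(work, struct ckpt_work, work);

	format_checkpoint(cw->ckpt);
	kfree(cw);
}

/* Each committed batch gets its own work item, so formatting of one
 * checkpoint overlaps with filling and formatting of the next. */
static void ckpt_push(struct ckpt *ckpt)
{
	struct ckpt_work *cw = kmalloc(sizeof(*cw), GFP_NOFS);

	if (!cw)
		return;			/* a real log would format inline here */
	cw->ckpt = ckpt;
	INIT_WORK(&cw->work, ckpt_format_worker);
	queue_work(ckpt_wq, &cw->work);
}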

  Lastly, with this release, two new features have graduated to
  supported status: inode btree counters (for faster mounts), and
  support for dates beyond Y2038. Expect these to be enabled by default
  in a future release of xfsprogs.

  Summary:

   - Fix a potential log livelock on busy filesystems, where there can be
     so much work going on that a quotaoff cannot finish before the log
     fills up, by removing the ability to disable quota accounting.

   - Introduce the ability to use per-CPU data structures in XFS so that
     we can do a better job of maintaining CPU locality for certain
     operations.

   - Defer inode inactivation work to per-CPU lists, which will help us
     batch that processing. Deletions of large sparse files will
     *appear* to run faster, but all that means is that we've moved the
     work to the backend.

   - Drop the EXPERIMENTAL warnings from the y2038+ support and the
     inode btree counters, since it's been nearly a year and no
     complaints have come in.

   - Remove more of our bespoke kmem* variants in favor of using the
     standard Linux calls.

   - Prepare for the addition of log incompat features in upcoming
     cycles by actually adding code to support this.

   - Small cleanups of the xattr code in preparation for landing support
     for full logging of extended attribute updates in a future cycle.

   - Replace the various log shutdown state and flag code all over xfs
     with a single atomic bit flag.

   - Fix a serious log recovery bug where log item replay could be
     skipped based on the start lsn of a transaction even though the
     commit lsn is the key data point, by enforcing that start lsns
     appear in the log in the same order as commit lsns.

   - Enable pipelining in the code that pushes log items to disk.

   - Drop ->writepage.

   - Fix some bugs in GETFSMAP where the last fsmap record reported for
     a device could extend beyond the end of the device, and a separate
     bug where query keys for one device could be applied to another.

   - Don't let GETFSMAP query functions edit their input parameters.

   - Small cleanups to the scrub code's handling of perag structures.

   - Small cleanups to the incore inode tree walk code.

   - Constify btree function parameters that aren't changed, so that
     there will never again be confusion about range query functions
     changing their input parameters.

   - Standardize the format and names of tracepoint data attributes.

   - Clean up all the mount state and feature flags to use wrapped
     bitset functions instead of inconsistently open-coded flag checks
     (see the feature-flag sketch after this summary).

   - Fix some confusion between the xfs_buf hash table key variable and
     the block number.

   - Fix a mis-interaction with iomap where we reported shared delalloc
     cow fork extents to iomap, which would cause the iomap unshare
     operation to return IO errors unnecessarily.

   - Fix DONTCACHE behavior"
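
The flag cleanup noted in the summary is the pattern behind many hunks in the diff below, where xfs_sb_version_has*(&mp->m_sb) checks become xfs_has_*(mp) predicates. A minimal sketch of that pattern; m_features and the xfs_has_*() naming match the series, but the struct layout and bit values here are illustrative:

#include <linux/types.h>

#define XFS_FEAT_CRC		(1ULL << 0)	/* illustrative bit assignments */
#define XFS_FEAT_REFLINK	(1ULL << 1)

struct xfs_mount_sketch {
	uint64_t	m_features;	/* populated from the superblock at mount */
};

static inline bool xfs_has_crc(struct xfs_mount_sketch *mp)
{
	return (mp->m_features & XFS_FEAT_CRC) != 0;
}

static inline bool xfs_has_reflink(struct xfs_mount_sketch *mp)
{
	return (mp->m_features & XFS_FEAT_REFLINK) != 0;
}

Since the feature set cannot change after mount, call sites no longer reach into the superblock, and every check reads uniformly.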

* tag 'xfs-5.15-merge-6' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux: (103 commits)
  xfs: fix I_DONTCACHE
  xfs: only set IOMAP_F_SHARED when providing a srcmap to a write
  xfs: fix perag structure refcounting error when scrub fails
  xfs: rename buffer cache index variable b_bn
  xfs: convert bp->b_bn references to xfs_buf_daddr()
  xfs: introduce xfs_buf_daddr()
  xfs: kill xfs_sb_version_has_v3inode()
  xfs: introduce xfs_sb_is_v5 helper
  xfs: remove unused xfs_sb_version_has wrappers
  xfs: convert xfs_sb_version_has checks to use mount features
  xfs: convert scrub to use mount-based feature checks
  xfs: open code sb verifier feature checks
  xfs: convert xfs_fs_geometry to use mount feature checks
  xfs: replace XFS_FORCED_SHUTDOWN with xfs_is_shutdown
  xfs: convert remaining mount flags to state flags
  xfs: convert mount flags to features
  xfs: consolidate mount option features in m_features
  xfs: replace xfs_sb_version checks with feature flag checks
  xfs: reflect sb features in xfs_mount
  xfs: rework attr2 feature and mount options
  ...
torvalds committed Sep 2, 2021
2 parents 4ac6d90 + f38a032 commit 90c90cd
Showing 153 changed files with 4,082 additions and 3,201 deletions.
64 changes: 0 additions & 64 deletions fs/xfs/kmem.c
@@ -29,67 +29,3 @@ kmem_alloc(size_t size, xfs_km_flags_t flags)
 		congestion_wait(BLK_RW_ASYNC, HZ/50);
 	} while (1);
 }
-
-
-/*
- * __vmalloc() will allocate data pages and auxiliary structures (e.g.
- * pagetables) with GFP_KERNEL, yet we may be under GFP_NOFS context here. Hence
- * we need to tell memory reclaim that we are in such a context via
- * PF_MEMALLOC_NOFS to prevent memory reclaim re-entering the filesystem here
- * and potentially deadlocking.
- */
-static void *
-__kmem_vmalloc(size_t size, xfs_km_flags_t flags)
-{
-	unsigned nofs_flag = 0;
-	void	*ptr;
-	gfp_t	lflags = kmem_flags_convert(flags);
-
-	if (flags & KM_NOFS)
-		nofs_flag = memalloc_nofs_save();
-
-	ptr = __vmalloc(size, lflags);
-
-	if (flags & KM_NOFS)
-		memalloc_nofs_restore(nofs_flag);
-
-	return ptr;
-}
-
-/*
- * Same as kmem_alloc_large, except we guarantee the buffer returned is aligned
- * to the @align_mask. We only guarantee alignment up to page size, we'll clamp
- * alignment at page size if it is larger. vmalloc always returns a PAGE_SIZE
- * aligned region.
- */
-void *
-kmem_alloc_io(size_t size, int align_mask, xfs_km_flags_t flags)
-{
-	void	*ptr;
-
-	trace_kmem_alloc_io(size, flags, _RET_IP_);
-
-	if (WARN_ON_ONCE(align_mask >= PAGE_SIZE))
-		align_mask = PAGE_SIZE - 1;
-
-	ptr = kmem_alloc(size, flags | KM_MAYFAIL);
-	if (ptr) {
-		if (!((uintptr_t)ptr & align_mask))
-			return ptr;
-		kfree(ptr);
-	}
-	return __kmem_vmalloc(size, flags);
-}
-
-void *
-kmem_alloc_large(size_t size, xfs_km_flags_t flags)
-{
-	void	*ptr;
-
-	trace_kmem_alloc_large(size, flags, _RET_IP_);
-
-	ptr = kmem_alloc(size, flags | KM_MAYFAIL);
-	if (ptr)
-		return ptr;
-	return __kmem_vmalloc(size, flags);
-}
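
The deleted __kmem_vmalloc() above wrapped a pattern that remains available directly from stock kernel APIs; here is a minimal standalone sketch of that memalloc_nofs_save() idiom (generic kernel code, not part of this diff):

#include <linux/gfp.h>
#include <linux/sched/mm.h>
#include <linux/vmalloc.h>

static void *nofs_vmalloc(size_t size)
{
	unsigned int nofs_flag;
	void *ptr;

	/* Mark this task as being in a GFP_NOFS context so that reclaim
	 * triggered inside __vmalloc() cannot re-enter the filesystem. */
	nofs_flag = memalloc_nofs_save();
	ptr = __vmalloc(size, GFP_KERNEL);
	memalloc_nofs_restore(nofs_flag);

	return ptr;
}
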
2 changes: 0 additions & 2 deletions fs/xfs/kmem.h
@@ -57,8 +57,6 @@ kmem_flags_convert(xfs_km_flags_t flags)
 }
 
 extern void *kmem_alloc(size_t, xfs_km_flags_t);
-extern void *kmem_alloc_io(size_t size, int align_mask, xfs_km_flags_t flags);
-extern void *kmem_alloc_large(size_t size, xfs_km_flags_t);
 static inline void kmem_free(const void *ptr)
 {
 	kvfree(ptr);
25 changes: 12 additions & 13 deletions fs/xfs/libxfs/xfs_ag.c
@@ -313,7 +313,6 @@ xfs_get_aghdr_buf(
 	if (error)
 		return error;
 
-	bp->b_bn = blkno;
 	bp->b_maps[0].bm_bn = blkno;
 	bp->b_ops = ops;
 
@@ -469,7 +468,7 @@ xfs_rmaproot_init(
 	rrec->rm_offset = 0;
 
 	/* account for refc btree root */
-	if (xfs_sb_version_hasreflink(&mp->m_sb)) {
+	if (xfs_has_reflink(mp)) {
 		rrec = XFS_RMAP_REC_ADDR(block, 5);
 		rrec->rm_startblock = cpu_to_be32(xfs_refc_block(mp));
 		rrec->rm_blockcount = cpu_to_be32(1);
@@ -528,7 +527,7 @@ xfs_agfblock_init(
 	agf->agf_roots[XFS_BTNUM_CNTi] = cpu_to_be32(XFS_CNT_BLOCK(mp));
 	agf->agf_levels[XFS_BTNUM_BNOi] = cpu_to_be32(1);
 	agf->agf_levels[XFS_BTNUM_CNTi] = cpu_to_be32(1);
-	if (xfs_sb_version_hasrmapbt(&mp->m_sb)) {
+	if (xfs_has_rmapbt(mp)) {
 		agf->agf_roots[XFS_BTNUM_RMAPi] =
 			cpu_to_be32(XFS_RMAP_BLOCK(mp));
 		agf->agf_levels[XFS_BTNUM_RMAPi] = cpu_to_be32(1);
@@ -541,9 +540,9 @@ xfs_agfblock_init(
 	tmpsize = id->agsize - mp->m_ag_prealloc_blocks;
 	agf->agf_freeblks = cpu_to_be32(tmpsize);
 	agf->agf_longest = cpu_to_be32(tmpsize);
-	if (xfs_sb_version_hascrc(&mp->m_sb))
+	if (xfs_has_crc(mp))
 		uuid_copy(&agf->agf_uuid, &mp->m_sb.sb_meta_uuid);
-	if (xfs_sb_version_hasreflink(&mp->m_sb)) {
+	if (xfs_has_reflink(mp)) {
 		agf->agf_refcount_root = cpu_to_be32(
 				xfs_refc_block(mp));
 		agf->agf_refcount_level = cpu_to_be32(1);
@@ -569,7 +568,7 @@ xfs_agflblock_init(
 	__be32			*agfl_bno;
 	int			bucket;
 
-	if (xfs_sb_version_hascrc(&mp->m_sb)) {
+	if (xfs_has_crc(mp)) {
 		agfl->agfl_magicnum = cpu_to_be32(XFS_AGFL_MAGIC);
 		agfl->agfl_seqno = cpu_to_be32(id->agno);
 		uuid_copy(&agfl->agfl_uuid, &mp->m_sb.sb_meta_uuid);
@@ -599,17 +598,17 @@ xfs_agiblock_init(
 	agi->agi_freecount = 0;
 	agi->agi_newino = cpu_to_be32(NULLAGINO);
 	agi->agi_dirino = cpu_to_be32(NULLAGINO);
-	if (xfs_sb_version_hascrc(&mp->m_sb))
+	if (xfs_has_crc(mp))
 		uuid_copy(&agi->agi_uuid, &mp->m_sb.sb_meta_uuid);
-	if (xfs_sb_version_hasfinobt(&mp->m_sb)) {
+	if (xfs_has_finobt(mp)) {
 		agi->agi_free_root = cpu_to_be32(XFS_FIBT_BLOCK(mp));
 		agi->agi_free_level = cpu_to_be32(1);
 	}
 	for (bucket = 0; bucket < XFS_AGI_UNLINKED_BUCKETS; bucket++)
 		agi->agi_unlinked[bucket] = cpu_to_be32(NULLAGINO);
-	if (xfs_sb_version_hasinobtcounts(&mp->m_sb)) {
+	if (xfs_has_inobtcounts(mp)) {
 		agi->agi_iblocks = cpu_to_be32(1);
-		if (xfs_sb_version_hasfinobt(&mp->m_sb))
+		if (xfs_has_finobt(mp))
 			agi->agi_fblocks = cpu_to_be32(1);
 	}
 }
@@ -719,22 +718,22 @@ xfs_ag_init_headers(
 		.ops = &xfs_finobt_buf_ops,
 		.work = &xfs_btroot_init,
 		.type = XFS_BTNUM_FINO,
-		.need_init = xfs_sb_version_hasfinobt(&mp->m_sb)
+		.need_init = xfs_has_finobt(mp)
 	},
 	{ /* RMAP root block */
 		.daddr = XFS_AGB_TO_DADDR(mp, id->agno, XFS_RMAP_BLOCK(mp)),
 		.numblks = BTOBB(mp->m_sb.sb_blocksize),
 		.ops = &xfs_rmapbt_buf_ops,
 		.work = &xfs_rmaproot_init,
-		.need_init = xfs_sb_version_hasrmapbt(&mp->m_sb)
+		.need_init = xfs_has_rmapbt(mp)
 	},
 	{ /* REFC root block */
 		.daddr = XFS_AGB_TO_DADDR(mp, id->agno, xfs_refc_block(mp)),
 		.numblks = BTOBB(mp->m_sb.sb_blocksize),
 		.ops = &xfs_refcountbt_buf_ops,
 		.work = &xfs_btroot_init,
 		.type = XFS_BTNUM_REFC,
-		.need_init = xfs_sb_version_hasreflink(&mp->m_sb)
+		.need_init = xfs_has_reflink(mp)
 	},
 	{ /* NULL terminating block */
 		.daddr = XFS_BUF_DADDR_NULL,
56 changes: 28 additions & 28 deletions fs/xfs/libxfs/xfs_alloc.c
@@ -51,7 +51,7 @@ xfs_agfl_size(
 {
 	unsigned int		size = mp->m_sb.sb_sectsize;
 
-	if (xfs_sb_version_hascrc(&mp->m_sb))
+	if (xfs_has_crc(mp))
 		size -= sizeof(struct xfs_agfl);
 
 	return size / sizeof(xfs_agblock_t);
@@ -61,9 +61,9 @@ unsigned int
 xfs_refc_block(
 	struct xfs_mount	*mp)
 {
-	if (xfs_sb_version_hasrmapbt(&mp->m_sb))
+	if (xfs_has_rmapbt(mp))
 		return XFS_RMAP_BLOCK(mp) + 1;
-	if (xfs_sb_version_hasfinobt(&mp->m_sb))
+	if (xfs_has_finobt(mp))
 		return XFS_FIBT_BLOCK(mp) + 1;
 	return XFS_IBT_BLOCK(mp) + 1;
 }
@@ -72,11 +72,11 @@ xfs_extlen_t
 xfs_prealloc_blocks(
 	struct xfs_mount	*mp)
 {
-	if (xfs_sb_version_hasreflink(&mp->m_sb))
+	if (xfs_has_reflink(mp))
 		return xfs_refc_block(mp) + 1;
-	if (xfs_sb_version_hasrmapbt(&mp->m_sb))
+	if (xfs_has_rmapbt(mp))
 		return XFS_RMAP_BLOCK(mp) + 1;
-	if (xfs_sb_version_hasfinobt(&mp->m_sb))
+	if (xfs_has_finobt(mp))
 		return XFS_FIBT_BLOCK(mp) + 1;
 	return XFS_IBT_BLOCK(mp) + 1;
 }
@@ -126,11 +126,11 @@ xfs_alloc_ag_max_usable(
 	blocks = XFS_BB_TO_FSB(mp, XFS_FSS_TO_BB(mp, 4)); /* ag headers */
 	blocks += XFS_ALLOC_AGFL_RESERVE;
 	blocks += 3;			/* AGF, AGI btree root blocks */
-	if (xfs_sb_version_hasfinobt(&mp->m_sb))
+	if (xfs_has_finobt(mp))
 		blocks++;		/* finobt root block */
-	if (xfs_sb_version_hasrmapbt(&mp->m_sb))
+	if (xfs_has_rmapbt(mp))
 		blocks++;		/* rmap root block */
-	if (xfs_sb_version_hasreflink(&mp->m_sb))
+	if (xfs_has_reflink(mp))
 		blocks++;		/* refcount root block */
 
 	return mp->m_sb.sb_agblocks - blocks;
@@ -598,7 +598,7 @@ xfs_agfl_verify(
 	 * AGFL is what the AGF says is active. We can't get to the AGF, so we
	 * can't verify just those entries are valid.
	 */
-	if (!xfs_sb_version_hascrc(&mp->m_sb))
+	if (!xfs_has_crc(mp))
 		return NULL;
 
 	if (!xfs_verify_magic(bp, agfl->agfl_magicnum))
@@ -638,7 +638,7 @@ xfs_agfl_read_verify(
	 * AGFL is what the AGF says is active. We can't get to the AGF, so we
	 * can't verify just those entries are valid.
	 */
-	if (!xfs_sb_version_hascrc(&mp->m_sb))
+	if (!xfs_has_crc(mp))
 		return;
 
 	if (!xfs_buf_verify_cksum(bp, XFS_AGFL_CRC_OFF))
@@ -659,7 +659,7 @@ xfs_agfl_write_verify(
 	xfs_failaddr_t		fa;
 
 	/* no verification of non-crc AGFLs */
-	if (!xfs_sb_version_hascrc(&mp->m_sb))
+	if (!xfs_has_crc(mp))
 		return;
 
 	fa = xfs_agfl_verify(bp);
@@ -2264,7 +2264,7 @@ xfs_alloc_min_freelist(
 	min_free += min_t(unsigned int, levels[XFS_BTNUM_CNTi] + 1,
 				       mp->m_ag_maxlevels);
 	/* space needed reverse mapping used space btree */
-	if (xfs_sb_version_hasrmapbt(&mp->m_sb))
+	if (xfs_has_rmapbt(mp))
 		min_free += min_t(unsigned int, levels[XFS_BTNUM_RMAPi] + 1,
 						mp->m_rmap_maxlevels);
 
@@ -2373,7 +2373,7 @@ xfs_agfl_needs_reset(
 	int			active;
 
 	/* no agfl header on v4 supers */
-	if (!xfs_sb_version_hascrc(&mp->m_sb))
+	if (!xfs_has_crc(mp))
 		return false;
 
 	/*
@@ -2877,7 +2877,7 @@ xfs_agf_verify(
 	struct xfs_mount	*mp = bp->b_mount;
 	struct xfs_agf		*agf = bp->b_addr;
 
-	if (xfs_sb_version_hascrc(&mp->m_sb)) {
+	if (xfs_has_crc(mp)) {
 		if (!uuid_equal(&agf->agf_uuid, &mp->m_sb.sb_meta_uuid))
 			return __this_address;
 		if (!xfs_log_check_lsn(mp, be64_to_cpu(agf->agf_lsn)))
@@ -2907,12 +2907,12 @@ xfs_agf_verify(
 	    be32_to_cpu(agf->agf_levels[XFS_BTNUM_CNT]) > mp->m_ag_maxlevels)
 		return __this_address;
 
-	if (xfs_sb_version_hasrmapbt(&mp->m_sb) &&
+	if (xfs_has_rmapbt(mp) &&
 	    (be32_to_cpu(agf->agf_levels[XFS_BTNUM_RMAP]) < 1 ||
 	     be32_to_cpu(agf->agf_levels[XFS_BTNUM_RMAP]) > mp->m_rmap_maxlevels))
 		return __this_address;
 
-	if (xfs_sb_version_hasrmapbt(&mp->m_sb) &&
+	if (xfs_has_rmapbt(mp) &&
 	    be32_to_cpu(agf->agf_rmap_blocks) > be32_to_cpu(agf->agf_length))
 		return __this_address;
 
@@ -2925,16 +2925,16 @@ xfs_agf_verify(
 	if (bp->b_pag && be32_to_cpu(agf->agf_seqno) != bp->b_pag->pag_agno)
 		return __this_address;
 
-	if (xfs_sb_version_haslazysbcount(&mp->m_sb) &&
+	if (xfs_has_lazysbcount(mp) &&
 	    be32_to_cpu(agf->agf_btreeblks) > be32_to_cpu(agf->agf_length))
 		return __this_address;
 
-	if (xfs_sb_version_hasreflink(&mp->m_sb) &&
+	if (xfs_has_reflink(mp) &&
 	    be32_to_cpu(agf->agf_refcount_blocks) >
 	    be32_to_cpu(agf->agf_length))
 		return __this_address;
 
-	if (xfs_sb_version_hasreflink(&mp->m_sb) &&
+	if (xfs_has_reflink(mp) &&
 	    (be32_to_cpu(agf->agf_refcount_level) < 1 ||
 	     be32_to_cpu(agf->agf_refcount_level) > mp->m_refc_maxlevels))
 		return __this_address;
@@ -2950,7 +2950,7 @@ xfs_agf_read_verify(
 	struct xfs_mount	*mp = bp->b_mount;
 	xfs_failaddr_t		fa;
 
-	if (xfs_sb_version_hascrc(&mp->m_sb) &&
+	if (xfs_has_crc(mp) &&
 	    !xfs_buf_verify_cksum(bp, XFS_AGF_CRC_OFF))
 		xfs_verifier_error(bp, -EFSBADCRC, __this_address);
 	else {
@@ -2975,7 +2975,7 @@ xfs_agf_write_verify(
 		return;
 	}
 
-	if (!xfs_sb_version_hascrc(&mp->m_sb))
+	if (!xfs_has_crc(mp))
 		return;
 
 	if (bip)
@@ -3073,13 +3073,13 @@ xfs_alloc_read_agf(
 		 * counter only tracks non-root blocks.
 		 */
 		allocbt_blks = pag->pagf_btreeblks;
-		if (xfs_sb_version_hasrmapbt(&mp->m_sb))
+		if (xfs_has_rmapbt(mp))
 			allocbt_blks -= be32_to_cpu(agf->agf_rmap_blocks) - 1;
 		if (allocbt_blks > 0)
 			atomic64_add(allocbt_blks, &mp->m_allocbt_blks);
 	}
 #ifdef DEBUG
-	else if (!XFS_FORCED_SHUTDOWN(mp)) {
+	else if (!xfs_is_shutdown(mp)) {
 		ASSERT(pag->pagf_freeblks == be32_to_cpu(agf->agf_freeblks));
 		ASSERT(pag->pagf_btreeblks == be32_to_cpu(agf->agf_btreeblks));
 		ASSERT(pag->pagf_flcount == be32_to_cpu(agf->agf_flcount));
@@ -3166,7 +3166,7 @@ xfs_alloc_vextent(
 	 * the first a.g. fails.
 	 */
 	if ((args->datatype & XFS_ALLOC_INITIAL_USER_DATA) &&
-	    (mp->m_flags & XFS_MOUNT_32BITINODES)) {
+	    xfs_is_inode32(mp)) {
 		args->fsbno = XFS_AGB_TO_FSB(mp,
 				((mp->m_agfrotor / rotorstep) %
 				mp->m_sb.sb_agcount), 0);
@@ -3392,7 +3392,7 @@ struct xfs_alloc_query_range_info {
 STATIC int
 xfs_alloc_query_range_helper(
 	struct xfs_btree_cur		*cur,
-	union xfs_btree_rec		*rec,
+	const union xfs_btree_rec	*rec,
 	void				*priv)
 {
 	struct xfs_alloc_query_range_info	*query = priv;
@@ -3407,8 +3407,8 @@ xfs_alloc_query_range_helper(
 int
 xfs_alloc_query_range(
 	struct xfs_btree_cur		*cur,
-	struct xfs_alloc_rec_incore	*low_rec,
-	struct xfs_alloc_rec_incore	*high_rec,
+	const struct xfs_alloc_rec_incore *low_rec,
+	const struct xfs_alloc_rec_incore *high_rec,
 	xfs_alloc_query_range_fn	fn,
 	void				*priv)
 {
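
The XFS_FORCED_SHUTDOWN(mp) to xfs_is_shutdown(mp) change visible in this file reflects the summary item about collapsing the shutdown state into a single atomic bit flag. A simplified sketch of that pattern, assuming an illustrative struct (the real series keeps these bits in the xfs_mount operating state):

#include <linux/bitops.h>
#include <linux/types.h>

#define XFS_OPSTATE_SHUTDOWN	0	/* bit number; naming mirrors the series */

struct xfs_mount_sketch {
	unsigned long	m_opstate;	/* atomic operating-state bits */
};

static inline bool xfs_is_shutdown(struct xfs_mount_sketch *mp)
{
	return test_bit(XFS_OPSTATE_SHUTDOWN, &mp->m_opstate);
}

/* test_and_set_bit() reports the old value, so exactly one caller
 * observes the transition and performs one-time shutdown processing. */
static inline bool xfs_mount_set_shutdown(struct xfs_mount_sketch *mp)
{
	return test_and_set_bit(XFS_OPSTATE_SHUTDOWN, &mp->m_opstate);
}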

(The remaining 149 changed files are not shown.)