Merge tag 'docs-5.6' of git://git.lwn.net/linux
Pull documentation updates from Jonathan Corbet:
 "It has been a relatively quiet cycle for documentation, but there's
  still a couple of things of note:

   - Conversion of the NFS documentation to RST

   - A new document on how to help with documentation (and a maintainer
     profile entry too)

  Plus the usual collection of typo fixes, etc"

* tag 'docs-5.6' of git://git.lwn.net/linux: (40 commits)
  docs: filesystems: add overlayfs to index.rst
  docs: usb: remove some broken references
  scripts/find-unused-docs: Fix massive false positives
  docs: nvdimm: use ReST notation for subsection
  zram: correct documentation about sysfs node of huge page writeback
  Documentation: zram: various fixes in zram.rst
  Add a maintainer entry profile for documentation
  Add a document on how to contribute to the documentation
  docs: Keep up with the location of NoUri
  Documentation: Call out example SYM_FUNC_* usage as x86-specific
  Documentation: nfs: fault_injection: convert to ReST
  Documentation: nfs: pnfs-scsi-server: convert to ReST
  Documentation: nfs: convert pnfs-block-server to ReST
  Documentation: nfs: idmapper: convert to ReST
  Documentation: convert nfsd-admin-interfaces to ReST
  Documentation: nfs-rdma: convert to ReST
  Documentation: nfsroot.rst: COSMETIC: refill a paragraph
  Documentation: nfsroot.txt: convert to ReST
  Documentation: convert nfs.txt to ReST
  Documentation: filesystems: convert vfat.txt to RST
  ...
torvalds committed Jan 29, 2020
2 parents 08a3ef8 + 77ce1a4 commit 05ef8b9
Showing 44 changed files with 1,903 additions and 870 deletions.
63 changes: 32 additions & 31 deletions Documentation/admin-guide/blockdev/zram.rst
Original file line number Diff line number Diff line change
@@ -1,15 +1,15 @@
========================================
zram: Compressed RAM based block devices
zram: Compressed RAM-based block devices
========================================

Introduction
============

The zram module creates RAM based block devices named /dev/zram<id>
The zram module creates RAM-based block devices named /dev/zram<id>
(<id> = 0, 1, ...). Pages written to these disks are compressed and stored
in memory itself. These disks allow very fast I/O and compression provides
good amounts of memory savings. Some of the usecases include /tmp storage,
use as swap disks, various caches under /var and maybe many more :)
good amounts of memory savings. Some of the use cases include /tmp storage,
use as swap disks, various caches under /var and maybe many more. :)

Statistics for individual zram devices are exported through sysfs nodes at
/sys/block/zram<id>/
@@ -43,17 +43,17 @@ The list of possible return codes:

======== =============================================================
-EBUSY an attempt to modify an attribute that cannot be changed once
the device has been initialised. Please reset device first;
the device has been initialised. Please reset device first.
-ENOMEM zram was not able to allocate enough memory to fulfil your
needs;
needs.
-EINVAL invalid input has been provided.
======== =============================================================

If you use 'echo', the returned value that is changed by 'echo' utility,
If you use 'echo', the returned value is set by the 'echo' utility,
and, in the general case, something like::

echo 3 > /sys/block/zram0/max_comp_streams
if [ $? -ne 0 ];
if [ $? -ne 0 ]; then
handle_error
fi
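This status check can be wrapped in a small reusable helper. The sketch below is an editor's illustration, not part of the kernel documentation; the temporary file stands in for a real sysfs node, which would need root to write:

```shell
# Hypothetical helper mirroring the exit-status check above: write a
# value to an attribute file and report failure.
set_attr() {
    attr=$1
    value=$2
    if ! echo "$value" > "$attr"; then
        echo "failed to write '$value' to $attr" >&2
        return 1
    fi
}

# Exercise it against a stand-in file; a real call would target e.g.
# /sys/block/zram0/max_comp_streams.
tmp=$(mktemp)
set_attr "$tmp" 3 && echo "wrote: $(cat "$tmp")"
rm -f "$tmp"
```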

@@ -65,20 +65,21 @@ should suffice.
::

modprobe zram num_devices=4
This creates 4 devices: /dev/zram{0,1,2,3}

This creates 4 devices: /dev/zram{0,1,2,3}

The num_devices parameter is optional and tells zram how many devices should be
pre-created. Default: 1.

2) Set max number of compression streams
========================================

Regardless the value passed to this attribute, ZRAM will always
allocate multiple compression streams - one per online CPUs - thus
Regardless of the value passed to this attribute, ZRAM will always
allocate multiple compression streams - one per online CPU - thus
allowing several concurrent compression operations. The number of
allocated compression streams goes down when some of the CPUs
become offline. There is no single-compression-stream mode anymore,
unless you are running a UP system or has only 1 CPU online.
unless you are running a UP system or have only 1 CPU online.

To find out how many streams are currently available::

@@ -89,7 +90,7 @@ To find out how many streams are currently available::

Using the comp_algorithm device attribute one can see the available and
currently selected (shown in square brackets) compression algorithms,
change selected compression algorithm (once the device is initialised
or change the selected compression algorithm (once the device is initialised
there is no way to change the compression algorithm).

Examples::
@@ -167,9 +168,9 @@ Examples::
zram provides a control interface, which enables dynamic (on-demand) device
addition and removal.

In order to add a new /dev/zramX device, perform read operation on hot_add
attribute. This will return either new device's device id (meaning that you
can use /dev/zram<id>) or error code.
In order to add a new /dev/zramX device, perform a read operation on the hot_add
attribute. This will return either the new device's device id (meaning that you
can use /dev/zram<id>) or an error code.

Example::
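(The original example is collapsed in this diff view. As an editor's sketch, assuming the standard /sys/class/zram-control interface documented in this file, the round trip might look like:)

```shell
# Sketch (assumes root and the zram control interface): reading
# hot_add allocates a device and prints its id.
id=$(cat /sys/class/zram-control/hot_add) || exit 1
dev=/dev/zram$id
echo "created $dev"

# Remove it again by writing the id to hot_remove.
echo "$id" > /sys/class/zram-control/hot_remove
```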

@@ -186,8 +187,8 @@ execute::

Per-device statistics are exported as various nodes under /sys/block/zram<id>/

A brief description of exported device attributes. For more details please
read Documentation/ABI/testing/sysfs-block-zram.
A brief description of exported device attributes follows. For more details
please read Documentation/ABI/testing/sysfs-block-zram.

====================== ====== ===============================================
Name access description
@@ -245,7 +246,7 @@ whitespace:

File /sys/block/zram<id>/mm_stat

The stat file represents device's mm statistics. It consists of a single
The mm_stat file represents the device's mm statistics. It consists of a single
line of text and contains the following stats separated by whitespace:

================ =============================================================
@@ -261,7 +262,7 @@ line of text and contains the following stats separated by whitespace:
Unit: bytes
mem_limit the maximum amount of memory ZRAM can use to store
the compressed data
mem_used_max the maximum amount of memory zram have consumed to
mem_used_max the maximum amount of memory zram has consumed to
store the data
same_pages the number of same element filled pages written to this disk.
No memory is allocated for such pages.
@@ -271,7 +272,7 @@ line of text and contains the following stats separated by whitespace:

File /sys/block/zram<id>/bd_stat

The stat file represents device's backing device statistics. It consists of
The bd_stat file represents a device's backing device statistics. It consists of
a single line of text and contains the following stats separated by whitespace:

============== =============================================================
@@ -316,17 +317,17 @@ To use the feature, admin should set up backing device via::
echo /dev/sda5 > /sys/block/zramX/backing_dev

before setting disksize. Only a partition is supported at this moment.
If admin want to use incompressible page writeback, they could do via::
If admin wants to use incompressible page writeback, they could do so via::

echo huge > /sys/block/zramX/write
echo huge > /sys/block/zramX/writeback

To use idle page writeback, the user first needs to declare zram pages
as idle::

echo all > /sys/block/zramX/idle

From now on, any pages on zram are idle pages. The idle mark
will be removed until someone request access of the block.
will be removed when someone requests access to the block.
IOW, unless there is an access request, those pages remain idle pages.

Admin can request writeback of those idle pages at the right time via::
@@ -341,16 +342,16 @@ to guarantee storage health for entire product life.

To overcome the concern, zram supports "writeback_limit" feature.
The "writeback_limit_enable"'s default value is 0 so that it doesn't limit
any writeback. IOW, if admin want to apply writeback budget, he should
any writeback. IOW, if admin wants to apply writeback budget, he should
enable writeback_limit_enable via::

$ echo 1 > /sys/block/zramX/writeback_limit_enable

Once writeback_limit_enable is set, zram doesn't allow any writeback
until admin set the budget via /sys/block/zramX/writeback_limit.
until admin sets the budget via /sys/block/zramX/writeback_limit.

(If admin doesn't enable writeback_limit_enable, writeback_limit's value
assigned via /sys/block/zramX/writeback_limit is meaninless.)
assigned via /sys/block/zramX/writeback_limit is meaningless.)

If admin wants to limit writeback to 400M per day, he could do it
like below::
@@ -361,26 +362,26 @@ like below::
/sys/block/zram0/writeback_limit.
$ echo 1 > /sys/block/zram0/writeback_limit_enable
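The shifts in this idiom are symbolic; concretely, MB_SHIFT is 20 (1 MB = 2^20 bytes) and a 4K page is 2^12 bytes, so a 400 MB budget comes out to 102400 pages. An editor's sketch of the arithmetic:

```shell
# Convert a 400 MB budget into the 4K-page count that writeback_limit
# expects: (400 << 20) gives bytes, then >> 12 gives 4K pages.
mb_shift=20
page_shift=12
budget_mb=400
pages=$(( (budget_mb << mb_shift) >> page_shift ))
echo "$pages"   # 102400 pages of 4K
# echo "$pages" > /sys/block/zram0/writeback_limit   # needs root
```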

If admin want to allow further write again once the bugdet is exausted,
If admin wants to allow further writeback once the budget is exhausted,
he could do it like below::

$ echo $((400<<MB_SHIFT>>4K_SHIFT)) > \
/sys/block/zram0/writeback_limit

If admin want to see remaining writeback budget since he set::
If admin wants to see remaining writeback budget since last set::

$ cat /sys/block/zramX/writeback_limit

If admin wants to disable the writeback limit, he could do::

$ echo 0 > /sys/block/zramX/writeback_limit_enable

The writeback_limit count will reset whenever you reset zram(e.g.,
The writeback_limit count will reset whenever you reset zram (e.g.,
system reboot, echo 1 > /sys/block/zramX/reset), so it is the user's job
to track how much writeback happened before the reset and to allocate
extra writeback budget in the next setting.

If admin want to measure writeback count in a certain period, he could
If admin wants to measure writeback count in a certain period, he could
know it via /sys/block/zram0/bd_stat's 3rd column.
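Sampling that column twice and diffing gives the per-period count; below is an editor's sketch with invented sample values (real lines come from /sys/block/zram0/bd_stat, whose third column counts 4K pages written back):

```shell
# bd_stat layout: bd_count bd_reads bd_writes (all in 4K-page units).
# The two sample lines below are invented for illustration.
before="100 250 300"
after="100 250 340"
w0=$(echo "$before" | awk '{print $3}')
w1=$(echo "$after"  | awk '{print $3}')
echo "writeback pages in period: $(( w1 - w0 ))"
```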

memory tracking
1 change: 1 addition & 0 deletions Documentation/admin-guide/index.rst
@@ -76,6 +76,7 @@ configure specific aspects of kernel behavior to your liking.
device-mapper/index
efi-stub
ext4
nfs/index
gpio/index
highuid
hw_random
@@ -1,6 +1,7 @@
===================
NFS Fault Injection
===================

Fault Injection
===============
Fault injection is a method for forcing errors that may not normally occur, or
may be difficult to reproduce. Forcing these errors in a controlled environment
can help the developer find and fix bugs before their code is shipped in a
15 changes: 15 additions & 0 deletions Documentation/admin-guide/nfs/index.rst
@@ -0,0 +1,15 @@
=============
NFS
=============

.. toctree::
:maxdepth: 1

nfs-client
nfsroot
nfs-rdma
nfsd-admin-interfaces
nfs-idmapper
pnfs-block-server
pnfs-scsi-server
fault_injection
@@ -1,3 +1,6 @@
==========
NFS Client
==========

The NFS client
==============
@@ -59,10 +62,11 @@ The DNS resolver

NFSv4 allows for one server to refer the NFS client to data that has been
migrated onto another server by means of the special "fs_locations"
attribute. See
http://tools.ietf.org/html/rfc3530#section-6
and
http://tools.ietf.org/html/draft-ietf-nfsv4-referrals-00
attribute. See `RFC3530 Section 6: Filesystem Migration and Replication`_ and
`Implementation Guide for Referrals in NFSv4`_.

.. _RFC3530 Section 6\: Filesystem Migration and Replication: http://tools.ietf.org/html/rfc3530#section-6
.. _Implementation Guide for Referrals in NFSv4: http://tools.ietf.org/html/draft-ietf-nfsv4-referrals-00

The fs_locations information can take the form of either an ip address and
a path, or a DNS hostname and a path. The latter requires the NFS client to
@@ -78,8 +82,8 @@ Assuming that the user has the 'rpc_pipefs' filesystem mounted in the usual
(2) If no valid entry exists, the helper script '/sbin/nfs_cache_getent'
(may be changed using the 'nfs.cache_getent' kernel boot parameter)
is run, with two arguments:
- the cache name, "dns_resolve"
- the hostname to resolve
- the cache name, "dns_resolve"
- the hostname to resolve

(3) After looking up the corresponding ip address, the helper script
writes the result into the rpc_pipefs pseudo-file
@@ -94,43 +98,44 @@ Assuming that the user has the 'rpc_pipefs' filesystem mounted in the usual
script, and <ttl> is the 'time to live' of this cache entry (in
units of seconds).

Note: If <ip address> is invalid, say the string "0", then a negative
entry is created, which will cause the kernel to treat the hostname
as having no valid DNS translation.
.. note::
If <ip address> is invalid, say the string "0", then a negative
entry is created, which will cause the kernel to treat the hostname
as having no valid DNS translation.
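Composing the channel line is simple string assembly; an editor's sketch with invented values (the real write targets the rpc_pipefs cache channel for dns_resolve):

```shell
# Build the '<ip address> <hostname> <ttl>' line that the helper
# script writes into the cache channel.  Values are invented examples.
ip=192.168.0.10
name=server.example.com
ttl=600
line="$ip $name $ttl"
echo "$line"
```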




A basic sample /sbin/nfs_cache_getent
=====================================

#!/bin/bash
#
ttl=600
#
cut=/usr/bin/cut
getent=/usr/bin/getent
rpc_pipefs=/var/lib/nfs/rpc_pipefs
#
die()
{
echo "Usage: $0 cache_name entry_name"
exit 1
}

[ $# -lt 2 ] && die
cachename="$1"
cache_path=${rpc_pipefs}/cache/${cachename}/channel

case "${cachename}" in
dns_resolve)
name="$2"
result="$(${getent} hosts ${name} | ${cut} -f1 -d\ )"
[ -z "${result}" ] && result="0"
;;
*)
die
;;
esac
echo "${result} ${name} ${ttl}" >${cache_path}

.. code-block:: sh
#!/bin/bash
#
ttl=600
#
cut=/usr/bin/cut
getent=/usr/bin/getent
rpc_pipefs=/var/lib/nfs/rpc_pipefs
#
die()
{
echo "Usage: $0 cache_name entry_name"
exit 1
}
[ $# -lt 2 ] && die
cachename="$1"
cache_path=${rpc_pipefs}/cache/${cachename}/channel
case "${cachename}" in
dns_resolve)
name="$2"
result="$(${getent} hosts ${name} | ${cut} -f1 -d\ )"
[ -z "${result}" ] && result="0"
;;
*)
die
;;
esac
echo "${result} ${name} ${ttl}" >${cache_path}
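The script's address extraction can be exercised in isolation; an editor's sketch on a canned getent-style line (the sample is invented):

```shell
# Mimic result="$(getent hosts name | cut -f1 -d' ')" from the script
# above, on a fixed sample line instead of a live getent lookup.
line="192.168.0.10    server.example.com"
result=$(echo "$line" | cut -f1 -d' ')
[ -z "$result" ] && result=0
echo "$result"   # first whitespace-separated field: the address
```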
