Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux

Pull arm64 updates from Will Deacon:

 - vDSO build improvements including support for building with BSD.

 - Cleanup to the AMU support code and initialisation rework to support
   cpufreq drivers built as modules.

 - Removal of synthetic frame record from exception stack when entering
   the kernel from EL0.

 - Add support for the TRNG firmware call introduced by Arm spec
   DEN0098.

 - Cleanup and refactoring across the board.

 - Avoid calling arch_get_random_seed_long() from
   add_interrupt_randomness().

 - Perf and PMU updates including support for Cortex-A78 and the v8.3
   SPE extensions.

 - Significant steps along the road to leaving the MMU enabled during
   kexec relocation.

 - Faultaround changes to initialise prefaulted PTEs as 'old' when
   hardware access-flag updates are supported, which drastically
   improves vmscan performance.

 - CPU errata updates for Cortex-A76 (#1463225) and Cortex-A55
   (#1024718).

 - Preparatory work for yielding the vector unit at a finer granularity
   in the crypto code, which in turn will one day allow us to defer
   softirq processing when it is in use.

 - Support for overriding CPU ID register fields on the command-line
   (an illustrative example follows this list).
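
   For illustration, booting with the following (syntax taken from the
   commit titles in the shortlog below) forces ID_AA64MMFR1_EL1.VH to
   read as zero, disabling VHE; the series also makes kvm-arm.mode=nvhe
   an alias for exactly this override:

       id_aa64mmfr1.vh=0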

* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (85 commits)
  drivers/perf: Replace spin_lock_irqsave to spin_lock
  mm: filemap: Fix microblaze build failure with 'mmu_defconfig'
  arm64: Make CPU_BIG_ENDIAN depend on ld.bfd or ld.lld 13.0.0+
  arm64: cpufeatures: Allow disabling of Pointer Auth from the command-line
  arm64: Defer enabling pointer authentication on boot core
  arm64: cpufeatures: Allow disabling of BTI from the command-line
  arm64: Move "nokaslr" over to the early cpufeature infrastructure
  KVM: arm64: Document HVC_VHE_RESTART stub hypercall
  arm64: Make kvm-arm.mode={nvhe, protected} an alias of id_aa64mmfr1.vh=0
  arm64: Add an aliasing facility for the idreg override
  arm64: Honor VHE being disabled from the command-line
  arm64: Allow ID_AA64MMFR1_EL1.VH to be overridden from the command line
  arm64: cpufeature: Add an early command-line cpufeature override facility
  arm64: Extract early FDT mapping from kaslr_early_init()
  arm64: cpufeature: Use IDreg override in __read_sysreg_by_encoding()
  arm64: cpufeature: Add global feature override facility
  arm64: Move SCTLR_EL1 initialisation to EL-agnostic code
  arm64: Simplify init_el2_state to be non-VHE only
  arm64: Move VHE-specific SPE setup to mutate_to_vhe()
  arm64: Drop early setting of MDSCR_EL2.TPMS
  ...
torvalds committed Feb 21, 2021
2 parents 4a037ad + 1ffa976 commit 99ca0ed
Showing 91 changed files with 1,652 additions and 985 deletions.
9 changes: 9 additions & 0 deletions Documentation/admin-guide/kernel-parameters.txt
@@ -373,6 +373,12 @@
arcrimi= [HW,NET] ARCnet - "RIM I" (entirely mem-mapped) cards
Format: <io>,<irq>,<nodeID>

arm64.nobti [ARM64] Unconditionally disable Branch Target
Identification support

arm64.nopauth [ARM64] Unconditionally disable Pointer Authentication
support

ataflop= [HW,M68k]

atarimouse= [HW,MOUSE] Atari Mouse
@@ -2252,6 +2258,9 @@
kvm-arm.mode=
[KVM,ARM] Select one of KVM/arm64's modes of operation.

nvhe: Standard nVHE-based mode, without support for
protected guests.

protected: nVHE-based mode with support for guests whose
state is kept private from the host.
Not valid if the kernel is running in EL2.
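For illustration, a boot command line combining the parameters documented
above might look like this (the console= and root= values are hypothetical;
the arm64.* and kvm-arm.* flags are the ones added or documented here):

    console=ttyAMA0 root=/dev/vda2 arm64.nobti kvm-arm.mode=nvhe
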
2 changes: 1 addition & 1 deletion Documentation/admin-guide/perf/arm-cmn.rst
@@ -17,7 +17,7 @@ PMU events
----------

The PMU driver registers a single PMU device for the whole interconnect,
-see /sys/bus/event_source/devices/arm_cmn. Multi-chip systems may link
see /sys/bus/event_source/devices/arm_cmn_0. Multi-chip systems may link
more than one CMN together via external CCIX links - in this situation,
each mesh counts its own events entirely independently, and additional
PMU devices will be named arm_cmn_{1..n}.
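
For illustration, on a hypothetical two-mesh system the renamed devices
would appear as:

    $ ls /sys/bus/event_source/devices/ | grep arm_cmn
    arm_cmn_0  arm_cmn_1
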
1 change: 1 addition & 0 deletions Documentation/devicetree/bindings/arm/pmu.yaml
@@ -43,6 +43,7 @@ properties:
- arm,cortex-a75-pmu
- arm,cortex-a76-pmu
- arm,cortex-a77-pmu
- arm,cortex-a78-pmu
- arm,neoverse-e1-pmu
- arm,neoverse-n1-pmu
- brcm,vulcan-pmu
9 changes: 9 additions & 0 deletions Documentation/virt/kvm/arm/hyp-abi.rst
@@ -58,6 +58,15 @@ these functions (see arch/arm{,64}/include/asm/virt.h):
into place (arm64 only), and jump to the restart address while at HYP/EL2.
This hypercall is not expected to return to its caller.

* ::

x0 = HVC_VHE_RESTART (arm64 only)

Attempt to upgrade the kernel's exception level from EL1 to EL2 by enabling
the VHE mode. This is conditioned by the CPU supporting VHE, the EL2 MMU
being off, and VHE not being disabled by any other means (command line
option, for example).

Any other value of r0/x0 triggers a hypervisor-specific handling,
which is not documented here.
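
For illustration, a minimal sketch (not part of this diff) of issuing the
stub hypercall from C, assuming the HVC_VHE_RESTART constant from
arch/arm64/include/asm/virt.h and arm64 inline assembly:

    /* Sketch: ask the EL2 stub to upgrade the kernel from EL1 to EL2 (VHE). */
    static void try_upgrade_to_vhe(void)
    {
            register unsigned long x0 asm("x0") = HVC_VHE_RESTART;

            /* Trap to the EL2 stub vectors; x0 selects the stub function. */
            asm volatile("hvc #0" : "+r" (x0) : : "memory");
            /* If the conditions above hold, we return here running at EL2. */
    }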

10 changes: 10 additions & 0 deletions arch/arm/include/asm/archrandom.h
@@ -0,0 +1,10 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_ARCHRANDOM_H
#define _ASM_ARCHRANDOM_H

static inline bool __init smccc_probe_trng(void)
{
return false;
}

#endif /* _ASM_ARCHRANDOM_H */
11 changes: 8 additions & 3 deletions arch/arm64/Kconfig
@@ -522,7 +522,7 @@ config ARM64_ERRATUM_1024718
help
This option adds a workaround for ARM Cortex-A55 Erratum 1024718.

-Affected Cortex-A55 cores (r0p0, r0p1, r1p0) could cause incorrect
Affected Cortex-A55 cores (all revisions) could cause incorrect
update of the hardware dirty bit when the DBM/AP bits are updated
without a break-before-make. The workaround is to disable the usage
of hardware DBM locally on the affected cores. CPUs not affected by
@@ -952,8 +952,9 @@ choice
that is selected here.

config CPU_BIG_ENDIAN
bool "Build big-endian kernel"
help
bool "Build big-endian kernel"
depends on !LD_IS_LLD || LLD_VERSION >= 130000
help
Say Y if you plan on running a kernel with a big-endian userspace.

config CPU_LITTLE_ENDIAN
@@ -1132,6 +1133,10 @@ config CRASH_DUMP

For more details see Documentation/admin-guide/kdump/kdump.rst

config TRANS_TABLE
def_bool y
depends on HIBERNATION

config XEN_DOM0
def_bool y
depends on XEN
10 changes: 6 additions & 4 deletions arch/arm64/Makefile
@@ -188,10 +188,12 @@ ifeq ($(KBUILD_EXTMOD),)
# this hack.
prepare: vdso_prepare
vdso_prepare: prepare0
-$(Q)$(MAKE) $(build)=arch/arm64/kernel/vdso include/generated/vdso-offsets.h
-$(if $(CONFIG_COMPAT_VDSO),$(Q)$(MAKE) \
-$(build)=arch/arm64/kernel/vdso32 \
-include/generated/vdso32-offsets.h)
$(Q)$(MAKE) $(build)=arch/arm64/kernel/vdso \
include/generated/vdso-offsets.h arch/arm64/kernel/vdso/vdso.so
ifdef CONFIG_COMPAT_VDSO
$(Q)$(MAKE) $(build)=arch/arm64/kernel/vdso32 \
include/generated/vdso32-offsets.h arch/arm64/kernel/vdso32/vdso.so
endif
endif

define archhelp
82 changes: 72 additions & 10 deletions arch/arm64/include/asm/archrandom.h
@@ -4,10 +4,26 @@

#ifdef CONFIG_ARCH_RANDOM

#include <linux/arm-smccc.h>
#include <linux/bug.h>
#include <linux/kernel.h>
#include <asm/cpufeature.h>

#define ARM_SMCCC_TRNG_MIN_VERSION 0x10000UL

extern bool smccc_trng_available;

static inline bool __init smccc_probe_trng(void)
{
struct arm_smccc_res res;

arm_smccc_1_1_invoke(ARM_SMCCC_TRNG_VERSION, &res);
if ((s32)res.a0 < 0)
return false;

return res.a0 >= ARM_SMCCC_TRNG_MIN_VERSION;
}

static inline bool __arm64_rndr(unsigned long *v)
{
bool ok;
@@ -38,26 +54,55 @@ static inline bool __must_check arch_get_random_int(unsigned int *v)

static inline bool __must_check arch_get_random_seed_long(unsigned long *v)
{
struct arm_smccc_res res;

/*
* We prefer the SMCCC call, since its semantics (return actual
* hardware backed entropy) is closer to the idea behind this
* function here than what even the RNDRSS register provides
* (the output of a pseudo RNG freshly seeded by a TRNG).
*/
if (smccc_trng_available) {
arm_smccc_1_1_invoke(ARM_SMCCC_TRNG_RND64, 64, &res);
if ((int)res.a0 >= 0) {
*v = res.a3;
return true;
}
}

/*
* Only support the generic interface after we have detected
* the system wide capability, avoiding complexity with the
* cpufeature code and with potential scheduling between CPUs
* with and without the feature.
*/
-if (!cpus_have_const_cap(ARM64_HAS_RNG))
-return false;
if (cpus_have_const_cap(ARM64_HAS_RNG) && __arm64_rndr(v))
return true;

-return __arm64_rndr(v);
return false;
}


static inline bool __must_check arch_get_random_seed_int(unsigned int *v)
{
struct arm_smccc_res res;
unsigned long val;
-bool ok = arch_get_random_seed_long(&val);

-*v = val;
-return ok;
if (smccc_trng_available) {
arm_smccc_1_1_invoke(ARM_SMCCC_TRNG_RND64, 32, &res);
if ((int)res.a0 >= 0) {
*v = res.a3 & GENMASK(31, 0);
return true;
}
}

if (cpus_have_const_cap(ARM64_HAS_RNG)) {
if (__arm64_rndr(&val)) {
*v = val;
return true;
}
}

return false;
}

static inline bool __init __early_cpu_has_rndr(void)
@@ -72,12 +117,29 @@ arch_get_random_seed_long_early(unsigned long *v)
{
WARN_ON(system_state != SYSTEM_BOOTING);

-if (!__early_cpu_has_rndr())
-return false;
if (smccc_trng_available) {
struct arm_smccc_res res;

arm_smccc_1_1_invoke(ARM_SMCCC_TRNG_RND64, 64, &res);
if ((int)res.a0 >= 0) {
*v = res.a3;
return true;
}
}

if (__early_cpu_has_rndr() && __arm64_rndr(v))
return true;

-return __arm64_rndr(v);
return false;
}
#define arch_get_random_seed_long_early arch_get_random_seed_long_early

#else /* !CONFIG_ARCH_RANDOM */

static inline bool __init smccc_probe_trng(void)
{
return false;
}

#endif /* CONFIG_ARCH_RANDOM */
#endif /* _ASM_ARCHRANDOM_H */
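
The resulting fallback order in all three helpers is the same: try the
SMCCC TRNG service first, then the RNDR-class registers, then fail
(ARM_SMCCC_TRNG_MIN_VERSION, 0x10000UL, is version 1.0 in the SMCCC
major/minor encoding, with the major version in the upper 16 bits). A
hypothetical caller would use the interface as in this sketch
(mix_into_pool is a made-up consumer, not a kernel function):

    unsigned long seed;

    if (arch_get_random_seed_long(&seed))
            mix_into_pool(seed);    /* made-up consumer of the entropy */
    else
            pr_warn("no arch entropy source available\n");
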
4 changes: 2 additions & 2 deletions arch/arm64/include/asm/asm-uaccess.h
@@ -15,10 +15,10 @@
.macro __uaccess_ttbr0_disable, tmp1
mrs \tmp1, ttbr1_el1 // swapper_pg_dir
bic \tmp1, \tmp1, #TTBR_ASID_MASK
-sub \tmp1, \tmp1, #PAGE_SIZE // reserved_pg_dir just before swapper_pg_dir
sub \tmp1, \tmp1, #RESERVED_SWAPPER_OFFSET // reserved_pg_dir
msr ttbr0_el1, \tmp1 // set reserved TTBR0_EL1
isb
-add \tmp1, \tmp1, #PAGE_SIZE
add \tmp1, \tmp1, #RESERVED_SWAPPER_OFFSET
msr ttbr1_el1, \tmp1 // set reserved ASID
isb
.endm
33 changes: 33 additions & 0 deletions arch/arm64/include/asm/assembler.h
@@ -675,6 +675,23 @@ USER(\label, ic ivau, \tmp2) // invalidate I line PoU
.endif
.endm

/*
* Set SCTLR_EL1 to the passed value, and invalidate the local icache
* in the process. This is called when setting the MMU on.
*/
.macro set_sctlr_el1, reg
msr sctlr_el1, \reg
isb
/*
* Invalidate the local I-cache so that any instructions fetched
* speculatively from the PoC are discarded, since they may have
* been dynamically patched at the PoU.
*/
ic iallu
dsb nsh
isb
.endm

/*
* Check whether to yield to another runnable task from kernel mode NEON code
* (which runs with preemption disabled).
@@ -745,6 +762,22 @@ USER(\label, ic ivau, \tmp2) // invalidate I line PoU
.Lyield_out_\@ :
.endm

/*
* Check whether preempt-disabled code should yield as soon as it
* is able. This is the case if re-enabling preemption a single
* time results in a preempt count of zero, and the TIF_NEED_RESCHED
* flag is set. (Note that the latter is stored negated in the
* top word of the thread_info::preempt_count field)
*/
.macro cond_yield, lbl:req, tmp:req
#ifdef CONFIG_PREEMPTION
get_current_task \tmp
ldr \tmp, [\tmp, #TSK_TI_PREEMPT]
sub \tmp, \tmp, #PREEMPT_DISABLE_OFFSET
cbz \tmp, \lbl
#endif
.endm

/*
* This macro emits a program property note section identifying
* architecture features which require special handling, mainly for
5 changes: 0 additions & 5 deletions arch/arm64/include/asm/cacheflush.h
@@ -30,11 +30,6 @@
* the implementation assumes non-aliasing VIPT D-cache and (aliasing)
* VIPT I-cache.
*
-* flush_cache_mm(mm)
-*
-* Clean and invalidate all user space cache entries
-* before a change of page tables.
-*
* flush_icache_range(start, end)
*
* Ensure coherency between the I-cache and the D-cache in the
11 changes: 11 additions & 0 deletions arch/arm64/include/asm/cpufeature.h
@@ -63,6 +63,11 @@ struct arm64_ftr_bits {
s64 safe_val; /* safe value for FTR_EXACT features */
};

struct arm64_ftr_override {
u64 val;
u64 mask;
};

/*
* @arm64_ftr_reg - Feature register
* @strict_mask Bits which should match across all CPUs for sanity.
@@ -74,6 +79,7 @@ struct arm64_ftr_reg {
u64 user_mask;
u64 sys_val;
u64 user_val;
struct arm64_ftr_override *override;
const struct arm64_ftr_bits *ftr_bits;
};

@@ -600,6 +606,7 @@ void __init setup_cpu_features(void);
void check_local_cpu_capabilities(void);

u64 read_sanitised_ftr_reg(u32 id);
u64 __read_sysreg_by_encoding(u32 sys_id);

static inline bool cpu_supports_mixed_endian_el0(void)
{
@@ -811,6 +818,10 @@ static inline unsigned int get_vmid_bits(u64 mmfr1)
return 8;
}

extern struct arm64_ftr_override id_aa64mmfr1_override;
extern struct arm64_ftr_override id_aa64pfr1_override;
extern struct arm64_ftr_override id_aa64isar1_override;

u32 get_kvm_ipa_limit(void);
void dump_cpu_features(void);
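
An override pairs a value with a mask of the bits it replaces; conceptually,
consumers such as __read_sysreg_by_encoding() splice it into the sanitised
register value along the lines of this sketch (not the kernel's literal code):

    /* sketch: apply an override to a sanitised ID register value */
    static u64 apply_ftr_override(u64 regval,
                                  const struct arm64_ftr_override *o)
    {
            return (regval & ~o->mask) | (o->val & o->mask);
    }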

