Merge tag 'x86-cleanups-2024-01-08' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull x86 cleanups from Ingo Molnar:

 - Change global variables to local

 - Add missing kernel-doc function parameter descriptions

 - Remove unused parameter from a macro

 - Remove obsolete Kconfig entry

 - Fix comments

 - Fix typos, mostly scripted, manually reviewed

and a micro-optimization got misplaced as a cleanup:

 - Micro-optimize the asm code in secondary_startup_64_no_verify()

* tag 'x86-cleanups-2024-01-08' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  arch/x86: Fix typos
  x86/head_64: Use TESTB instead of TESTL in secondary_startup_64_no_verify()
  x86/docs: Remove reference to syscall trampoline in PTI
  x86/Kconfig: Remove obsolete config X86_32_SMP
  x86/io: Remove the unused 'bw' parameter from the BUILDIO() macro
  x86/mtrr: Document missing function parameters in kernel-doc
  x86/setup: Make relocated_ramdisk a local variable of relocate_initrd()
torvalds committed Jan 9, 2024
2 parents 42c371f + 54aa699 commit b51cc5d
Showing 66 changed files with 92 additions and 96 deletions.
10 changes: 4 additions & 6 deletions Documentation/arch/x86/pti.rst
@@ -81,11 +81,9 @@ this protection comes at a cost:
      and exit (it can be skipped when the kernel is interrupted,
      though.) Moves to CR3 are on the order of a hundred
      cycles, and are required at every entry and exit.
-  b. A "trampoline" must be used for SYSCALL entry. This
-     trampoline depends on a smaller set of resources than the
-     non-PTI SYSCALL entry code, so requires mapping fewer
-     things into the userspace page tables. The downside is
-     that stacks must be switched at entry time.
+  b. Percpu TSS is mapped into the user page tables to allow SYSCALL64 path
+     to work under PTI. This doesn't have a direct runtime cost but it can
+     be argued it opens certain timing attack scenarios.
   c. Global pages are disabled for all kernel structures not
      mapped into both kernel and userspace page tables. This
      feature of the MMU allows different processes to share TLB
@@ -167,7 +165,7 @@ that are worth noting here.
  * Failures of the selftests/x86 code. Usually a bug in one of the
    more obscure corners of entry_64.S
  * Crashes in early boot, especially around CPU bringup. Bugs
-   in the trampoline code or mappings cause these.
+   in the mappings cause these.
  * Crashes at the first interrupt. Caused by bugs in entry_64.S,
    like screwing up a page table switch. Also caused by
    incorrectly mapping the IRQ handler entry code.
4 changes: 0 additions & 4 deletions arch/x86/Kconfig
@@ -384,10 +384,6 @@ config HAVE_INTEL_TXT
 	def_bool y
 	depends on INTEL_IOMMU && ACPI
 
-config X86_32_SMP
-	def_bool y
-	depends on X86_32 && SMP
-
 config X86_64_SMP
 	def_bool y
 	depends on X86_64 && SMP
2 changes: 1 addition & 1 deletion arch/x86/boot/compressed/Makefile
@@ -53,7 +53,7 @@ KBUILD_CFLAGS += -D__DISABLE_EXPORTS
 KBUILD_CFLAGS += $(call cc-option,-Wa$(comma)-mrelax-relocations=no)
 KBUILD_CFLAGS += -include $(srctree)/include/linux/hidden.h
 
-# sev.c indirectly inludes inat-table.h which is generated during
+# sev.c indirectly includes inat-table.h which is generated during
 # compilation and stored in $(objtree). Add the directory to the includes so
 # that the compiler finds it even with out-of-tree builds (make O=/some/path).
 CFLAGS_sev.o += -I$(objtree)/arch/x86/lib/
2 changes: 1 addition & 1 deletion arch/x86/boot/compressed/mem.c
@@ -8,7 +8,7 @@
 
 /*
  * accept_memory() and process_unaccepted_memory() called from EFI stub which
- * runs before decompresser and its early_tdx_detect().
+ * runs before decompressor and its early_tdx_detect().
  *
  * Enumerate TDX directly from the early users.
  */
2 changes: 1 addition & 1 deletion arch/x86/coco/tdx/tdx.c
@@ -887,7 +887,7 @@ void __init tdx_early_init(void)
 	 * there.
 	 *
 	 * Intel-TDX has a secure RDMSR hypercall, but that needs to be
-	 * implemented seperately in the low level startup ASM code.
+	 * implemented separately in the low level startup ASM code.
 	 * Until that is in place, disable parallel bringup for TDX.
 	 */
 	x86_cpuinit.parallel_bringup = false;
2 changes: 1 addition & 1 deletion arch/x86/crypto/aesni-intel_asm.S
@@ -666,7 +666,7 @@ ALL_F:      .octa 0xffffffffffffffffffffffffffffffff
 
 .ifc \operation, dec
 	movdqa %xmm1, %xmm3
-	pxor   %xmm1, %xmm9		# Cyphertext XOR E(K, Yn)
+	pxor   %xmm1, %xmm9		# Ciphertext XOR E(K, Yn)
 
 	mov \PLAIN_CYPH_LEN, %r10
 	add %r13, %r10
2 changes: 1 addition & 1 deletion arch/x86/crypto/aesni-intel_avx-x86_64.S
@@ -747,7 +747,7 @@ VARIABLE_OFFSET = 16*8
 
 .if \ENC_DEC == DEC
 	vmovdqa %xmm1, %xmm3
-	pxor    %xmm1, %xmm9		# Cyphertext XOR E(K, Yn)
+	pxor    %xmm1, %xmm9		# Ciphertext XOR E(K, Yn)
 
 	mov \PLAIN_CYPH_LEN, %r10
 	add %r13, %r10
2 changes: 1 addition & 1 deletion arch/x86/crypto/crc32c-pcl-intel-asm_64.S
@@ -184,7 +184,7 @@ SYM_FUNC_START(crc_pcl)
 	xor	crc1,crc1
 	xor	crc2,crc2
 
-	# Fall thruogh into top of crc array (crc_128)
+	# Fall through into top of crc array (crc_128)
 
 	################################################################
 	## 3) CRC Array:
2 changes: 1 addition & 1 deletion arch/x86/crypto/sha512-avx-asm.S
@@ -84,7 +84,7 @@ frame_size = frame_WK + WK_SIZE
 
 # Useful QWORD "arrays" for simpler memory references
 # MSG, DIGEST, K_t, W_t are arrays
-# WK_2(t) points to 1 of 2 qwords at frame.WK depdending on t being odd/even
+# WK_2(t) points to 1 of 2 qwords at frame.WK depending on t being odd/even
 
 # Input message (arg1)
 #define MSG(i) 8*i(msg)
2 changes: 1 addition & 1 deletion arch/x86/crypto/sha512-ssse3-asm.S
@@ -82,7 +82,7 @@ frame_size = frame_WK + WK_SIZE
 
 # Useful QWORD "arrays" for simpler memory references
 # MSG, DIGEST, K_t, W_t are arrays
-# WK_2(t) points to 1 of 2 qwords at frame.WK depdending on t being odd/even
+# WK_2(t) points to 1 of 2 qwords at frame.WK depending on t being odd/even
 
 # Input message (arg1)
 #define MSG(i) 8*i(msg)
2 changes: 1 addition & 1 deletion arch/x86/events/amd/brs.c
@@ -125,7 +125,7 @@ int amd_brs_hw_config(struct perf_event *event)
 	 * Where X is the number of taken branches due to interrupt
 	 * skid. Skid is large.
 	 *
-	 * Where Y is the occurences of the event while BRS is
+	 * Where Y is the occurrences of the event while BRS is
 	 * capturing the lbr_nr entries.
 	 *
 	 * By using retired taken branches, we limit the impact on the
2 changes: 1 addition & 1 deletion arch/x86/events/amd/core.c
@@ -1184,7 +1184,7 @@ static void amd_put_event_constraints_f17h(struct cpu_hw_events *cpuc,
  * period of each one and given that the BRS saturates, it would not be possible
  * to guarantee correlated content for all events. Therefore, in situations
  * where multiple events want to use BRS, the kernel enforces mutual exclusion.
- * Exclusion is enforced by chosing only one counter for events using BRS.
+ * Exclusion is enforced by choosing only one counter for events using BRS.
  * The event scheduling logic will then automatically multiplex the
  * events and ensure that at most one event is actively using BRS.
  *
2 changes: 1 addition & 1 deletion arch/x86/events/intel/core.c
@@ -4027,7 +4027,7 @@ static int intel_pmu_hw_config(struct perf_event *event)
 
 /*
  * Currently, the only caller of this function is the atomic_switch_perf_msrs().
- * The host perf conext helps to prepare the values of the real hardware for
+ * The host perf context helps to prepare the values of the real hardware for
  * a set of msrs that need to be switched atomically in a vmx transaction.
  *
  * For example, the pseudocode needed to add a new msr should look like:
2 changes: 1 addition & 1 deletion arch/x86/hyperv/hv_apic.c
@@ -209,7 +209,7 @@ static bool __send_ipi_mask(const struct cpumask *mask, int vector,
 
 	/*
 	 * This particular version of the IPI hypercall can
-	 * only target upto 64 CPUs.
+	 * only target up to 64 CPUs.
 	 */
 	if (vcpu >= 64)
 		goto do_ex_hypercall;
2 changes: 1 addition & 1 deletion arch/x86/hyperv/irqdomain.c
@@ -212,7 +212,7 @@ static void hv_irq_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
 	 * This interrupt is already mapped. Let's unmap first.
 	 *
 	 * We don't use retarget interrupt hypercalls here because
-	 * Microsoft Hypervisor doens't allow root to change the vector
+	 * Microsoft Hypervisor doesn't allow root to change the vector
 	 * or specify VPs outside of the set that is initially used
 	 * during mapping.
 	 */
2 changes: 1 addition & 1 deletion arch/x86/hyperv/ivm.c
@@ -144,7 +144,7 @@ void __noreturn hv_ghcb_terminate(unsigned int set, unsigned int reason)
 	/* Tell the hypervisor what went wrong. */
 	val |= GHCB_SEV_TERM_REASON(set, reason);
 
-	/* Request Guest Termination from Hypvervisor */
+	/* Request Guest Termination from Hypervisor */
 	wr_ghcb_msr(val);
 	VMGEXIT();
 
2 changes: 1 addition & 1 deletion arch/x86/include/asm/amd_nb.h
@@ -104,7 +104,7 @@ static inline bool amd_gart_present(void)
 	if (boot_cpu_data.x86_vendor != X86_VENDOR_AMD)
 		return false;
 
-	/* GART present only on Fam15h, upto model 0fh */
+	/* GART present only on Fam15h, up to model 0fh */
 	if (boot_cpu_data.x86 == 0xf || boot_cpu_data.x86 == 0x10 ||
 	    (boot_cpu_data.x86 == 0x15 && boot_cpu_data.x86_model < 0x10))
 		return true;
2 changes: 1 addition & 1 deletion arch/x86/include/asm/extable_fixup_types.h
@@ -4,7 +4,7 @@
 
 /*
  * Our IMM is signed, as such it must live at the top end of the word. Also,
- * since C99 hex constants are of ambigious type, force cast the mask to 'int'
+ * since C99 hex constants are of ambiguous type, force cast the mask to 'int'
  * so that FIELD_GET() will DTRT and sign extend the value when it extracts it.
  */
 #define EX_DATA_TYPE_MASK	((int)0x000000FF)
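The comment fixed above packs a real subtlety: a bare 0xFF literal is a positive int, so extracting a field with an unsigned-typed mask would zero-extend. Below is a minimal standalone sketch of the sign-extension behavior the (int) cast is after, using a hand-rolled shift pair as a simplified stand-in for the kernel's FIELD_GET():

#include <stdio.h>

#define EX_DATA_TYPE_MASK ((int)0x000000FF)

int main(void)
{
	int data = 0xFE;	/* low byte carries a signed type field: -2 */

	/* Extract the low byte, then sign-extend it (relies on the usual
	 * two's-complement conversion and arithmetic right shift): */
	int type = (int)((unsigned int)(data & EX_DATA_TYPE_MASK) << 24) >> 24;

	printf("type = %d\n", type);	/* prints -2, not 254 */
	return 0;
}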
2 changes: 1 addition & 1 deletion arch/x86/include/asm/fpu/types.h
@@ -415,7 +415,7 @@ struct fpu_state_perm {
 	 *
 	 * This master permission field is only to be used when
 	 * task.fpu.fpstate based checks fail to validate whether the task
-	 * is allowed to expand it's xfeatures set which requires to
+	 * is allowed to expand its xfeatures set which requires to
 	 * allocate a larger sized fpstate buffer.
 	 *
 	 * Do not access this field directly. Use the provided helper
8 changes: 4 additions & 4 deletions arch/x86/include/asm/io.h
@@ -242,7 +242,7 @@ static inline void slow_down_io(void)
 
 #endif
 
-#define BUILDIO(bwl, bw, type)						\
+#define BUILDIO(bwl, type)						\
 static inline void out##bwl##_p(type value, u16 port)			\
 {									\
 	out##bwl(value, port);						\
@@ -288,9 +288,9 @@ static inline void ins##bwl(u16 port, void *addr, unsigned long count) \
 	}								\
 }
 
-BUILDIO(b, b, u8)
-BUILDIO(w, w, u16)
-BUILDIO(l,  , u32)
+BUILDIO(b, u8)
+BUILDIO(w, u16)
+BUILDIO(l, u32)
 #undef BUILDIO
 
 #define inb_p inb_p
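As context for the change above: only the first BUILDIO argument is ever token-pasted into the generated names, so the middle 'bw' argument was dead weight. A standalone sketch of the same pattern, with hypothetical stub functions standing in for the real port I/O primitives:

#include <stdio.h>
#include <stdint.h>

/* Hypothetical stubs; the real outb/outw/outl are port I/O instructions. */
static void outb(uint8_t v, uint16_t p)  { printf("outb(%u, %u)\n", (unsigned)v, (unsigned)p); }
static void outw(uint16_t v, uint16_t p) { printf("outw(%u, %u)\n", (unsigned)v, (unsigned)p); }
static void outl(uint32_t v, uint16_t p) { printf("outl(%u, %u)\n", (unsigned)v, (unsigned)p); }

/* Two-parameter form: 'bwl' both selects the helper (out##bwl) and
 * names the generated function (out##bwl##_p). */
#define BUILDIO(bwl, type)					\
static inline void out##bwl##_p(type value, uint16_t port)	\
{								\
	out##bwl(value, port);					\
}

BUILDIO(b, uint8_t)	/* defines outb_p() */
BUILDIO(w, uint16_t)	/* defines outw_p() */
BUILDIO(l, uint32_t)	/* defines outl_p() */

int main(void)
{
	outb_p(0xAB, 0x80);
	return 0;
}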
2 changes: 1 addition & 1 deletion arch/x86/include/asm/iosf_mbi.h
@@ -111,7 +111,7 @@ int iosf_mbi_modify(u8 port, u8 opcode, u32 offset, u32 mdr, u32 mask);
 * This function will block all kernel access to the PMIC I2C bus, so that the
 * P-Unit can safely access the PMIC over the shared I2C bus.
 *
-* Note on these systems the i2c-bus driver will request a sempahore from the
+* Note on these systems the i2c-bus driver will request a semaphore from the
 * P-Unit for exclusive access to the PMIC bus when i2c drivers are accessing
 * it, but this does not appear to be sufficient, we still need to avoid making
 * certain P-Unit requests during the access window to avoid problems.
2 changes: 1 addition & 1 deletion arch/x86/include/asm/kvm_host.h
@@ -1652,7 +1652,7 @@ struct kvm_x86_ops {
 	/* Whether or not a virtual NMI is pending in hardware. */
 	bool (*is_vnmi_pending)(struct kvm_vcpu *vcpu);
 	/*
-	 * Attempt to pend a virtual NMI in harware. Returns %true on success
+	 * Attempt to pend a virtual NMI in hardware. Returns %true on success
 	 * to allow using static_call_ret0 as the fallback.
 	 */
 	bool (*set_vnmi_pending)(struct kvm_vcpu *vcpu);
4 changes: 2 additions & 2 deletions arch/x86/include/asm/nospec-branch.h
@@ -49,7 +49,7 @@
 * but there is still a cushion vs. the RSB depth. The algorithm does not
 * claim to be perfect and it can be speculated around by the CPU, but it
 * is considered that it obfuscates the problem enough to make exploitation
-* extremly difficult.
+* extremely difficult.
 */
#define RET_DEPTH_SHIFT			5
#define RSB_RET_STUFF_LOOPS		16
@@ -208,7 +208,7 @@
 
/*
 * Abuse ANNOTATE_RETPOLINE_SAFE on a NOP to indicate UNRET_END, should
-* eventually turn into it's own annotation.
+* eventually turn into its own annotation.
 */
.macro VALIDATE_UNRET_END
#if defined(CONFIG_NOINSTR_VALIDATION) && \
2 changes: 1 addition & 1 deletion arch/x86/include/asm/pgtable_64.h
@@ -203,7 +203,7 @@ static inline void native_pgd_clear(pgd_t *pgd)
 * F (2) in swp entry is used to record when a pagetable is
 * writeprotected by userfaultfd WP support.
 *
-* E (3) in swp entry is used to rememeber PG_anon_exclusive.
+* E (3) in swp entry is used to remember PG_anon_exclusive.
 *
 * Bit 7 in swp entry should be 0 because pmd_present checks not only P,
 * but also L and G.
2 changes: 0 additions & 2 deletions arch/x86/include/asm/setup.h
@@ -31,8 +31,6 @@
 #include <asm/bootparam.h>
 #include <asm/x86_init.h>
 
-extern u64 relocated_ramdisk;
-
 /* Interrupt control for vSMPowered x86_64 systems */
 #ifdef CONFIG_X86_64
 void vsmp_init(void);
2 changes: 1 addition & 1 deletion arch/x86/include/asm/uv/uv_hub.h
@@ -653,7 +653,7 @@ static inline int uv_blade_to_node(int blade)
 	return uv_socket_to_node(blade);
 }
 
-/* Blade number of current cpu. Numnbered 0 .. <#blades -1> */
+/* Blade number of current cpu. Numbered 0 .. <#blades -1> */
 static inline int uv_numa_blade_id(void)
 {
 	return uv_hub_info->numa_blade_id;
2 changes: 1 addition & 1 deletion arch/x86/include/asm/vdso/gettimeofday.h
@@ -321,7 +321,7 @@ static __always_inline
 u64 vdso_calc_delta(u64 cycles, u64 last, u64 mask, u32 mult)
 {
 	/*
-	 * Due to the MSB/Sign-bit being used as invald marker (see
+	 * Due to the MSB/Sign-bit being used as invalid marker (see
 	 * arch_vdso_cycles_valid() above), the effective mask is S64_MAX.
 	 */
 	u64 delta = (cycles - last) & S64_MAX;
2 changes: 1 addition & 1 deletion arch/x86/include/asm/xen/interface_64.h
@@ -61,7 +61,7 @@
 *     RING1 -> RING3 kernel mode.
 *     RING2 -> RING3 kernel mode.
 *     RING3 -> RING3 user mode.
-* However RING0 indicates that the guest kernel should return to iteself
+* However RING0 indicates that the guest kernel should return to itself
 * directly with
 *     orb   $3,1*8(%rsp)
 *     iretq
2 changes: 1 addition & 1 deletion arch/x86/include/uapi/asm/amd_hsmp.h
@@ -238,7 +238,7 @@ static const struct hsmp_msg_desc hsmp_msg_desc_table[] = {
 	/*
 	 * HSMP_GET_DIMM_THERMAL, num_args = 1, response_sz = 1
 	 * input: args[0] = DIMM address[7:0]
-	 * output: args[0] = temperature in degree celcius[31:21] + update rate in ms[16:8] +
+	 * output: args[0] = temperature in degree celsius[31:21] + update rate in ms[16:8] +
 	 * DIMM address[7:0]
 	 */
 	{1, 1, HSMP_GET},
2 changes: 1 addition & 1 deletion arch/x86/kernel/alternative.c
@@ -1906,7 +1906,7 @@ static void *__text_poke(text_poke_f func, void *addr, const void *src, size_t l
 * Note that the caller must ensure that if the modified code is part of a
 * module, the module would not be removed during poking. This can be achieved
 * by registering a module notifier, and ordering module removal and patching
-* trough a mutex.
+* through a mutex.
 */
void *text_poke(void *addr, const void *opcode, size_t len)
{
2 changes: 1 addition & 1 deletion arch/x86/kernel/amd_gart_64.c
@@ -776,7 +776,7 @@ int __init gart_iommu_init(void)
 			iommu_size >> PAGE_SHIFT);
 	/*
 	 * Tricky. The GART table remaps the physical memory range,
-	 * so the CPU wont notice potential aliases and if the memory
+	 * so the CPU won't notice potential aliases and if the memory
 	 * is remapped to UC later on, we might surprise the PCI devices
 	 * with a stray writeout of a cacheline. So play it sure and
 	 * do an explicit, full-scale wbinvd() _after_ having marked all
2 changes: 1 addition & 1 deletion arch/x86/kernel/apic/Makefile
@@ -4,7 +4,7 @@
 #
 
 # Leads to non-deterministic coverage that is not a function of syscall inputs.
-# In particualr, smp_apic_timer_interrupt() is called in random places.
+# In particular, smp_apic_timer_interrupt() is called in random places.
 KCOV_INSTRUMENT := n
 
 obj-$(CONFIG_X86_LOCAL_APIC) += apic.o apic_common.o apic_noop.o ipi.o vector.o init.o
2 changes: 1 addition & 1 deletion arch/x86/kernel/apic/apic.c
@@ -782,7 +782,7 @@ bool __init apic_needs_pit(void)
 
 	/*
 	 * If interrupt delivery mode is legacy PIC or virtual wire without
-	 * configuration, the local APIC timer wont be set up. Make sure
+	 * configuration, the local APIC timer won't be set up. Make sure
 	 * that the PIT is initialized.
 	 */
 	if (apic_intr_mode == APIC_PIC ||
4 changes: 2 additions & 2 deletions arch/x86/kernel/apic/vector.c
@@ -738,8 +738,8 @@ int __init arch_probe_nr_irqs(void)
 void lapic_assign_legacy_vector(unsigned int irq, bool replace)
 {
 	/*
-	 * Use assign system here so it wont get accounted as allocated
-	 * and moveable in the cpu hotplug check and it prevents managed
+	 * Use assign system here so it won't get accounted as allocated
+	 * and movable in the cpu hotplug check and it prevents managed
 	 * irq reservation from touching it.
 	 */
 	irq_matrix_assign_system(vector_matrix, ISA_IRQ_VECTOR(irq), replace);
14 changes: 10 additions & 4 deletions arch/x86/kernel/cpu/mtrr/generic.c
@@ -428,6 +428,10 @@ void __init mtrr_copy_map(void)
  * from the x86_init.hyper.init_platform() hook. It can be called only once.
  * The MTRR state can't be changed afterwards. To ensure that, X86_FEATURE_MTRR
  * is cleared.
+ *
+ * @var: MTRR variable range array to use
+ * @num_var: length of the @var array
+ * @def_type: default caching type
  */
 void mtrr_overwrite_state(struct mtrr_var_range *var, unsigned int num_var,
 			  mtrr_type def_type)
@@ -492,13 +496,15 @@ static u8 type_merge(u8 type, u8 new_type, u8 *uniform)
 /**
  * mtrr_type_lookup - look up memory type in MTRR
  *
+ * @start: Begin of the physical address range
+ * @end: End of the physical address range
+ * @uniform: output argument:
+ *	- 1: the returned MTRR type is valid for the whole region
+ *	- 0: otherwise
+ *
  * Return Values:
  * MTRR_TYPE_(type) - The effective MTRR type for the region
  * MTRR_TYPE_INVALID - MTRR is disabled
- *
- * Output Argument:
- * uniform - Set to 1 when the returned MTRR type is valid for the whole
- *           region, set to 0 else.
  */
 u8 mtrr_type_lookup(u64 start, u64 end, u8 *uniform)
 {
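The added kernel-doc settles the calling convention: the return value is the effective MTRR type for the range, and *uniform reports whether that one type covers all of it. A standalone sketch of a caller follows, with a hypothetical mock in place of the real lookup (the MTRR_TYPE_* values mirror the kernel's, but treat them as illustrative here):

#include <stdio.h>
#include <stdint.h>

#define MTRR_TYPE_WRBACK	0x06
#define MTRR_TYPE_INVALID	0xFF

/* Hypothetical mock: pretend a single write-back MTRR covers the
 * first 1 MiB, so only ranges inside it are "uniform". */
static uint8_t mtrr_type_lookup(uint64_t start, uint64_t end, uint8_t *uniform)
{
	(void)start;
	*uniform = (end <= 0x100000);	/* 1: one type covers the whole range */
	return MTRR_TYPE_WRBACK;
}

int main(void)
{
	uint8_t uniform;
	uint8_t type = mtrr_type_lookup(0x0, 0x1000, &uniform);

	if (type == MTRR_TYPE_INVALID)
		printf("MTRRs disabled\n");
	else
		printf("effective type %u, uniform=%u\n",
		       (unsigned)type, (unsigned)uniform);
	return 0;
}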
2 changes: 1 addition & 1 deletion arch/x86/kernel/cpu/sgx/ioctl.c
@@ -581,7 +581,7 @@ static int sgx_encl_init(struct sgx_encl *encl, struct sgx_sigstruct *sigstruct,
 *
 * Flush any outstanding enqueued EADD operations and perform EINIT. The
 * Launch Enclave Public Key Hash MSRs are rewritten as necessary to match
-* the enclave's MRSIGNER, which is caculated from the provided sigstruct.
+* the enclave's MRSIGNER, which is calculated from the provided sigstruct.
 *
 * Return:
 * - 0: Success.
2 changes: 1 addition & 1 deletion arch/x86/kernel/fpu/core.c
@@ -308,7 +308,7 @@ EXPORT_SYMBOL_GPL(fpu_update_guest_xfd);
 * Must be invoked from KVM after a VMEXIT before enabling interrupts when
 * XFD write emulation is disabled. This is required because the guest can
 * freely modify XFD and the state at VMEXIT is not guaranteed to be the
-* same as the state on VMENTER. So software state has to be udpated before
+* same as the state on VMENTER. So software state has to be updated before
 * any operation which depends on it can take place.
 *
 * Note: It can be invoked unconditionally even when write emulation is