path: root/arch/x86/kernel/machine_kexec_64.c
Age  Commit message  Author  Files  Lines
2025-10-11  Merge tag 'x86_core_for_v6.18_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds, 1 file, -0/+4)

Pull more x86 updates from Borislav Petkov:

 - Remove a bunch of asm implementing condition flags testing in KVM's emulator in favor of int3_emulate_jcc() which is written in C

 - Replace KVM fastops with C-based stubs which avoids problems with the fastop infra related to the latter not adhering to the C ABI due to their special calling convention and, more importantly, bypassing compiler control-flow integrity checking because they're written in asm

 - Remove wrongly used static branches and other ugliness accumulated over time in hyperv's hypercall implementation with a proper static function call to the correct hypervisor call variant

 - Add some fixes and modifications to allow running FRED-enabled kernels in KVM even on non-FRED hardware

 - Add kCFI improvements like validating indirect calls and prepare for enabling kCFI with GCC. Add cmdline params documentation and other code cleanups

 - Use the single-byte 0xd6 insn as the official #UD single-byte undefined opcode instruction as agreed upon by both x86 vendors

 - Other smaller cleanups and touchups all over the place

* tag 'x86_core_for_v6.18_rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (24 commits)
  x86,retpoline: Optimize patch_retpoline()
  x86,ibt: Use UDB instead of 0xEA
  x86/cfi: Remove __noinitretpoline and __noretpoline
  x86/cfi: Add "debug" option to "cfi=" bootparam
  x86/cfi: Standardize on common "CFI:" prefix for CFI reports
  x86/cfi: Document the "cfi=" bootparam options
  x86/traps: Clarify KCFI instruction layout
  compiler_types.h: Move __nocfi out of compiler-specific header
  objtool: Validate kCFI calls
  x86/fred: KVM: VMX: Always use FRED for IRQs when CONFIG_X86_FRED=y
  x86/fred: Play nice with invoking asm_fred_entry_from_kvm() on non-FRED hardware
  x86/fred: Install system vector handlers even if FRED isn't fully enabled
  x86/hyperv: Use direct call to hypercall-page
  x86/hyperv: Clean up hv_do_hypercall()
  KVM: x86: Remove fastops
  KVM: x86: Convert em_salc() to C
  KVM: x86: Introduce EM_ASM_3WCL
  KVM: x86: Introduce EM_ASM_1SRC2
  KVM: x86: Introduce EM_ASM_2CL
  KVM: x86: Introduce EM_ASM_2W
  ...
2025-09-05  x86/kexec: Disable kexec/kdump on platforms with TDX partial write erratum  (Kai Huang, 1 file, -0/+16)

Some early TDX-capable platforms have an erratum: a kernel partial write (a write transaction of less than a cacheline landing at the memory controller) to TDX private memory poisons that memory, and a subsequent read triggers a machine check. On those platforms, the old kernel must reset TDX private memory before jumping to the new kernel, otherwise the new kernel may see an unexpected machine check.

Currently the kernel doesn't track which page is a TDX private page. For simplicity, just fail kexec/kdump on those platforms.

Leverage the existing machine_kexec_prepare() to fail kexec/kdump by adding a check for the presence of the TDX erratum (which is only checked for if the kernel is built with TDX host support). This rejects kexec/kdump when the kernel is loading the kexec/kdump kernel image.

The alternative is to reject kexec/kdump when the kernel is jumping to the new kernel. But for kexec this requires adding a new check (e.g., arch_kexec_allowed()) in the common code to fail kernel_kexec() at an early stage. Kdump (crash_kexec()) needs a similar check, but it's hard to justify because crash_kexec() is not supposed to abort.

It's feasible to further relax this limitation, i.e., only fail kexec when TDX is actually enabled by the kernel. But this is still a half measure compared to resetting TDX private memory, so just do the simplest thing for now.

The impact to userspace is that users will get an error when loading the kexec/kdump kernel image:

  kexec_load failed: Operation not supported

This might be confusing to the users, thus also print the reason in the dmesg:

  [..] kexec: Not allowed on platform with tdx_pw_mce bug.

Signed-off-by: Kai Huang <kai.huang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Reviewed-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
Reviewed-by: Binbin Wu <binbin.wu@linux.intel.com>
Tested-by: Farrah Chen <farrah.chen@intel.com>
Link: https://lore.kernel.org/all/20250901160930.1785244-5-pbonzini%40redhat.com
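For illustration, a minimal sketch of what such a machine_kexec_prepare() check could look like. The bug-flag name is an assumption derived from the "tdx_pw_mce" string in the message above, not a verified kernel symbol:

  /* Illustrative sketch only: reject image loading when the TDX
   * partial-write erratum is present.  X86_BUG_TDX_PW_MCE is assumed. */
  int machine_kexec_prepare(struct kimage *image)
  {
  	if (boot_cpu_has_bug(X86_BUG_TDX_PW_MCE)) {
  		pr_info_once("kexec: Not allowed on platform with tdx_pw_mce bug\n");
  		return -EOPNOTSUPP;
  	}

  	/* ... existing identity page-table setup continues here ... */
  	return 0;
  }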
2025-09-05  x86/sme: Use percpu boolean to control WBINVD during kexec  (Kai Huang, 1 file, -4/+10)

TL;DR: Prepare to unify how TDX and SME do cache flushing during kexec by making a percpu boolean control whether to do the WBINVD.

-- Background --

On SME platforms, dirty cacheline aliases with and without the encryption bit can coexist, and the CPU can flush them back to memory in random order. During kexec, the caches must be flushed before jumping to the new kernel, otherwise the dirty cachelines could silently corrupt the memory used by the new kernel due to the different encryption property.

TDX also needs a cache flush during kexec for the same reason. It would be good to have a generic way to flush the cache instead of scattering checks for each feature all around.

When SME is enabled, the kernel basically encrypts all memory including the kernel itself, and a simple memory write from the kernel could dirty cachelines. Currently, the kernel uses WBINVD to flush the cache for SME during kexec in two places:

  1) the one in stop_this_cpu(), for all remote CPUs when the kexec-ing CPU stops them;
  2) the one in relocate_kernel(), where the kexec-ing CPU jumps to the new kernel.

-- Solution --

Unlike SME, TDX can only dirty cachelines when it is used (i.e., when SEAMCALLs are performed). Since there are no more SEAMCALLs after the aforementioned WBINVDs, leverage this for TDX.

To unify the approach for SME and TDX, use a percpu boolean to indicate that the cache may be in an incoherent state and needs flushing during kexec, and set the boolean for SME. TDX can then leverage it.

While SME could use a global flag (since it's enabled at early boot and enabled on all CPUs), the percpu flag fits TDX better: the percpu flag can be set when a CPU makes a SEAMCALL, and cleared when another WBINVD on the CPU obviates the need for a kexec-time WBINVD. Saving the kexec-time WBINVD is valuable, because there is an existing race [*] where kexec could proceed while another CPU is active. WBINVD could make this race worse, so it's worth skipping it when possible.

-- Side effect to SME --

Today the first WBINVD in stop_this_cpu() is performed when SME is *supported* by the platform, and the second WBINVD is done in relocate_kernel() when SME is *activated* by the kernel. Make things simple by changing to do the second WBINVD when the platform supports SME. This allows the kernel to simply turn on this percpu boolean when bringing up a CPU by checking whether the platform supports SME.

No other functional change intended.

[*] The aforementioned race: During kexec, native_stop_other_cpus() is called to stop all remote CPUs before jumping to the new kernel. native_stop_other_cpus() first sends normal REBOOT vector IPIs to stop remote CPUs and waits for them to stop. If that times out, it sends NMI to stop the CPUs that are still alive. The race happens when native_stop_other_cpus() has to send NMIs and could potentially result in a system hang (for more information please see [1]).

Signed-off-by: Kai Huang <kai.huang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Reviewed-by: Borislav Petkov (AMD) <bp@alien8.de>
Tested-by: Tom Lendacky <thomas.lendacky@amd.com>
Link: https://lore.kernel.org/kvm/b963fcd60abe26c7ec5dc20b42f1a2ebbcc72397.1750934177.git.kai.huang@intel.com/ [1]
Link: https://lore.kernel.org/all/20250901160930.1785244-3-pbonzini%40redhat.com
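As a rough sketch of the percpu-flag idea described above (the variable and helper names are illustrative assumptions, not the kernel's actual identifiers):

  #include <linux/percpu.h>

  /* Set when this CPU may hold dirty, incoherent cachelines. */
  DEFINE_PER_CPU(bool, cache_state_incoherent);

  /* SME: any kernel write may dirty encrypted cachelines, so flag the
   * CPU at bring-up when the platform supports memory encryption. */
  static void mark_cache_incoherent_for_kexec(void)
  {
  	if (cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT))
  		this_cpu_write(cache_state_incoherent, true);
  }

  /* Kexec path: flush only when the flag says the cache may be dirty. */
  static void kexec_maybe_wbinvd(void)
  {
  	if (this_cpu_read(cache_state_incoherent)) {
  		wbinvd();
  		this_cpu_write(cache_state_incoherent, false);
  	}
  }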
2025-09-05  x86/kexec: Consolidate relocate_kernel() function parameters  (Kai Huang, 1 file, -9/+13)

During kexec, the kernel jumps to the new kernel in relocate_kernel(), which is implemented in assembly; both 32-bit and 64-bit have their own version.

Currently, for both 32-bit and 64-bit, the last two parameters of relocate_kernel() are both 'unsigned int', but they actually only convey a boolean each, i.e., one bit of information. An 'unsigned int' has enough space to carry two bits of information, so there's no need to pass the two booleans in two separate 'unsigned int' parameters. Consolidate the last two function parameters of relocate_kernel() into a single 'unsigned int' and pass flags instead.

Only consolidate the 64-bit version, although the same optimization could be done for the 32-bit version too. Don't bother changing the 32-bit version while it is working, since an assembly code change would be required.

Signed-off-by: Kai Huang <kai.huang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Reviewed-by: Borislav Petkov (AMD) <bp@alien8.de>
Reviewed-by: David Woodhouse <dwmw@amazon.co.uk>
Link: https://lore.kernel.org/all/20250901160930.1785244-2-pbonzini%40redhat.com
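A minimal sketch of packing the two booleans into one flags word; the flag macro names and helper below are assumptions made for illustration, not the names used by the actual patch:

  /* Illustrative only. */
  #define RELOC_KERNEL_PRESERVE_CONTEXT	(1U << 0)
  #define RELOC_KERNEL_HOST_MEM_ENC	(1U << 1)

  static unsigned int build_relocate_kernel_flags(bool preserve_context,
  						bool host_mem_enc_active)
  {
  	unsigned int flags = 0;

  	if (preserve_context)
  		flags |= RELOC_KERNEL_PRESERVE_CONTEXT;
  	if (host_mem_enc_active)
  		flags |= RELOC_KERNEL_HOST_MEM_ENC;

  	return flags;	/* passed as the single 'flags' argument */
  }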
2025-08-18  objtool: Validate kCFI calls  (Peter Zijlstra, 1 file, -0/+4)

Validate that all indirect calls adhere to kCFI rules. Notably, doing a nocfi indirect call to a cfi function is broken. Apparently some Rust 'core' code violates this and explodes when run with FineIBT.

All the ANNOTATE_NOCFI_SYM sites are prime targets for attackers.

 - runtime EFI is especially heinous because it also needs to disable IBT. Basically, calling unknown code without CFI protection at runtime is a massive security issue.

 - Kexec image handover; if you can exploit this, you get to keep it :-)

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Josh Poimboeuf <jpoimboe@kernel.org>
Acked-by: Sean Christopherson <seanjc@google.com>
Link: https://lkml.kernel.org/r/20250714103441.496787279@infradead.org
2025-05-31  Merge tag 'mm-nonmm-stable-2025-05-31-15-28' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm  (Linus Torvalds, 1 file, -0/+22)

Pull non-MM updates from Andrew Morton:

 - "hung_task: extend blocking task stacktrace dump to semaphore" from Lance Yang enhances the hung task detector. The detector presently dumps the blocking task's stack when it is blocked on a mutex. Lance's series extends this to semaphores

 - "nilfs2: improve sanity checks in dirty state propagation" from Wentao Liang addresses a couple of minor flaws in nilfs2

 - "scripts/gdb: Fixes related to lx_per_cpu()" from Illia Ostapyshyn fixes a couple of issues in the gdb scripts

 - "Support kdump with LUKS encryption by reusing LUKS volume keys" from Coiby Xu addresses a usability problem with kdump. When the dump device is LUKS-encrypted, the kdump kernel may not have the keys to the encrypted filesystem. A full writeup of this is in the series [0/N] cover letter

 - "sysfs: add counters for lockups and stalls" from Max Kellermann adds /sys/kernel/hardlockup_count, /sys/kernel/softlockup_count and /sys/kernel/rcu_stall_count

 - "fork: Page operation cleanups in the fork code" from Pasha Tatashin implements a number of code cleanups in fork.c

 - "scripts/gdb/symbols: determine KASLR offset on s390 during early boot" from Ilya Leoshkevich fixes some s390 issues in the gdb scripts

* tag 'mm-nonmm-stable-2025-05-31-15-28' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (67 commits)
  llist: make llist_add_batch() a static inline
  delayacct: remove redundant code and adjust indentation
  squashfs: add optional full compressed block caching
  crash_dump, nvme: select CONFIGFS_FS as built-in
  scripts/gdb/symbols: determine KASLR offset on s390 during early boot
  scripts/gdb/symbols: factor out pagination_off()
  scripts/gdb/symbols: factor out get_vmlinux()
  kernel/panic.c: format kernel-doc comments
  mailmap: update and consolidate Casey Connolly's name and email
  nilfs2: remove wbc->for_reclaim handling
  fork: define a local GFP_VMAP_STACK
  fork: check charging success before zeroing stack
  fork: clean-up naming of vm_stack/vm_struct variables in vmap stacks code
  fork: clean-up ifdef logic around stack allocation
  kernel/rcu/tree_stall: add /sys/kernel/rcu_stall_count
  kernel/watchdog: add /sys/kernel/{hard,soft}lockup_count
  x86/crash: make the page that stores the dm crypt keys inaccessible
  x86/crash: pass dm crypt keys to kdump kernel
  Revert "x86/mm: Remove unused __set_memory_prot()"
  crash_dump: retrieve dm crypt keys in kdump kernel
  ...
2025-05-21  x86/crash: make the page that stores the dm crypt keys inaccessible  (Coiby Xu, 1 file, -0/+22)

This adds an additional layer of protection for the saved copy of the dm crypt key. Trying to access the saved copy will cause a page fault.

Link: https://lkml.kernel.org/r/20250502011246.99238-9-coxu@redhat.com
Signed-off-by: Coiby Xu <coxu@redhat.com>
Suggested-by: Pingfan Liu <kernelfans@gmail.com>
Acked-by: Baoquan He <bhe@redhat.com>
Cc: "Daniel P. Berrange" <berrange@redhat.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: Jan Pazdziora <jpazdziora@redhat.com>
Cc: Milan Broz <gmazyland@gmail.com>
Cc: Ondrej Kozina <okozina@redhat.com>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2025-04-10  x86/kexec: Invalidate GDT/IDT from relocate_kernel() instead of earlier  (David Woodhouse, 1 file, -8/+2)
Reduce the window during which exceptions are unhandled, by leaving the GDT/IDT in place all the way into the relocate_kernel() function, until the moment that %cr3 gets replaced. Signed-off-by: David Woodhouse <dwmw@amazon.co.uk> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Brian Gerst <brgerst@gmail.com> Cc: Juergen Gross <jgross@suse.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Kees Cook <keescook@chromium.org> Cc: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20250326142404.256980-4-dwmw2@infradead.org
2025-04-10  x86/kexec: Add 8250 MMIO serial port output  (David Woodhouse, 1 file, -0/+17)
This supports the same 32-bit MMIO-mapped 8250 as the early_printk code. It's not clear why the early_printk code supports this form and only this form; the actual runtime 8250_pci doesn't seem to support it. But having hacked up QEMU to expose such a device, early_printk does work with it, and now so does the kexec debug code. Signed-off-by: David Woodhouse <dwmw@amazon.co.uk> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Cc: Brian Gerst <brgerst@gmail.com> Cc: Juergen Gross <jgross@suse.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Kees Cook <keescook@chromium.org> Cc: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/r/20250326142404.256980-3-dwmw2@infradead.org
2025-03-25  x86/kexec: Debugging support: Load an IDT and basic exception entry points  (David Woodhouse, 1 file, -0/+21)
[ mingo: Minor readability edits ] Signed-off-by: David Woodhouse <dwmw@amazon.co.uk> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Brian Gerst <brgerst@gmail.com> Cc: Juergen Gross <jgross@suse.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Kees Cook <keescook@chromium.org> Link: https://lore.kernel.org/r/20250314173226.3062535-2-dwmw2@infradead.org
2025-01-14  x86/kexec: Use typedef for relocate_kernel_fn function prototype  (David Woodhouse, 1 file, -7/+2)
Both i386 and x86_64 now copy the relocate_kernel() function into the control page and execute it from there, using an open-coded function pointer. Use a typedef for it instead. [ bp: Put relocate_kernel_ptr ptr arithmetic on a single line. ] Signed-off-by: David Woodhouse <dwmw@amazon.co.uk> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/r/20250109140757.2841269-10-dwmw2@infradead.org
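As a sketch of the idea, assuming a parameter list like the historical relocate_kernel() arguments (the exact prototype here is an assumption, not the verbatim kernel one):

  typedef unsigned long (*relocate_kernel_fn)(unsigned long indirection_page,
  					      unsigned long pa_control_page,
  					      unsigned long start_address,
  					      unsigned int preserve_context,
  					      unsigned int host_mem_enc_active);

  /* machine_kexec() can then invoke the copy in the control page through
   * a properly typed pointer instead of an open-coded cast: */
  static unsigned long call_relocated_copy(void *control_page,
  					   unsigned long page_list,
  					   unsigned long start,
  					   unsigned int preserve_context,
  					   unsigned int host_mem_enc_active)
  {
  	relocate_kernel_fn relocate_kernel_ptr = (relocate_kernel_fn)control_page;

  	return relocate_kernel_ptr(page_list, __pa(control_page), start,
  				   preserve_context, host_mem_enc_active);
  }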
2025-01-14  x86/kexec: Mark machine_kexec() with __nocfi  (David Woodhouse, 1 file, -1/+1)
A recent commit caused the relocate_kernel() function to be invoked through a function pointer, but it does not have CFI information. The resulting trap occurs after the IDT and GDT have been invalidated, leading to a triple-fault if CONFIG_CFI_CLANG is enabled. Using SYM_TYPED_FUNC_START() to provide the CFI information looks like it will require a prolonged battle with objtool. And is fairly pointless anyway, as the actual signature comes from a __kcfi_typeid_… symbol emitted from the C code based on the function prototype it thinks that relocate_kernel has, rendering the check somewhat tautological. The simple fix is just to mark machine_kexec() with __nocfi. Fixes: eeebbde57113 ("x86/kexec: Invoke copy of relocate_kernel() instead of the original") Reported-by: Nathan Chancellor <nathan@kernel.org> Suggested-by: Nathan Chancellor <nathan@kernel.org> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Link: https://lore.kernel.org/r/20250109140757.2841269-7-dwmw2@infradead.org
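In outline, the fix is just the annotation on the caller; the function body itself is unchanged and omitted here:

  #include <linux/compiler.h>	/* __nocfi */

  /* Suppress kCFI checking of the indirect call into the copied
   * relocate_kernel(), which carries no CFI type information. */
  void __nocfi machine_kexec(struct kimage *image)
  {
  	/* ... unchanged: set up the control page and jump to it via a
  	 * function pointer ... */
  }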
2024-12-06  x86/kexec: Mark relocate_kernel page as ROX instead of RWX  (David Woodhouse, 1 file, -1/+2)
All writes to the page now happen before it gets marked as executable (or after it's already switched to the identmap page tables where it's OK to be RWX). Signed-off-by: David Woodhouse <dwmw@amazon.co.uk> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Baoquan He <bhe@redhat.com> Cc: Vivek Goyal <vgoyal@redhat.com> Cc: Dave Young <dyoung@redhat.com> Cc: Eric Biederman <ebiederm@xmission.com> Cc: Ard Biesheuvel <ardb@kernel.org> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Link: https://lore.kernel.org/r/20241205153343.3275139-14-dwmw2@infradead.org
2024-12-06  x86/kexec: Drop page_list argument from relocate_kernel()  (David Woodhouse, 1 file, -11/+7)
The kernel's virtual mapping of the relocate_kernel page currently needs to be RWX because it is written to before the %cr3 switch. Now that the relocate_kernel page has its own .data section and local variables, it can also have *global* variables. So eliminate the separate page_list argument, and write the same information directly to variables in the relocate_kernel page instead. This way, the relocate_kernel code itself doesn't need to copy it. Signed-off-by: David Woodhouse <dwmw@amazon.co.uk> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Baoquan He <bhe@redhat.com> Cc: Vivek Goyal <vgoyal@redhat.com> Cc: Dave Young <dyoung@redhat.com> Cc: Eric Biederman <ebiederm@xmission.com> Cc: Ard Biesheuvel <ardb@kernel.org> Cc: "H. Peter Anvin" <hpa@zytor.com> Link: https://lore.kernel.org/r/20241205153343.3275139-11-dwmw2@infradead.org
2024-12-06  x86/kexec: Add data section to relocate_kernel  (David Woodhouse, 1 file, -1/+7)
Now that the relocate_kernel page is handled sanely by a linker script we can have actual data, and just use %rip-relative addressing to access it. Signed-off-by: David Woodhouse <dwmw@amazon.co.uk> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Baoquan He <bhe@redhat.com> Cc: Vivek Goyal <vgoyal@redhat.com> Cc: Dave Young <dyoung@redhat.com> Cc: Eric Biederman <ebiederm@xmission.com> Cc: Ard Biesheuvel <ardb@kernel.org> Cc: "H. Peter Anvin" <hpa@zytor.com> Link: https://lore.kernel.org/r/20241205153343.3275139-10-dwmw2@infradead.org
2024-12-06  x86/kexec: Move relocate_kernel to kernel .data section  (David Woodhouse, 1 file, -1/+3)

Now that the copy is executed instead of the original, the relocate_kernel page can live in the kernel's .data section. This will allow subsequent commits to actually add real data to it and clean up the code somewhat, as well as making the control page ROX.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Baoquan He <bhe@redhat.com>
Cc: Vivek Goyal <vgoyal@redhat.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: Eric Biederman <ebiederm@xmission.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Link: https://lore.kernel.org/r/20241205153343.3275139-9-dwmw2@infradead.org
2024-12-06  x86/kexec: Invoke copy of relocate_kernel() instead of the original  (David Woodhouse, 1 file, -6/+24)

This currently calls set_memory_x() from machine_kexec_prepare() just like the 32-bit version does. That's actually a bit earlier than I'd like, as it leaves the page RWX the whole time the image is merely *loaded*.

Subsequent commits will eliminate all the writes to the page between the point it's marked executable in machine_kexec_prepare() and the time that relocate_kernel() is running and has switched to the identmap %cr3, so that it can be ROX. But that can't happen until it's moved to the .data section of the kernel, and *that* can't happen until we start executing the copy instead of executing it in place in the kernel .text. So break the circular dependency in those commits by letting it be RWX for now.

Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Baoquan He <bhe@redhat.com>
Cc: Vivek Goyal <vgoyal@redhat.com>
Cc: Dave Young <dyoung@redhat.com>
Cc: Eric Biederman <ebiederm@xmission.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Link: https://lore.kernel.org/r/20241205153343.3275139-8-dwmw2@infradead.org
2024-12-06  x86/kexec: Copy control page into place in machine_kexec_prepare()  (David Woodhouse, 1 file, -6/+4)
There's no need for this to wait until the actual machine_kexec() invocation; future changes will need to make the control page read-only and executable, so all writes should be completed before machine_kexec_prepare() returns. Signed-off-by: David Woodhouse <dwmw@amazon.co.uk> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Baoquan He <bhe@redhat.com> Cc: Vivek Goyal <vgoyal@redhat.com> Cc: Dave Young <dyoung@redhat.com> Cc: Eric Biederman <ebiederm@xmission.com> Cc: Ard Biesheuvel <ardb@kernel.org> Cc: "H. Peter Anvin" <hpa@zytor.com> Link: https://lore.kernel.org/r/20241205153343.3275139-7-dwmw2@infradead.org
2024-12-06  x86/kexec: Allocate PGD for x86_64 transition page tables separately  (David Woodhouse, 1 file, -21/+24)
Now that the following fix: d0ceea662d45 ("x86/mm: Add _PAGE_NOPTISHADOW bit to avoid updating userspace page tables") stops kernel_ident_mapping_init() from scribbling over the end of a 4KiB PGD by assuming the following 4KiB will be a userspace PGD, there's no good reason for the kexec PGD to be part of a single 8KiB allocation with the control_code_page. ( It's not clear that that was the reason for x86_64 kexec doing it that way in the first place either; there were no comments to that effect and it seems to have been the case even before PTI came along. It looks like it was just a happy accident which prevented memory corruption on kexec. ) Either way, it definitely isn't needed now. Just allocate the PGD separately on x86_64, like i386 already does. Signed-off-by: David Woodhouse <dwmw@amazon.co.uk> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Baoquan He <bhe@redhat.com> Cc: Vivek Goyal <vgoyal@redhat.com> Cc: Dave Young <dyoung@redhat.com> Cc: Eric Biederman <ebiederm@xmission.com> Cc: Ard Biesheuvel <ardb@kernel.org> Cc: "H. Peter Anvin" <hpa@zytor.com> Link: https://lore.kernel.org/r/20241205153343.3275139-6-dwmw2@infradead.org
2024-08-05  x86/kexec: Add EFI config table identity mapping for kexec kernel  (Tao Liu, 1 file, -0/+27)
A kexec kernel boot failure is sometimes observed on AMD CPUs due to an unmapped EFI config table array. This can be seen when "nogbpages" is on the kernel command line, and has been observed as a full BIOS reboot rather than a successful kexec. This was also the cause of reported regressions attributed to Commit 7143c5f4cf20 ("x86/mm/ident_map: Use gbpages only where full GB page should be mapped.") which was subsequently reverted. To avoid this page fault, explicitly include the EFI config table array in the kexec identity map. Further explanation: The following 2 commits caused the EFI config table array to be accessed when enabling sev at kernel startup. commit ec1c66af3a30 ("x86/compressed/64: Detect/setup SEV/SME features earlier during boot") commit c01fce9cef84 ("x86/compressed: Add SEV-SNP feature detection/setup") This is in the code that examines whether SEV should be enabled or not, so it can even affect systems that are not SEV capable. This may result in a page fault if the EFI config table array's address is unmapped. Since the page fault occurs before the new kernel establishes its own identity map and page fault routines, it is unrecoverable and kexec fails. Most often, this problem is not seen because the EFI config table array gets included in the map by the luck of being placed at a memory address close enough to other memory areas that *are* included in the map created by kexec. Both the "nogbpages" command line option and the "use gpbages only where full GB page should be mapped" change greatly reduce the chance of being included in the map by luck, which is why the problem appears. Signed-off-by: Tao Liu <ltao@redhat.com> Signed-off-by: Steve Wahl <steve.wahl@hpe.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Pavin Joseph <me@pavinjoseph.com> Tested-by: Sarah Brofeldt <srhb@dbc.dk> Tested-by: Eric Hagberg <ehagberg@gmail.com> Reviewed-by: Ard Biesheuvel <ardb@kernel.org> Link: https://lore.kernel.org/all/20240717213121.3064030-2-steve.wahl@hpe.com
2024-06-03  x86/kexec: Fix bug with call depth tracking  (David Kaplan, 1 file, -2/+9)
The call to cc_platform_has() triggers a fault and system crash if call depth tracking is active because the GS segment has been reset by load_segments() and GS_BASE is now 0 but call depth tracking uses per-CPU variables to operate. Call cc_platform_has() earlier in the function when GS is still valid. [ bp: Massage. ] Fixes: 5d8213864ade ("x86/retbleed: Add SKL return thunk") Signed-off-by: David Kaplan <david.kaplan@amd.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com> Cc: <stable@kernel.org> Link: https://lore.kernel.org/r/20240603083036.637-1-bp@kernel.org
2024-02-23  x86, crash: wrap crash dumping code into crash related ifdefs  (Baoquan He, 1 file, -0/+3)

Now that the crash code under the kernel/ folder has been split out from the kexec code, crash dumping can be separated from kexec reboot in config items on x86 with some adjustments.

Here, also change some ifdefs or IS_ENABLED() checks to more appropriate ones, e.g.:

  - #ifdef CONFIG_KEXEC_CORE        -> #ifdef CONFIG_CRASH_DUMP
  - (!IS_ENABLED(CONFIG_KEXEC_CORE)) -> (!IS_ENABLED(CONFIG_CRASH_RESERVE))

[bhe@redhat.com: don't nest CONFIG_CRASH_DUMP ifdef inside CONFIG_KEXEC_CODE ifdef scope]
Link: https://lore.kernel.org/all/SN6PR02MB4157931105FA68D72E3D3DB8D47B2@SN6PR02MB4157.namprd02.prod.outlook.com/T/#u
Link: https://lkml.kernel.org/r/20240124051254.67105-7-bhe@redhat.com
Signed-off-by: Baoquan He <bhe@redhat.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Eric W. Biederman <ebiederm@xmission.com>
Cc: Hari Bathini <hbathini@linux.ibm.com>
Cc: Pingfan Liu <piliu@redhat.com>
Cc: Klara Modin <klarasmodin@gmail.com>
Cc: Michael Kelley <mhklinux@outlook.com>
Cc: Nathan Chancellor <nathan@kernel.org>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Yang Li <yang.lee@linux.alibaba.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
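The shape of the change, sketched generically (the guarded helpers are placeholders, not the specific functions touched by the patch):

  /* Before: crash-dump-only code was guarded by the kexec config symbol. */
  #ifdef CONFIG_KEXEC_CORE
  /* ... crash dumping helpers ... */
  #endif

  /* After: guarded by the dedicated crash-dump symbol instead, so a
   * CONFIG_KEXEC_CORE=y, CONFIG_CRASH_DUMP=n kernel builds less code. */
  #ifdef CONFIG_CRASH_DUMP
  /* ... crash dumping helpers ... */
  #endif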
2023-12-29  x86/kexec: fix incorrect end address passed to kernel_ident_mapping_init()  (Yuntao Wang, 1 file, -1/+1)
kernel_ident_mapping_init() takes an exclusive memory range [pstart, pend) where pend is not included in the range, while res represents an inclusive memory range [start, end] where end is considered part of the range. Passing [start, end] rather than [start, end+1) to kernel_ident_mapping_init() may result in the identity mapping for the end address not being set up. For example, when res->start is equal to res->end, kernel_ident_mapping_init() will not establish any identity mapping. Similarly, when the value of res->end is a multiple of 2M and the page table maps 2M pages, kernel_ident_mapping_init() will also not set up identity mapping for res->end. Therefore, passing res->end directly to kernel_ident_mapping_init() is incorrect, the correct end address should be `res->end + 1`. Link: https://lkml.kernel.org/r/20231221101702.20956-1-ytcoode@gmail.com Signed-off-by: Yuntao Wang <ytcoode@gmail.com> Cc: Baoquan He <bhe@redhat.com> Cc: Bjorn Helgaas <bhelgaas@google.com> Cc: Borislav Petkov (AMD) <bp@alien8.de> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Simon Horman <horms@kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
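In outline, the fix is the `+ 1` on the end address; the wrapper function below is a hypothetical name used only to frame the call:

  /* kernel_ident_mapping_init() takes an exclusive [pstart, pend) range,
   * struct resource is inclusive [start, end], hence end + 1. */
  static int map_mem_region(struct x86_mapping_info *info, pgd_t *pgd,
  			   struct resource *res)
  {
  	/* before (wrong): ..., res->start, res->end); */
  	return kernel_ident_mapping_init(info, pgd,
  					 res->start, res->end + 1);
  }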
2023-12-20  x86/kexec: simplify the logic of mem_region_callback()  (Yuntao Wang, 1 file, -5/+2)
The expression `mstart + resource_size(res) - 1` is actually equivalent to `res->end`, simplify the logic of this function to improve readability. Link: https://lkml.kernel.org/r/20231212150506.31711-1-ytcoode@gmail.com Signed-off-by: Yuntao Wang <ytcoode@gmail.com> Acked-by: Baoquan He <bhe@redhat.com> Cc: Bjorn Helgaas <bhelgaas@google.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Simon Horman <horms@kernel.org> Cc: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2023-04-08  x86/kexec: remove unnecessary arch_kexec_kernel_image_load()  (Bjorn Helgaas, 1 file, -11/+0)

Patch series "kexec: Remove unnecessary arch hook", v2.

There are no arch-specific things in arch_kexec_kernel_image_load(), so remove it and just use the generic version.

This patch (of 2):

The x86 implementation of arch_kexec_kernel_image_load() is functionally identical to the generic arch_kexec_kernel_image_load():

  arch_kexec_kernel_image_load                 # x86
    if (!image->fops || !image->fops->load)
      return ERR_PTR(-ENOEXEC);
    return image->fops->load(image, image->kernel_buf, ...)

  arch_kexec_kernel_image_load                 # generic
    kexec_image_load_default
      if (!image->fops || !image->fops->load)
        return ERR_PTR(-ENOEXEC);
      return image->fops->load(image, image->kernel_buf, ...)

Remove the x86-specific version and use the generic arch_kexec_kernel_image_load(). No functional change intended.

Link: https://lkml.kernel.org/r/20230307224416.907040-1-helgaas@kernel.org
Link: https://lkml.kernel.org/r/20230307224416.907040-2-helgaas@kernel.org
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Acked-by: Baoquan He <bhe@redhat.com>
Cc: Borislav Petkov (AMD) <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Eric Biederman <ebiederm@xmission.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-06-01  x86/kexec: fix memory leak of elf header buffer  (Baoquan He, 1 file, -3/+9)

This is reported by the kmemleak detector:

  unreferenced object 0xffffc900002a9000 (size 4096):
    comm "kexec", pid 14950, jiffies 4295110793 (age 373.951s)
    hex dump (first 32 bytes):
      7f 45 4c 46 02 01 01 00 00 00 00 00 00 00 00 00  .ELF............
      04 00 3e 00 01 00 00 00 00 00 00 00 00 00 00 00  ..>.............
    backtrace:
      [<0000000016a8ef9f>] __vmalloc_node_range+0x101/0x170
      [<000000002b66b6c0>] __vmalloc_node+0xb4/0x160
      [<00000000ad40107d>] crash_prepare_elf64_headers+0x8e/0xcd0
      [<0000000019afff23>] crash_load_segments+0x260/0x470
      [<0000000019ebe95c>] bzImage64_load+0x814/0xad0
      [<0000000093e16b05>] arch_kexec_kernel_image_load+0x1be/0x2a0
      [<000000009ef2fc88>] kimage_file_alloc_init+0x2ec/0x5a0
      [<0000000038f5a97a>] __do_sys_kexec_file_load+0x28d/0x530
      [<0000000087c19992>] do_syscall_64+0x3b/0x90
      [<0000000066e063a4>] entry_SYSCALL_64_after_hwframe+0x44/0xae

In crash_prepare_elf64_headers(), a buffer is allocated via vmalloc() to store the ELF headers, but it is not freed back to the system correctly when the kdump kernel is reloaded or unloaded, causing a memory leak.

Fix it by introducing the x86-specific function arch_kimage_file_post_load_cleanup() and freeing the buffer there.

Also remove the incorrect ELF header buffer freeing code: before calling the arch-specific kexec_file loading function, the image instance has just been initialized, so 'image->elf_headers' must be NULL, and it doesn't make sense to free the ELF header buffer in that place.

Three different people have reported three bugs about the memory leak on x86_64 inside Red Hat.

Link: https://lkml.kernel.org/r/20220223113225.63106-2-bhe@redhat.com
Signed-off-by: Baoquan He <bhe@redhat.com>
Acked-by: Dave Young <dyoung@redhat.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
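In outline, the new cleanup hook frees the vmalloc()'ed ELF header buffer and then falls through to the generic cleanup; treat this as a sketch rather than the verbatim patch:

  int arch_kimage_file_post_load_cleanup(struct kimage *image)
  {
  	vfree(image->elf_headers);
  	image->elf_headers = NULL;
  	image->elf_headers_sz = 0;

  	return kexec_image_post_load_cleanup_default(image);
  }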
2022-03-15  x86/ibt,kexec: Disable CET on kexec  (Peter Zijlstra, 1 file, -1/+3)
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Josh Poimboeuf <jpoimboe@redhat.com> Link: https://lore.kernel.org/r/20220308154318.641454603@infradead.org
2021-10-04  x86/sev: Replace occurrences of sev_active() with cc_platform_has()  (Tom Lendacky, 1 file, -2/+2)
Replace uses of sev_active() with the more generic cc_platform_has() using CC_ATTR_GUEST_MEM_ENCRYPT. If future support is added for other memory encryption technologies, the use of CC_ATTR_GUEST_MEM_ENCRYPT can be updated, as required. Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20210928191009.32551-7-bp@alien8.de
2021-10-04  x86/sme: Replace occurrences of sme_active() with cc_platform_has()  (Tom Lendacky, 1 file, -7/+8)
Replace uses of sme_active() with the more generic cc_platform_has() using CC_ATTR_HOST_MEM_ENCRYPT. If future support is added for other memory encryption technologies, the use of CC_ATTR_HOST_MEM_ENCRYPT can be updated, as required. This also replaces two usages of sev_active() that are really geared towards detecting if SME is active. Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20210928191009.32551-6-bp@alien8.de
2021-05-21  x86/kexec: Set_[gi]dt() -> native_[gi]dt_invalidate() in machine_kexec_*.c  (H. Peter Anvin (Intel), 1 file, -31/+2)

These files contain private set_gdt() functions which are only used to invalidate the GDT; machine_kexec_64.c also contains a set_idt() function to invalidate the IDT.

phys_to_virt(0) *really* doesn't make any sense for creating an invalid GDT. A NULL pointer (virtual 0) makes a lot more sense; although neither will allow any actual memory reference, a NULL pointer stands out more.

Replace these calls with native_[gi]dt_invalidate().

Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20210519212154.511983-7-hpa@zytor.com
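For reference, a paraphrased sketch of what such invalidation helpers boil down to (load a zero-limit descriptor table at virtual address 0, so any further use faults immediately; not a verbatim copy of the <asm/desc.h> definitions):

  static inline void invalidate_idt(void)
  {
  	const struct desc_ptr idt = { .size = 0, .address = 0 };

  	load_idt(&idt);
  }

  static inline void invalidate_gdt(void)
  {
  	const struct desc_ptr gdt = { .size = 0, .address = 0 };

  	load_gdt(&gdt);
  }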
2021-04-28  Merge tag 'devicetree-for-5.13' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux  (Linus Torvalds, 1 file, -2/+2)

Pull devicetree updates from Rob Herring:

 - Refactor powerpc and arm64 kexec DT handling to common code. This enables IMA on arm64.

 - Add kbuild support for applying DT overlays at build time. The first user is the DT unittests.

 - Fix kerneldoc formatting and W=1 warnings in drivers/of/

 - Fix handling 64-bit flag on PCI resources

 - Bump dtschema version required to v2021.2.1

 - Enable undocumented compatible checks for dtbs_check. This allows tracking of missing binding schemas.

 - DT docs improvements. Regroup the DT docs and add the example schema and DT kernel ABI docs to the doc build.

 - Convert Broadcom Bluetooth and video-mux bindings to schema

 - Add QCom sm8250 Venus video codec binding schema

 - Add vendor prefixes for AESOP, YIC System Co., Ltd, and Siliconfile Technologies Inc.

 - Cleanup of DT schema type references on common properties and standard unit properties

* tag 'devicetree-for-5.13' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux: (64 commits)
  powerpc: If kexec_build_elf_info() fails return immediately from elf64_load()
  powerpc: Free fdt on error in elf64_load()
  of: overlay: Fix kerneldoc warning in of_overlay_remove()
  of: linux/of.h: fix kernel-doc warnings
  of/pci: Add IORESOURCE_MEM_64 to resource flags for 64-bit memory addresses
  dt-bindings: bcm4329-fmac: add optional brcm,ccode-map
  docs: dt: update writing-schema.rst references
  dt-bindings: media: venus: Add sm8250 dt schema
  of: base: Fix spelling issue with function param 'prop'
  docs: dt: Add DT API documentation
  of: Add missing 'Return' section in kerneldoc comments
  of: Fix kerneldoc output formatting
  docs: dt: Group DT docs into relevant sub-sections
  docs: dt: Make 'Devicetree' wording more consistent
  docs: dt: writing-schema: Include the example schema in the doc build
  docs: dt: writing-schema: Remove spurious indentation
  dt-bindings: Fix reference in submitting-patches.rst to the DT ABI doc
  dt-bindings: ddr: Add optional manufacturer and revision ID to LPDDR3
  dt-bindings: media: video-interfaces: Drop the example
  devicetree: bindings: clock: Minor typo fix in the file armada3700-tbg-clock.txt
  ...
2021-03-18  x86: Fix various typos in comments  (Ingo Molnar, 1 file, -1/+1)
Fix ~144 single-word typos in arch/x86/ code comments. Doing this in a single commit should reduce the churn. Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Borislav Petkov <bp@alien8.de> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Bjorn Helgaas <bhelgaas@google.com> Cc: linux-kernel@vger.kernel.org
2021-03-08  x86: Use ELF fields defined in 'struct kimage'  (Lakshmi Ramasubramanian, 1 file, -2/+2)
ELF related fields elf_headers, elf_headers_sz, and elf_load_addr have been moved from 'struct kimage_arch' to 'struct kimage'. Use the ELF fields defined in 'struct kimage'. Suggested-by: Rob Herring <robh@kernel.org> Reported-by: kernel test robot <lkp@intel.com> Signed-off-by: Lakshmi Ramasubramanian <nramas@linux.microsoft.com> Signed-off-by: Rob Herring <robh@kernel.org> Link: https://lore.kernel.org/r/20210221174930.27324-5-nramas@linux.microsoft.com
2020-06-09  mm: don't include asm/pgtable.h if linux/mm.h is already included  (Mike Rapoport, 1 file, -1/+0)

Patch series "mm: consolidate definitions of page table accessors", v2.

The low level page table accessors (pXY_index(), pXY_offset()) are duplicated across all architectures and sometimes more than once. For instance, we have 31 definitions of pgd_offset() for 25 supported architectures.

Most of these definitions are actually identical and typically it boils down to, e.g.

  static inline unsigned long pmd_index(unsigned long address)
  {
          return (address >> PMD_SHIFT) & (PTRS_PER_PMD - 1);
  }

  static inline pmd_t *pmd_offset(pud_t *pud, unsigned long address)
  {
          return (pmd_t *)pud_page_vaddr(*pud) + pmd_index(address);
  }

These definitions can be shared among 90% of the arches provided XYZ_SHIFT, PTRS_PER_XYZ and xyz_page_vaddr() are defined.

For architectures that really need a custom version there is always the possibility to override the generic version with the usual ifdefs magic.

These patches introduce include/linux/pgtable.h that replaces include/asm-generic/pgtable.h and add the definitions of the page table accessors to the new header.

This patch (of 12):

The linux/mm.h header includes <asm/pgtable.h> to allow inlining of the functions involving page table manipulations, e.g. pte_alloc() and pmd_alloc(). So, there is no point to explicitly include <asm/pgtable.h> in the files that include <linux/mm.h>.

The include statements in such cases are removed with a simple loop:

  for f in $(git grep -l "include <linux/mm.h>") ; do
          sed -i -e '/include <asm\/pgtable.h>/ d' $f
  done

Signed-off-by: Mike Rapoport <rppt@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Cain <bcain@codeaurora.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Greentime Hu <green.hu@gmail.com>
Cc: Greg Ungerer <gerg@linux-m68k.org>
Cc: Guan Xuetao <gxt@pku.edu.cn>
Cc: Guo Ren <guoren@kernel.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Helge Deller <deller@gmx.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Ley Foon Tan <ley.foon.tan@intel.com>
Cc: Mark Salter <msalter@redhat.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Nick Hu <nickhu@andestech.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Rich Felker <dalias@libc.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Vincent Chen <deanbo422@gmail.com>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Will Deacon <will@kernel.org>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Link: http://lkml.kernel.org/r/20200514170327.31389-1-rppt@kernel.org
Link: http://lkml.kernel.org/r/20200514170327.31389-2-rppt@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2019-12-23  x86/crash: Define arch_crash_save_vmcoreinfo() if CONFIG_CRASH_CORE=y  (Omar Sandoval, 1 file, -19/+0)

On x86 kernels configured with CONFIG_PROC_KCORE=y and CONFIG_KEXEC_CORE=n, the vmcoreinfo note in /proc/kcore is incomplete. Specifically, it is missing arch-specific information like the KASLR offset and whether 5-level page tables are enabled. This breaks applications like drgn [1] and crash [2], which need this information for live debugging via /proc/kcore.

This happens because:

  1. CONFIG_PROC_KCORE selects CONFIG_CRASH_CORE.
  2. kernel/crash_core.c (compiled if CONFIG_CRASH_CORE=y) calls arch_crash_save_vmcoreinfo() to get the arch-specific parts of vmcoreinfo. If it is not defined, then it uses a no-op fallback.
  3. x86 defines arch_crash_save_vmcoreinfo() in arch/x86/kernel/machine_kexec_*.c, which is only compiled if CONFIG_KEXEC_CORE=y.

Therefore, an x86 kernel with CONFIG_CRASH_CORE=y and CONFIG_KEXEC_CORE=n uses the no-op fallback and gets incomplete vmcoreinfo data. This isn't relevant to kdump, which requires CONFIG_KEXEC_CORE. It only affects applications which read vmcoreinfo at runtime, like the ones mentioned above.

Fix it by moving arch_crash_save_vmcoreinfo() into two new arch/x86/kernel/crash_core_*.c files, which are gated behind CONFIG_CRASH_CORE.

1: https://github.com/osandov/drgn/blob/73dd7def1217e24cc83d8ca95c995decbd9ba24c/libdrgn/program.c#L385
2: https://github.com/crash-utility/crash/commit/60a42d709280cdf38ab06327a5b4fa9d9208ef86

Signed-off-by: Omar Sandoval <osandov@fb.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Kairui Song <kasong@redhat.com>
Cc: Lianbo Jiang <lijiang@redhat.com>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86-ml <x86@kernel.org>
Link: https://lkml.kernel.org/r/0589961254102cca23e3618b96541b89f2b249e2.1576858905.git.osandov@fb.com
2019-11-14  x86/kdump: Remove the backup region handling  (Lianbo Jiang, 1 file, -47/+0)
When the crashkernel kernel command line option is specified, the low 1M memory will always be reserved now. Therefore, it's not necessary to create a backup region anymore and also no need to copy the contents of the first 640k to it. Remove all the code related to handling that backup region. [ bp: Massage commit message. ] Signed-off-by: Lianbo Jiang <lijiang@redhat.com> Signed-off-by: Borislav Petkov <bp@suse.de> Cc: bhe@redhat.com Cc: Dave Young <dyoung@redhat.com> Cc: d.hatayama@fujitsu.com Cc: dhowells@redhat.com Cc: ebiederm@xmission.com Cc: horms@verge.net.au Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jürgen Gross <jgross@suse.com> Cc: kexec@lists.infradead.org Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tom Lendacky <thomas.lendacky@amd.com> Cc: vgoyal@redhat.com Cc: x86-ml <x86@kernel.org> Link: https://lkml.kernel.org/r/20191108090027.11082-3-lijiang@redhat.com
2019-07-09  Merge branch 'x86-kdump-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds, 1 file, -3/+28)

Pull x86 kdump updates from Thomas Gleixner:
 "Yet more kexec/kdump updates:

   - Properly support kexec when AMD's memory encryption (SME) is enabled

   - Pass reserved e820 ranges to the kexec kernel so both PCI and SME can work"

* 'x86-kdump-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  fs/proc/vmcore: Enable dumping of encrypted memory when SEV was active
  x86/kexec: Set the C-bit in the identity map page table when SEV is active
  x86/kexec: Do not map kexec area as decrypted when SEV is active
  x86/crash: Add e820 reserved ranges to kdump kernel's e820 table
  x86/mm: Rework ioremap resource mapping determination
  x86/e820, ioport: Add a new I/O resource descriptor IORES_DESC_RESERVED
  x86/mm: Create a workarea in the kernel for SME early encryption
  x86/mm: Identify the end of the kernel area to be reserved
2019-07-09  Merge branch 'x86-boot-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds, 1 file, -0/+87)

Pull x86 boot updates from Thomas Gleixner:
 "Assorted updates to kexec/kdump:

   - Proper kexec support for 4/5-level paging and jumping from a 5-level to a 4-level paging kernel.

   - Make the EFI support for kexec/kdump more robust

   - Enforce that the GDT is properly aligned instead of getting the alignment by chance"

* 'x86-boot-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/kdump/64: Restrict kdump kernel reservation to <64TB
  x86/kexec/64: Prevent kexec from 5-level paging to a 4-level only kernel
  x86/boot: Add xloadflags bits to check for 5-level paging support
  x86/boot: Make the GDT 8-byte aligned
  x86/kexec: Add the ACPI NVS region to the ident map
  x86/boot: Call get_rsdp_addr() after console_init()
  Revert "x86/boot: Disable RSDP parsing temporarily"
  x86/boot: Use efi_setup_data for searching RSDP on kexec-ed kernels
  x86/kexec: Add the EFI system tables and ACPI tables to the ident map
2019-06-20  x86/kexec: Set the C-bit in the identity map page table when SEV is active  (Lianbo Jiang, 1 file, -3/+13)
When SEV is active, the second kernel image is loaded into encrypted memory. For that, make sure that when kexec builds the identity mapping page table, the memory is encrypted (i.e., _PAGE_ENC is set). [ bp: Sort local args and OR in _PAGE_ENC for more clarity. ] Co-developed-by: Brijesh Singh <brijesh.singh@amd.com> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com> Signed-off-by: Lianbo Jiang <lijiang@redhat.com> Signed-off-by: Borislav Petkov <bp@suse.de> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: bhe@redhat.com Cc: dyoung@redhat.com Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: kexec@lists.infradead.org Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tom Lendacky <thomas.lendacky@amd.com> Cc: x86-ml <x86@kernel.org> Link: https://lkml.kernel.org/r/20190430074421.7852-3-lijiang@redhat.com
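A sketch of the idea described above: when the guest runs encrypted, build the kexec identity mapping with the encryption bit in the page-table flags. The helper name is illustrative, and the encryption check is written with the cc_platform_has() form that later commits in this log converge on:

  static void init_kexec_ident_mapping_info(struct x86_mapping_info *info)
  {
  	info->page_flag = __PAGE_KERNEL_LARGE_EXEC;

  	/* Encrypt the identity mapping when memory encryption is active. */
  	if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
  		info->page_flag |= _PAGE_ENC;
  }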
2019-06-20  x86/kexec: Do not map kexec area as decrypted when SEV is active  (Lianbo Jiang, 1 file, -0/+15)
When a virtual machine panics, its memory needs to be dumped for analysis. With memory encryption in the picture, special care must be taken when loading a kexec/kdump kernel in a SEV guest. A SEV guest starts and runs fully encrypted. In order to load a kexec kernel and initrd, arch_kexec_post_{alloc,free}_pages() need to not map areas as decrypted unconditionally but differentiate whether the kernel is running as a SEV guest and if so, leave kexec area encrypted. [ bp: Reduce commit message to the relevant information pertaining to this commit only. ] Co-developed-by: Brijesh Singh <brijesh.singh@amd.com> Signed-off-by: Brijesh Singh <brijesh.singh@amd.com> Signed-off-by: Lianbo Jiang <lijiang@redhat.com> Signed-off-by: Borislav Petkov <bp@suse.de> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: bhe@redhat.com Cc: Brijesh Singh <brijesh.singh@amd.com> Cc: dyoung@redhat.com Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: kexec@lists.infradead.org Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tom Lendacky <thomas.lendacky@amd.com> Cc: x86-ml <x86@kernel.org> Link: https://lkml.kernel.org/r/20190430074421.7852-2-lijiang@redhat.com
2019-06-19  treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 230  (Thomas Gleixner, 1 file, -3/+1)
Based on 2 normalized pattern(s): this source code is licensed under the gnu general public license version 2 see the file copying for more details this source code is licensed under general public license version 2 see extracted by the scancode license scanner the SPDX license identifier GPL-2.0-only has been chosen to replace the boilerplate/reference in 52 file(s). Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Enrico Weigelt <info@metux.net> Reviewed-by: Allison Randal <allison@lohutok.net> Reviewed-by: Alexios Zavras <alexios.zavras@intel.com> Cc: linux-spdx@vger.kernel.org Link: https://lkml.kernel.org/r/20190602204653.449021192@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-06-10  x86/kexec: Add the ACPI NVS region to the ident map  (Kairui Song, 1 file, -3/+15)
With the recent addition of RSDP parsing in the decompression stage, a kexec-ed kernel now needs ACPI tables to be covered by the identity mapping. And in commit 6bbeb276b71f ("x86/kexec: Add the EFI system tables and ACPI tables to the ident map") the ACPI tables memory region was added to the ident map. But some machines have only an ACPI NVS memory region and the ACPI tables are located in that region. In such case, the kexec-ed kernel will still fail when trying to access ACPI tables if they're not mapped. So add the NVS memory region to the ident map as well. [ bp: Massage. ] Fixes: 6bbeb276b71f ("x86/kexec: Add the EFI system tables and ACPI tables to the ident map") Suggested-by: Junichi Nomura <j-nomura@ce.jp.nec.com> Signed-off-by: Kairui Song <kasong@redhat.com> Signed-off-by: Borislav Petkov <bp@suse.de> Tested-by: Junichi Nomura <j-nomura@ce.jp.nec.com> Cc: Baoquan He <bhe@redhat.com> Cc: Chao Fan <fanc.fnst@cn.fujitsu.com> Cc: Dave Young <dyoung@redhat.com> Cc: Dirk van der Merwe <dirk.vandermerwe@netronome.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: kexec@lists.infradead.org Cc: Lianbo Jiang <lijiang@redhat.com> Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: x86-ml <x86@kernel.org> Link: https://lkml.kernel.org/r/20190610073617.19767-1-kasong@redhat.com
2019-06-06  x86/kexec: Add the EFI system tables and ACPI tables to the ident map  (Kairui Song, 1 file, -0/+75)

Currently, only the whole physical memory is identity-mapped for the kexec kernel and the regions reserved by firmware are ignored.

However, the recent addition of RSDP parsing in the decompression stage and especially:

  33f0df8d843d ("x86/boot: Search for RSDP in the EFI tables")

which tries to access EFI system tables and to dig out the RSDP address from there, becomes a problem because in certain configurations, they might not be mapped in the kexec'ed kernel's address space.

What is more, this problem doesn't appear on all systems because the kexec kernel uses gigabyte pages to build the identity mapping. And the EFI system tables and ACPI tables can, depending on the system configuration, end up being mapped as part of all physical memory, if they share the same 1 GB area with the physical memory.

Therefore, make sure they're always mapped.

[ bp: productize half-baked patch:
   - rewrite commit message.
   - correct the map_acpi_tables() function name in the !ACPI case. ]

Signed-off-by: Kairui Song <kasong@redhat.com>
Signed-off-by: Baoquan He <bhe@redhat.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Tested-by: Dirk van der Merwe <dirk.vandermerwe@netronome.com>
Cc: dyoung@redhat.com
Cc: fanc.fnst@cn.fujitsu.com
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: j-nomura@ce.jp.nec.com
Cc: kexec@lists.infradead.org
Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Lianbo Jiang <lijiang@redhat.com>
Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86-ml <x86@kernel.org>
Link: https://lkml.kernel.org/r/20190429002318.GA25400@MiWiFi-R3L-srv
2019-01-11  x86/kdump: Export the SME mask to vmcoreinfo  (Lianbo Jiang, 1 file, -0/+3)
On AMD SME machines, makedumpfile tools need to know whether the crashed kernel was encrypted. If SME is enabled in the first kernel, the crashed kernel's page table entries (pgd/pud/pmd/pte) contain the memory encryption mask which makedumpfile needs to remove in order to obtain the true physical address. Export that mask in a vmcoreinfo variable. [ bp: Massage commit message and move define at the end of the function. ] Signed-off-by: Lianbo Jiang <lijiang@redhat.com> Signed-off-by: Borislav Petkov <bp@suse.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Baoquan He <bhe@redhat.com> Cc: Dave Young <dyoung@redhat.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tom Lendacky <thomas.lendacky@amd.com> Cc: anderson@redhat.com Cc: k-hagio@ab.jp.nec.com Cc: kexec@lists.infradead.org Cc: linux-doc@vger.kernel.org Cc: x86-ml <x86@kernel.org> Link: https://lkml.kernel.org/r/20190110121944.6050-3-lijiang@redhat.com
2018-05-19  x86/mm: Stop pretending pgtable_l5_enabled is a variable  (Kirill A. Shutemov, 1 file, -1/+2)

pgtable_l5_enabled is defined using cpu_feature_enabled() but we refer to it as a variable. This is misleading.

Make pgtable_l5_enabled() a function. We cannot literally define it as a function due to circular dependencies between header files. A function-like macro is close enough.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Hugh Dickins <hughd@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/20180518103528.59260-4-kirill.shutemov@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
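In outline, the function-like macro makes call sites read like a query rather than a bare variable:

  /* Sketch of the shape of the change. */
  #define pgtable_l5_enabled()	cpu_feature_enabled(X86_FEATURE_LA57)

  	/* usage: */
  	if (pgtable_l5_enabled())
  		/* build 5-level page tables */;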
2018-05-13  x86/kexec: Avoid double free_page() upon do_kexec_load() failure  (Tetsuo Handa, 1 file, -1/+4)

syzbot is reporting crashes after memory allocation failure inside do_kexec_load() [1]. This is because free_transition_pgtable() is called by both init_transition_pgtable() and machine_kexec_cleanup() when memory allocation failed inside init_transition_pgtable().

Regarding 32-bit code, machine_kexec_free_page_tables() is called by both machine_kexec_alloc_page_tables() and machine_kexec_cleanup() when memory allocation failed inside machine_kexec_alloc_page_tables().

Fix this by leaving the error handling to machine_kexec_cleanup() (and optionally setting NULL after free_page()).

[1] https://syzkaller.appspot.com/bug?id=91e52396168cf2bdd572fe1e1bc0bc645c1c6b40

Fixes: f5deb79679af6eb4 ("x86: kexec: Use one page table in x86_64 machine_kexec")
Fixes: 92be3d6bdf2cb349 ("kexec/i386: allocate page table pages dynamically")
Reported-by: syzbot <syzbot+d96f60296ef613fe1d69@syzkaller.appspotmail.com>
Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Baoquan He <bhe@redhat.com>
Cc: thomas.lendacky@amd.com
Cc: prudo@linux.vnet.ibm.com
Cc: Huang Ying <ying.huang@intel.com>
Cc: syzkaller-bugs@googlegroups.com
Cc: takahiro.akashi@linaro.org
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: akpm@linux-foundation.org
Cc: dyoung@redhat.com
Cc: kirill.shutemov@linux.intel.com
Link: https://lkml.kernel.org/r/201805091942.DGG12448.tMFVFSJFQOOLHO@I-love.SAKURA.ne.jp
2018-04-13  kernel/kexec_file.c: remove mis-use of sh_offset field during purgatory load  (Philipp Rudo, 1 file, -4/+6)

The current code uses the sh_offset field in purgatory_info->sechdrs to store a pointer to the current load address of the section. Depending on whether the section will be loaded or not, this is either a pointer into purgatory_info->purgatory_buf or kexec_purgatory. This is not only a violation of the ELF standard but also makes the code very hard to understand, as you cannot tell if the memory you are using is read-only or not.

Remove this misuse and store the offset of the section in purgatory_info->purgatory_buf in sh_offset.

Link: http://lkml.kernel.org/r/20180321112751.22196-10-prudo@linux.vnet.ibm.com
Signed-off-by: Philipp Rudo <prudo@linux.vnet.ibm.com>
Acked-by: Dave Young <dyoung@redhat.com>
Cc: AKASHI Takahiro <takahiro.akashi@linaro.org>
Cc: Eric Biederman <ebiederm@xmission.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Thiago Jung Bauermann <bauerman@linux.vnet.ibm.com>
Cc: Vivek Goyal <vgoyal@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-04-13  kernel/kexec_file.c: use read-only sections in arch_kexec_apply_relocations*  (Philipp Rudo, 1 file, -36/+20)
When the relocations are applied to the purgatory only the section the relocations are applied to is writable. The other sections, i.e. the symtab and .rel/.rela, are in read-only kexec_purgatory. Highlight this by marking the corresponding variables as 'const'. While at it also change the signatures of arch_kexec_apply_relocations* to take section pointers instead of just the index of the relocation section. This removes the second lookup and sanity check of the sections in arch code. Link: http://lkml.kernel.org/r/20180321112751.22196-6-prudo@linux.vnet.ibm.com Signed-off-by: Philipp Rudo <prudo@linux.vnet.ibm.com> Acked-by: Dave Young <dyoung@redhat.com> Cc: AKASHI Takahiro <takahiro.akashi@linaro.org> Cc: Eric Biederman <ebiederm@xmission.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Thiago Jung Bauermann <bauerman@linux.vnet.ibm.com> Cc: Vivek Goyal <vgoyal@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-04-13  kexec_file,x86,powerpc: factor out kexec_file_ops functions  (AKASHI Takahiro, 1 file, -43/+2)
As arch_kexec_kernel_image_{probe,load}(), arch_kimage_file_post_load_cleanup() and arch_kexec_kernel_verify_sig() are almost duplicated among architectures, they can be commonalized with an architecture-defined kexec_file_ops array. So let's factor them out. Link: http://lkml.kernel.org/r/20180306102303.9063-3-takahiro.akashi@linaro.org Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org> Acked-by: Dave Young <dyoung@redhat.com> Tested-by: Dave Young <dyoung@redhat.com> Cc: Vivek Goyal <vgoyal@redhat.com> Cc: Baoquan He <bhe@redhat.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Thiago Jung Bauermann <bauerman@linux.vnet.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-04-02  Merge branch 'x86-mm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip  (Linus Torvalds, 1 file, -0/+1)

Pull x86 mm updates from Ingo Molnar:

 - Extend the memmap= boot parameter syntax to allow the redeclaration and dropping of existing ranges, and to support all e820 range types (Jan H. Schönherr)

 - Improve the W+X boot time security checks to remove false positive warnings on Xen (Jan Beulich)

 - Support booting as Xen PVH guest (Juergen Gross)

 - Improved 5-level paging (LA57) support, in particular it's possible now to have a single kernel image for both 4-level and 5-level hardware (Kirill A. Shutemov)

 - AMD hardware RAM encryption support (SME/SEV) fixes (Tom Lendacky)

 - Preparatory commits for hardware-encrypted RAM support on Intel CPUs. (Kirill A. Shutemov)

 - Improved Intel-MID support (Andy Shevchenko)

 - Show EFI page tables in page_tables debug files (Andy Lutomirski)

 - ... plus misc fixes and smaller cleanups

* 'x86-mm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (56 commits)
  x86/cpu/tme: Fix spelling: "configuation" -> "configuration"
  x86/boot: Fix SEV boot failure from change to __PHYSICAL_MASK_SHIFT
  x86/mm: Update comment in detect_tme() regarding x86_phys_bits
  x86/mm/32: Remove unused node_memmap_size_bytes() & CONFIG_NEED_NODE_MEMMAP_SIZE logic
  x86/mm: Remove pointless checks in vmalloc_fault
  x86/platform/intel-mid: Add special handling for ACPI HW reduced platforms
  ACPI, x86/boot: Introduce the ->reduced_hw_early_init() ACPI callback
  ACPI, x86/boot: Split out acpi_generic_reduce_hw_init() and export
  x86/pconfig: Provide defines and helper to run MKTME_KEY_PROG leaf
  x86/pconfig: Detect PCONFIG targets
  x86/tme: Detect if TME and MKTME is activated by BIOS
  x86/boot/compressed/64: Handle 5-level paging boot if kernel is above 4G
  x86/boot/compressed/64: Use page table in trampoline memory
  x86/boot/compressed/64: Use stack from trampoline memory
  x86/boot/compressed/64: Make sure we have a 32-bit code segment
  x86/mm: Do not use paravirtualized calls in native_set_p4d()
  kdump, vmcoreinfo: Export pgtable_l5_enabled value
  x86/boot/compressed/64: Prepare new top-level page table for trampoline
  x86/boot/compressed/64: Set up trampoline memory
  x86/boot/compressed/64: Save and restore trampoline memory
  ...