path: root/drivers/block
Age    Commit message    Author    Lines
5 days    Merge tag 'mm-stable-2026-04-18-02-14' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm    Linus Torvalds    -1/+4

Pull more MM updates from Andrew Morton:

 - "Eliminate Dying Memory Cgroup" (Qi Zheng and Muchun Song)
   Address the longstanding "dying memcg problem": a situation wherein a
   no-longer-used memory control group hangs around for an extended
   period, pointlessly consuming memory

 - "fix unexpected type conversions and potential overflows" (Qi Zheng)
   Fix a couple of potential 32-bit/64-bit issues which were identified
   during review of the "Eliminate Dying Memory Cgroup" series

 - "kho: history: track previous kernel version and kexec boot count" (Breno Leitao)
   Use Kexec Handover (KHO) to pass the previous kernel's version string
   and the number of kexec reboots since the last cold boot to the next
   kernel, and print it at boot time

 - "liveupdate: prevent double preservation" (Pasha Tatashin)
   Teach LUO to avoid managing the same file across different active
   sessions

 - "liveupdate: Fix module unloading and unregister API" (Pasha Tatashin)
   Address an issue with how LUO handles module reference counting and
   unregistration during module unloading

 - "zswap pool per-CPU acomp_ctx simplifications" (Kanchana Sridhar)
   Simplify and clean up the zswap crypto compression handling and
   improve the lifecycle management of the zswap pool's per-CPU
   acomp_ctx resources

 - "mm/damon/core: fix damon_call()/damos_walk() vs kdamond exit race" (SeongJae Park)
   Address unlikely but possible leaks and deadlocks in damon_call()
   and damos_walk()

 - "mm/damon/core: validate damos_quota_goal->nid" (SeongJae Park)
   Fix a couple of root-only wild pointer dereferences

 - "Docs/admin-guide/mm/damon: warn commit_inputs vs other params race" (SeongJae Park)
   Update the DAMON documentation to warn operators about potential
   races which can occur if the commit_inputs parameter is altered at
   the wrong time

 - "Minor hmm_test fixes and cleanups" (Alistair Popple)
   Bugfixes and a cleanup for the HMM kernel selftests

 - "Modify memfd_luo code" (Chenghao Duan)
   Cleanups, simplifications and speedups to the memfd_luo code

 - "mm, kvm: allow uffd support in guest_memfd" (Mike Rapoport)
   Support for userfaultfd in guest_memfd

 - "selftests/mm: skip several tests when thp is not available" (Chunyu Hu)
   Fix several issues in the selftests code which were causing breakage
   when the tests were run on CONFIG_THP=n kernels

 - "mm/mprotect: micro-optimization work" (Pedro Falcato)
   A couple of nice speedups for mprotect()

 - "MAINTAINERS: update KHO and LIVE UPDATE entries" (Pratyush Yadav)
   Document upcoming changes in the maintenance of KHO, LUO, memfd_luo,
   kexec, crash, kdump and probably other kexec-based things - they are
   being moved out of mm.git and into a new git tree

* tag 'mm-stable-2026-04-18-02-14' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (121 commits)
  MAINTAINERS: add page cache reviewer
  mm/vmscan: avoid false-positive -Wuninitialized warning
  MAINTAINERS: update Dave's kdump reviewer email address
  MAINTAINERS: drop include/linux/liveupdate from LIVE UPDATE
  MAINTAINERS: drop include/linux/kho/abi/ from KHO
  MAINTAINERS: update KHO and LIVE UPDATE maintainers
  MAINTAINERS: update kexec/kdump maintainers entries
  mm/migrate_device: remove dead migration entry check in migrate_vma_collect_huge_pmd()
  selftests: mm: skip charge_reserved_hugetlb without killall
  userfaultfd: allow registration of ranges below mmap_min_addr
  mm/vmstat: fix vmstat_shepherd double-scheduling vmstat_update
  mm/hugetlb: fix early boot crash on parameters without '=' separator
  zram: reject unrecognized type= values in recompress_store()
  docs: proc: document ProtectionKey in smaps
  mm/mprotect: special-case small folios when applying permissions
  mm/mprotect: move softleaf code out of the main function
  mm: remove '!root_reclaim' checking in should_abort_scan()
  mm/sparse: fix comment for section map alignment
  mm/page_io: use sio->len for PSWPIN accounting in sio_read_complete()
  selftests/mm: transhuge_stress: skip the test when thp not available
  ...
6 days    zram: reject unrecognized type= values in recompress_store()    Andrew Stellman    -0/+2

recompress_store() parses the type= parameter with three if statements
checking for "idle", "huge", and "huge_idle". An unrecognized value
silently falls through with mode left at 0, causing the recompression
pass to run with no slot filter, processing all slots instead of the
intended subset.

Add a !mode check after the type parsing block to return -EINVAL for
unrecognized values, consistent with the function's other parameter
validation.

Link: https://lore.kernel.org/20260407153027.42425-1-astellman@stellman-greene.com
Signed-off-by: Andrew Stellman <astellman@stellman-greene.com>
Suggested-by: Sergey Senozhatsky <senozhatsky@chromium.org>
Reviewed-by: Sergey Senozhatsky <senozhatsky@chromium.org>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Minchan Kim <minchan@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
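The fix is easy to model outside the kernel. Below is a stand-alone C sketch of the type= parsing with the added !mode check; the RECOMPRESS_* values and the function name are illustrative stand-ins, not the driver's actual definitions:

```c
#include <assert.h>
#include <errno.h>
#include <string.h>

/* Illustrative stand-ins for the driver's slot-filter bits. */
#define RECOMPRESS_IDLE (1 << 0)
#define RECOMPRESS_HUGE (1 << 1)

/* Stand-alone model of recompress_store()'s type= parsing. */
static int parse_recompress_type(const char *type)
{
	int mode = 0;

	if (type && !strcmp(type, "idle"))
		mode = RECOMPRESS_IDLE;
	if (type && !strcmp(type, "huge"))
		mode = RECOMPRESS_HUGE;
	if (type && !strcmp(type, "huge_idle"))
		mode = RECOMPRESS_IDLE | RECOMPRESS_HUGE;

	/*
	 * The fix: an unrecognized value no longer falls through with
	 * mode == 0, which would have meant "no slot filter".
	 */
	if (type && !mode)
		return -EINVAL;
	return mode;
}
```

Without the final check, `type=bogus` and no `type=` at all were indistinguishable, which is exactly the silent fall-through the patch closes.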
6 days    zram: do not forget to endio for partial discard requests    Sergey Senozhatsky    -1/+2

As reported by Qu Wenruo and Avinesh Kumar, the following

	getconf PAGESIZE
	65536
	blkdiscard -p 4k /dev/zram0

takes literally forever to complete. zram doesn't support partial
discards and just returns immediately w/o doing any discard work in
such cases. The problem is that we forget to endio on our way out, so
blkdiscard sleeps forever in submit_bio_wait().

Fix this by jumping to the end_bio label, which does bio_endio().

Link: https://lore.kernel.org/20260331074255.777019-1-senozhatsky@chromium.org
Fixes: 0120dd6e4e20 ("zram: make zram_bio_discard more self-contained")
Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
Reported-by: Qu Wenruo <wqu@suse.com>
Closes: https://lore.kernel.org/linux-block/92361cd3-fb8b-482e-bc89-15ff1acb9a59@suse.com
Tested-by: Qu Wenruo <wqu@suse.com>
Reported-by: Avinesh Kumar <avinesh.kumar@suse.com>
Closes: https://bugzilla.suse.com/show_bug.cgi?id=1256530
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Brian Geffon <bgeffon@google.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Minchan Kim <minchan@kernel.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
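The hang is a pure control-flow bug, which a toy model can illustrate. Everything below (struct bio, bio_endio()) is a minimal stand-in for the real kernel types, showing why a bare return left the submit_bio_wait() waiter asleep and why jumping to the end_bio label fixes it:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy stand-in: real struct bio completion is far more involved. */
struct bio {
	bool done;
};

static void bio_endio(struct bio *bio)
{
	bio->done = true;	/* wakes the submit_bio_wait() waiter */
}

/* Sketch of the fixed control flow in zram's discard path. */
static void zram_bio_discard(struct bio *bio, bool partial)
{
	if (partial)
		goto end_bio;	/* the fix: this used to be a bare return */

	/* ... actual discard work elided ... */

end_bio:
	bio_endio(bio);
}
```

With the bare `return`, a partial discard bio was never completed, so `blkdiscard` on a 64K-page kernel (where 4k discards are always partial) slept forever.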
9 days    Merge tag 'mm-stable-2026-04-13-21-45' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm    Linus Torvalds    -154/+156

Pull MM updates from Andrew Morton:

 - "maple_tree: Replace big node with maple copy" (Liam Howlett)
   Mainly preparatory work for ongoing development, but it does reduce
   stack usage and is an improvement

 - "mm, swap: swap table phase III: remove swap_map" (Kairui Song)
   Offers memory savings by removing the static swap_map. It also
   yields some CPU savings and implements several cleanups

 - "mm: memfd_luo: preserve file seals" (Pratyush Yadav)
   File seal preservation for LUO's memfd code

 - "mm: zswap: add per-memcg stat for incompressible pages" (Jiayuan Chen)
   Additional userspace stats reporting for zswap

 - "arch, mm: consolidate empty_zero_page" (Mike Rapoport)
   Some cleanups for our handling of ZERO_PAGE() and zero_pfn

 - "mm/kmemleak: Improve scan_should_stop() implementation" (Zhongqiu Han)
   A robustness improvement and some cleanups in the kmemleak code

 - "Improve khugepaged scan logic" (Vernon Yang)
   Improve khugepaged scan logic and reduce CPU consumption by
   prioritizing scanning tasks that access memory frequently

 - "Make KHO Stateless" (Jason Miu)
   Simplify Kexec Handover by transitioning KHO from an xarray-based
   metadata tracking system with serialization to a radix tree data
   structure that can be passed directly to the next kernel

 - "mm: vmscan: add PID and cgroup ID to vmscan tracepoints" (Thomas Ballasi and Steven Rostedt)
   Enhance vmscan's tracepointing

 - "mm: arch/shstk: Common shadow stack mapping helper and VM_NOHUGEPAGE" (Catalin Marinas)
   Cleanup for the shadow stack code: remove per-arch code in favour of
   a generic implementation

 - "Fix KASAN support for KHO restored vmalloc regions" (Pasha Tatashin)
   Fix a WARN() which can be emitted when KHO restores a vmalloc area

 - "mm: Remove stray references to pagevec" (Tal Zussman)
   Several cleanups, mainly updating references to "struct pagevec",
   which became folio_batch three years ago

 - "mm: Eliminate fake head pages from vmemmap optimization" (Kiryl Shutsemau)
   Simplify the HugeTLB vmemmap optimization (HVO) by changing how tail
   pages encode their relationship to the head page

 - "mm/damon/core: improve DAMOS quota efficiency for core layer filters" (SeongJae Park)
   Improve two problematic behaviors of DAMOS that make it less
   efficient when core layer filters are used

 - "mm/damon: strictly respect min_nr_regions" (SeongJae Park)
   Improve DAMON usability by extending the treatment of the
   min_nr_regions user-settable parameter

 - "mm/page_alloc: pcp locking cleanup" (Vlastimil Babka)
   The proper fix for a previously hotfixed SMP=n issue. Code
   simplifications and cleanups ensued

 - "mm: cleanups around unmapping / zapping" (David Hildenbrand)
   A bunch of cleanups around unmapping and zapping. Mostly
   simplifications, code movements, documentation and renaming of
   zapping functions

 - "support batched checking of the young flag for MGLRU" (Baolin Wang)
   Batched checking of the young flag for MGLRU. Partly cleanups; one
   benchmark shows large performance benefits on arm64

 - "memcg: obj stock and slab stat caching cleanups" (Johannes Weiner)
   memcg cleanup and robustness improvements

 - "Allow order zero pages in page reporting" (Yuvraj Sakshith)
   Enhance free page reporting - it presently and undesirably excludes
   order-0 pages when reporting free memory

 - "mm: vma flag tweaks" (Lorenzo Stoakes)
   Cleanup work following from the recent conversion of the VMA flags
   to a bitmap

 - "mm/damon: add optional debugging-purpose sanity checks" (SeongJae Park)
   Add some more developer-facing debug checks into DAMON core

 - "mm/damon: test and document power-of-2 min_region_sz requirement" (SeongJae Park)
   Adds a DAMON kunit test and makes some adjustments to the addr_unit
   parameter handling

 - "mm/damon/core: make passed_sample_intervals comparisons overflow-safe" (SeongJae Park)
   Fix a hard-to-hit time overflow issue in DAMON core

 - "mm/damon: improve/fixup/update ratio calculation, test and documentation" (SeongJae Park)
   A batch of misc/minor improvements and fixups for DAMON

 - "mm: move vma_(kernel|mmu)_pagesize() out of hugetlb.c" (David Hildenbrand)
   Fix a possible issue with dax-device when CONFIG_HUGETLB=n. Some
   code movement was required

 - "zram: recompression cleanups and tweaks" (Sergey Senozhatsky)
   A somewhat random mix of fixups, recompression cleanups and
   improvements in the zram code

 - "mm/damon: support multiple goal-based quota tuning algorithms" (SeongJae Park)
   Extend DAMOS quotas goal auto-tuning to support multiple tuning
   algorithms that users can select

 - "mm: thp: reduce unnecessary start_stop_khugepaged()" (Breno Leitao)
   Fix the khugepaged sysfs handling so we no longer spam the logs with
   reams of junk when starting/stopping khugepaged

 - "mm: improve map count checks" (Lorenzo Stoakes)
   Provide some cleanups and slight fixes in the mremap, mmap and vma
   code

 - "mm/damon: support addr_unit on default monitoring targets for modules" (SeongJae Park)
   Extend the use of DAMON core's addr_unit tunable

 - "mm: khugepaged cleanups and mTHP prerequisites" (Nico Pache)
   Cleanups to khugepaged, forming a base for Nico's planned khugepaged
   mTHP support

 - "mm: memory hot(un)plug and SPARSEMEM cleanups" (David Hildenbrand)
   Code movement and cleanups in the memhotplug and sparsemem code

 - "mm: remove CONFIG_ARCH_ENABLE_MEMORY_HOTREMOVE and cleanup CONFIG_MIGRATION" (David Hildenbrand)
   Rationalize some memhotplug Kconfig support

 - "change young flag check functions to return bool" (Baolin Wang)
   Cleanups to change all young flag check functions to return bool

 - "mm/damon/sysfs: fix memory leak and NULL dereference issues" (Josh Law and SeongJae Park)
   Fix a few potential DAMON bugs

 - "mm/vma: convert vm_flags_t to vma_flags_t in vma code" (Lorenzo Stoakes)
   Convert a lot of the existing use of the legacy vm_flags_t data type
   to the new vma_flags_t type which replaces it. Mainly in the vma
   code

 - "mm: expand mmap_prepare functionality and usage" (Lorenzo Stoakes)
   Expand the mmap_prepare functionality, which is intended to replace
   the deprecated f_op->mmap hook which has been the source of bugs and
   security issues for some time. Cleanups, documentation, extension of
   mmap_prepare into filesystem drivers

 - "mm/huge_memory: refactor zap_huge_pmd()" (Lorenzo Stoakes)
   Simplify and clean up zap_huge_pmd(). Additional cleanups around
   vm_normal_folio_pmd() and the softleaf functionality are performed

* tag 'mm-stable-2026-04-13-21-45' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (369 commits)
  mm: fix deferred split queue races during migration
  mm/khugepaged: fix issue with tracking lock
  mm/huge_memory: add and use has_deposited_pgtable()
  mm/huge_memory: add and use normal_or_softleaf_folio_pmd()
  mm: add softleaf_is_valid_pmd_entry(), pmd_to_softleaf_folio()
  mm/huge_memory: separate out the folio part of zap_huge_pmd()
  mm/huge_memory: use mm instead of tlb->mm
  mm/huge_memory: remove unnecessary sanity checks
  mm/huge_memory: deduplicate zap deposited table call
  mm/huge_memory: remove unnecessary VM_BUG_ON_PAGE()
  mm/huge_memory: add a common exit path to zap_huge_pmd()
  mm/huge_memory: handle buggy PMD entry in zap_huge_pmd()
  mm/huge_memory: have zap_huge_pmd return a boolean, add kdoc
  mm/huge: avoid big else branch in zap_huge_pmd()
  mm/huge_memory: simplify vma_is_special_huge()
  mm: on remap assert that input range within the proposed VMA
  mm: add mmap_action_map_kernel_pages[_full]()
  uio: replace deprecated mmap hook with mmap_prepare in uio_info
  drivers: hv: vmbus: replace deprecated mmap hook with mmap_prepare
  mm: allow handling of stacked mmap_prepare hooks in more drivers
  ...
10 days    Merge tag 'x86-cleanups-2026-04-13' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip    Linus Torvalds    -2/+0

Pull x86 cleanups from Ingo Molnar:

 - Consolidate AMD and Hygon cases in parse_topology() (Wei Wang)
 - asm constraint cleanups in __iowrite32_copy() (Uros Bizjak)
 - Drop AMD Extended Interrupt LVT macros (Naveen N Rao)
 - Don't use REALLY_SLOW_IO for delays (Juergen Gross)
 - paravirt cleanups (Juergen Gross)
 - FPU code cleanups (Borislav Petkov)
 - split-lock handling code cleanups (Borislav Petkov, Ronan Pigott)

* tag 'x86-cleanups-2026-04-13' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  x86/fpu: Correct the comment explaining what xfeatures_in_use() does
  x86/split_lock: Don't warn about unknown split_lock_detect parameter
  x86/fpu: Correct misspelled xfeaures_to_write local var
  x86/apic: Drop AMD Extended Interrupt LVT macros
  x86/cpu/topology: Consolidate AMD and Hygon cases in parse_topology()
  block/floppy: Don't use REALLY_SLOW_IO for delays
  x86/paravirt: Replace io_delay() hook with a bool
  x86/irqflags: Preemptively move include paravirt.h directive where it belongs
  x86/split_lock: Restructure the unwieldy switch-case in sld_state_show()
  x86/local: Remove trailing semicolon from _ASM_XADD in local_add_return()
  x86/asm: Use inout "+" asm constraint modifiers in __iowrite32_copy()
11 days    Merge tag 'for-7.1/block-20260411' of git://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux    Linus Torvalds    -509/+1151

Pull block updates from Jens Axboe:

 - Add shared memory zero-copy I/O support for ublk, bypassing per-I/O
   copies between kernel and userspace by matching registered buffer
   PFNs at I/O time. Includes selftests.

 - Refactor bio integrity to support filesystem-initiated integrity
   operations and arbitrary buffer alignment.

 - Clean up bio allocation, splitting bio_alloc_bioset() into clear
   fast and slow paths. Add bio_await() and bio_submit_or_kill()
   helpers, unify synchronous bi_end_io callbacks.

 - Fix zone write plug refcount handling and plug removal races. Add
   support for serializing zone writes at QD=1 for rotational zoned
   devices, yielding significant throughput improvements.

 - Add SED-OPAL ioctls for Single User Mode management and a
   STACK_RESET command.

 - Add io_uring passthrough (uring_cmd) support to the BSG layer.

 - Replace pp_buf in partition scanning with struct seq_buf.

 - zloop improvements and cleanups.

 - drbd genl cleanup, switching to pre_doit/post_doit.

 - NVMe pull request via Keith:
     - Fabrics authentication updates
     - Enhanced block queue limits support
     - Workqueue usage updates
     - A new write zeroes device quirk
     - Tagset cleanup fix for loop device

 - MD pull requests via Yu Kuai:
     - Fix raid5 soft lockup in retry_aligned_read()
     - Fix raid10 deadlock with check operation and nowait requests
     - Fix raid1 overlapping writes on writemostly disks
     - Fix sysfs deadlock on array_state=clear
     - Proactive RAID-5 parity building with llbitmap, with
       write_zeroes_unmap optimization for initial sync
     - Fix llbitmap barrier ordering, rdev skipping, and bitmap_ops
       version mismatch fallback
     - Fix bcache use-after-free and uninitialized closure
     - Validate raid5 journal metadata payload size
     - Various cleanups

 - Various other fixes, improvements, and cleanups

* tag 'for-7.1/block-20260411' of git://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux: (146 commits)
  ublk: fix tautological comparison warning in ublk_ctrl_reg_buf
  scsi: bsg: fix buffer overflow in scsi_bsg_uring_cmd()
  block: refactor blkdev_zone_mgmt_ioctl
  MAINTAINERS: update ublk driver maintainer email
  Documentation: ublk: address review comments for SHMEM_ZC docs
  ublk: allow buffer registration before device is started
  ublk: replace xarray with IDA for shmem buffer index allocation
  ublk: simplify PFN range loop in __ublk_ctrl_reg_buf
  ublk: verify all pages in multi-page bvec fall within registered range
  ublk: widen ublk_shmem_buf_reg.len to __u64 for 4GB buffer support
  xfs: use bio_await in xfs_zone_gc_reset_sync
  block: add a bio_submit_or_kill helper
  block: factor out a bio_await helper
  block: unify the synchronous bi_end_io callbacks
  xfs: fix number of GC bvecs
  selftests/ublk: add read-only buffer registration test
  selftests/ublk: add filesystem fio verify test for shmem_zc
  selftests/ublk: add hugetlbfs shmem_zc test for loop target
  selftests/ublk: add shared memory zero-copy test
  selftests/ublk: add UBLK_F_SHMEM_ZC support for loop target
  ...
2026-04-10    ublk: fix tautological comparison warning in ublk_ctrl_reg_buf    Ming Lei    -8/+6

On 32-bit architectures, 'unsigned long size' can never exceed
UBLK_SHMEM_BUF_SIZE_MAX (1ULL << 32), causing a tautological comparison
warning. Validate buf_reg.len (__u64) directly before using it, and
consolidate all input validation into a single check.

Also remove the unnecessary local variables 'addr' and 'size' since
buf_reg.addr and buf_reg.len can be used directly.

Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202604101952.3NOzqnu9-lkp@intel.com/
Fixes: 23b3b6f0b584 ("ublk: widen ublk_shmem_buf_reg.len to __u64 for 4GB buffer support")
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Link: https://patch.msgid.link/20260410124136.3983429-1-tom.leiming@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
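The underlying pitfall can be shown in stand-alone C: narrowing a __u64 into `unsigned long` before range-checking makes the check tautological on 32-bit, so the fix validates the wide field directly. The helper below is a sketch; only UBLK_SHMEM_BUF_SIZE_MAX comes from the log, the rest (name, extra alignment check) is an assumption:

```c
#include <assert.h>
#include <stdint.h>

#define UBLK_SHMEM_BUF_SIZE_MAX (1ULL << 32)	/* 4GB, per the log */

/*
 * Validate the __u64 length field itself, in one consolidated check.
 * On 32-bit, copying len into 'unsigned long' first would truncate it,
 * and 'size > UBLK_SHMEM_BUF_SIZE_MAX' could then never be true.
 */
static int validate_buf_len(uint64_t len, uint64_t addr, uint64_t page_size)
{
	if (!len || len > UBLK_SHMEM_BUF_SIZE_MAX ||
	    (addr | len) & (page_size - 1))
		return -1;
	return 0;
}
```

Checking the wide type before any narrowing both silences the compiler warning and closes the real hole: a 4GB+4K buffer would otherwise truncate to 4K on 32-bit and pass validation.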
2026-04-09    ublk: allow buffer registration before device is started    Ming Lei    -51/+28

Before START_DEV, there is no disk, no queue, no I/O dispatch, so the
maple tree can be safely modified under ub->mutex alone without
freezing the queue.

Add ublk_lock_buf_tree()/ublk_unlock_buf_tree() helpers that take
ub->mutex first, then freeze the queue if the device is started. This
ordering (mutex -> freeze) is safe because ublk_stop_dev_unlocked()
already holds ub->mutex when calling del_gendisk(), which freezes the
queue.

Suggested-by: Caleb Sander Mateos <csander@purestorage.com>
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Link: https://patch.msgid.link/20260409133020.3780098-6-tom.leiming@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2026-04-09    ublk: replace xarray with IDA for shmem buffer index allocation    Ming Lei    -46/+46

Remove struct ublk_buf which only contained nr_pages that was never
read after registration. Use IDA for pure index allocation instead of
xarray. Make __ublk_ctrl_unreg_buf() return int so the caller can
detect an invalid index without a separate lookup.

Simplify ublk_buf_cleanup() to walk the maple tree directly and unpin
all pages in one pass, instead of iterating the xarray by buffer
index.

Suggested-by: Caleb Sander Mateos <csander@purestorage.com>
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Link: https://patch.msgid.link/20260409133020.3780098-5-tom.leiming@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2026-04-09    ublk: simplify PFN range loop in __ublk_ctrl_reg_buf    Ming Lei    -3/+2

Use the for-loop increment instead of a manual `i++` past the last
page, and fix the mtree_insert_range end key accordingly.

Suggested-by: Caleb Sander Mateos <csander@purestorage.com>
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Link: https://patch.msgid.link/20260409133020.3780098-4-tom.leiming@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2026-04-09    ublk: verify all pages in multi-page bvec fall within registered range    Ming Lei    -6/+11

rq_for_each_bvec() yields multi-page bvecs where bv_page is only the
first page. ublk_try_buf_match() only validated the start PFN against
the maple tree, but a bvec can span multiple pages past the end of a
registered range.

Use mas_walk() instead of mtree_load() to obtain the range boundaries
stored in the maple tree, and check that the bvec's end PFN does not
exceed the range. Also remove base_pfn from struct ublk_buf_range
since mas.index already provides the range start PFN.

Reported-by: Caleb Sander Mateos <csander@purestorage.com>
Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Link: https://patch.msgid.link/20260409133020.3780098-3-tom.leiming@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
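The end-PFN check reads naturally as arithmetic. A stand-alone sketch follows, with `pfn_range` standing in for the maple-tree entry (the inclusive mas.index/mas.last pair) and all names hypothetical:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/* Stand-in for the registered range found by mas_walk(): inclusive
 * PFN bounds, as mas.index / mas.last. */
struct pfn_range {
	uint64_t first, last;
};

/*
 * A multi-page bvec starting at start_pfn with (bv_offset, bv_len)
 * must fall entirely inside the registered range; checking only the
 * start PFN was the bug described above.
 */
static bool bvec_in_range(uint64_t start_pfn, unsigned int bv_offset,
			  unsigned int bv_len, const struct pfn_range *r)
{
	/* PFN of the last byte the bvec touches */
	uint64_t end_pfn = start_pfn + ((bv_offset + bv_len - 1) >> PAGE_SHIFT);

	return start_pfn >= r->first && end_pfn <= r->last;
}
```

A two-page registered range with a three-page bvec starting at its first PFN illustrates the original hole: the start PFN matches, but the bvec's tail runs past the range.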
2026-04-09    ublk: widen ublk_shmem_buf_reg.len to __u64 for 4GB buffer support    Ming Lei    -1/+8

The __u32 len field cannot represent a 4GB buffer (0x100000000
overflows to 0). Change it to __u64 so buffers up to 4GB can be
registered. Add a reserved field for alignment and validate it is
zero.

The kernel enforces a default max of 4GB (UBLK_SHMEM_BUF_SIZE_MAX)
which may be increased in future.

Signed-off-by: Ming Lei <tom.leiming@gmail.com>
Link: https://patch.msgid.link/20260409133020.3780098-2-tom.leiming@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
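The overflow is worth seeing concretely. Below is a sketch of the widened registration struct and its validation; the field names are modeled on the log, not copied from the UAPI header:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the widened struct; names are assumptions from the log. */
struct ublk_shmem_buf_reg {
	uint64_t addr;
	uint64_t len;		/* was __u32: 4GB (1ULL << 32) wrapped to 0 */
	uint64_t reserved;	/* added for alignment; must be zero */
};

static int check_reg(const struct ublk_shmem_buf_reg *r)
{
	if (r->reserved)			/* validate reserved is zero */
		return -1;
	if (!r->len || r->len > (1ULL << 32))	/* default 4GB cap */
		return -1;
	return 0;
}
```

The first assertion below is the whole motivation: truncating 0x100000000 to 32 bits yields 0, so a 4GB buffer was literally unrepresentable before the widening.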
2026-04-07    ublk: eliminate permanent pages[] array from struct ublk_buf    Ming Lei    -32/+55

The pages[] array (kvmalloc'd, 8 bytes per page = 2MB for a 1GB
buffer) was stored permanently in struct ublk_buf but only needed
during pin_user_pages_fast() and maple tree construction. Since the
maple tree already stores PFN ranges via ublk_buf_range, struct page
pointers can be recovered via pfn_to_page() during unregistration.

Make pages[] a temporary allocation in ublk_ctrl_reg_buf(), freed
immediately after the maple tree is built. Rewrite
__ublk_ctrl_unreg_buf() to iterate the maple tree for matching
buf_index entries, recovering struct page pointers via pfn_to_page()
and unpinning in batches of 32. Simplify ublk_buf_erase_ranges() to
iterate the maple tree by buf_index instead of walking the
now-removed pages[] array.

Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://patch.msgid.link/20260331153207.3635125-5-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
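The batched-unpin loop generalizes beyond the driver. Here is a stand-alone model, with `unpin_pages_batch()` standing in for the kernel's unpin helper and raw PFNs standing in for the struct page pointers recovered via pfn_to_page(); all names are illustrative:

```c
#include <stdint.h>

#define UNPIN_BATCH 32

static int unpin_calls;	/* counts batched release calls, for testing */

/* Stand-in for unpin_user_pages(pages, n) on a recovered batch. */
static void unpin_pages_batch(const uint64_t *pfns, int n)
{
	(void)pfns;
	(void)n;
	unpin_calls++;
}

/*
 * Walk an inclusive PFN range and unpin in batches of 32, mirroring
 * the rewritten __ublk_ctrl_unreg_buf() described above.
 */
static void unpin_pfn_range(uint64_t first, uint64_t last)
{
	uint64_t batch[UNPIN_BATCH];
	int n = 0;

	for (uint64_t pfn = first; pfn <= last; pfn++) {
		batch[n++] = pfn;
		if (n == UNPIN_BATCH) {
			unpin_pages_batch(batch, n);
			n = 0;
		}
	}
	if (n)
		unpin_pages_batch(batch, n);	/* flush the partial tail */
}
```

The point of the batching is to bound stack usage at 32 pointers regardless of buffer size, which is what lets the permanent 2MB pages[] array go away.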
2026-04-07    ublk: enable UBLK_F_SHMEM_ZC feature flag    Ming Lei    -3/+4

Add UBLK_F_SHMEM_ZC (1ULL << 19) to the UAPI header and UBLK_F_ALL.
Switch ublk_support_shmem_zc() and ublk_dev_support_shmem_zc() from
returning false to checking the actual flag, enabling the shared
memory zero-copy feature for devices that request it.

Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://patch.msgid.link/20260331153207.3635125-4-ming.lei@redhat.com
[axboe: ublk_buf_reg -> ublk_shmem_buf_reg errors]
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2026-04-07    ublk: add PFN-based buffer matching in I/O path    Ming Lei    -1/+76

Add ublk_try_buf_match() which walks a request's bio_vecs, looks up
each page's PFN in the per-device maple tree, and verifies all pages
belong to the same registered buffer at contiguous offsets. Add
ublk_iod_is_shmem_zc() inline helper for checking whether a request
uses the shmem zero-copy path.

Integrate into the I/O path:

- ublk_setup_iod(): if pages match a registered buffer, set
  UBLK_IO_F_SHMEM_ZC and encode buffer index + offset in addr
- ublk_start_io(): skip ublk_map_io() for zero-copy requests
- __ublk_complete_rq(): skip ublk_unmap_io() for zero-copy requests

The feature remains disabled (ublk_support_shmem_zc() returns false)
until the UBLK_F_SHMEM_ZC flag is enabled in the next patch.

Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://patch.msgid.link/20260331153207.3635125-3-ming.lei@redhat.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
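The "buffer index + offset in addr" encoding can be sketched as a pair of helpers. The 16-bit index width comes from the log; the exact bit position (top 16 bits of the 64-bit addr) and the helper names are assumptions:

```c
#include <assert.h>
#include <stdint.h>

/* Assumed layout: index in the top 16 bits, offset in the low 48. */
#define UBLK_SHMEM_IDX_SHIFT 48

static uint64_t ublk_shmem_encode_addr(uint16_t buf_index, uint64_t offset)
{
	return ((uint64_t)buf_index << UBLK_SHMEM_IDX_SHIFT) | offset;
}

static uint16_t ublk_shmem_decode_index(uint64_t addr)
{
	return (uint16_t)(addr >> UBLK_SHMEM_IDX_SHIFT);
}

static uint64_t ublk_shmem_decode_offset(uint64_t addr)
{
	return addr & ((1ULL << UBLK_SHMEM_IDX_SHIFT) - 1);
}
```

Packing both values into the existing addr field lets the server locate the matched registered buffer without any new UAPI fields; a 16-bit index supports the 65536 buffers the log mentions.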
2026-04-07    ublk: add UBLK_U_CMD_REG_BUF/UNREG_BUF control commands    Ming Lei    -0/+295

Add control commands for registering and unregistering shared memory
buffers for zero-copy I/O:

- UBLK_U_CMD_REG_BUF (0x18): pins pages from userspace, inserts PFN
  ranges into a per-device maple tree for O(log n) lookup during I/O.
  Buffer pointers are tracked in a per-device xarray. Returns the
  assigned buffer index.
- UBLK_U_CMD_UNREG_BUF (0x19): removes PFN entries and unpins pages.

Queue freeze/unfreeze is handled internally so userspace need not
quiesce the device during registration.

Also adds:

- UBLK_IO_F_SHMEM_ZC flag and addr encoding helpers in the UAPI
  header (16-bit buffer index supporting up to 65536 buffers)
- Data structures (ublk_buf, ublk_buf_range) and the xarray/maple tree
- __ublk_ctrl_reg_buf() helper for PFN insertion with error unwinding
- __ublk_ctrl_unreg_buf() helper for cleanup reuse
- ublk_support_shmem_zc() / ublk_dev_support_shmem_zc() stubs
  (returning false; the feature is not enabled yet)

Signed-off-by: Ming Lei <ming.lei@redhat.com>
Link: https://patch.msgid.link/20260331153207.3635125-2-ming.lei@redhat.com
[axboe: fixup ublk_buf_reg -> ublk_shmem_buf_reg errors, comments]
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2026-04-07    drbd: use get_random_u64() where appropriate    David Carlier    -3/+3

Use the typed random integer helpers instead of get_random_bytes()
when filling a single integer variable. The helpers return the value
directly, require no pointer or size argument, and better express
intent.

Signed-off-by: David Carlier <devnexen@gmail.com>
Reviewed-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
Link: https://patch.msgid.link/20260405154704.4610-1-devnexen@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2026-04-06    drbd: remove DRBD_GENLA_F_MANDATORY flag handling    Christoph Böhmwalder    -79/+6

DRBD used a custom mechanism to mark netlink attributes as
"mandatory": bit 14 of nla_type was repurposed as
DRBD_GENLA_F_MANDATORY. Attributes sent from userspace that had this
bit present and that were unknown to the kernel would lead to an
error.

Since commit ef6243acb478 ("genetlink: optionally validate
strictly/dumps"), the generic netlink layer rejects unknown top-level
attributes when strict validation is enabled. DRBD never opted out of
strict validation, so unknown top-level attributes are already
rejected by the netlink core. The mandatory flag mechanism was
required for nested attributes, because these are parsed liberally,
silently dropping attributes unknown to the kernel.

This prepares for the move to a new YNL-based family, which will use
the now-default strict parsing. The current family is not expected to
gain any new attributes, which makes this change safe.

Old userspace that still sets bit 14 is unaffected: nla_type() strips
it before __nla_validate_parse() performs attribute validation, so
the bit never reaches DRBD.

Remove all references to the mandatory flag in DRBD.

Cc: Johannes Berg <johannes.berg@intel.com>
Cc: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
Link: https://patch.msgid.link/20260403132953.2248751-1-christoph.boehmwalder@linbit.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
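The "old userspace is unaffected" argument rests on nla_type() masking. A stand-alone demonstration: NLA_F_NESTED, NLA_F_NET_BYTEORDER and NLA_TYPE_MASK follow netlink's UAPI convention, and DRBD_GENLA_F_MANDATORY is bit 14 as stated above (it reused the NLA_F_NET_BYTEORDER bit position):

```c
#include <assert.h>
#include <stdint.h>

/* Netlink UAPI convention: the top two bits of nla_type are flags. */
#define NLA_F_NESTED		(1 << 15)
#define NLA_F_NET_BYTEORDER	(1 << 14)
#define NLA_TYPE_MASK		(~(NLA_F_NESTED | NLA_F_NET_BYTEORDER))

/* DRBD repurposed bit 14, per the commit message above. */
#define DRBD_GENLA_F_MANDATORY	(1 << 14)

/* Model of nla_type(): strips the flag bits, keeping only the type. */
static int nla_type(uint16_t nla_type_field)
{
	return nla_type_field & NLA_TYPE_MASK;
}
```

Because the mask strips bit 14 before attribute validation ever runs, an old tool that still sets DRBD_GENLA_F_MANDATORY produces the same attribute type as one that doesn't, so removing the kernel-side handling is invisible to it.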
2026-04-06    ublk: reset per-IO canceled flag on each fetch    Uday Shankar    -8/+13

If a ublk server starts recovering devices but dies before issuing
fetch commands for all IOs, cancellation of the fetch commands that
were successfully issued may never complete. This is because the
per-IO canceled flag can remain set even after the fetch for that IO
has been submitted - the per-IO canceled flags for all IOs in a queue
are reset together only once all IOs for that queue have been
fetched. So if a nonempty proper subset of the IOs for a queue are
fetched when the ublk server dies, the IOs in that subset will never
successfully be canceled, as their canceled flags remain set, and
this prevents ublk_cancel_cmd from actually calling io_uring_cmd_done
on the commands, despite the fact that they are outstanding.

Fix this by resetting the per-IO cancel flags immediately when each
IO is fetched instead of waiting for all IOs for the queue (which may
never happen).

Signed-off-by: Uday Shankar <ushankar@purestorage.com>
Fixes: 728cbac5fe21 ("ublk: move device reset into ublk_ch_release()")
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: zhang, the-essence-of-life <zhangweize9@gmail.com>
Link: https://patch.msgid.link/20260405-cancel-v2-1-02d711e643c2@purestorage.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
2026-04-05    zram: change scan_slots to return void    Sergey Senozhatsky    -9/+5

scan_slots_for_writeback() and scan_slots_for_recompress() work in a
"best effort" fashion: if they cannot allocate memory for a new
pp-slot candidate they just return, and post-processing selects slots
that were successfully scanned thus far.

scan_slots functions never return errors and their callers never
check the return status, so convert them to return void.

Link: https://lkml.kernel.org/r/20260317032349.753645-1-senozhatsky@chromium.org
Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
Reviewed-by: SeongJae Park <sj@kernel.org>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Minchan Kim <minchan@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2026-04-05    zram: propagate read_from_bdev_async() errors    Sergey Senozhatsky    -7/+8

When read_from_bdev_async() fails to chain the bio, for instance
fails to allocate a request or bio, we need to propagate the error
condition so that the upper layer is aware of it. zram already does
that by setting BLK_STS_IOERR in ->bi_status, but only for sync
reads. Change the async read path to return its error status so that
async errors are also handled.

Link: https://lkml.kernel.org/r/20260316015354.114465-1-senozhatsky@chromium.org
Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
Suggested-by: Brian Geffon <bgeffon@google.com>
Acked-by: Brian Geffon <bgeffon@google.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Richard Chang <richardycc@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2026-04-05zram: optimize LZ4 dictionary compression performancegao xu-3/+26
Calling `LZ4_loadDict()` repeatedly in Zram causes significant overhead due to its internal dictionary pre-processing. This commit introduces a template stream mechanism to pre-process the dictionary only once when the dictionary is initially set or modified. It then efficiently copies this state for subsequent compressions. Verification Test Items: Test Platform: android16-6.12 1. Collect Anonymous Page Dataset 1) Apply the following patch: static bool zram_meta_alloc(struct zram *zram, u64 disksize) if (!huge_class_size) - huge_class_size = zs_huge_class_size(zram->mem_pool); + huge_class_size = 0; 2)Install multiple apps and monkey testing until SwapFree is close to 0. 3)Execute the following command to export data: dd if=/dev/block/zram0 of=/data/samples/zram_dump.img bs=4K 2. Train Dictionary Since LZ4 does not have a dedicated dictionary training tool, the zstd tool can be used for training[1]. The command is as follows: zstd --train /data/samples/* --split=4096 --maxdict=64KB -o /vendor/etc/dict_data 3. Test Code adb shell "dd if=/data/samples/zram_dump.img of=/dev/test_pattern bs=4096 count=131072 conv=fsync" adb shell "swapoff /dev/block/zram0" adb shell "echo 1 > /sys/block/zram0/reset" adb shell "echo lz4 > /sys/block/zram0/comp_algorithm" adb shell "echo dict=/vendor/etc/dict_data > /sys/block/zram0/algorithm_params" adb shell "echo 6G > /sys/block/zram0/disksize" echo "Start Compression" adb shell "taskset 80 dd if=/dev/test_pattern of=/dev/block/zram0 bs=4096 count=131072 conv=fsync" echo. echo "Start Decompression" adb shell "taskset 80 dd if=/dev/block/zram0 of=/dev/output_result bs=4096 count=131072 conv=fsync" echo "mm_stat:" adb shell "cat /sys/block/zram0/mm_stat" echo. Note: To ensure stable test results, it is best to lock the CPU frequency before executing the test. LZ4 supports dictionaries up to 64KB. 
Below are the test results for compression speed at various dictionary sizes:

dict_size   base     patch
 4 KB      156M/s   219M/s
 8 KB      136M/s   217M/s
16 KB       98M/s   214M/s
32 KB       66M/s   225M/s
64 KB       38M/s   224M/s

When an LZ4 compression dictionary is enabled, compression speed is negatively impacted by the dictionary's size; larger dictionaries result in slower compression. This patch eliminates the influence of dictionary size on compression speed, ensuring consistent performance regardless of dictionary scale.

Link: https://lkml.kernel.org/r/698181478c9c4b10aa21b4a847bdc706@honor.com
Link: https://github.com/lz4/lz4?tab=readme-ov-file [1]
Signed-off-by: gao xu <gaoxu2@honor.com>
Acked-by: Sergey Senozhatsky <senozhatsky@chromium.org>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Suren Baghdasaryan <surenb@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
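The template-stream idea can be sketched in plain userspace C (illustrative names only: `expensive_load_dict()` stands in for `LZ4_loadDict()`'s match-finder build; this is not the zram or LZ4 code). The costly pre-processing runs once into a template state, and each compression then starts from a cheap memcpy of that template instead of re-running the O(dict_size) setup.

```c
#include <string.h>

struct lz4_state {
	unsigned int table[1024];  /* stand-in for LZ4's match-finder state */
	int dict_loaded;
};

/* Stand-in for LZ4_loadDict(): cost grows with the dictionary size. */
static void expensive_load_dict(struct lz4_state *s,
				const unsigned char *dict, int len)
{
	for (int i = 0; i < len; i++)
		s->table[i % 1024] += dict[i];
	s->dict_loaded = 1;
}

static struct lz4_state tmpl;  /* prepared once when the dict is set */

static void set_dict(const unsigned char *dict, int len)
{
	memset(&tmpl, 0, sizeof(tmpl));
	expensive_load_dict(&tmpl, dict, len);
}

/* Per-compression setup is now O(sizeof(state)), independent of the
 * dictionary size — which matches the flat 214-225M/s results above. */
static void begin_compress(struct lz4_state *work)
{
	memcpy(work, &tmpl, sizeof(*work));
}
```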
2026-04-05zram: unify and harden algo/priority params handlingSergey Senozhatsky-40/+66
We have two functions that accept algo= and priority= params - algorithm_params_store() and recompress_store(). This patch unifies and hardens the handling of those parameters. There are 4 possible cases:

- only priority= provided [recommended]
  We need to verify that the provided priority value is within the permitted range for each particular function.

- both algo= and priority= provided
  We cannot prioritize one over the other. All we should do is verify that zram is configured the way user-space expects it to be, namely that zram indeed has the compressor algo= set up at the given priority=.

- only algo= provided [not recommended]
  We should look up the priority in the compressors list.

- none provided [not recommended]
  Just use the function's defaults.

Link: https://lkml.kernel.org/r/20260311084312.1766036-7-senozhatsky@chromium.org
Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
Suggested-by: Minchan Kim <minchan@kernel.org>
Cc: Brian Geffon <bgeffon@google.com>
Cc: gao xu <gaoxu2@honor.com>
Cc: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
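The four cases can be sketched as a small userspace classifier (a hypothetical helper, not the kernel code): `algo == NULL` models "algo= not provided" and `prio < 0` models "priority= not provided".

```c
#include <stddef.h>

enum param_case {
	CASE_NONE,       /* neither provided: use the function's defaults */
	CASE_PRIO_ONLY,  /* recommended: validate prio against the permitted range */
	CASE_ALGO_ONLY,  /* look up the priority in the compressors list */
	CASE_BOTH,       /* verify algo really is configured at that priority */
};

static enum param_case classify_params(const char *algo, int prio)
{
	if (algo && prio >= 0)
		return CASE_BOTH;
	if (prio >= 0)
		return CASE_PRIO_ONLY;
	if (algo)
		return CASE_ALGO_ONLY;
	return CASE_NONE;
}
```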
2026-04-05zram: remove chained recompressionSergey Senozhatsky-60/+24
Chained recompression has unpredictable behavior and is not useful in practice. First, systems usually configure just one alternative recompression algorithm, which has slower compression/decompression but better compression ratio. A single alternative algorithm doesn't need chaining. Second, even with multiple recompression algorithms, chained recompression is suboptimal. If a lower priority algorithm succeeds, the page is never attempted with a higher priority algorithm, leading to worse memory savings. If a lower priority algorithm fails, the page is still attempted with a higher priority algorithm, wasting resources on the failed lower priority attempt. In either case, the system would be better off targeting a specific priority directly. Chained recompression also significantly complicates the code. Remove it. Link: https://lkml.kernel.org/r/20260311084312.1766036-6-senozhatsky@chromium.org Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org> Cc: Brian Geffon <bgeffon@google.com> Cc: gao xu <gaoxu2@honor.com> Cc: Jens Axboe <axboe@kernel.dk> Cc: Minchan Kim <minchan@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2026-04-05zram: drop ->num_active_compsSergey Senozhatsky-14/+16
It's not entirely correct to use ->num_active_comps as the max-prio limit, as ->num_active_comps only gives the number of configured algorithms, not the maximum configured priority. For instance, in the following theoretical example: [lz4] [nil] [nil] [deflate] ->num_active_comps is 2, while the actual max-prio is 3. Drop ->num_active_comps and use ZRAM_MAX_COMPS instead. Link: https://lkml.kernel.org/r/20260311084312.1766036-4-senozhatsky@chromium.org Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org> Suggested-by: Minchan Kim <minchan@kernel.org> Cc: Brian Geffon <bgeffon@google.com> Cc: gao xu <gaoxu2@honor.com> Cc: Jens Axboe <axboe@kernel.dk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
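The theoretical example can be checked with a few lines of userspace C (illustrative only, not the zram code): counting configured slots gives 2, while the highest configured priority index is 3, which is why a count-based limit would reject a valid priority.

```c
#include <stddef.h>

#define ZRAM_MAX_COMPS 4

/* Number of configured algorithms — what ->num_active_comps tracked. */
static int count_active(const char *const comps[ZRAM_MAX_COMPS])
{
	int n = 0;

	for (int i = 0; i < ZRAM_MAX_COMPS; i++)
		if (comps[i])
			n++;
	return n;
}

/* Highest configured priority — what a priority limit actually needs. */
static int max_configured_prio(const char *const comps[ZRAM_MAX_COMPS])
{
	int max = -1;

	for (int i = 0; i < ZRAM_MAX_COMPS; i++)
		if (comps[i])
			max = i;
	return max;
}
```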
2026-04-05zram: do not autocorrect bad recompression parametersSergey Senozhatsky-9/+8
Do not silently autocorrect bad recompression priority parameter value and just error out. Link: https://lkml.kernel.org/r/20260311084312.1766036-3-senozhatsky@chromium.org Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org> Suggested-by: Minchan Kim <minchan@kernel.org> Cc: Brian Geffon <bgeffon@google.com> Cc: gao xu <gaoxu2@honor.com> Cc: Jens Axboe <axboe@kernel.dk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2026-04-05zram: do not permit params change after initSergey Senozhatsky-0/+4
Patch series "zram: recompression cleanups and tweaks", v2. This series is a somewhat random mix of fixups, recompression cleanups and improvements partly based on internal conversations. A few patches in the series remove unexpected or confusing behaviour, e.g. auto correction of bad priority= param for recompression, which should have always been just an error. Then it also removes "chain recompression" which has a tricky, unexpected and confusing behaviour at times. We also unify and harden the handling of algo/priority params. There is also an addition of missing device lock in algorithm_params_store() which previously permitted modification of algo params while the device is active. This patch (of 6): First, algorithm_params_store(), like any sysfs handler, should grab device lock. Second, like any write() sysfs handler, it should grab device lock in exclusive mode. Third, it should not permit change of algos' parameters after device init, as this doesn't make sense - we cannot compress with one C/D dict and then just change C/D dict to a different one, for example. Another thing to notice is that algorithm_params_store() accesses device's ->comp_algs for algo priority lookup, which should be protected by device lock in exclusive mode in general. Link: https://lkml.kernel.org/r/20260311084312.1766036-1-senozhatsky@chromium.org Link: https://lkml.kernel.org/r/20260311084312.1766036-2-senozhatsky@chromium.org Fixes: 4eac932103a5 ("zram: introduce algorithm_params device attribute") Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org> Acked-by: Brian Geffon <bgeffon@google.com> Cc: gao xu <gaoxu2@honor.com> Cc: Jens Axboe <axboe@kernel.dk> Cc: Minchan Kim <minchan@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
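The fixed handler shape can be modelled in userspace C (hypothetical and heavily simplified — a pthread mutex stands in for the device lock, and this is not the actual zram code): take the lock exclusively, and refuse parameter changes once the device has been initialized.

```c
#include <errno.h>
#include <pthread.h>
#include <string.h>

struct zram_dev {
	pthread_mutex_t lock;   /* stands in for the exclusive device lock */
	int init_done;
	char algo_params[64];
};

static int algorithm_params_store(struct zram_dev *z, const char *buf)
{
	int ret = 0;

	pthread_mutex_lock(&z->lock);
	if (z->init_done) {
		ret = -EBUSY;   /* no algo param changes after device init */
	} else {
		strncpy(z->algo_params, buf, sizeof(z->algo_params) - 1);
		z->algo_params[sizeof(z->algo_params) - 1] = '\0';
	}
	pthread_mutex_unlock(&z->lock);
	return ret;
}
```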
2026-04-05zram: use statically allocated compression algorithm namesgao xu-26/+13
Currently, zram dynamically allocates memory for compressor algorithm names when they are set by the user. This requires careful memory management, including explicit `kfree` calls and special handling to avoid freeing statically defined default compressor names. This patch refactors the way zram handles compression algorithm names. Instead of storing dynamically allocated copies, `zram->comp_algs` will now store pointers directly to the static name strings defined within the `zcomp_ops` backend structures, thereby removing the need for conditional `kfree` calls. Link: https://lkml.kernel.org/r/5bb2e9318d124dbcb2b743dcdce6a950@honor.com Signed-off-by: gao xu <gaoxu2@honor.com> Reviewed-by: Sergey Senozhatsky <senozhatsky@chromium.org> Cc: Jens Axboe <axboe@kernel.dk> Cc: Minchan Kim <minchan@kernel.org> Cc: Suren Baghdasaryan <surenb@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
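The before/after can be sketched in userspace C (names illustrative, not the zram code): instead of kstrdup'ing the user-supplied string, the setter stores a pointer into the backend's own static name, so teardown needs no conditional kfree.

```c
#include <stddef.h>
#include <string.h>

struct zcomp_ops { const char *name; };

static const struct zcomp_ops backends[] = {
	{ .name = "lzo"  },
	{ .name = "lz4"  },
	{ .name = "zstd" },
};

/* Return the backend's own static name string, or NULL if unknown.
 * The caller stores this pointer directly; there is nothing to free. */
static const char *find_backend_name(const char *user_input)
{
	for (size_t i = 0; i < sizeof(backends) / sizeof(backends[0]); i++)
		if (strcmp(backends[i].name, user_input) == 0)
			return backends[i].name;
	return NULL;
}
```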
2026-03-31zloop: add max_open_zones optionDamien Le Moal-16/+164
Introduce the new max_open_zones option to allow specifying a limit on the maximum number of open zones of a zloop device. This change allows creating a zloop device that can more closely mimic the characteristics of a physical SMR drive. When set to a non-zero value, only up to max_open_zones zones can be in the implicit open (BLK_ZONE_COND_IMP_OPEN) and explicit open (BLK_ZONE_COND_EXP_OPEN) conditions at any time. The transition to the implicit open condition of a zone on a write operation can result in an implicit close of an already implicitly open zone. This is handled in the function zloop_do_open_zone(). This function also handles transitions to the explicit open condition. Implicit close transitions are handled using an LRU ordered list of open zones which is managed using the helper functions zloop_lru_rotate_open_zone() and zloop_lru_remove_open_zone(). Signed-off-by: Damien Le Moal <dlemoal@kernel.org> Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Link: https://patch.msgid.link/20260326203245.946830-1-dlemoal@kernel.org Signed-off-by: Jens Axboe <axboe@kernel.dk>
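The LRU-based implicit close can be modelled with a toy userspace sketch (hypothetical and array-based — the driver uses a kernel list and also handles rotation of already-open zones, which this model omits): when opening one more zone would exceed max_open_zones, the least recently used implicitly open zone is implicitly closed.

```c
/* Toy model of LRU implicit-close accounting (not the zloop code). */
#define MAX_OPEN 2

enum cond { EMPTY, IMP_OPEN, CLOSED };

static enum cond zcond[8];
static int lru[MAX_OPEN];  /* lru[0] = least recently used open zone */
static int nr_open;

static void open_zone(int z)
{
	if (nr_open == MAX_OPEN) {
		zcond[lru[0]] = CLOSED;  /* implicit close of the LRU zone */
		for (int i = 1; i < MAX_OPEN; i++)
			lru[i - 1] = lru[i];
		nr_open--;
	}
	lru[nr_open++] = z;              /* most recently used at the tail */
	zcond[z] = IMP_OPEN;
}
```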
2026-03-26drbd: Balance RCU calls in drbd_adm_dump_devices()Bart Van Assche-2/+6
Make drbd_adm_dump_devices() call rcu_read_lock() before rcu_read_unlock() is called. This has been detected by the Clang thread-safety analyzer. Tested-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Cc: Andreas Gruenbacher <agruenba@redhat.com> Fixes: a55bbd375d18 ("drbd: Backport the "status" command") Signed-off-by: Bart Van Assche <bvanassche@acm.org> Link: https://patch.msgid.link/20260326214054.284593-1-bvanassche@acm.org Signed-off-by: Jens Axboe <axboe@kernel.dk>
2026-03-25drbd: use genl pre_doit/post_doitChristoph Böhmwalder-253/+320
Every doit handler followed the same pattern: stack-allocate an adm_ctx, call drbd_adm_prepare() at the top, call drbd_adm_finish() at the bottom. This duplicated boilerplate across 25 handlers and made error paths inconsistent, since some handlers could miss sending the reply skb on early-exit paths. The generic netlink framework already provides pre_doit/post_doit hooks for exactly this purpose. An old comment even noted "this would be a good candidate for a pre_doit hook". Use them:

- pre_doit heap-allocates adm_ctx, looks up per-command flags from a new drbd_genl_cmd_flags[] table, runs drbd_adm_prepare(), and stores the context in info->user_ptr[0].

- post_doit sends the reply, drops kref references for device/connection/resource, and frees the adm_ctx.

- Handlers just receive adm_ctx from info->user_ptr[0], set reply_dh->ret_code, and return. All teardown is in post_doit.

- drbd_adm_finish() is removed, superseded by post_doit.

Signed-off-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
Link: https://patch.msgid.link/20260324152907.2840984-1-christoph.boehmwalder@linbit.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
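The control flow can be modelled in userspace C (hypothetical and heavily simplified — no netlink involved, and `user_ptr0` stands in for info->user_ptr[0]): the framework brackets every handler with pre/post hooks, so no handler can forget the teardown or the reply.

```c
#include <stdlib.h>

struct adm_ctx { int ret_code; };
struct genl_info { void *user_ptr0; };

static int ctx_alive;  /* tracks outstanding adm_ctx allocations */

static int pre_doit(struct genl_info *info)
{
	struct adm_ctx *ctx = calloc(1, sizeof(*ctx));

	if (!ctx)
		return -1;
	ctx_alive++;
	info->user_ptr0 = ctx;  /* the handler finds the context here */
	return 0;
}

static void post_doit(struct genl_info *info)
{
	/* sending the reply and dropping krefs would go here */
	free(info->user_ptr0);
	ctx_alive--;
}

/* A sample handler: only sets the result, no setup or teardown. */
static int sample_doit(struct genl_info *info)
{
	struct adm_ctx *ctx = info->user_ptr0;

	ctx->ret_code = 0;
	return 0;
}

/* What the framework does for every command. */
static int run_doit(int (*doit)(struct genl_info *))
{
	struct genl_info info = { 0 };
	int err = pre_doit(&info);

	if (!err) {
		err = doit(&info);
		post_doit(&info);
	}
	return err;
}
```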
2026-03-25zloop: forget write cache on force removalChristoph Hellwig-0/+97
Add a new option that causes zloop to truncate the zone files to the write pointer value recorded at the last cache flush to simulate unclean shutdowns. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Bart Van Assche <bvanassche@acm.org> Reviewed-by: Damien Le Moal <dlemoal@kernel.org> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Link: https://patch.msgid.link/20260323071156.2940772-3-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
2026-03-25zloop: refactor zloop_rwChristoph Hellwig-116/+124
Split out two helper functions to make the function more readable and to avoid conditional locking. Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Bart Van Assche <bvanassche@acm.org> Reviewed-by: Damien Le Moal <dlemoal@kernel.org> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Link: https://patch.msgid.link/20260323071156.2940772-2-hch@lst.de Signed-off-by: Jens Axboe <axboe@kernel.dk>
2026-03-22ublk: move cold paths out of __ublk_batch_dispatch() for icache efficiencyMing Lei-32/+38
Mark ublk_filter_unused_tags() as noinline since it is only called from the unlikely(needs_filter) branch. Extract the error-handling block from __ublk_batch_dispatch() into a new noinline ublk_batch_dispatch_fail() function to keep the hot path compact and icache-friendly. This also makes __ublk_batch_dispatch() more readable by separating the error recovery logic from the normal dispatch flow. Before: __ublk_batch_dispatch is ~1419 bytes After: __ublk_batch_dispatch is ~1090 bytes (-329 bytes, -23%) Signed-off-by: Ming Lei <ming.lei@redhat.com> Link: https://patch.msgid.link/20260318014112.3125432-1-ming.lei@redhat.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
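The pattern is a general one and can be illustrated with a standalone sketch (not the ublk code): move the bulky error recovery into a separate noinline function and hint the branch, so the compiler keeps the hot path compact.

```c
/* Illustrative hot/cold split (GCC/Clang attributes, not ublk code). */
#define unlikely(x) __builtin_expect(!!(x), 0)

/* Cold path kept out of line so it does not bloat the caller's body. */
static __attribute__((noinline)) int dispatch_fail(int err)
{
	/* bulky rollback / error-recovery logic lives here */
	return err;
}

static int dispatch(int nr_reqs)
{
	if (unlikely(nr_reqs <= 0))
		return dispatch_fail(-22);  /* -EINVAL */
	/* compact hot path: dispatch nr_reqs requests */
	return nr_reqs;
}
```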
2026-03-22block/floppy: Don't use REALLY_SLOW_IO for delaysJuergen Gross-2/+0
Instead of defining REALLY_SLOW_IO before including io.h, add the required additional calls of native_io_delay() to the related functions in arch/x86/include/asm/floppy.h. Drop REALLY_SLOW_IO now too as it has no users. [ bp: Merge the REALLY_SLOW_IO removal into this patch. ] Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://patch.msgid.link/20260119182632.596369-4-jgross@suse.com
2026-03-21zram: do not slot_free() written-back slotsSergey Senozhatsky-25/+14
slot_free() basically completely resets the slot by clearing all of its flags and attributes. While zram_writeback_complete() restores some of the flags (those that are necessary for async read decompression), we still lose a lot of the slot's metadata, for example the slot's ac-time or ZRAM_INCOMPRESSIBLE. More importantly, restoring flags/attrs requires extra attention, as some of the flags directly affect zram device stats, and the original code did not pay that attention. Namely, ZRAM_HUGE slot handling in zram_writeback_complete(). The call to slot_free() would decrement ->huge_pages; however, when zram_writeback_complete() restored the slot's ZRAM_HUGE flag, this was not reflected in an incremented ->huge_pages. So when the slot finally got freed, slot_free() would decrement ->huge_pages again, leading to underflow. Fix this by open-coding the required memory free and stats updates in zram_writeback_complete(), rather than calling the destructive slot_free(). Since we now preserve the ZRAM_HUGE flag on written-back slots (for the deferred decompression path), we also update slot_free() to skip decrementing ->huge_pages if ZRAM_WB is set. Link: https://lkml.kernel.org/r/20260320023143.2372879-1-senozhatsky@chromium.org Link: https://lkml.kernel.org/r/20260319034912.1894770-1-senozhatsky@chromium.org Fixes: d38fab605c667 ("zram: introduce compressed data writeback") Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org> Acked-by: Minchan Kim <minchan@kernel.org> Cc: Brian Geffon <bgeffon@google.com> Cc: Richard Chang <richardycc@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
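The fixed accounting can be sketched as a toy userspace model (hypothetical, not the zram code): writeback completion settles the ->huge_pages stat once while keeping the flags, and slot_free() skips the decrement for written-back slots, so the counter can no longer underflow on the final free.

```c
/* Toy model of the ->huge_pages accounting fix (not the zram code). */
#define F_HUGE  (1u << 0)
#define F_WB    (1u << 1)

static long huge_pages;

static void slot_set_huge(unsigned int *flags)
{
	*flags |= F_HUGE;
	huge_pages++;
}

/* Fixed slot_free(): skip the decrement for written-back slots, whose
 * accounting was already settled at writeback time. */
static void slot_free(unsigned int *flags)
{
	if ((*flags & F_HUGE) && !(*flags & F_WB))
		huge_pages--;
	*flags = 0;
}

/* Fixed writeback completion: open-coded stat update, flags preserved
 * for the deferred decompression path — no destructive slot_free(). */
static void writeback_complete(unsigned int *flags)
{
	if (*flags & F_HUGE)
		huge_pages--;
	*flags |= F_WB;
}
```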
2026-03-14ublk: report BLK_SPLIT_INTERVAL_CAPABLECaleb Sander Mateos-1/+1
The ublk driver doesn't access request integrity buffers directly, it only copies them to/from the ublk server in ublk_copy_user_integrity(). ublk_copy_user_integrity() uses bio_for_each_integrity_vec() to walk all the integrity segments. ublk devices are therefore capable of handling requests with integrity intervals split across segments. Set BLK_SPLIT_INTERVAL_CAPABLE in the struct blk_integrity flags for ublk devices to opt out of the integrity-interval dma_alignment limit. Reviewed-by: Ming Lei <ming.lei@redhat.com> Reviewed-by: Keith Busch <kbusch@kernel.org> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Caleb Sander Mateos <csander@purestorage.com> Signed-off-by: Keith Busch <kbusch@kernel.org> Link: https://patch.msgid.link/20260313144701.1221652-3-kbusch@meta.com Signed-off-by: Jens Axboe <axboe@kernel.dk>
2026-03-13Merge tag 'block-7.0-20260312' of ↵Linus Torvalds-4/+12
git://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux

Pull block fixes from Jens Axboe:

 - NVMe pull request via Keith:
     - Fix nvme-pci IRQ race and slab-out-of-bounds access
     - Fix recursive workqueue locking for target async events
     - Various cleanups

 - Fix a potential NULL pointer dereference in ublk on size setting

 - ublk automatic partition scanning fix

 - Two s390 dasd fixes

* tag 'block-7.0-20260312' of git://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux:
  nvme: Annotate struct nvme_dhchap_key with __counted_by
  nvme-core: do not pass empty queue_limits to blk_mq_alloc_queue()
  nvme-pci: Fix race bug in nvme_poll_irqdisable()
  nvmet: move async event work off nvmet-wq
  nvme-pci: Fix slab-out-of-bounds in nvme_dbbuf_set
  s390/dasd: Copy detected format information to secondary device
  s390/dasd: Move quiesce state with pprc swap
  ublk: don't clear GD_SUPPRESS_PART_SCAN for unprivileged daemons
  ublk: fix NULL pointer dereference in ublk_ctrl_set_size()
2026-03-10Merge tag 'mm-hotfixes-stable-2026-03-09-16-36' of ↵Linus Torvalds-13/+13
git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Pull misc fixes from Andrew Morton:
 "15 hotfixes. 6 are cc:stable. 14 are for MM. Singletons, with one
  doubleton - please see the changelogs for details"

* tag 'mm-hotfixes-stable-2026-03-09-16-36' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm:
  MAINTAINERS, mailmap: update email address for Lorenzo Stoakes
  mm/mmu_notifier: clean up mmu_notifier.h kernel-doc
  uaccess: correct kernel-doc parameter format
  mm/huge_memory: fix a folio_split() race condition with folio_try_get()
  MAINTAINERS: add co-maintainer and reviewer for SLAB ALLOCATOR
  MAINTAINERS: add RELAY entry
  memcg: fix slab accounting in refill_obj_stock() trylock path
  mm/hugetlb.c: use __pa() instead of virt_to_phys() in early bootmem alloc code
  zram: rename writeback_compressed device attr
  tools/testing: fix testing/vma and testing/radix-tree build
  Revert "ptdesc: remove references to folios from __pagetable_ctor() and pagetable_dtor()"
  mm/cma: move put_page_testzero() out of VM_WARN_ON in cma_release()
  mm/damon/core: clear walk_control on inactive context in damos_walk()
  mm: memfd_luo: always dirty all folios
  mm: memfd_luo: always make all folios uptodate
2026-03-09ublk: don't clear GD_SUPPRESS_PART_SCAN for unprivileged daemonsMing Lei-1/+3
When UBLK_F_NO_AUTO_PART_SCAN is set, GD_SUPPRESS_PART_SCAN is cleared unconditionally, including for unprivileged daemons. Keep it consistent with the code block for setting GD_SUPPRESS_PART_SCAN by not clearing it for unprivileged daemons. In reality this isn't a problem because ioctl(BLKRRPART) requires CAP_SYS_ADMIN, but it is more reliable to not clear the bit. Cc: Alexander Atanasov <alex@zazolabs.com> Fixes: 8443e2087e70 ("ublk: add UBLK_F_NO_AUTO_PART_SCAN feature flag") Signed-off-by: Ming Lei <ming.lei@redhat.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2026-03-06ublk: fix NULL pointer dereference in ublk_ctrl_set_size()Mehul Rao-3/+9
ublk_ctrl_set_size() unconditionally dereferences ub->ub_disk via set_capacity_and_notify() without checking if it is NULL. ub->ub_disk is NULL before UBLK_CMD_START_DEV completes (it is only assigned in ublk_ctrl_start_dev()) and after UBLK_CMD_STOP_DEV runs (ublk_detach_disk() sets it to NULL). Since the UBLK_CMD_UPDATE_SIZE handler performs no state validation, a user can trigger a NULL pointer dereference by sending UPDATE_SIZE to a device that has been added but not yet started, or one that has been stopped. Fix this by checking ub->ub_disk under ub->mutex before dereferencing it, and returning -ENODEV if the disk is not available. Fixes: 98b995660bff ("ublk: Add UBLK_U_CMD_UPDATE_SIZE") Cc: stable@vger.kernel.org Signed-off-by: Mehul Rao <mehulrao@gmail.com> Reviewed-by: Ming Lei <ming.lei@redhat.com> Signed-off-by: Jens Axboe <axboe@kernel.dk>
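The fix can be modelled in userspace C (hypothetical and simplified — a pthread mutex stands in for ub->mutex, and this is not the ublk code): check the disk pointer under the lock before touching the capacity, and return -ENODEV when the disk is not attached.

```c
#include <errno.h>
#include <stddef.h>
#include <pthread.h>

struct gendisk { unsigned long long capacity; };

struct ublk_dev {
	pthread_mutex_t mutex;
	struct gendisk *ub_disk;  /* NULL before START_DEV / after STOP_DEV */
};

static int ublk_ctrl_set_size(struct ublk_dev *ub, unsigned long long size)
{
	int ret = 0;

	pthread_mutex_lock(&ub->mutex);
	if (!ub->ub_disk)
		ret = -ENODEV;  /* previously: NULL pointer dereference */
	else
		ub->ub_disk->capacity = size;
	pthread_mutex_unlock(&ub->mutex);
	return ret;
}
```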
2026-03-04zram: rename writeback_compressed device attrSergey Senozhatsky-13/+13
Rename writeback_compressed attr to compressed_writeback to avoid possible confusion and have more natural naming. writeback_compressed may look like an alternative version of writeback while in fact writeback_compressed only sets a writeback property. Make this distinction more clear with a new compressed_writeback name. This updates a feature which is new in 7.0-rcX. Link: https://lkml.kernel.org/r/20260226025429.1042083-1-senozhatsky@chromium.org Fixes: 4c1d61389e8e ("zram: introduce writeback_compressed device attribute") Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org> Suggested-by: Minchan Kim <minchan@kernel.org> Acked-by: Minchan Kim <minchan@kernel.org> Cc: Brian Geffon <bgeffon@google.com> Cc: Richard Chang <richardycc@google.com> Cc: Suren Baghdasaryan <surenb@google.com> Cc: "Christoph Böhmwalder" <christoph.boehmwalder@linbit.com> Cc: Jens Axboe <axboe@kernel.dk> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Lars Ellenberg <lars.ellenberg@linbit.com> Cc: Philipp Reisner <philipp.reisner@linbit.com> Cc: Shuah Khan <skhan@linuxfoundation.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2026-02-27Merge tag 'block-7.0-20260227' of ↵Linus Torvalds-46/+64
git://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux

Pull block fixes from Jens Axboe:
 "Two sets of fixes, one for drbd, and one for the zoned loop driver"

* tag 'block-7.0-20260227' of git://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux:
  zloop: check for spurious options passed to remove
  zloop: advertise a volatile write cache
  drbd: fix null-pointer dereference on local read error
  drbd: Replace deprecated strcpy with strscpy
  drbd: fix "LOGIC BUG" in drbd_al_begin_io_nonblock()
2026-02-24zloop: check for spurious options passed to removeChristoph Hellwig-1/+6
Zloop uses a command option parser for all control commands, but most options are only valid for adding a new device. Check for incorrectly specified options in the remove handler. Fixes: eb0570c7df23 ("block: new zoned loop block device driver") Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Damien Le Moal <dlemoal@kernel.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2026-02-24zloop: advertise a volatile write cacheChristoph Hellwig-6/+18
Zloop is file-system backed and thus needs to sync the underlying file system to persist data. Set BLK_FEAT_WRITE_CACHE so that the block layer actually sends flush commands, and fix the flush implementation, as sync_filesystem() requires s_umount to be held and the code currently misses that. Fixes: eb0570c7df23 ("block: new zoned loop block device driver") Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Damien Le Moal <dlemoal@kernel.org> Signed-off-by: Jens Axboe <axboe@kernel.dk>
2026-02-22Convert remaining multi-line kmalloc_obj/flex GFP_KERNEL usesKees Cook-4/+3
Conversion performed via this Coccinelle script:

// SPDX-License-Identifier: GPL-2.0-only
// Options: --include-headers-for-types --all-includes --include-headers --keep-comments

virtual patch

@gfp depends on patch && !(file in "tools") && !(file in "samples")@
identifier ALLOC = {kmalloc_obj,kmalloc_objs,kmalloc_flex,
	kzalloc_obj,kzalloc_objs,kzalloc_flex,
	kvmalloc_obj,kvmalloc_objs,kvmalloc_flex,
	kvzalloc_obj,kvzalloc_objs,kvzalloc_flex};
@@

  ALLOC(...
-	, GFP_KERNEL
  )

$ make coccicheck MODE=patch COCCI=gfp.cocci

Build and boot tested x86_64 with Fedora 42's GCC and Clang:

Linux version 6.19.0+ (user@host) (gcc (GCC) 15.2.1 20260123 (Red Hat 15.2.1-7), GNU ld version 2.44-12.fc42) #1 SMP PREEMPT_DYNAMIC 1970-01-01
Linux version 6.19.0+ (user@host) (clang version 20.1.8 (Fedora 20.1.8-4.fc42), LLD 20.1.8) #1 SMP PREEMPT_DYNAMIC 1970-01-01

Signed-off-by: Kees Cook <kees@kernel.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2026-02-21Convert more 'alloc_obj' cases to default GFP_KERNEL argumentsLinus Torvalds-16/+8
This converts some of the visually simpler cases that have been split over multiple lines. I only did the ones that are easy to verify the resulting diff by having just that final GFP_KERNEL argument on the next line. Somebody should probably do a proper coccinelle script for this, but for me the trivial script actually resulted in an assertion failure in the middle of the script. I probably had made it a bit _too_ trivial. So after fighting that far a while I decided to just do some of the syntactically simpler cases with variations of the previous 'sed' scripts. The more syntactically complex multi-line cases would mostly really want whitespace cleanup anyway. Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2026-02-21Convert 'alloc_flex' family to use the new default GFP_KERNEL argumentLinus Torvalds-2/+2
This is the exact same thing as the 'alloc_obj()' version, only much smaller because there are a lot fewer users of the *alloc_flex() interface. As with alloc_obj() version, this was done entirely with mindless brute force, using the same script, except using 'flex' in the pattern rather than 'objs*'. Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2026-02-21Convert 'alloc_obj' family to use the new default GFP_KERNEL argumentLinus Torvalds-63/+63
This was done entirely with mindless brute force, using

  git grep -l '\<k[vmz]*alloc_objs*(.*, GFP_KERNEL)' |
    xargs sed -i 's/\(alloc_objs*(.*\), GFP_KERNEL)/\1)/'

to convert the new alloc_obj() users that had a simple GFP_KERNEL argument to just drop that argument. Note that due to the extreme simplicity of the scripting, any slightly more complex cases spread over multiple lines would not be triggered: they definitely exist, but this covers the vast bulk of the cases, and the resulting diff is also then easier to check automatically. For the same reason the 'flex' versions will be done as a separate conversion. Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2026-02-21Merge tag 'kmalloc_obj-treewide-v7.0-rc1' of ↵Linus Torvalds-127/+122
git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux

Pull kmalloc_obj conversion from Kees Cook:
 "This does the tree-wide conversion to kmalloc_obj() and friends using
  coccinelle, with a subsequent small manual cleanup of whitespace
  alignment that coccinelle does not handle.

  This uncovered a clang bug in __builtin_counted_by_ref(), so the
  conversion is preceded by disabling that for current versions of
  clang. The imminent clang 22.1 release has the fix.

  I've done allmodconfig build tests for x86_64, arm64, i386, and arm.
  I did defconfig builds for alpha, m68k, mips, parisc, powerpc, riscv,
  s390, sparc, sh, arc, csky, xtensa, hexagon, and openrisc"

* tag 'kmalloc_obj-treewide-v7.0-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux:
  kmalloc_obj: Clean up after treewide replacements
  treewide: Replace kmalloc with kmalloc_obj for non-scalar types
  compiler_types: Disable __builtin_counted_by_ref for Clang