path: root/tools/perf/scripts/python/stackcollapse.py
author    Qu Wenruo <wqu@suse.com> 2023-09-02 08:13:52 +0800
committer David Sterba <dsterba@suse.com> 2023-10-12 16:44:03 +0200
commit    686c4a5a42635e0d2889e3eb461c554fd0b616b4 (patch)
tree      5235f3aa787d51bc5ec38d1b81d549ca215e31a6 /tools/perf/scripts/python/stackcollapse.py
parent    2a3a1dd99e043b64c0f61cb3960040fd697d87bf (diff)
download  linux-686c4a5a42635e0d2889e3eb461c554fd0b616b4.tar.gz
          linux-686c4a5a42635e0d2889e3eb461c554fd0b616b4.zip
btrfs: qgroup: iterate qgroups without memory allocation for qgroup_reserve()
Qgroup heavily relies on ulist to go through all the involved qgroups, but since we're using ulist inside the fs_info->qgroup_lock spinlock, this means we're doing a lot of GFP_ATOMIC allocations.

This patch reduces the GFP_ATOMIC usage for qgroup_reserve() by eliminating the memory allocation completely. This is done by moving the needed memory to a btrfs_qgroup::iterator list_head, so that we can put all the involved qgroups onto an on-stack list, thus eliminating the need to allocate memory while holding the spinlock.

The only cost is slightly higher memory usage, but considering the reduced GFP_ATOMIC usage in a hot path, it should still be acceptable.

Function qgroup_reserve() is the perfect starting point for this conversion.

Reviewed-by: Boris Burkov <boris@bur.io>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
Diffstat (limited to 'tools/perf/scripts/python/stackcollapse.py')
0 files changed, 0 insertions, 0 deletions