path: root/net/lapb/lapb_iface.c
Age | Commit message | Author | Files | Lines
2025-07-17 | et131x: Add missing check after DMA map | Thomas Fourier | 1 | -0/+36
The DMA map functions can fail and should be tested for errors. If the mapping fails, unmap and return an error. Signed-off-by: Thomas Fourier <fourier.thomas@gmail.com> Acked-by: Mark Einon <mark.einon@gmail.com> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/20250716094733.28734-2-fourier.thomas@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
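For reference, the usual shape of such a check (a minimal sketch, not the exact et131x change; dev, buf and len are illustrative):

  /* A streaming DMA mapping must be checked with dma_mapping_error()
   * before the address is handed to the hardware.
   */
  dma_addr_t addr = dma_map_single(dev, buf, len, DMA_TO_DEVICE);

  if (dma_mapping_error(dev, addr)) {
          /* undo any mappings already set up on this path, then bail out */
          return -ENOMEM;
  }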
2025-07-17 | net: ag71xx: Add missing check after DMA map | Thomas Fourier | 1 | -0/+9
The DMA map functions can fail and should be tested for errors. Signed-off-by: Thomas Fourier <fourier.thomas@gmail.com> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/20250716095733.37452-3-fourier.thomas@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-07-17 | selftests/drivers/net: Support ipv6 for napi_id test | Tianyi Cui | 2 | -11/+28
Add support for an IPv6 environment in the napi_id test.

Test Plan:

  ./run_kselftest.sh -t drivers/net:napi_id.py
  TAP version 13
  1..1
  # timeout set to 45
  # selftests: drivers/net: napi_id.py
  # TAP version 13
  # 1..1
  # ok 1 napi_id.test_napi_id
  # # Totals: pass:1 fail:0 xfail:0 xpass:0 skip:0 error:0
  ok 1 selftests: drivers/net: napi_id.py

Signed-off-by: Tianyi Cui <1997cui@gmail.com>
Link: https://patch.msgid.link/20250717011913.1248816-1-1997cui@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-07-17 | ibmvnic: Use ndo_get_stats64 to fix inaccurate SAR reporting | Mingming Cao | 1 | -7/+20
VNIC testing on multi-core Power systems showed SAR stats drift and packet rate inconsistencies under load. Implement ndo_get_stats64 to provide safe aggregation of queue-level atomic64 counters into rtnl_link_stats64 for use by tools like 'ip -s', 'ifconfig', and 'sar'. Switching to ndo_get_stats64 aligns SAR reporting with the standard kernel interface for retrieving netdev stats. This removes redundant per-adapter stat updates, reduces overhead, eliminates cacheline bouncing from hot path updates, and improves the accuracy of reported packet rates. Signed-off-by: Mingming Cao <mmc@linux.ibm.com> Reviewed-by: Brian King <bjking1@linux.ibm.com> Reviewed-by: Dave Marquardt <davemarq@linux.ibm.com> Reviewed-by: Simon Horman <horms@kernel.org> ---- Changes since v3: link to v3: https://www.spinics.net/lists/netdev/msg1107999.html -- keep per queue counters as u64 (this patch) and drop off patch 1 in v3 Link: https://patch.msgid.link/20250716152115.61143-1-mmc@linux.ibm.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
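The general shape of such a handler (a sketch with assumed struct and field names, not the actual ibmvnic code):

  /* Aggregate per-queue counters into the rtnl_link_stats64 that the
   * stack hands to tools like 'ip -s', 'ifconfig' and 'sar'.
   */
  static void example_get_stats64(struct net_device *dev,
                                  struct rtnl_link_stats64 *stats)
  {
          struct example_adapter *ad = netdev_priv(dev);  /* hypothetical priv */
          int i;

          for (i = 0; i < ad->num_queues; i++) {
                  stats->rx_packets += ad->rx_stats[i].packets;
                  stats->rx_bytes   += ad->rx_stats[i].bytes;
                  stats->tx_packets += ad->tx_stats[i].packets;
                  stats->tx_bytes   += ad->tx_stats[i].bytes;
          }
  }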
2025-07-17 | net/mlx5e: Properly access RCU protected qdisc_sleeping variable | Leon Romanovsky | 1 | -1/+1
The qdisc_sleeping variable is declared as "struct Qdisc __rcu" and as such needs proper annotation when accessed. Without rtnl_dereference(), the following warning is generated by sparse:

  drivers/net/ethernet/mellanox/mlx5/core/en/qos.c:377:40: warning: incorrect type in initializer (different address spaces)
  drivers/net/ethernet/mellanox/mlx5/core/en/qos.c:377:40:    expected struct Qdisc *qdisc
  drivers/net/ethernet/mellanox/mlx5/core/en/qos.c:377:40:    got struct Qdisc [noderef] __rcu *qdisc_sleeping

Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
Link: https://patch.msgid.link/1752675472-201445-4-git-send-email-tariqt@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
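The fix boils down to reading the pointer with the accessor that matches how it is protected (a minimal sketch):

  /* qdisc_sleeping is __rcu and its writers are serialised by RTNL, so
   * a reader holding RTNL uses rtnl_dereference() instead of a plain
   * load, which keeps sparse's address-space checking happy.
   */
  struct Qdisc *qdisc = rtnl_dereference(dev_queue->qdisc_sleeping);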
2025-07-17 | net/mlx5e: fix kdoc warning on eswitch.h | Moshe Shemesh | 1 | -1/+1
Fix the following kdoc warning:

  git ls-files *.[ch] | egrep drivers/net/ethernet/mellanox/mlx5/core/ |\
    xargs scripts/kernel-doc --none
  drivers/net/ethernet/mellanox/mlx5/core/eswitch.h:824: warning: cannot understand function prototype: 'struct mlx5_esw_event_info '

Signed-off-by: Moshe Shemesh <moshe@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
Link: https://patch.msgid.link/1752675472-201445-3-git-send-email-tariqt@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-07-17 | net/mlx5: HWS, Enable IPSec hardware offload in legacy mode | Lama Kayal | 1 | -2/+1
IPSec hardware offload in legacy mode should not be affected by the steering mode, hence it should also work properly with hmfs mode. Remove the steering mode validation when calculating the cap for packet offload; this also enables the missing MLX5_IPSEC_CAP_PRIO cap needed for crypto offload. Signed-off-by: Lama Kayal <lkayal@nvidia.com> Reviewed-by: Jianbo Liu <jianbol@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Reviewed-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com> Link: https://patch.msgid.link/1752675472-201445-2-git-send-email-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-07-17 | net: pcs: xpcs: mask readl() return value to 16 bits | Jack Ping CHNG | 1 | -2/+2
readl() returns a 32-bit value, but Clause 22/45 registers are 16 bits wide. Masking with 0xFFFF avoids picking up garbage in the upper bits. Signed-off-by: Jack Ping CHNG <jchng@maxlinear.com> Reviewed-by: Maxime Chevallier <maxime.chevallier@bootlin.com> Link: https://patch.msgid.link/20250716030349.3796806-1-jchng@maxlinear.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
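In other words (sketch; base and reg are illustrative):

  /* The MMIO read is 32 bits wide, but Clause 22/45 registers only
   * define the low 16 bits, so discard the upper half.
   */
  u16 val = readl(base + reg) & 0xffff;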
2025-07-17 | net/mlx5: Fix an IS_ERR() vs NULL bug in esw_qos_move_node() | Dan Carpenter | 1 | -2/+3
The __esw_qos_alloc_node() function returns NULL on error. It doesn't return error pointers. Update the error checking to match. Fixes: 96619c485fa6 ("net/mlx5: Add support for setting tc-bw on nodes") Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org> Reviewed-by: Tariq Toukan <tariqt@nvidia.com> Link: https://patch.msgid.link/0ce4ec2a-2b5d-4652-9638-e715a99902a7@sabinyo.mountain Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-07-17 | net: ethernet: mtk_wed: Fix NULL vs IS_ERR() bug in mtk_wed_get_memory_region() | Dan Carpenter | 1 | -1/+3
We recently changed this from using devm_ioremap() to using devm_ioremap_resource() and unfortunately the former returns NULL while the latter returns error pointers. The check for errors needs to be updated as well. Fixes: e27dba1951ce ("net: Use of_reserved_mem_region_to_resource{_byname}() for "memory-region"") Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org> Acked-by: Lorenzo Bianconi <lorenzo@kernel.org> Link: https://patch.msgid.link/87c10dbd-df86-4971-b4f5-40ba02c076fb@sabinyo.mountain Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-07-17 | net: airoha: Fix a NULL vs IS_ERR() bug in airoha_npu_run_firmware() | Dan Carpenter | 1 | -2/+2
The devm_ioremap_resource() function returns error pointers. It never returns NULL. Update the check to match. Fixes: e27dba1951ce ("net: Use of_reserved_mem_region_to_resource{_byname}() for "memory-region"") Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org> Acked-by: Lorenzo Bianconi <lorenzo@kernel.org> Link: https://patch.msgid.link/fc6d194e-6bf5-49ca-bc77-3fdfda62c434@sabinyo.mountain Signed-off-by: Jakub Kicinski <kuba@kernel.org>
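The corrected pattern for this and the mtk_wed fix above looks roughly like this (sketch):

  /* devm_ioremap_resource() reports failure as ERR_PTR(-E...), never
   * NULL, so the error check must use IS_ERR()/PTR_ERR().
   */
  base = devm_ioremap_resource(dev, res);
  if (IS_ERR(base))
          return PTR_ERR(base);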
2025-07-17 | net: phy: qcom: qca807x: Support PHY counter | Luo Jie | 1 | -0/+25
Within the QCA807X PHY's config_init() function, enable CRC checking for received and transmitted frames and configure the counter to clear after being read, to support counter recording. Additionally, add support for PHY counter operations. Signed-off-by: Luo Jie <quic_luoj@quicinc.com> Link: https://patch.msgid.link/20250715-qcom_phy_counter-v3-3-8b0e460a527b@quicinc.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-07-17 | net: phy: qcom: qca808x: Support PHY counter | Luo Jie | 1 | -0/+23
Enable CRC checking for received and transmitted frames, and configure counters to clear after being read within config_init() for accurate counter recording. Additionally, add PHY counter operations and integrate shared functions. Signed-off-by: Luo Jie <quic_luoj@quicinc.com> Link: https://patch.msgid.link/20250715-qcom_phy_counter-v3-2-8b0e460a527b@quicinc.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-07-17 | net: phy: qcom: Add PHY counter support | Luo Jie | 2 | -0/+98
Add PHY counter functionality to the shared library. The implementation is identical for the current QCA807X and QCA808X PHYs. The PHY counter can be configured to perform CRC checking for both received and transmitted packets. Additionally, the packet counter can be set to automatically clear after it is read. The PHY counter includes 32-bit packet counters for both RX (received) and TX (transmitted) packets, as well as 16-bit counters for recording CRC error packets for both RX and TX. Signed-off-by: Luo Jie <quic_luoj@quicinc.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://patch.msgid.link/20250715-qcom_phy_counter-v3-1-8b0e460a527b@quicinc.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-07-17 | netdevsim: remove redundant branch | Dennis Chen | 1 | -4/+1
bool notify is referenced nowhere else in the function except to check whether or not to call rtnl_offload_xstats_notify(). Remove it and move the call to the previous branch. Signed-off-by: Dennis Chen <dechen@redhat.com> Reviewed-by: Simon Horman <horms@kernel.org> Reviewed-by: Petr Machata <petrm@nvidia.com> Link: https://patch.msgid.link/20250716165750.561175-1-dechen@redhat.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-07-17 | selftests: net: prevent Python from buffering the output | Jakub Kicinski | 1 | -3/+4
Make sure Python doesn't buffer the output, otherwise for some tests we may see false positive timeouts in NIPA. NIPA thinks that a machine has hung if the test doesn't print anything for 3min. This is also nice to have when running the tests manually, especially in vng. Reviewed-by: Petr Machata <petrm@nvidia.com> Link: https://patch.msgid.link/20250716205712.1787325-1-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-07-17 | neighbour: Update pneigh_entry in pneigh_create(). | Kuniyuki Iwashima | 3 | -23/+20
neigh_add() updates the pneigh_entry found or created by pneigh_create(). This update is serialised by RTNL, but we will remove it. Let's move the update part to pneigh_create() and make it return an errno instead of a pointer to the pneigh_entry. Now, the pneigh code is RTNL free. Signed-off-by: Kuniyuki Iwashima <kuniyu@google.com> Link: https://patch.msgid.link/20250716221221.442239-16-kuniyu@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-07-17 | neighbour: Protect tbl->phash_buckets[] with a dedicated mutex. | Kuniyuki Iwashima | 2 | -18/+22
tbl->phash_buckets[] is only modified in the slow path by pneigh_create() and pneigh_delete() under the table lock. Both of them are called under RTNL, so no extra lock is needed, but we will remove RTNL from the paths.

pneigh_create() looks up a pneigh_entry, and this part can be lockless, but it would complicate the logic, e.g.:

  1. lookup
  2. allocate a pneigh_entry with GFP_KERNEL
  3. lookup again, but under the lock
  4. if found, return it after freeing the allocated memory
  5. else, return the new one

Instead, let's add a per-table mutex and run lookup and allocation under it, as sketched below.

Note that the pneigh_entry update in neigh_add() is still protected by RTNL and will be moved to pneigh_create() in the next patch.

Signed-off-by: Kuniyuki Iwashima <kuniyu@google.com>
Link: https://patch.msgid.link/20250716221221.442239-15-kuniyu@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
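A minimal sketch of that lookup-or-create flow (the mutex and helper names are assumptions, not the actual patch):

  /* Serialise the slow-path lookup and allocation with a per-table
   * mutex, so the allocation can simply use GFP_KERNEL.
   */
  mutex_lock(&tbl->phash_lock);                   /* mutex name assumed */
  pn = pneigh_find(tbl, net, key, dev);           /* hypothetical plain lookup */
  if (!pn) {
          pn = kzalloc(sizeof(*pn) + key_len, GFP_KERNEL);
          if (pn)
                  pneigh_hash_insert(tbl, pn);    /* hypothetical insert helper */
  }
  mutex_unlock(&tbl->phash_lock);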
2025-07-17 | neighbour: Drop read_lock_bh(&tbl->lock) in pneigh_lookup(). | Kuniyuki Iwashima | 1 | -27/+16
Now, all callers of pneigh_lookup() are under RCU, and the read lock there is no longer needed. Let's drop the lock, inline __pneigh_lookup_1() to pneigh_lookup(), and call it from pneigh_create(). The next patch will remove tbl->lock from pneigh_create(). Signed-off-by: Kuniyuki Iwashima <kuniyu@google.com> Link: https://patch.msgid.link/20250716221221.442239-14-kuniyu@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-07-17 | neighbour: Remove __pneigh_lookup(). | Kuniyuki Iwashima | 3 | -17/+2
__pneigh_lookup() is the lockless version of pneigh_lookup(), but its only caller, pndisc_is_router(), holds the table lock and reads pneigh_entry.flags. This is because accessing a pneigh_entry after pneigh_lookup() was illegal unless the caller held RTNL or the table lock. Now, pneigh_entry is guaranteed to be alive during the RCU critical section. Let's call pneigh_lookup() and use READ_ONCE() for n->flags in pndisc_is_router() and remove __pneigh_lookup(). Signed-off-by: Kuniyuki Iwashima <kuniyu@google.com> Link: https://patch.msgid.link/20250716221221.442239-13-kuniyu@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
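The resulting reader in pndisc_is_router() is roughly (sketch; the exact pneigh_lookup() arguments follow the earlier split patch):

  rcu_read_lock();
  pn = pneigh_lookup(&nd_tbl, net, addr, dev);    /* lookup only, no create */
  is_router = pn && (READ_ONCE(pn->flags) & NTF_ROUTER);
  rcu_read_unlock();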
2025-07-17 | neighbour: Use rcu_dereference() in pneigh_get_{first,next}(). | Kuniyuki Iwashima | 1 | -5/+5
Now pneigh_entry is guaranteed to be alive during the RCU critical section even without holding tbl->lock. Let's use rcu_dereference() in pneigh_get_{first,next}(). Note that neigh_seq_start() still holds tbl->lock for the normal neighbour entry. Signed-off-by: Kuniyuki Iwashima <kuniyu@google.com> Link: https://patch.msgid.link/20250716221221.442239-12-kuniyu@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-07-17 | neighbour: Drop read_lock_bh(&tbl->lock) in pneigh_dump_table(). | Kuniyuki Iwashima | 1 | -8/+3
Now pneigh_entry is guaranteed to be alive during the RCU critical section even without holding tbl->lock. Let's drop read_lock_bh(&tbl->lock) and use rcu_dereference() to iterate tbl->phash_buckets[] in pneigh_dump_table(). Signed-off-by: Kuniyuki Iwashima <kuniyu@google.com> Link: https://patch.msgid.link/20250716221221.442239-11-kuniyu@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-07-17 | neighbour: Convert RTM_GETNEIGH to RCU. | Kuniyuki Iwashima | 1 | -10/+15
__dev_get_by_index() is the only RTNL-dependent call in neigh_get(). Let's replace it with dev_get_by_index_rcu() and convert RTM_GETNEIGH to RCU. Signed-off-by: Kuniyuki Iwashima <kuniyu@google.com> Link: https://patch.msgid.link/20250716221221.442239-10-kuniyu@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-07-17 | neighbour: Annotate access to struct pneigh_entry.{flags,protocol}. | Kuniyuki Iwashima | 1 | -5/+8
We will convert pneigh readers to RCU, and its flags and protocol will be read locklessly. Let's annotate the access to the two fields. Note that all access to pn->permanent is under RTNL (neigh_add() and pneigh_ifdown_and_unlock()), so WRITE_ONCE() and READ_ONCE() are not needed. Signed-off-by: Kuniyuki Iwashima <kuniyu@google.com> Link: https://patch.msgid.link/20250716221221.442239-9-kuniyu@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-07-17 | neighbour: Free pneigh_entry after RCU grace period. | Kuniyuki Iwashima | 2 | -17/+32
We will convert RTM_GETNEIGH to RCU. neigh_get() looks up pneigh_entry by pneigh_lookup() and passes it to pneigh_fill_info(). Then, we must ensure that the entry is alive till pneigh_fill_info() completes, but read_lock_bh(&tbl->lock) in pneigh_lookup() does not guarantee that. Also, we will convert all readers of tbl->phash_buckets[] to RCU. Let's use call_rcu() to free pneigh_entry and update phash_buckets[] and ->next by rcu_assign_pointer(). pneigh_ifdown_and_unlock() uses list_head to avoid overwriting ->next and moving RCU iterators to another list. pndisc_destructor() (only IPv6 ndisc uses this) uses a mutex, so it is not delayed to call_rcu(), where we cannot sleep. This is fine because the mcast code works with RCU and ipv6_dev_mc_dec() frees mcast objects after RCU grace period. While at it, we change the return type of pneigh_ifdown_and_unlock() to void. Signed-off-by: Kuniyuki Iwashima <kuniyu@google.com> Link: https://patch.msgid.link/20250716221221.442239-8-kuniyu@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
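The deferred-free part has roughly this shape (sketch; it assumes pneigh_entry grows a struct rcu_head member named rcu):

  static void pneigh_free_rcu(struct rcu_head *head)
  {
          struct pneigh_entry *n = container_of(head, struct pneigh_entry, rcu);

          kfree(n);
  }

  /* in the unlink path, instead of freeing immediately: */
  call_rcu(&n->rcu, pneigh_free_rcu);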
2025-07-17 | neighbour: Annotate neigh_table.phash_buckets and pneigh_entry.next with __rcu. | Kuniyuki Iwashima | 2 | -23/+33
The next patch will free pneigh_entry with call_rcu(). Then, we need to annotate neigh_table.phash_buckets[] and pneigh_entry.next with __rcu. To make the next patch cleaner, let's annotate the fields in advance. Currently, all accesses to the fields are under the neigh table lock, so rcu_dereference_protected() is used with 1 for now, but most of them (except in pneigh_delete() and pneigh_ifdown_and_unlock()) will be replaced with rcu_dereference() and rcu_dereference_check(). Note that pneigh_ifdown_and_unlock() changes pneigh_entry.next to a local list, which is illegal because the RCU iterator could be moved to another list. This part will be fixed in the next patch. Signed-off-by: Kuniyuki Iwashima <kuniyu@google.com> Link: https://patch.msgid.link/20250716221221.442239-7-kuniyu@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
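To illustrate what the annotation buys (sketch; as the commit notes, the protection condition stays 1 while every access is still under the table lock):

  struct pneigh_entry __rcu **bucket = &tbl->phash_buckets[hash_val];

  /* writer: publish the new entry so concurrent RCU readers see a
   * fully initialised object
   */
  rcu_assign_pointer(*bucket, new_entry);

  /* reader that still holds the table lock */
  pn = rcu_dereference_protected(*bucket, 1);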
2025-07-17 | neighbour: Split pneigh_lookup(). | Kuniyuki Iwashima | 5 | -16/+36
pneigh_lookup() has ASSERT_RTNL() in the middle of the function, which is confusing. When called with its last argument, creat, set to 0, pneigh_lookup() literally looks up a proxy neighbour entry. This is the case for the reader paths: the fast path and RTM_GETNEIGH. pneigh_lookup(), however, creates a pneigh_entry when called with creat set to 1 from RTM_NEWNEIGH and SIOCSARP, which require RTNL. Let's split pneigh_lookup() into two functions. We will convert all the reader paths to RCU, and read_lock_bh(&tbl->lock) in the new pneigh_lookup() will be dropped. Signed-off-by: Kuniyuki Iwashima <kuniyu@google.com> Link: https://patch.msgid.link/20250716221221.442239-6-kuniyu@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-07-17 | neighbour: Move neigh_find_table() to neigh_get(). | Kuniyuki Iwashima | 1 | -17/+20
neigh_valid_get_req() calls neigh_find_table() to fetch neigh_tables[]. neigh_find_table() uses rcu_dereference_rtnl(), but RTNL actually does not protect it at all; neigh_table_clear() can be called without RTNL and only waits for RCU readers by synchronize_rcu(). Fortunately, there is no bug because IPv4 is built-in, IPv6 cannot be unloaded, and DECNET was removed. To fetch neigh_tables[] by rcu_dereference() later, let's move neigh_find_table() from neigh_valid_get_req() to neigh_get(). Signed-off-by: Kuniyuki Iwashima <kuniyu@google.com> Link: https://patch.msgid.link/20250716221221.442239-5-kuniyu@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-07-17 | neighbour: Allocate skb in neigh_get(). | Kuniyuki Iwashima | 1 | -56/+32
We will remove RTNL for neigh_get() and run it under RCU instead. neigh_get_reply() and pneigh_get_reply() allocate skb with GFP_KERNEL. Let's move the allocation before __dev_get_by_index() in neigh_get(). Now, neigh_get_reply() and pneigh_get_reply() are inlined and rtnl_unicast() is factorised. We will convert pneigh_lookup() to __pneigh_lookup() later. Signed-off-by: Kuniyuki Iwashima <kuniyu@google.com> Link: https://patch.msgid.link/20250716221221.442239-4-kuniyu@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-07-17 | neighbour: Move two validations from neigh_get() to neigh_valid_get_req(). | Kuniyuki Iwashima | 1 | -13/+13
We will remove RTNL for neigh_get() and run it under RCU instead.

neigh_get() returns -EINVAL in the following cases:

  * NDA_DST is not specified
  * Both ndm->ndm_ifindex and NTF_PROXY are not specified

These validations do not require RCU. Let's move them to neigh_valid_get_req(). While at it, the extack string for the first case is replaced with NL_SET_ERR_ATTR_MISS().

Signed-off-by: Kuniyuki Iwashima <kuniyu@google.com>
Link: https://patch.msgid.link/20250716221221.442239-3-kuniyu@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-07-17 | neighbour: Make neigh_valid_get_req() return ndmsg. | Kuniyuki Iwashima | 1 | -24/+19
neigh_get() passes 4 local variable pointers to neigh_valid_get_req(). If it returns a pointer of struct ndmsg, we do not need to pass two of them. Signed-off-by: Kuniyuki Iwashima <kuniyu@google.com> Link: https://patch.msgid.link/20250716221221.442239-2-kuniyu@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-07-17 | selftests: drv-net: rss_api: test input-xfrm and hash fields | Jakub Kicinski | 1 | -0/+143
Test configuring input-xfrm and hash fields with all the limitations.

Tested on mlx5 (CX6):

  # ./ksft-net-drv/drivers/net/hw/rss_api.py
  TAP version 13
  1..10
  ok 1 rss_api.test_rxfh_nl_set_fail
  ok 2 rss_api.test_rxfh_nl_set_indir
  ok 3 rss_api.test_rxfh_nl_set_indir_ctx
  ok 4 rss_api.test_rxfh_indir_ntf
  ok 5 rss_api.test_rxfh_indir_ctx_ntf
  ok 6 rss_api.test_rxfh_nl_set_key
  ok 7 rss_api.test_rxfh_fields
  ok 8 rss_api.test_rxfh_fields_set
  ok 9 rss_api.test_rxfh_fields_set_xfrm
  ok 10 rss_api.test_rxfh_fields_ntf
  # Totals: pass:10 fail:0 xfail:0 xpass:0 skip:0 error:0

Link: https://patch.msgid.link/20250716000331.1378807-12-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-07-17 | ethtool: rss: support setting flow hashing fields | Jakub Kicinski | 4 | -10/+107
Add support for ETHTOOL_SRXFH (setting hashing fields) in RSS_SET. The tricky part is dealing with symmetric hashing. In Netlink the user can change the hashing fields and symmetric hash in one request; in IOCTL the two used to be set via different uAPI requests. Since fields and hash function config are still separate driver callbacks, changes to the two are not atomic. Keep things simple and validate the settings against both the pre- and post-change ones. This means we will reject the config request if the user tries to correct the flow fields and set input_xfrm in one request, or disables input_xfrm and makes the flow fields non-symmetric. We can adjust it later if there's a real need. Starting simple feels right, and potentially partially applying the settings isn't nice, either. Reviewed-by: Gal Pressman <gal@nvidia.com> Link: https://patch.msgid.link/20250716000331.1378807-11-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-07-17 | ethtool: rss: support setting input-xfrm via Netlink | Jakub Kicinski | 6 | -8/+71
Support configuring symmetric hashing via Netlink. We have the flow field config prepared as part of SET handling, so scan it for conflicts instead of querying the driver again. Reviewed-by: Gal Pressman <gal@nvidia.com> Link: https://patch.msgid.link/20250716000331.1378807-10-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-07-17 | netlink: specs: define input-xfrm enum in the spec | Jakub Kicinski | 2 | -3/+24
Help YNL decode the values for input-xfrm by defining the possible values in the spec. Don't define "no change" as it's an IOCTL artifact with no use in Netlink.

With this change on mlx5 input-xfrm gets decoded:

  # ynl --family ethtool --dump rss-get
  [{'header': {'dev-index': 2, 'dev-name': 'eth0'},
    'hfunc': 1,
    'hkey': b'V\xa8\xf9\x9 ...',
    'indir': [0, 1, ... ],
    'input-xfrm': {'sym-or-xor'},    <<<
    'flow-hash': {'ah4': {'ip-dst', 'ip-src'},
                  'ah6': {'ip-dst', 'ip-src'},
                  'esp4': {'ip-dst', 'ip-src'},
                  'esp6': {'ip-dst', 'ip-src'},
                  'ip4': {'ip-dst', 'ip-src'},
                  'ip6': {'ip-dst', 'ip-src'},
                  'tcp4': {'l4-b-0-1', 'ip-dst', 'l4-b-2-3', 'ip-src'},
                  'tcp6': {'l4-b-0-1', 'ip-dst', 'l4-b-2-3', 'ip-src'},
                  'udp4': {'l4-b-0-1', 'ip-dst', 'l4-b-2-3', 'ip-src'},
                  'udp6': {'l4-b-0-1', 'ip-dst', 'l4-b-2-3', 'ip-src'}}
   }]

Reviewed-by: Gal Pressman <gal@nvidia.com>
Link: https://patch.msgid.link/20250716000331.1378807-9-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-07-17 | selftests: drv-net: rss_api: test setting hashing key via Netlink | Jakub Kicinski | 1 | -0/+33
Test setting hashing key via Netlink.

  # ./tools/testing/selftests/drivers/net/hw/rss_api.py
  TAP version 13
  1..7
  ok 1 rss_api.test_rxfh_nl_set_fail
  ok 2 rss_api.test_rxfh_nl_set_indir
  ok 3 rss_api.test_rxfh_nl_set_indir_ctx
  ok 4 rss_api.test_rxfh_indir_ntf
  ok 5 rss_api.test_rxfh_indir_ctx_ntf
  ok 6 rss_api.test_rxfh_nl_set_key
  ok 7 rss_api.test_rxfh_fields
  # Totals: pass:7 fail:0 xfail:0 xpass:0 skip:0 error:0

Link: https://patch.msgid.link/20250716000331.1378807-8-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-07-17 | ethtool: rss: support setting hkey via Netlink | Jakub Kicinski | 3 | -1/+42
Support setting the RSS hashing key via ethtool Netlink. Use the Netlink policy to make sure the user doesn't pass an empty key; "resetting" the key is not a thing. Reviewed-by: Gal Pressman <gal@nvidia.com> Link: https://patch.msgid.link/20250716000331.1378807-7-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-07-17 | ethtool: rss: support setting hfunc via Netlink | Jakub Kicinski | 3 | -1/+13
Support setting the RSS hash function / algo via ethtool Netlink. Like the IOCTL path, we don't validate that the function is within the range known to the kernel. The drivers do a pretty good job validating the inputs, and the IDs are technically "dynamically queried" rather than part of uAPI. The only change is that in Netlink we don't support the user explicitly passing ETH_RSS_HASH_NO_CHANGE (0); if no change is requested, the attribute should be absent. ETH_RSS_HASH_NO_CHANGE is retained in the driver-facing API for consistency (not that I see a strong reason for it). Reviewed-by: Gal Pressman <gal@nvidia.com> Link: https://patch.msgid.link/20250716000331.1378807-6-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-07-17 | selftests: drv-net: rss_api: test setting indirection table via Netlink | Jakub Kicinski | 1 | -3/+93
Test setting indirection table via Netlink.

  # ./tools/testing/selftests/drivers/net/hw/rss_api.py
  TAP version 13
  1..6
  ok 1 rss_api.test_rxfh_nl_set_fail
  ok 2 rss_api.test_rxfh_nl_set_indir
  ok 3 rss_api.test_rxfh_nl_set_indir_ctx
  ok 4 rss_api.test_rxfh_indir_ntf
  ok 5 rss_api.test_rxfh_indir_ctx_ntf
  ok 6 rss_api.test_rxfh_fields
  # Totals: pass:6 fail:0 xfail:0 xpass:0 skip:0 error:0

Reviewed-by: Edward Cree <ecree.xilinx@gmail.com>
Link: https://patch.msgid.link/20250716000331.1378807-5-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-07-17 | tools: ynl: support packing binary arrays of scalars | Jakub Kicinski | 1 | -1/+6
We support decoding a binary type with a scalar subtype already, add support for sending such arrays to the kernel. While at it also support using "None" to indicate that the binary attribute should be empty. I couldn't decide whether empty binary should be [] or None, but there should be no harm in supporting both. Link: https://patch.msgid.link/20250716000331.1378807-4-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-07-17 | selftests: drv-net: rss_api: factor out checking min queue count | Jakub Kicinski | 1 | -8/+9
Multiple tests check min queue count, create a helper. Link: https://patch.msgid.link/20250716000331.1378807-3-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-07-17 | ethtool: rss: initial RSS_SET (indirection table handling) | Jakub Kicinski | 6 | -1/+242
Add initial support for RSS_SET; for now only operations on the indirection table are supported.

Unlike the ioctl, don't check whether at least one parameter is being changed. This is how other ethtool-nl ops behave, so pick ethtool-nl consistency over copying the ioctl behavior.

There are two special cases here: 1) resetting the table to defaults; 2) support for tables of different size.

For (1) I use an empty Netlink attribute (array of size 0).

(2) may require some background. AFAICT a lot of modern devices allow allocating RSS tables of different sizes. mlx5 can upsize its tables, bnxt has some "table size calculation", and Intel folks asked about RSS table sizing in the context of resource allocation in the past. The ethtool IOCTL API has a concept of table size, but right now the user is expected to provide a table exactly the size the device requests. Some drivers may change the table size at runtime (in response to queue count changes) but the user is not in control of this. What's not great is that all RSS contexts share the same table size. For example, a device with 128 queues enabled and 16 RSS contexts with 8 queues each will likely have 256-entry tables for each of the 16 contexts, while 32 would be more than enough given each context only has 8 queues.

To address this the Netlink API should avoid enforcing table size at the uAPI level, and should allow the user to express the min table size they expect. To fully solve (2) we will need more driver plumbing, but at the uAPI level this patch allows the user to specify a table size smaller than what the device advertises. The device table size must be a multiple of the user-requested table size. We then replicate the user-provided table to fill the full device-size table. This addresses the "allow the user to express the min table size" objective, while not enforcing any fixed size. From the Netlink perspective .get_rxfh_indir_size() is now de facto the "max" table size supported by the device. We may choose to support table replication in ethtool, too, when we actually plumb this thru the device APIs.

Initially I was considering moving full pattern generation to the kernel (which queues to use, at which frequency, and what min sequence length). I don't think this complexity would buy us much, and most if not all devices have pow-2 table sizes, which simplifies the replication a lot.

Reviewed-by: Gal Pressman <gal@nvidia.com>
Link: https://patch.msgid.link/20250716000331.1378807-2-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
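The replication rule described above amounts to (sketch; variable names are illustrative):

  /* dev_size is a multiple of user_size; repeat the user-provided
   * table until the device-sized table is full.
   */
  for (i = 0; i < dev_size; i++)
          dev_table[i] = user_table[i % user_size];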
2025-07-17 | net/mlx5e: TX, Fix dma unmapping for devmem tx | Dragos Tatulea | 3 | -7/+17
net_iovs should have the dma address set to 0 so that netmem_dma_unmap_page_attrs() correctly skips the unmap. This was not done in mlx5 when support for devmem tx was added and resulted in the splat below when the platform iommu was enabled.

This patch addresses the issue by using netmem_dma_unmap_addr_set() which handles the net_iov case when setting the dma address. A small refactoring of mlx5e_dma_push() was required to be able to use this API. The function was split in two versions and each version called accordingly. Note that netmem_dma_unmap_addr_set() introduces an additional if case.

Splat:

  WARNING: CPU: 14 PID: 2587 at drivers/iommu/dma-iommu.c:1228 iommu_dma_unmap_page+0x7d/0x90
  Modules linked in: [...]
  Unloaded tainted modules: i10nm_edac(E):1 fjes(E):1
  CPU: 14 UID: 0 PID: 2587 Comm: ncdevmem Tainted: G S E 6.15.0+ #3 PREEMPT(voluntary)
  Tainted: [S]=CPU_OUT_OF_SPEC, [E]=UNSIGNED_MODULE
  Hardware name: HPE ProLiant DL380 Gen10 Plus/ProLiant DL380 Gen10 Plus, BIOS U46 06/01/2022
  RIP: 0010:iommu_dma_unmap_page+0x7d/0x90
  Code: [...]
  RSP: 0000:ff6b1e3ea0b2fc58 EFLAGS: 00010246
  RAX: 0000000000000000 RBX: ff46ef2d0a2340c8 RCX: 0000000000000000
  RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000001
  RBP: 0000000000000001 R08: 0000000000000000 R09: ffffffff8827a120
  R10: 0000000000000000 R11: 0000000000000000 R12: 00000000d8000000
  R13: 0000000000000008 R14: 0000000000000001 R15: 0000000000000000
  FS: 00007feb69adf740(0000) GS:ff46ef2c779f1000(0000) knlGS:0000000000000000
  CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
  CR2: 00007feb69cca000 CR3: 0000000154b97006 CR4: 0000000000773ef0
  DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
  DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
  PKRU: 55555554
  Call Trace:
   <TASK>
   dma_unmap_page_attrs+0x227/0x250
   mlx5e_poll_tx_cq+0x163/0x510 [mlx5_core]
   mlx5e_napi_poll+0x94/0x720 [mlx5_core]
   __napi_poll+0x28/0x1f0
   net_rx_action+0x33a/0x420
   ? mlx5e_completion_event+0x3d/0x40 [mlx5_core]
   handle_softirqs+0xe8/0x2f0
   __irq_exit_rcu+0xcd/0xf0
   common_interrupt+0x47/0xa0
   asm_common_interrupt+0x26/0x40
  RIP: 0033:0x7feb69cd08ec
  Code: [...]
  RSP: 002b:00007ffc01b8c880 EFLAGS: 00000246
  RAX: 00000000c3a60cf7 RBX: 0000000000045e12 RCX: 000000000000000e
  RDX: 00000000000035b4 RSI: 0000000000000000 RDI: 00007ffc01b8c8c0
  RBP: 00007ffc01b8c8b0 R08: 0000000000000000 R09: 0000000000000064
  R10: 00007ffc01b8c8c0 R11: 0000000000000000 R12: 00007feb69cca000
  R13: 00007ffc01b90e48 R14: 0000000000427e18 R15: 00007feb69d07000
   </TASK>

Cc: Mina Almasry <almasrymina@google.com>
Reported-by: Stanislav Fomichev <stfomichev@gmail.com>
Closes: https://lore.kernel.org/all/aFM6r9kFHeTdj-25@mini-arch/
Fixes: 5a842c288cfa ("net/mlx5e: Add TX support for netmems")
Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com>
Reviewed-by: Carolina Jubran <cjubran@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Acked-by: Stanislav Fomichev <sdf@fomichev.me>
Link: https://patch.msgid.link/1752649242-147678-1-git-send-email-tariqt@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-07-17 | rxrpc: Fix to use conn aborts for conn-wide failures | David Howells | 5 | -19/+37
Fix rxrpc to use connection-level aborts for things that affect the whole connection, such as the service ID not matching a local service. Fixes: 57af281e5389 ("rxrpc: Tidy up abort generation infrastructure") Reported-by: Jeffrey Altman <jaltman@auristor.com> Signed-off-by: David Howells <dhowells@redhat.com> Reviewed-by: Jeffrey Altman <jaltman@auristor.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: Simon Horman <horms@kernel.org> cc: linux-afs@lists.infradead.org Link: https://patch.msgid.link/20250717074350.3767366-6-dhowells@redhat.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-07-17 | rxrpc: Fix transmission of an abort in response to an abort | David Howells | 1 | -0/+3
Under some circumstances, such as when a server socket is closing, ABORT packets will be generated in response to incoming packets. Unfortunately, this also may include generating aborts in response to incoming aborts - which may cause a cycle. It appears this may be made possible by giving the client a multicast address. Fix this such that rxrpc_reject_packet() will refuse to generate aborts in response to aborts. Fixes: 248f219cb8bc ("rxrpc: Rewrite the data and ack handling code") Signed-off-by: David Howells <dhowells@redhat.com> Reviewed-by: Jeffrey Altman <jaltman@auristor.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: Junvyyang, Tencent Zhuque Lab <zhuque@tencent.com> cc: LePremierHomme <kwqcheii@proton.me> cc: Linus Torvalds <torvalds@linux-foundation.org> cc: Simon Horman <horms@kernel.org> cc: linux-afs@lists.infradead.org Link: https://patch.msgid.link/20250717074350.3767366-5-dhowells@redhat.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-07-17 | rxrpc: Fix notification vs call-release vs recvmsg | David Howells | 3 | -17/+18
When a call is released, rxrpc takes the spinlock and removes it from ->recvmsg_q in an effort to prevent racing recvmsg() invocations from seeing the same call. Now, rxrpc_recvmsg() only takes the spinlock when actually removing a call from the queue; it doesn't, however, take it in the lead up to that when it checks to see if the queue is empty. It *does* hold the socket lock, which prevents a recvmsg/recvmsg race - but this doesn't prevent sendmsg from ending the call because sendmsg() drops the socket lock and relies on the call->user_mutex. Fix this by firstly removing the bit in rxrpc_release_call() that dequeues the released call and, instead, rely on recvmsg() to simply discard released calls (done in a preceding fix). Secondly, rxrpc_notify_socket() is abandoned if the call is already marked as released rather than trying to be clever by setting both pointers in call->recvmsg_link to NULL to trick list_empty(). This isn't perfect and can still race, resulting in a released call on the queue, but recvmsg() will now clean that up. Fixes: 17926a79320a ("[AF_RXRPC]: Provide secure RxRPC sockets for use by userspace and kernel both") Signed-off-by: David Howells <dhowells@redhat.com> Reviewed-by: Jeffrey Altman <jaltman@auristor.com> cc: Marc Dionne <marc.dionne@auristor.com> cc: Junvyyang, Tencent Zhuque Lab <zhuque@tencent.com> cc: LePremierHomme <kwqcheii@proton.me> cc: Simon Horman <horms@kernel.org> cc: linux-afs@lists.infradead.org Link: https://patch.msgid.link/20250717074350.3767366-4-dhowells@redhat.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-07-17 | rxrpc: Fix recv-recv race of completed call | David Howells | 3 | -2/+21
If a call receives an event (such as incoming data), the call gets placed on the socket's queue and a thread in recvmsg can be awakened to go and process it. Once the thread has picked up the call off of the queue, further events will cause it to be requeued, and once the socket lock is dropped (recvmsg uses call->user_mutex to allow the socket to be used in parallel), a second thread can come in and its recvmsg can pop the call off the socket queue again. In such a case, the first thread will be receiving stuff from the call and the second thread will be blocked on call->user_mutex. The first thread can, at this point, process both the event that it picked call for and the event that the second thread picked the call for and may see the call terminate - in which case the call will be "released", decoupling the call from the user call ID assigned to it (RXRPC_USER_CALL_ID in the control message). The first thread will return okay, but then the second thread will wake up holding the user_mutex and, if it sees that the call has been released by the first thread, it will BUG thusly: kernel BUG at net/rxrpc/recvmsg.c:474! Fix this by just dequeuing the call and ignoring it if it is seen to be already released. We can't tell userspace about it anyway as the user call ID has become stale. Fixes: 248f219cb8bc ("rxrpc: Rewrite the data and ack handling code") Reported-by: Junvyyang, Tencent Zhuque Lab <zhuque@tencent.com> Signed-off-by: David Howells <dhowells@redhat.com> Reviewed-by: Jeffrey Altman <jaltman@auristor.com> cc: LePremierHomme <kwqcheii@proton.me> cc: Marc Dionne <marc.dionne@auristor.com> cc: Simon Horman <horms@kernel.org> cc: linux-afs@lists.infradead.org Link: https://patch.msgid.link/20250717074350.3767366-3-dhowells@redhat.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-07-17 | rxrpc: Fix irq-disabled in local_bh_enable() | David Howells | 3 | -4/+4
The rxrpc_assess_MTU_size() function calls down into the IP layer to find out the MTU size for a route. When accepting an incoming call, this is called from rxrpc_new_incoming_call() which holds interrupts disabled across the code that calls down to it. Unfortunately, the IP layer uses local_bh_enable() which, config dependent, throws a warning if IRQs are enabled:

  WARNING: CPU: 1 PID: 5544 at kernel/softirq.c:387 __local_bh_enable_ip+0x43/0xd0
  ...
  RIP: 0010:__local_bh_enable_ip+0x43/0xd0
  ...
  Call Trace:
   <TASK>
   rt_cache_route+0x7e/0xa0
   rt_set_nexthop.isra.0+0x3b3/0x3f0
   __mkroute_output+0x43a/0x460
   ip_route_output_key_hash+0xf7/0x140
   ip_route_output_flow+0x1b/0x90
   rxrpc_assess_MTU_size.isra.0+0x2a0/0x590
   rxrpc_new_incoming_peer+0x46/0x120
   rxrpc_alloc_incoming_call+0x1b1/0x400
   rxrpc_new_incoming_call+0x1da/0x5e0
   rxrpc_input_packet+0x827/0x900
   rxrpc_io_thread+0x403/0xb60
   kthread+0x2f7/0x310
   ret_from_fork+0x2a/0x230
   ret_from_fork_asm+0x1a/0x30
  ...
  hardirqs last enabled at (23): _raw_spin_unlock_irq+0x24/0x50
  hardirqs last disabled at (24): _raw_read_lock_irq+0x17/0x70
  softirqs last enabled at (0): copy_process+0xc61/0x2730
  softirqs last disabled at (25): rt_add_uncached_list+0x3c/0x90

Fix this by moving the call to rxrpc_assess_MTU_size() out of rxrpc_init_peer() and further up the stack where it can be done without interrupts disabled. It shouldn't be a problem for rxrpc_new_incoming_call() to do it after the locks are dropped as pmtud is going to be performed by the I/O thread - and we're in the I/O thread at this point.

Fixes: a2ea9a907260 ("rxrpc: Use irq-disabling spinlocks between app and I/O thread")
Signed-off-by: David Howells <dhowells@redhat.com>
Reviewed-by: Jeffrey Altman <jaltman@auristor.com>
cc: Marc Dionne <marc.dionne@auristor.com>
cc: Junvyyang, Tencent Zhuque Lab <zhuque@tencent.com>
cc: LePremierHomme <kwqcheii@proton.me>
cc: Simon Horman <horms@kernel.org>
cc: linux-afs@lists.infradead.org
Link: https://patch.msgid.link/20250717074350.3767366-2-dhowells@redhat.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-07-17 | selftests/tc-testing: Test htb_dequeue_tree with deactivation and row emptying | William Liu | 1 | -0/+26
Ensure that any deactivation and row emptying that occurs during htb_dequeue_tree does not cause a kernel panic. This scenario originally triggered a kernel BUG_ON, and we are checking for a graceful fail now. Signed-off-by: William Liu <will@willsroot.io> Signed-off-by: Savino Dicanosa <savy@syst3mfailure.io> Link: https://patch.msgid.link/20250717022912.221426-1-will@willsroot.io Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2025-07-17 | net/sched: Return NULL when htb_lookup_leaf encounters an empty rbtree | William Liu | 1 | -1/+3
htb_lookup_leaf has a BUG_ON that can trigger with the following:

  tc qdisc del dev lo root
  tc qdisc add dev lo root handle 1: htb default 1
  tc class add dev lo parent 1: classid 1:1 htb rate 64bit
  tc qdisc add dev lo parent 1:1 handle 2: netem
  tc qdisc add dev lo parent 2:1 handle 3: blackhole
  ping -I lo -c1 -W0.001 127.0.0.1

The root cause is the following:

  1. htb_dequeue calls htb_dequeue_tree which calls the dequeue handler on the selected leaf qdisc
  2. netem_dequeue calls enqueue on the child qdisc
  3. blackhole_enqueue drops the packet and returns a value that is not just NET_XMIT_SUCCESS
  4. Because of this, netem_dequeue calls qdisc_tree_reduce_backlog, and since qlen is now 0, it calls htb_qlen_notify -> htb_deactivate -> htb_deactivate_prios -> htb_remove_class_from_row -> htb_safe_rb_erase
  5. As this is the only class in the selected hprio rbtree, __rb_change_child in __rb_erase_augmented sets the rb_root pointer to NULL
  6. Because blackhole_dequeue returns NULL, netem_dequeue returns NULL, which causes htb_dequeue_tree to call htb_lookup_leaf with the same hprio rbtree, and fail the BUG_ON

The function graph for this scenario is shown here:

  0)               |  htb_enqueue() {
  0) + 13.635 us   |    netem_enqueue();
  0)   4.719 us    |    htb_activate_prios();
  0) # 2249.199 us |  }
  0)               |  htb_dequeue() {
  0)   2.355 us    |    htb_lookup_leaf();
  0)               |    netem_dequeue() {
  0) + 11.061 us   |      blackhole_enqueue();
  0)               |      qdisc_tree_reduce_backlog() {
  0)               |        qdisc_lookup_rcu() {
  0)   1.873 us    |          qdisc_match_from_root();
  0)   6.292 us    |        }
  0)   1.894 us    |        htb_search();
  0)               |        htb_qlen_notify() {
  0)   2.655 us    |          htb_deactivate_prios();
  0)   6.933 us    |        }
  0) + 25.227 us   |      }
  0)   1.983 us    |      blackhole_dequeue();
  0) + 86.553 us   |    }
  0) # 2932.761 us |    qdisc_warn_nonwc();
  0)               |    htb_lookup_leaf() {
  0)               |      BUG_ON();
  ------------------------------------------

The full original bug report can be seen here [1]. We can fix this just by returning NULL instead of the BUG_ON, as htb_dequeue_tree returns NULL when htb_lookup_leaf returns NULL.

[1] https://lore.kernel.org/netdev/pF5XOOIim0IuEfhI-SOxTgRvNoDwuux7UHKnE_Y5-zVd4wmGvNk2ceHjKb8ORnzw0cGwfmVu42g9dL7XyJLf1NEzaztboTWcm0Ogxuojoeo=@willsroot.io/

Fixes: 512bb43eb542 ("pkt_sched: sch_htb: Optimize WARN_ONs in htb_dequeue_tree() etc.")
Signed-off-by: William Liu <will@willsroot.io>
Signed-off-by: Savino Dicanosa <savy@syst3mfailure.io>
Link: https://patch.msgid.link/20250717022816.221364-1-will@willsroot.io
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
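The shape of the fix (a sketch based on the description above and the upstream htb_lookup_leaf() fields, not a verbatim quote of the patch):

  /* in htb_lookup_leaf(): the nested dequeue may have emptied this
   * per-prio rbtree, so treat that as "no leaf" instead of BUG_ON();
   * htb_dequeue_tree() already copes with a NULL return.
   */
  if (!hprio->row.rb_node)
          return NULL;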