| author | Linus Torvalds <torvalds@linux-foundation.org> | 2022-12-13 15:47:48 -0800 |
|---|---|---|
| committer | Linus Torvalds <torvalds@linux-foundation.org> | 2022-12-13 15:47:48 -0800 |
| commit | 7e68dd7d07a28faa2e6574dd6b9dbd90cdeaae91 | |
| tree | ae0427c5a3b905f24b3a44b510a9bcf35d9b67a3 /drivers/net/ipa/ipa_endpoint.c | |
| parent | Merge tag 'xtensa-20221213' of https://github.com/jcmvbkbc/linux-xtensa | |
| parent | ipvs: fix type warning in do_div() on 32 bit | |
Merge tag 'net-next-6.2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next
Pull networking updates from Paolo Abeni:
"Core:
- Allow live renaming when an interface is up
- Add retpoline wrappers for tc, considerably improving the
performance of complex queue discipline configurations
- Add inet drop monitor support
- A few GRO performance improvements
- Add infrastructure for atomic dev stats, addressing long-standing
data races
- De-duplicate common code between OVS and conntrack offloading
infrastructure
- A bunch of UBSAN_BOUNDS/FORTIFY_SOURCE improvements
- Netfilter: introduce packet parser for tunneled packets
- Replace IPVS timer-based estimators with kthreads to scale up the
workload with the number of available CPUs
- Add helper support for connection-tracking OVS offload
BPF:
- Support for user-defined BPF objects: the use case is to allocate
their own objects, build their own object hierarchies, and use the
building blocks to flexibly build their own data structures, for
example, linked lists in BPF
- Make cgroup local storage available to non-cgroup attached BPF
programs
- Avoid unnecessary deadlock detection and failures wrt BPF task
storage helpers
- A relevant bunch of BPF verifier fixes and improvements
- Veristat tool improvements to support custom filtering, sorting,
and replay of results
- Add LLVM disassembler as default library for dumping JITed code
- Lots of new BPF documentation for various BPF maps
- Add bpf_rcu_read_{,un}lock() support for sleepable programs
- Add RCU grace period chaining to BPF to wait for the completion of
access from both sleepable and non-sleepable BPF programs
- Add support for storing struct task_struct objects as kptrs in maps
- Improve helper UAPI by explicitly defining BPF_FUNC_xxx integer
values
- Add libbpf *_opts API-variants for bpf_*_get_fd_by_id() functions
Protocols:
- TCP: implement Protective Load Balancing across switch links
- TCP: allow dynamically disabling the TCP-MD5 static key, reverting
to the fast[er] path
- UDP: Introduce optional per-netns hash lookup table
- IPv6: simplify and clean up sockets disposal
- Netlink: support different type policies for each generic netlink
operation
- MPTCP: add MSG_FASTOPEN and FastOpen listener side support
- MPTCP: add netlink notification support for listener socket events
- SCTP: add VRF support, allowing sctp sockets binding to VRF devices
- Add bridging MAC Authentication Bypass (MAB) support
- Extensions for Ethernet VPN bridging implementation to better
support multicast scenarios
- More work for Wi-Fi 7 support, comprising conversion of all the
existing drivers to internal TX queue usage
- IPSec: introduce a new offload type (packet offload) allowing
complete header processing and crypto offloading
- IPSec: extended ack support for more descriptive XFRM error
reporting
- RXRPC: increase SACK table size and move processing into a
per-local endpoint kernel thread, considerably reducing the
required locking
- IEEE 802154: synchronous send frame and extended filtering support,
initial support for scanning available 15.4 networks
- Tun: bump the link speed from 10Mbps to 10Gbps
- Tun/VirtioNet: implement UDP segmentation offload support
Driver API:
- PHY/SFP: improve power level switching between standard level 1 and
the higher power levels
- New API for netdev <-> devlink_port linkage
- PTP: convert existing drivers to new frequency adjustment
implementation
- DSA: add support for rx offloading
- Autoload DSA tagging driver when dynamically changing protocol
- Add new PCP and APPTRUST attributes to Data Center Bridging
- Add configuration support for 800Gbps link speed
- Add devlink port function attributes to enable/disable RoCE and
migratable capability
- Extend devlink-rate to support strict priority and weighted fair
queuing
- Add devlink support for directly reading from region memory
- New device tree helper to fetch MAC address from nvmem
- New big TCP helper to simplify temporary header stripping
New hardware / drivers:
- Ethernet:
- Marvell Octeon CNF95N and CN10KB Ethernet Switches
- Marvell Prestera AC5X Ethernet Switch
- WangXun 10 Gigabit NIC
- Motorcomm yt8521 Gigabit Ethernet
- Microchip ksz9563 Gigabit Ethernet Switch
- Microsoft Azure Network Adapter
- Linux Automation 10Base-T1L adapter
- PHY:
- Aquantia AQR112 and AQR412
- Motorcomm YT8531S
- PTP:
- Orolia ART-CARD
- WiFi:
- MediaTek Wi-Fi 7 (802.11be) devices
- RealTek rtw8821cu, rtw8822bu, rtw8822cu and rtw8723du USB
devices
- Bluetooth:
- Broadcom BCM4377/4378/4387 Bluetooth chipsets
- Realtek RTL8852BE and RTL8723DS
- Cypress CYW4373A0 WiFi + Bluetooth combo device
Drivers:
- CAN:
- gs_usb: bus error reporting support
- kvaser_usb: listen only and bus error reporting support
- Ethernet NICs:
- Intel (100G):
- extend action skbedit to RX queue mapping
- implement devlink-rate support
- support direct read from memory
- nVidia/Mellanox (mlx5):
- SW steering improvements, increasing rules update rate
- Support for enhanced events compression
- extend H/W offload packet manipulation capabilities
- implement IPSec packet offload mode
- nVidia/Mellanox (mlx4):
- better big TCP support
- Netronome Ethernet NICs (nfp):
- IPsec offload support
- add support for multicast filter
- Broadcom:
- RSS and PTP support improvements
- AMD/SolarFlare:
- netlink extended ack improvements
- add basic flower matches to offload, and related stats
- Virtual NICs:
- ibmvnic: introduce affinity hint support
- small / embedded:
- Freescale fec: add initial XDP support
- Marvell mv643xx_eth: support MII/GMII/RGMII modes for Kirkwood
- TI am65-cpsw: add suspend/resume support
- Mediatek MT7986: add RX wireless ethernet dispatch support
- Realtek 8169: enable GRO software interrupt coalescing by
default
- Ethernet high-speed switches:
- Microchip (sparx5):
- add support for Sparx5 TC/flower H/W offload via VCAP
- Mellanox mlxsw:
- add 802.1X and MAC Authentication Bypass offload support
- add ip6gre support
- Embedded Ethernet switches:
- Mediatek (mtk_eth_soc):
- improve PCS implementation, add DSA untag support
- enable flow offload support
- Renesas:
- add rswitch R-Car Gen4 gPTP support
- Microchip (lan966x):
- add full XDP support
- add TC H/W offload via VCAP
- enable PTP on bridge interfaces
- Microchip (ksz8):
- add MTU support for KSZ8 series
- Qualcomm 802.11ax WiFi (ath11k):
- support configuring channel dwell time during scan
- MediaTek WiFi (mt76):
- enable Wireless Ethernet Dispatch (WED) offload support
- add ack signal support
- enable coredump support
- remain_on_channel support
- Intel WiFi (iwlwifi):
- enable Wi-Fi 7 Extremely High Throughput (EHT) PHY capabilities
- 320 MHz channels support
- RealTek WiFi (rtw89):
- new dynamic header firmware format support
- wake-over-WLAN support"
* tag 'net-next-6.2' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (2002 commits)
ipvs: fix type warning in do_div() on 32 bit
net: lan966x: Remove a useless test in lan966x_ptp_add_trap()
net: ipa: add IPA v4.7 support
dt-bindings: net: qcom,ipa: Add SM6350 compatible
bnxt: Use generic HBH removal helper in tx path
IPv6/GRO: generic helper to remove temporary HBH/jumbo header in driver
selftests: forwarding: Add bridge MDB test
selftests: forwarding: Rename bridge_mdb test
bridge: mcast: Support replacement of MDB port group entries
bridge: mcast: Allow user space to specify MDB entry routing protocol
bridge: mcast: Allow user space to add (*, G) with a source list and filter mode
bridge: mcast: Add support for (*, G) with a source list and filter mode
bridge: mcast: Avoid arming group timer when (S, G) corresponds to a source
bridge: mcast: Add a flag for user installed source entries
bridge: mcast: Expose __br_multicast_del_group_src()
bridge: mcast: Expose br_multicast_new_group_src()
bridge: mcast: Add a centralized error path
bridge: mcast: Place netlink policy before validation functions
bridge: mcast: Split (*, G) and (S, G) addition into different functions
bridge: mcast: Do not derive entry type from its filter mode
...
Diffstat (limited to 'drivers/net/ipa/ipa_endpoint.c')
| -rw-r--r-- | drivers/net/ipa/ipa_endpoint.c | 277 |
1 file changed, 160 insertions(+), 117 deletions(-)
diff --git a/drivers/net/ipa/ipa_endpoint.c b/drivers/net/ipa/ipa_endpoint.c
index 093e11ec7c2d..136932464261 100644
--- a/drivers/net/ipa/ipa_endpoint.c
+++ b/drivers/net/ipa/ipa_endpoint.c
@@ -243,42 +243,47 @@ static bool ipa_endpoint_data_valid_one(struct ipa *ipa, u32 count,
 	return true;
 }
 
-static bool ipa_endpoint_data_valid(struct ipa *ipa, u32 count,
-				    const struct ipa_gsi_endpoint_data *data)
+/* Validate endpoint configuration data. Return max defined endpoint ID */
+static u32 ipa_endpoint_max(struct ipa *ipa, u32 count,
+			    const struct ipa_gsi_endpoint_data *data)
 {
 	const struct ipa_gsi_endpoint_data *dp = data;
 	struct device *dev = &ipa->pdev->dev;
 	enum ipa_endpoint_name name;
+	u32 max;
 
 	if (count > IPA_ENDPOINT_COUNT) {
 		dev_err(dev, "too many endpoints specified (%u > %u)\n",
 			count, IPA_ENDPOINT_COUNT);
-		return false;
+		return 0;
 	}
 
 	/* Make sure needed endpoints have defined data */
 	if (ipa_gsi_endpoint_data_empty(&data[IPA_ENDPOINT_AP_COMMAND_TX])) {
 		dev_err(dev, "command TX endpoint not defined\n");
-		return false;
+		return 0;
 	}
 	if (ipa_gsi_endpoint_data_empty(&data[IPA_ENDPOINT_AP_LAN_RX])) {
 		dev_err(dev, "LAN RX endpoint not defined\n");
-		return false;
+		return 0;
 	}
 	if (ipa_gsi_endpoint_data_empty(&data[IPA_ENDPOINT_AP_MODEM_TX])) {
 		dev_err(dev, "AP->modem TX endpoint not defined\n");
-		return false;
+		return 0;
 	}
 	if (ipa_gsi_endpoint_data_empty(&data[IPA_ENDPOINT_AP_MODEM_RX])) {
 		dev_err(dev, "AP<-modem RX endpoint not defined\n");
-		return false;
+		return 0;
 	}
 
-	for (name = 0; name < count; name++, dp++)
+	max = 0;
+	for (name = 0; name < count; name++, dp++) {
 		if (!ipa_endpoint_data_valid_one(ipa, count, data, dp))
-			return false;
+			return 0;
+		max = max_t(u32, max, dp->endpoint_id);
+	}
 
-	return true;
+	return max;
 }
 
 /* Allocate a transaction to use on a non-command endpoint */
@@ -345,29 +350,32 @@ ipa_endpoint_program_delay(struct ipa_endpoint *endpoint, bool enable)
 
 static bool ipa_endpoint_aggr_active(struct ipa_endpoint *endpoint)
 {
-	u32 mask = BIT(endpoint->endpoint_id);
+	u32 endpoint_id = endpoint->endpoint_id;
 	struct ipa *ipa = endpoint->ipa;
+	u32 unit = endpoint_id / 32;
 	const struct ipa_reg *reg;
 	u32 val;
 
-	WARN_ON(!(mask & ipa->available));
+	WARN_ON(!test_bit(endpoint_id, ipa->available));
 
 	reg = ipa_reg(ipa, STATE_AGGR_ACTIVE);
-	val = ioread32(ipa->reg_virt + ipa_reg_offset(reg));
+	val = ioread32(ipa->reg_virt + ipa_reg_n_offset(reg, unit));
 
-	return !!(val & mask);
+	return !!(val & BIT(endpoint_id % 32));
 }
 
 static void ipa_endpoint_force_close(struct ipa_endpoint *endpoint)
 {
-	u32 mask = BIT(endpoint->endpoint_id);
+	u32 endpoint_id = endpoint->endpoint_id;
+	u32 mask = BIT(endpoint_id % 32);
 	struct ipa *ipa = endpoint->ipa;
+	u32 unit = endpoint_id / 32;
 	const struct ipa_reg *reg;
 
-	WARN_ON(!(mask & ipa->available));
+	WARN_ON(!test_bit(endpoint_id, ipa->available));
 
 	reg = ipa_reg(ipa, AGGR_FORCE_CLOSE);
-	iowrite32(mask, ipa->reg_virt + ipa_reg_offset(reg));
+	iowrite32(mask, ipa->reg_virt + ipa_reg_n_offset(reg, unit));
 }
 
 /**
@@ -426,10 +434,10 @@ ipa_endpoint_program_suspend(struct ipa_endpoint *endpoint, bool enable)
  */
 void ipa_endpoint_modem_pause_all(struct ipa *ipa, bool enable)
 {
-	u32 endpoint_id;
+	u32 endpoint_id = 0;
 
-	for (endpoint_id = 0; endpoint_id < IPA_ENDPOINT_MAX; endpoint_id++) {
-		struct ipa_endpoint *endpoint = &ipa->endpoint[endpoint_id];
+	while (endpoint_id < ipa->endpoint_count) {
+		struct ipa_endpoint *endpoint = &ipa->endpoint[endpoint_id++];
 
 		if (endpoint->ee_id != GSI_EE_MODEM)
 			continue;
@@ -448,8 +456,8 @@ void ipa_endpoint_modem_pause_all(struct ipa *ipa, bool enable)
 /* Reset all modem endpoints to use the default exception endpoint */
 int ipa_endpoint_modem_exception_reset_all(struct ipa *ipa)
 {
-	u32 initialized = ipa->initialized;
 	struct gsi_trans *trans;
+	u32 endpoint_id;
 	u32 count;
 
 	/* We need one command per modem TX endpoint, plus the commands
@@ -463,14 +471,11 @@ int ipa_endpoint_modem_exception_reset_all(struct ipa *ipa)
 		return -EBUSY;
 	}
 
-	while (initialized) {
-		u32 endpoint_id = __ffs(initialized);
+	for_each_set_bit(endpoint_id, ipa->defined, ipa->endpoint_count) {
 		struct ipa_endpoint *endpoint;
 		const struct ipa_reg *reg;
 		u32 offset;
 
-		initialized ^= BIT(endpoint_id);
-
 		/* We only reset modem TX endpoints */
 		endpoint = &ipa->endpoint[endpoint_id];
 		if (!(endpoint->ee_id == GSI_EE_MODEM && endpoint->toward_ipa))
@@ -1008,10 +1013,10 @@ static void ipa_endpoint_init_hol_block_disable(struct ipa_endpoint *endpoint)
 
 void ipa_endpoint_modem_hol_block_clear_all(struct ipa *ipa)
 {
-	u32 i;
+	u32 endpoint_id = 0;
 
-	for (i = 0; i < IPA_ENDPOINT_MAX; i++) {
-		struct ipa_endpoint *endpoint = &ipa->endpoint[i];
+	while (endpoint_id < ipa->endpoint_count) {
+		struct ipa_endpoint *endpoint = &ipa->endpoint[endpoint_id++];
 
 		if (endpoint->toward_ipa || endpoint->ee_id != GSI_EE_MODEM)
 			continue;
@@ -1661,6 +1666,7 @@ static void ipa_endpoint_program(struct ipa_endpoint *endpoint)
 
 int ipa_endpoint_enable_one(struct ipa_endpoint *endpoint)
 {
+	u32 endpoint_id = endpoint->endpoint_id;
 	struct ipa *ipa = endpoint->ipa;
 	struct gsi *gsi = &ipa->gsi;
 	int ret;
@@ -1670,37 +1676,35 @@ int ipa_endpoint_enable_one(struct ipa_endpoint *endpoint)
 		dev_err(&ipa->pdev->dev,
 			"error %d starting %cX channel %u for endpoint %u\n",
 			ret, endpoint->toward_ipa ? 'T' : 'R',
-			endpoint->channel_id, endpoint->endpoint_id);
+			endpoint->channel_id, endpoint_id);
 		return ret;
 	}
 
 	if (!endpoint->toward_ipa) {
-		ipa_interrupt_suspend_enable(ipa->interrupt,
-					     endpoint->endpoint_id);
+		ipa_interrupt_suspend_enable(ipa->interrupt, endpoint_id);
 		ipa_endpoint_replenish_enable(endpoint);
 	}
 
-	ipa->enabled |= BIT(endpoint->endpoint_id);
+	__set_bit(endpoint_id, ipa->enabled);
 
 	return 0;
 }
 
 void ipa_endpoint_disable_one(struct ipa_endpoint *endpoint)
 {
-	u32 mask = BIT(endpoint->endpoint_id);
+	u32 endpoint_id = endpoint->endpoint_id;
 	struct ipa *ipa = endpoint->ipa;
 	struct gsi *gsi = &ipa->gsi;
 	int ret;
 
-	if (!(ipa->enabled & mask))
+	if (!test_bit(endpoint_id, ipa->enabled))
 		return;
 
-	ipa->enabled ^= mask;
+	__clear_bit(endpoint_id, endpoint->ipa->enabled);
 
 	if (!endpoint->toward_ipa) {
 		ipa_endpoint_replenish_disable(endpoint);
-		ipa_interrupt_suspend_disable(ipa->interrupt,
-					      endpoint->endpoint_id);
+		ipa_interrupt_suspend_disable(ipa->interrupt, endpoint_id);
 	}
 
 	/* Note that if stop fails, the channel's state is not well-defined */
@@ -1708,7 +1712,7 @@ void ipa_endpoint_disable_one(struct ipa_endpoint *endpoint)
 	if (ret)
 		dev_err(&ipa->pdev->dev,
 			"error %d attempting to stop endpoint %u\n", ret,
-			endpoint->endpoint_id);
+			endpoint_id);
 }
 
 void ipa_endpoint_suspend_one(struct ipa_endpoint *endpoint)
@@ -1717,7 +1721,7 @@ void ipa_endpoint_suspend_one(struct ipa_endpoint *endpoint)
 	struct gsi *gsi = &endpoint->ipa->gsi;
 	int ret;
 
-	if (!(endpoint->ipa->enabled & BIT(endpoint->endpoint_id)))
+	if (!test_bit(endpoint->endpoint_id, endpoint->ipa->enabled))
 		return;
 
 	if (!endpoint->toward_ipa) {
@@ -1737,7 +1741,7 @@ void ipa_endpoint_resume_one(struct ipa_endpoint *endpoint)
 	struct gsi *gsi = &endpoint->ipa->gsi;
 	int ret;
 
-	if (!(endpoint->ipa->enabled & BIT(endpoint->endpoint_id)))
+	if (!test_bit(endpoint->endpoint_id, endpoint->ipa->enabled))
 		return;
 
 	if (!endpoint->toward_ipa)
@@ -1797,12 +1801,12 @@ static void ipa_endpoint_setup_one(struct ipa_endpoint *endpoint)
 
 	ipa_endpoint_program(endpoint);
 
-	endpoint->ipa->set_up |= BIT(endpoint->endpoint_id);
+	__set_bit(endpoint->endpoint_id, endpoint->ipa->set_up);
 }
 
 static void ipa_endpoint_teardown_one(struct ipa_endpoint *endpoint)
 {
-	endpoint->ipa->set_up &= ~BIT(endpoint->endpoint_id);
+	__clear_bit(endpoint->endpoint_id, endpoint->ipa->set_up);
 
 	if (!endpoint->toward_ipa)
 		cancel_delayed_work_sync(&endpoint->replenish_work);
@@ -1812,45 +1816,39 @@
 
 void ipa_endpoint_setup(struct ipa *ipa)
 {
-	u32 initialized = ipa->initialized;
-
-	ipa->set_up = 0;
-	while (initialized) {
-		u32 endpoint_id = __ffs(initialized);
-
-		initialized ^= BIT(endpoint_id);
+	u32 endpoint_id;
 
+	for_each_set_bit(endpoint_id, ipa->defined, ipa->endpoint_count)
 		ipa_endpoint_setup_one(&ipa->endpoint[endpoint_id]);
-	}
 }
 
 void ipa_endpoint_teardown(struct ipa *ipa)
 {
-	u32 set_up = ipa->set_up;
-
-	while (set_up) {
-		u32 endpoint_id = __fls(set_up);
-
-		set_up ^= BIT(endpoint_id);
+	u32 endpoint_id;
 
+	for_each_set_bit(endpoint_id, ipa->set_up, ipa->endpoint_count)
 		ipa_endpoint_teardown_one(&ipa->endpoint[endpoint_id]);
-	}
-	ipa->set_up = 0;
+}
+
+void ipa_endpoint_deconfig(struct ipa *ipa)
+{
+	ipa->available_count = 0;
+	bitmap_free(ipa->available);
+	ipa->available = NULL;
 }
 
 int ipa_endpoint_config(struct ipa *ipa)
 {
 	struct device *dev = &ipa->pdev->dev;
 	const struct ipa_reg *reg;
-	u32 initialized;
+	u32 endpoint_id;
+	u32 tx_count;
+	u32 rx_count;
 	u32 rx_base;
-	u32 rx_mask;
-	u32 tx_mask;
-	int ret = 0;
-	u32 max;
+	u32 limit;
 	u32 val;
 
-	/* Prior to IPAv3.5, the FLAVOR_0 register was not supported.
+	/* Prior to IPA v3.5, the FLAVOR_0 register was not supported.
 	 * Furthermore, the endpoints were not grouped such that TX
 	 * endpoint numbers started with 0 and RX endpoints had numbers
 	 * higher than all TX endpoints, so we can't do the simple
@@ -1861,61 +1859,78 @@ int ipa_endpoint_config(struct ipa *ipa)
 	 * assume the configuration is valid.
	 */
 	if (ipa->version < IPA_VERSION_3_5) {
-		ipa->available = ~0;
+		ipa->available = bitmap_zalloc(IPA_ENDPOINT_MAX, GFP_KERNEL);
+		if (!ipa->available)
+			return -ENOMEM;
+		ipa->available_count = IPA_ENDPOINT_MAX;
+
+		bitmap_set(ipa->available, 0, IPA_ENDPOINT_MAX);
+
 		return 0;
 	}
 
 	/* Find out about the endpoints supplied by the hardware, and ensure
-	 * the highest one doesn't exceed the number we support.
+	 * the highest one doesn't exceed the number supported by software.
 	 */
 	reg = ipa_reg(ipa, FLAVOR_0);
 	val = ioread32(ipa->reg_virt + ipa_reg_offset(reg));
 
-	/* Our RX is an IPA producer */
+	/* Our RX is an IPA producer; our TX is an IPA consumer. */
+	tx_count = ipa_reg_decode(reg, MAX_CONS_PIPES, val);
+	rx_count = ipa_reg_decode(reg, MAX_PROD_PIPES, val);
 	rx_base = ipa_reg_decode(reg, PROD_LOWEST, val);
-	max = rx_base + ipa_reg_decode(reg, MAX_PROD_PIPES, val);
-	if (max > IPA_ENDPOINT_MAX) {
-		dev_err(dev, "too many endpoints (%u > %u)\n",
-			max, IPA_ENDPOINT_MAX);
+
+	limit = rx_base + rx_count;
+	if (limit > IPA_ENDPOINT_MAX) {
+		dev_err(dev, "too many endpoints, %u > %u\n",
+			limit, IPA_ENDPOINT_MAX);
 		return -EINVAL;
 	}
-	rx_mask = GENMASK(max - 1, rx_base);
 
-	/* Our TX is an IPA consumer */
-	max = ipa_reg_decode(reg, MAX_CONS_PIPES, val);
-	tx_mask = GENMASK(max - 1, 0);
-
-	ipa->available = rx_mask | tx_mask;
+	/* Allocate and initialize the available endpoint bitmap */
+	ipa->available = bitmap_zalloc(limit, GFP_KERNEL);
+	if (!ipa->available)
+		return -ENOMEM;
+	ipa->available_count = limit;
 
-	/* Check for initialized endpoints not supported by the hardware */
-	if (ipa->initialized & ~ipa->available) {
-		dev_err(dev, "unavailable endpoint id(s) 0x%08x\n",
-			ipa->initialized & ~ipa->available);
-		ret = -EINVAL;		/* Report other errors too */
-	}
+	/* Mark all supported RX and TX endpoints as available */
+	bitmap_set(ipa->available, 0, tx_count);
+	bitmap_set(ipa->available, rx_base, rx_count);
 
-	initialized = ipa->initialized;
-	while (initialized) {
-		u32 endpoint_id = __ffs(initialized);
+	for_each_set_bit(endpoint_id, ipa->defined, ipa->endpoint_count) {
 		struct ipa_endpoint *endpoint;
 
-		initialized ^= BIT(endpoint_id);
+		if (endpoint_id >= limit) {
+			dev_err(dev, "invalid endpoint id, %u > %u\n",
+				endpoint_id, limit - 1);
+			goto err_free_bitmap;
+		}
+
+		if (!test_bit(endpoint_id, ipa->available)) {
+			dev_err(dev, "unavailable endpoint id %u\n",
+				endpoint_id);
+			goto err_free_bitmap;
+		}
 
 		/* Make sure it's pointing in the right direction */
 		endpoint = &ipa->endpoint[endpoint_id];
-		if ((endpoint_id < rx_base) != endpoint->toward_ipa) {
-			dev_err(dev, "endpoint id %u wrong direction\n",
-				endpoint_id);
-			ret = -EINVAL;
+		if (endpoint->toward_ipa) {
+			if (endpoint_id < tx_count)
+				continue;
+		} else if (endpoint_id >= rx_base) {
+			continue;
 		}
+
+		dev_err(dev, "endpoint id %u wrong direction\n", endpoint_id);
+		goto err_free_bitmap;
 	}
 
-	return ret;
-}
+	return 0;
 
-void ipa_endpoint_deconfig(struct ipa *ipa)
-{
-	ipa->available = 0;	/* Nothing more to do */
+err_free_bitmap:
+	ipa_endpoint_deconfig(ipa);
+
+	return -EINVAL;
 }
 
 static void ipa_endpoint_init_one(struct ipa *ipa, enum ipa_endpoint_name name,
@@ -1936,46 +1951,64 @@ static void ipa_endpoint_init_one(struct ipa *ipa, enum ipa_endpoint_name name,
 	endpoint->toward_ipa = data->toward_ipa;
 	endpoint->config = data->endpoint.config;
 
-	ipa->initialized |= BIT(endpoint->endpoint_id);
+	__set_bit(endpoint->endpoint_id, ipa->defined);
 }
 
 static void ipa_endpoint_exit_one(struct ipa_endpoint *endpoint)
 {
-	endpoint->ipa->initialized &= ~BIT(endpoint->endpoint_id);
+	__clear_bit(endpoint->endpoint_id, endpoint->ipa->defined);
 
 	memset(endpoint, 0, sizeof(*endpoint));
 }
 
 void ipa_endpoint_exit(struct ipa *ipa)
 {
-	u32 initialized = ipa->initialized;
-
-	while (initialized) {
-		u32 endpoint_id = __fls(initialized);
+	u32 endpoint_id;
 
-		initialized ^= BIT(endpoint_id);
+	ipa->filtered = 0;
 
+	for_each_set_bit(endpoint_id, ipa->defined, ipa->endpoint_count)
 		ipa_endpoint_exit_one(&ipa->endpoint[endpoint_id]);
-	}
+
+	bitmap_free(ipa->enabled);
+	ipa->enabled = NULL;
+	bitmap_free(ipa->set_up);
+	ipa->set_up = NULL;
+	bitmap_free(ipa->defined);
+	ipa->defined = NULL;
+
 	memset(ipa->name_map, 0, sizeof(ipa->name_map));
 	memset(ipa->channel_map, 0, sizeof(ipa->channel_map));
 }
 
 /* Returns a bitmask of endpoints that support filtering, or 0 on error */
-u32 ipa_endpoint_init(struct ipa *ipa, u32 count,
+int ipa_endpoint_init(struct ipa *ipa, u32 count,
 		      const struct ipa_gsi_endpoint_data *data)
 {
 	enum ipa_endpoint_name name;
-	u32 filter_map;
+	u32 filtered;
 
 	BUILD_BUG_ON(!IPA_REPLENISH_BATCH);
 
-	if (!ipa_endpoint_data_valid(ipa, count, data))
-		return 0;	/* Error */
+	/* Number of endpoints is one more than the maximum ID */
+	ipa->endpoint_count = ipa_endpoint_max(ipa, count, data) + 1;
+	if (!ipa->endpoint_count)
+		return -EINVAL;
+
+	/* Initialize endpoint state bitmaps */
+	ipa->defined = bitmap_zalloc(ipa->endpoint_count, GFP_KERNEL);
+	if (!ipa->defined)
+		return -ENOMEM;
+
+	ipa->set_up = bitmap_zalloc(ipa->endpoint_count, GFP_KERNEL);
+	if (!ipa->set_up)
+		goto err_free_defined;
 
-	ipa->initialized = 0;
+	ipa->enabled = bitmap_zalloc(ipa->endpoint_count, GFP_KERNEL);
+	if (!ipa->enabled)
+		goto err_free_set_up;
 
-	filter_map = 0;
+	filtered = 0;
 	for (name = 0; name < count; name++, data++) {
 		if (ipa_gsi_endpoint_data_empty(data))
			continue;	/* Skip over empty slots */
@@ -1983,18 +2016,28 @@ u32 ipa_endpoint_init(struct ipa *ipa, u32 count,
 		ipa_endpoint_init_one(ipa, name, data);
 
 		if (data->endpoint.filter_support)
-			filter_map |= BIT(data->endpoint_id);
+			filtered |= BIT(data->endpoint_id);
 		if (data->ee_id == GSI_EE_MODEM && data->toward_ipa)
 			ipa->modem_tx_count++;
 	}
 
-	if (!ipa_filter_map_valid(ipa, filter_map))
-		goto err_endpoint_exit;
+	/* Make sure the set of filtered endpoints is valid */
+	if (!ipa_filtered_valid(ipa, filtered)) {
+		ipa_endpoint_exit(ipa);
 
-	return filter_map;	/* Non-zero bitmask */
+		return -EINVAL;
+	}
 
-err_endpoint_exit:
-	ipa_endpoint_exit(ipa);
+	ipa->filtered = filtered;
 
-	return 0;	/* Error */
+	return 0;
+
+err_free_set_up:
+	bitmap_free(ipa->set_up);
+	ipa->set_up = NULL;
+err_free_defined:
+	bitmap_free(ipa->defined);
+	ipa->defined = NULL;
+
+	return -ENOMEM;
}
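
The unifying change in this diff is the replacement of fixed-width u32 endpoint masks (initialized/defined, set_up, enabled, available) with dynamically allocated bitmaps, lifting the implicit 32-endpoint ceiling that BIT(), __ffs()/__fls() and GENMASK() imposed. The sketch below shows the before/after shape of that conversion using the kernel's <linux/bitmap.h> and <linux/bitops.h> APIs; it is a minimal kernel-style illustration, not code from the patch, and the example_dev type and function names are hypothetical.

#include <linux/bitmap.h>
#include <linux/bitops.h>
#include <linux/slab.h>

struct example_dev {
	unsigned long *enabled;	/* was: u32 enabled; a 32-bit mask */
	u32 count;		/* number of valid IDs; may exceed 32 */
};

/* Allocate the bitmap; replaces a zero-initialized u32 mask */
static int example_init(struct example_dev *edev, u32 count)
{
	edev->enabled = bitmap_zalloc(count, GFP_KERNEL);
	if (!edev->enabled)
		return -ENOMEM;
	edev->count = count;

	return 0;
}

/* was: edev->enabled |= BIT(id); only valid for id < 32 */
static void example_enable_one(struct example_dev *edev, u32 id)
{
	__set_bit(id, edev->enabled);
}

/* was: !!(edev->enabled & BIT(id)) */
static bool example_is_enabled(struct example_dev *edev, u32 id)
{
	return test_bit(id, edev->enabled);
}

/* was: while (mask) { id = __ffs(mask); mask ^= BIT(id); ... } */
static void example_disable_all(struct example_dev *edev)
{
	u32 id;

	for_each_set_bit(id, edev->enabled, edev->count)
		__clear_bit(id, edev->enabled);
}

/* Free the bitmap; replaces simply zeroing the u32 mask */
static void example_exit(struct example_dev *edev)
{
	bitmap_free(edev->enabled);
	edev->enabled = NULL;
}

As in the patch, the non-atomic __set_bit()/__clear_bit() variants are used on the assumption that the caller already serializes updates; concurrent updaters would need the atomic set_bit()/clear_bit() instead.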
