Merge tag 'net-6.9-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
Pull networking fixes from Jakub Kicinski:
 "Including fixes from CAN, netfilter, wireguard and IPsec.

  I'd like to highlight [ lowlight? - Linus ] Florian W stepping down as
  a netfilter maintainer due to a constant stream of bug reports. Not
  sure what we can do, but IIUC this is not the first such case.

  Current release - regressions:

   - rxrpc: fix use of page_frag_alloc_align(), it changed semantics and
     we added a new caller in a different subtree

   - xfrm: allow UDP encapsulation only in offload modes

  Current release - new code bugs:

   - tcp: fix refcnt handling in __inet_hash_connect()

   - Revert "net: Re-use and set mono_delivery_time bit for userspace
     tstamp packets", conflicted with some expectations in BPF uAPI

  Previous releases - regressions:

   - ipv4: raw: fix sending packets from raw sockets via IPsec tunnels

   - devlink: fix devlink's parallel command processing

   - veth: do not manipulate GRO when using XDP

   - esp: fix bad handling of pages from page_pool

  Previous releases - always broken:

   - report RCU QS for busy network kthreads (with Paul McK's blessing)

   - tcp/rds: fix use-after-free on netns with kernel TCP reqsk

   - virt: vmxnet3: fix missing reserved tailroom with XDP

  Misc:

   - a couple of build fixes for Documentation"
* tag 'net-6.9-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (59 commits)
  selftests: forwarding: Fix ping failure due to short timeout
  MAINTAINERS: step down as netfilter maintainer
  netfilter: nf_tables: Fix a memory leak in nf_tables_updchain
  net: dsa: mt7530: fix handling of all link-local frames
  net: dsa: mt7530: fix link-local frames that ingress vlan filtering ports
  bpf: report RCU QS in cpumap kthread
  net: report RCU QS on threaded NAPI repolling
  rcu: add a helper to report consolidated flavor QS
  ionic: update documentation for XDP support
  lib/bitmap: Fix bitmap_scatter() and bitmap_gather() kernel doc
  netfilter: nf_tables: do not compare internal table flags on updates
  netfilter: nft_set_pipapo: release elements in clone only from destroy path
  octeontx2-af: Use separate handlers for interrupts
  octeontx2-pf: Send UP messages to VF only when VF is up.
  octeontx2-pf: Use default max_active works instead of one
  octeontx2-pf: Wait till detach_resources msg is complete
  octeontx2: Detect the mbox up or down message via register
  devlink: fix port new reply cmd type
  tcp: Clear req->syncookie in reqsk_alloc().
  net/bnx2x: Prevent access to a freed page in page_pool
  ...
			
			
Commit cba9ffdb99: 71 changed files with 608 additions and 344 deletions
@@ -99,6 +99,12 @@ Minimal SR-IOV support is currently offered and can be enabled by setting
 the sysfs 'sriov_numvfs' value, if supported by your particular firmware
 configuration.
 
+XDP
+---
+
+Support for XDP includes the basics, plus Jumbo frames, Redirect and
+ndo_xmit.  There is no current support for zero-copy sockets or HW offload.
+
 Statistics
 ==========
 
@@ -138,6 +144,12 @@ Driver port specific::
     rx_csum_none: 0
     rx_csum_complete: 3
     rx_csum_error: 0
+    xdp_drop: 0
+    xdp_aborted: 0
+    xdp_pass: 0
+    xdp_tx: 0
+    xdp_redirect: 0
+    xdp_frames: 0
 
 Driver queue specific::
 
@@ -149,9 +161,12 @@ Driver queue specific::
     tx_0_frags: 0
     tx_0_tso: 0
     tx_0_tso_bytes: 0
+    tx_0_hwstamp_valid: 0
+    tx_0_hwstamp_invalid: 0
     tx_0_csum_none: 3
     tx_0_csum: 0
     tx_0_vlan_inserted: 0
+    tx_0_xdp_frames: 0
     rx_0_pkts: 2
     rx_0_bytes: 120
     rx_0_dma_map_err: 0
@@ -159,8 +174,15 @@ Driver queue specific::
     rx_0_csum_none: 0
     rx_0_csum_complete: 0
     rx_0_csum_error: 0
+    rx_0_hwstamp_valid: 0
+    rx_0_hwstamp_invalid: 0
     rx_0_dropped: 0
     rx_0_vlan_stripped: 0
+    rx_0_xdp_drop: 0
+    rx_0_xdp_aborted: 0
+    rx_0_xdp_pass: 0
+    rx_0_xdp_tx: 0
+    rx_0_xdp_redirect: 0
 
 Firmware port specific::
 
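For readers new to XDP, the "basics" mentioned above mean the driver runs an attached BPF program on every received frame and acts on its verdict. A minimal sketch of such a program (ours, purely illustrative; the program and section names are not from this patchset)::

    /* Build sketch: clang -O2 -g -target bpf -c xdp_pass.c -o xdp_pass.o */
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    SEC("xdp")
    int xdp_pass_all(struct xdp_md *ctx)
    {
            /* The driver invokes this for each RX frame and honors the
             * verdict: XDP_PASS, XDP_DROP, XDP_TX or XDP_REDIRECT; the
             * ionic driver above also implements ndo_xdp_xmit.
             */
            return XDP_PASS;
    }

    char _license[] SEC("license") = "GPL";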
@@ -87,35 +87,35 @@ all using the same instance under "priv->mdev".
 
 Observability
 =============
-The relation between PF, irq, napi, and queue can be observed via netlink spec:
+The relation between PF, irq, napi, and queue can be observed via netlink spec::
 
-$ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml --dump queue-get --json='{"ifindex": 13}'
-[{'id': 0, 'ifindex': 13, 'napi-id': 539, 'type': 'rx'},
- {'id': 1, 'ifindex': 13, 'napi-id': 540, 'type': 'rx'},
- {'id': 2, 'ifindex': 13, 'napi-id': 541, 'type': 'rx'},
- {'id': 3, 'ifindex': 13, 'napi-id': 542, 'type': 'rx'},
- {'id': 4, 'ifindex': 13, 'napi-id': 543, 'type': 'rx'},
- {'id': 0, 'ifindex': 13, 'napi-id': 539, 'type': 'tx'},
- {'id': 1, 'ifindex': 13, 'napi-id': 540, 'type': 'tx'},
- {'id': 2, 'ifindex': 13, 'napi-id': 541, 'type': 'tx'},
- {'id': 3, 'ifindex': 13, 'napi-id': 542, 'type': 'tx'},
- {'id': 4, 'ifindex': 13, 'napi-id': 543, 'type': 'tx'}]
+  $ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml --dump queue-get --json='{"ifindex": 13}'
+  [{'id': 0, 'ifindex': 13, 'napi-id': 539, 'type': 'rx'},
+   {'id': 1, 'ifindex': 13, 'napi-id': 540, 'type': 'rx'},
+   {'id': 2, 'ifindex': 13, 'napi-id': 541, 'type': 'rx'},
+   {'id': 3, 'ifindex': 13, 'napi-id': 542, 'type': 'rx'},
+   {'id': 4, 'ifindex': 13, 'napi-id': 543, 'type': 'rx'},
+   {'id': 0, 'ifindex': 13, 'napi-id': 539, 'type': 'tx'},
+   {'id': 1, 'ifindex': 13, 'napi-id': 540, 'type': 'tx'},
+   {'id': 2, 'ifindex': 13, 'napi-id': 541, 'type': 'tx'},
+   {'id': 3, 'ifindex': 13, 'napi-id': 542, 'type': 'tx'},
+   {'id': 4, 'ifindex': 13, 'napi-id': 543, 'type': 'tx'}]
 
-$ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml --dump napi-get --json='{"ifindex": 13}'
-[{'id': 543, 'ifindex': 13, 'irq': 42},
- {'id': 542, 'ifindex': 13, 'irq': 41},
- {'id': 541, 'ifindex': 13, 'irq': 40},
- {'id': 540, 'ifindex': 13, 'irq': 39},
- {'id': 539, 'ifindex': 13, 'irq': 36}]
+  $ ./tools/net/ynl/cli.py --spec Documentation/netlink/specs/netdev.yaml --dump napi-get --json='{"ifindex": 13}'
+  [{'id': 543, 'ifindex': 13, 'irq': 42},
+   {'id': 542, 'ifindex': 13, 'irq': 41},
+   {'id': 541, 'ifindex': 13, 'irq': 40},
+   {'id': 540, 'ifindex': 13, 'irq': 39},
+   {'id': 539, 'ifindex': 13, 'irq': 36}]
 
-Here you can clearly observe our channels distribution policy:
+Here you can clearly observe our channels distribution policy::
 
-$ ls /proc/irq/{36,39,40,41,42}/mlx5* -d -1
-/proc/irq/36/mlx5_comp1@pci:0000:08:00.0
-/proc/irq/39/mlx5_comp1@pci:0000:09:00.0
-/proc/irq/40/mlx5_comp2@pci:0000:08:00.0
-/proc/irq/41/mlx5_comp2@pci:0000:09:00.0
-/proc/irq/42/mlx5_comp3@pci:0000:08:00.0
+  $ ls /proc/irq/{36,39,40,41,42}/mlx5* -d -1
+  /proc/irq/36/mlx5_comp1@pci:0000:08:00.0
+  /proc/irq/39/mlx5_comp1@pci:0000:09:00.0
+  /proc/irq/40/mlx5_comp2@pci:0000:08:00.0
+  /proc/irq/41/mlx5_comp2@pci:0000:09:00.0
+  /proc/irq/42/mlx5_comp3@pci:0000:08:00.0
 
 Steering
 ========
@@ -13,7 +13,7 @@ struct_dev_ifalias*                 ifalias
 unsigned_long                       mem_end
 unsigned_long                       mem_start
 unsigned_long                       base_addr
-unsigned_long                       state
+unsigned_long                       state                   read_mostly         read_mostly         netif_running(dev)
 struct_list_head                    dev_list
 struct_list_head                    napi_list
 struct_list_head                    unreg_list
@@ -15237,7 +15237,6 @@ F:	drivers/net/ethernet/neterion/
 NETFILTER
 M:	Pablo Neira Ayuso <pablo@netfilter.org>
 M:	Jozsef Kadlecsik <kadlec@netfilter.org>
-M:	Florian Westphal <fw@strlen.de>
 L:	netfilter-devel@vger.kernel.org
 L:	coreteam@netfilter.org
 S:	Maintained
@@ -370,8 +370,8 @@ static const struct kvaser_pciefd_irq_mask kvaser_pciefd_sf2_irq_mask = {
 
 static const struct kvaser_pciefd_irq_mask kvaser_pciefd_xilinx_irq_mask = {
 	.kcan_rx0 = BIT(4),
-	.kcan_tx = { BIT(16), BIT(17), BIT(18), BIT(19) },
-	.all = GENMASK(19, 16) | BIT(4),
+	.kcan_tx = { BIT(16), BIT(17), BIT(18), BIT(19), BIT(20), BIT(21), BIT(22), BIT(23) },
+	.all = GENMASK(23, 16) | BIT(4),
 };
 
 static const struct kvaser_pciefd_dev_ops kvaser_pciefd_altera_dev_ops = {
@@ -950,20 +950,56 @@ static void mt7530_setup_port5(struct dsa_switch *ds, phy_interface_t interface)
 	mutex_unlock(&priv->reg_mutex);
 }
 
+/* On page 205, section "8.6.3 Frame filtering" of the active standard, IEEE Std
+ * 802.1Q™-2022, it is stated that frames with 01:80:C2:00:00:00-0F as MAC DA
+ * must only be propagated to C-VLAN and MAC Bridge components. That means
+ * VLAN-aware and VLAN-unaware bridges. On the switch designs with CPU ports,
+ * these frames are supposed to be processed by the CPU (software). So we make
+ * the switch only forward them to the CPU port. And if received from a CPU
+ * port, forward to a single port. The software is responsible of making the
+ * switch conform to the latter by setting a single port as destination port on
+ * the special tag.
+ *
+ * This switch intellectual property cannot conform to this part of the standard
+ * fully. Whilst the REV_UN frame tag covers the remaining :04-0D and :0F MAC
+ * DAs, it also includes :22-FF which the scope of propagation is not supposed
+ * to be restricted for these MAC DAs.
+ */
 static void
 mt753x_trap_frames(struct mt7530_priv *priv)
 {
-	/* Trap BPDUs to the CPU port(s) */
-	mt7530_rmw(priv, MT753X_BPC, MT753X_BPDU_PORT_FW_MASK,
+	/* Trap 802.1X PAE frames and BPDUs to the CPU port(s) and egress them
+	 * VLAN-untagged.
+	 */
+	mt7530_rmw(priv, MT753X_BPC, MT753X_PAE_EG_TAG_MASK |
+		   MT753X_PAE_PORT_FW_MASK | MT753X_BPDU_EG_TAG_MASK |
+		   MT753X_BPDU_PORT_FW_MASK,
+		   MT753X_PAE_EG_TAG(MT7530_VLAN_EG_UNTAGGED) |
+		   MT753X_PAE_PORT_FW(MT753X_BPDU_CPU_ONLY) |
+		   MT753X_BPDU_EG_TAG(MT7530_VLAN_EG_UNTAGGED) |
 		   MT753X_BPDU_CPU_ONLY);
 
-	/* Trap 802.1X PAE frames to the CPU port(s) */
-	mt7530_rmw(priv, MT753X_BPC, MT753X_PAE_PORT_FW_MASK,
-		   MT753X_PAE_PORT_FW(MT753X_BPDU_CPU_ONLY));
+	/* Trap frames with :01 and :02 MAC DAs to the CPU port(s) and egress
+	 * them VLAN-untagged.
+	 */
+	mt7530_rmw(priv, MT753X_RGAC1, MT753X_R02_EG_TAG_MASK |
+		   MT753X_R02_PORT_FW_MASK | MT753X_R01_EG_TAG_MASK |
+		   MT753X_R01_PORT_FW_MASK,
+		   MT753X_R02_EG_TAG(MT7530_VLAN_EG_UNTAGGED) |
+		   MT753X_R02_PORT_FW(MT753X_BPDU_CPU_ONLY) |
+		   MT753X_R01_EG_TAG(MT7530_VLAN_EG_UNTAGGED) |
+		   MT753X_BPDU_CPU_ONLY);
 
-	/* Trap LLDP frames with :0E MAC DA to the CPU port(s) */
-	mt7530_rmw(priv, MT753X_RGAC2, MT753X_R0E_PORT_FW_MASK,
-		   MT753X_R0E_PORT_FW(MT753X_BPDU_CPU_ONLY));
+	/* Trap frames with :03 and :0E MAC DAs to the CPU port(s) and egress
+	 * them VLAN-untagged.
+	 */
+	mt7530_rmw(priv, MT753X_RGAC2, MT753X_R0E_EG_TAG_MASK |
+		   MT753X_R0E_PORT_FW_MASK | MT753X_R03_EG_TAG_MASK |
+		   MT753X_R03_PORT_FW_MASK,
+		   MT753X_R0E_EG_TAG(MT7530_VLAN_EG_UNTAGGED) |
+		   MT753X_R0E_PORT_FW(MT753X_BPDU_CPU_ONLY) |
+		   MT753X_R03_EG_TAG(MT7530_VLAN_EG_UNTAGGED) |
+		   MT753X_BPDU_CPU_ONLY);
 }
 
 static void
@@ -2192,22 +2228,16 @@ mt7530_setup(struct dsa_switch *ds)
 		}
 	}
 
-	/* Disable LEDs before reset to prevent the MT7530 sampling a
-	 * potentially incorrect HT_XTAL_FSEL value.
-	 */
-	mt7530_write(priv, MT7530_LED_EN, 0);
-	usleep_range(1000, 1100);
-
 	/* Reset whole chip through gpio pin or memory-mapped registers for
 	 * different type of hardware
 	 */
 	if (priv->mcm) {
 		reset_control_assert(priv->rstc);
-		usleep_range(1000, 1100);
+		usleep_range(5000, 5100);
 		reset_control_deassert(priv->rstc);
 	} else {
 		gpiod_set_value_cansleep(priv->reset, 0);
-		usleep_range(1000, 1100);
+		usleep_range(5000, 5100);
 		gpiod_set_value_cansleep(priv->reset, 1);
 	}
 
@@ -2420,11 +2450,11 @@ mt7531_setup(struct dsa_switch *ds)
 	 */
 	if (priv->mcm) {
 		reset_control_assert(priv->rstc);
-		usleep_range(1000, 1100);
+		usleep_range(5000, 5100);
 		reset_control_deassert(priv->rstc);
 	} else {
 		gpiod_set_value_cansleep(priv->reset, 0);
-		usleep_range(1000, 1100);
+		usleep_range(5000, 5100);
 		gpiod_set_value_cansleep(priv->reset, 1);
 	}
 
@@ -65,14 +65,33 @@ enum mt753x_id {
 
 /* Registers for BPDU and PAE frame control*/
 #define MT753X_BPC			0x24
-#define  MT753X_BPDU_PORT_FW_MASK	GENMASK(2, 0)
+#define  MT753X_PAE_EG_TAG_MASK		GENMASK(24, 22)
+#define  MT753X_PAE_EG_TAG(x)		FIELD_PREP(MT753X_PAE_EG_TAG_MASK, x)
 #define  MT753X_PAE_PORT_FW_MASK	GENMASK(18, 16)
 #define  MT753X_PAE_PORT_FW(x)		FIELD_PREP(MT753X_PAE_PORT_FW_MASK, x)
+#define  MT753X_BPDU_EG_TAG_MASK	GENMASK(8, 6)
+#define  MT753X_BPDU_EG_TAG(x)		FIELD_PREP(MT753X_BPDU_EG_TAG_MASK, x)
+#define  MT753X_BPDU_PORT_FW_MASK	GENMASK(2, 0)
+
+/* Register for :01 and :02 MAC DA frame control */
+#define MT753X_RGAC1			0x28
+#define  MT753X_R02_EG_TAG_MASK		GENMASK(24, 22)
+#define  MT753X_R02_EG_TAG(x)		FIELD_PREP(MT753X_R02_EG_TAG_MASK, x)
+#define  MT753X_R02_PORT_FW_MASK	GENMASK(18, 16)
+#define  MT753X_R02_PORT_FW(x)		FIELD_PREP(MT753X_R02_PORT_FW_MASK, x)
+#define  MT753X_R01_EG_TAG_MASK		GENMASK(8, 6)
+#define  MT753X_R01_EG_TAG(x)		FIELD_PREP(MT753X_R01_EG_TAG_MASK, x)
+#define  MT753X_R01_PORT_FW_MASK	GENMASK(2, 0)
 
 /* Register for :03 and :0E MAC DA frame control */
 #define MT753X_RGAC2			0x2c
+#define  MT753X_R0E_EG_TAG_MASK		GENMASK(24, 22)
+#define  MT753X_R0E_EG_TAG(x)		FIELD_PREP(MT753X_R0E_EG_TAG_MASK, x)
 #define  MT753X_R0E_PORT_FW_MASK	GENMASK(18, 16)
 #define  MT753X_R0E_PORT_FW(x)		FIELD_PREP(MT753X_R0E_PORT_FW_MASK, x)
+#define  MT753X_R03_EG_TAG_MASK		GENMASK(8, 6)
+#define  MT753X_R03_EG_TAG(x)		FIELD_PREP(MT753X_R03_EG_TAG_MASK, x)
+#define  MT753X_R03_PORT_FW_MASK	GENMASK(2, 0)
 
 enum mt753x_bpdu_port_fw {
 	MT753X_BPDU_FOLLOW_MFC,
@@ -253,6 +272,7 @@ enum mt7530_port_mode {
 enum mt7530_vlan_port_eg_tag {
 	MT7530_VLAN_EG_DISABLED = 0,
 	MT7530_VLAN_EG_CONSISTENT = 1,
+	MT7530_VLAN_EG_UNTAGGED = 4,
 };
 
 enum mt7530_vlan_port_attr {
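The new register macros above follow the usual kernel bitfield idiom: GENMASK(h, l) names a bit range and FIELD_PREP() shifts a value into it, so one read-modify-write can update several fields at once (which is exactly what mt753x_trap_frames() now does). A self-contained sketch of the idiom (our example, not driver code; the register layout is invented)::

    #include <linux/bitfield.h>
    #include <linux/bits.h>
    #include <linux/types.h>

    #define REG_FIELD_A	GENMASK(24, 22)	/* bits 24..22 */
    #define REG_FIELD_B	GENMASK(2, 0)	/* bits 2..0 */

    static inline u32 pack_fields(u32 a, u32 b)
    {
            /* FIELD_PREP() masks and shifts each value into its range;
             * OR-ing the results builds the value written under the
             * combined mask, leaving all other bits untouched.
             */
            return FIELD_PREP(REG_FIELD_A, a) | FIELD_PREP(REG_FIELD_B, b);
    }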
@@ -1002,9 +1002,6 @@ static inline void bnx2x_set_fw_mac_addr(__le16 *fw_hi, __le16 *fw_mid,
 static inline void bnx2x_free_rx_mem_pool(struct bnx2x *bp,
 					  struct bnx2x_alloc_pool *pool)
 {
-	if (!pool->page)
-		return;
-
 	put_page(pool->page);
 
 	pool->page = NULL;
@@ -1015,6 +1012,9 @@ static inline void bnx2x_free_rx_sge_range(struct bnx2x *bp,
 {
 	int i;
 
+	if (!fp->page_pool.page)
+		return;
+
 	if (fp->mode == TPA_MODE_DISABLED)
 		return;
 
@@ -1338,7 +1338,7 @@ static irqreturn_t cgx_fwi_event_handler(int irq, void *data)
 
 		/* Release thread waiting for completion  */
 		lmac->cmd_pend = false;
-		wake_up_interruptible(&lmac->wq_cmd_cmplt);
+		wake_up(&lmac->wq_cmd_cmplt);
 		break;
 	case CGX_EVT_ASYNC:
 		if (cgx_event_is_linkevent(event))
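The one-liner above matters because wake_up_interruptible() only wakes sleepers in TASK_INTERRUPTIBLE state; a waiter parked with wait_event_timeout() sleeps uninterruptibly and would be missed. A sketch of the corrected pairing (ours; the completion flag and names are hypothetical)::

    #include <linux/wait.h>
    #include <linux/jiffies.h>

    static DECLARE_WAIT_QUEUE_HEAD(cmd_wq);
    static bool cmd_done;

    /* waiter: sleeps in TASK_UNINTERRUPTIBLE */
    static int wait_for_cmd(void)
    {
            if (!wait_event_timeout(cmd_wq, cmd_done, msecs_to_jiffies(100)))
                    return -ETIMEDOUT;
            return 0;
    }

    /* completer, e.g. an IRQ handler like the one above */
    static void complete_cmd(void)
    {
            cmd_done = true;
            wake_up(&cmd_wq);	/* wake_up_interruptible() would not wake it */
    }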
@@ -214,11 +214,12 @@ int otx2_mbox_busy_poll_for_rsp(struct otx2_mbox *mbox, int devid)
 }
 EXPORT_SYMBOL(otx2_mbox_busy_poll_for_rsp);
 
-void otx2_mbox_msg_send(struct otx2_mbox *mbox, int devid)
+static void otx2_mbox_msg_send_data(struct otx2_mbox *mbox, int devid, u64 data)
 {
 	struct otx2_mbox_dev *mdev = &mbox->dev[devid];
 	struct mbox_hdr *tx_hdr, *rx_hdr;
 	void *hw_mbase = mdev->hwbase;
+	u64 intr_val;
 
 	tx_hdr = hw_mbase + mbox->tx_start;
 	rx_hdr = hw_mbase + mbox->rx_start;
@@ -254,14 +255,52 @@ void otx2_mbox_msg_send(struct otx2_mbox *mbox, int devid)
 
 	spin_unlock(&mdev->mbox_lock);
 
+	/* Check if interrupt pending */
+	intr_val = readq((void __iomem *)mbox->reg_base +
+			 (mbox->trigger | (devid << mbox->tr_shift)));
+
+	intr_val |= data;
 	/* The interrupt should be fired after num_msgs is written
 	 * to the shared memory
 	 */
-	writeq(1, (void __iomem *)mbox->reg_base +
+	writeq(intr_val, (void __iomem *)mbox->reg_base +
 	       (mbox->trigger | (devid << mbox->tr_shift)));
 }
 
+void otx2_mbox_msg_send(struct otx2_mbox *mbox, int devid)
+{
+	otx2_mbox_msg_send_data(mbox, devid, MBOX_DOWN_MSG);
+}
 EXPORT_SYMBOL(otx2_mbox_msg_send);
 
+void otx2_mbox_msg_send_up(struct otx2_mbox *mbox, int devid)
+{
+	otx2_mbox_msg_send_data(mbox, devid, MBOX_UP_MSG);
+}
+EXPORT_SYMBOL(otx2_mbox_msg_send_up);
+
+bool otx2_mbox_wait_for_zero(struct otx2_mbox *mbox, int devid)
+{
+	u64 data;
+
+	data = readq((void __iomem *)mbox->reg_base +
+		     (mbox->trigger | (devid << mbox->tr_shift)));
+
+	/* If data is non-zero wait for ~1ms and return to caller
+	 * whether data has changed to zero or not after the wait.
+	 */
+	if (!data)
+		return true;
+
+	usleep_range(950, 1000);
+
+	data = readq((void __iomem *)mbox->reg_base +
+		     (mbox->trigger | (devid << mbox->tr_shift)));
+
+	return data == 0;
+}
+EXPORT_SYMBOL(otx2_mbox_wait_for_zero);
+
 struct mbox_msghdr *otx2_mbox_alloc_msg_rsp(struct otx2_mbox *mbox, int devid,
 					    int size, int size_rsp)
 {
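The change above turns the mailbox trigger register into a small bitmask doorbell: MBOX_DOWN_MSG rings for request/response traffic and MBOX_UP_MSG for notifications, and the sender ORs its bit into whatever is already pending instead of blindly writing 1. Condensed to its essence (our sketch, not driver code)::

    #include <linux/io.h>

    #define MBOX_DOWN_MSG	1
    #define MBOX_UP_MSG	2

    static void ring_doorbell(void __iomem *trigger, u64 flag)
    {
            /* preserve the other direction's pending bit */
            u64 val = readq(trigger);

            writeq(val | flag, trigger);
    }

The receiver clears only the bit it is servicing, so an UP notification and a DOWN reply can be signalled through the same register without losing each other.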
@@ -16,6 +16,9 @@
 
 #define MBOX_SIZE		SZ_64K
 
+#define MBOX_DOWN_MSG		1
+#define MBOX_UP_MSG		2
+
 /* AF/PF: PF initiated, PF/VF VF initiated */
 #define MBOX_DOWN_RX_START	0
 #define MBOX_DOWN_RX_SIZE	(46 * SZ_1K)
@@ -101,6 +104,7 @@ int otx2_mbox_regions_init(struct otx2_mbox *mbox, void __force **hwbase,
 			   struct pci_dev *pdev, void __force *reg_base,
 			   int direction, int ndevs, unsigned long *bmap);
 void otx2_mbox_msg_send(struct otx2_mbox *mbox, int devid);
+void otx2_mbox_msg_send_up(struct otx2_mbox *mbox, int devid);
 int otx2_mbox_wait_for_rsp(struct otx2_mbox *mbox, int devid);
 int otx2_mbox_busy_poll_for_rsp(struct otx2_mbox *mbox, int devid);
 struct mbox_msghdr *otx2_mbox_alloc_msg_rsp(struct otx2_mbox *mbox, int devid,
@@ -118,6 +122,8 @@ static inline struct mbox_msghdr *otx2_mbox_alloc_msg(struct otx2_mbox *mbox,
 	return otx2_mbox_alloc_msg_rsp(mbox, devid, size, 0);
 }
 
+bool otx2_mbox_wait_for_zero(struct otx2_mbox *mbox, int devid);
+
 /* Mailbox message types */
 #define MBOX_MSG_MASK				0xFFFF
 #define MBOX_MSG_INVALID			0xFFFE
@@ -121,13 +121,17 @@ int mcs_add_intr_wq_entry(struct mcs *mcs, struct mcs_intr_event *event)
 static int mcs_notify_pfvf(struct mcs_intr_event *event, struct rvu *rvu)
 {
 	struct mcs_intr_info *req;
-	int err, pf;
+	int pf;
 
 	pf = rvu_get_pf(event->pcifunc);
 
+	mutex_lock(&rvu->mbox_lock);
+
 	req = otx2_mbox_alloc_msg_mcs_intr_notify(rvu, pf);
-	if (!req)
+	if (!req) {
+		mutex_unlock(&rvu->mbox_lock);
 		return -ENOMEM;
+	}
 
 	req->mcs_id = event->mcs_id;
 	req->intr_mask = event->intr_mask;
@@ -135,10 +139,11 @@ static int mcs_notify_pfvf(struct mcs_intr_event *event, struct rvu *rvu)
 	req->hdr.pcifunc = event->pcifunc;
 	req->lmac_id = event->lmac_id;
 
-	otx2_mbox_msg_send(&rvu->afpf_wq_info.mbox_up, pf);
-	err = otx2_mbox_wait_for_rsp(&rvu->afpf_wq_info.mbox_up, pf);
-	if (err)
-		dev_warn(rvu->dev, "MCS notification to pf %d failed\n", pf);
+	otx2_mbox_wait_for_zero(&rvu->afpf_wq_info.mbox_up, pf);
+
+	otx2_mbox_msg_send_up(&rvu->afpf_wq_info.mbox_up, pf);
+
+	mutex_unlock(&rvu->mbox_lock);
 
 	return 0;
 }
@@ -2119,7 +2119,7 @@ MBOX_MESSAGES
 	}
 }
 
-static void __rvu_mbox_handler(struct rvu_work *mwork, int type)
+static void __rvu_mbox_handler(struct rvu_work *mwork, int type, bool poll)
 {
 	struct rvu *rvu = mwork->rvu;
 	int offset, err, id, devid;
@@ -2186,6 +2186,9 @@ static void __rvu_mbox_handler(struct rvu_work *mwork, int type)
 	}
 	mw->mbox_wrk[devid].num_msgs = 0;
 
+	if (poll)
+		otx2_mbox_wait_for_zero(mbox, devid);
+
 	/* Send mbox responses to VF/PF */
 	otx2_mbox_msg_send(mbox, devid);
 }
@@ -2193,15 +2196,18 @@ static void __rvu_mbox_handler(struct rvu_work *mwork, int type)
 static inline void rvu_afpf_mbox_handler(struct work_struct *work)
 {
 	struct rvu_work *mwork = container_of(work, struct rvu_work, work);
+	struct rvu *rvu = mwork->rvu;
 
-	__rvu_mbox_handler(mwork, TYPE_AFPF);
+	mutex_lock(&rvu->mbox_lock);
+	__rvu_mbox_handler(mwork, TYPE_AFPF, true);
+	mutex_unlock(&rvu->mbox_lock);
 }
 
 static inline void rvu_afvf_mbox_handler(struct work_struct *work)
 {
 	struct rvu_work *mwork = container_of(work, struct rvu_work, work);
 
-	__rvu_mbox_handler(mwork, TYPE_AFVF);
+	__rvu_mbox_handler(mwork, TYPE_AFVF, false);
 }
 
 static void __rvu_mbox_up_handler(struct rvu_work *mwork, int type)
@@ -2376,6 +2382,8 @@ static int rvu_mbox_init(struct rvu *rvu, struct mbox_wq_info *mw,
 		}
 	}
 
+	mutex_init(&rvu->mbox_lock);
+
 	mbox_regions = kcalloc(num, sizeof(void *), GFP_KERNEL);
 	if (!mbox_regions) {
 		err = -ENOMEM;
@@ -2525,10 +2533,9 @@ static void rvu_queue_work(struct mbox_wq_info *mw, int first,
 	}
 }
 
-static irqreturn_t rvu_mbox_intr_handler(int irq, void *rvu_irq)
+static irqreturn_t rvu_mbox_pf_intr_handler(int irq, void *rvu_irq)
 {
 	struct rvu *rvu = (struct rvu *)rvu_irq;
-	int vfs = rvu->vfs;
 	u64 intr;
 
 	intr = rvu_read64(rvu, BLKADDR_RVUM, RVU_AF_PFAF_MBOX_INT);
@@ -2542,6 +2549,18 @@ static irqreturn_t rvu_mbox_intr_handler(int irq, void *rvu_irq)
 
 	rvu_queue_work(&rvu->afpf_wq_info, 0, rvu->hw->total_pfs, intr);
 
+	return IRQ_HANDLED;
+}
+
+static irqreturn_t rvu_mbox_intr_handler(int irq, void *rvu_irq)
+{
+	struct rvu *rvu = (struct rvu *)rvu_irq;
+	int vfs = rvu->vfs;
+	u64 intr;
+
+	/* Sync with mbox memory region */
+	rmb();
+
 	/* Handle VF interrupts */
 	if (vfs > 64) {
 		intr = rvupf_read64(rvu, RVU_PF_VFPF_MBOX_INTX(1));
@@ -2886,7 +2905,7 @@ static int rvu_register_interrupts(struct rvu *rvu)
 	/* Register mailbox interrupt handler */
 	sprintf(&rvu->irq_name[RVU_AF_INT_VEC_MBOX * NAME_SIZE], "RVUAF Mbox");
 	ret = request_irq(pci_irq_vector(rvu->pdev, RVU_AF_INT_VEC_MBOX),
-			  rvu_mbox_intr_handler, 0,
+			  rvu_mbox_pf_intr_handler, 0,
 			  &rvu->irq_name[RVU_AF_INT_VEC_MBOX * NAME_SIZE], rvu);
 	if (ret) {
 		dev_err(rvu->dev,
@@ -591,6 +591,8 @@ struct rvu {
 	spinlock_t		mcs_intrq_lock;
 	/* CPT interrupt lock */
 	spinlock_t		cpt_intr_lock;
+
+	struct mutex		mbox_lock; /* Serialize mbox up and down msgs */
 };
 
 static inline void rvu_write64(struct rvu *rvu, u64 block, u64 offset, u64 val)
@@ -232,7 +232,7 @@ static void cgx_notify_pfs(struct cgx_link_event *event, struct rvu *rvu)
 	struct cgx_link_user_info *linfo;
 	struct cgx_link_info_msg *msg;
 	unsigned long pfmap;
-	int err, pfid;
+	int pfid;
 
 	linfo = &event->link_uinfo;
 	pfmap = cgxlmac_to_pfmap(rvu, event->cgx_id, event->lmac_id);
@@ -255,16 +255,22 @@ static void cgx_notify_pfs(struct cgx_link_event *event, struct rvu *rvu)
 			continue;
 		}
 
+		mutex_lock(&rvu->mbox_lock);
+
 		/* Send mbox message to PF */
 		msg = otx2_mbox_alloc_msg_cgx_link_event(rvu, pfid);
-		if (!msg)
+		if (!msg) {
+			mutex_unlock(&rvu->mbox_lock);
 			continue;
+		}
 
 		msg->link_info = *linfo;
-		otx2_mbox_msg_send(&rvu->afpf_wq_info.mbox_up, pfid);
-		err = otx2_mbox_wait_for_rsp(&rvu->afpf_wq_info.mbox_up, pfid);
-		if (err)
-			dev_warn(rvu->dev, "notification to pf %d failed\n",
-				 pfid);
+
+		otx2_mbox_wait_for_zero(&rvu->afpf_wq_info.mbox_up, pfid);
+
+		otx2_mbox_msg_send_up(&rvu->afpf_wq_info.mbox_up, pfid);
+
+		mutex_unlock(&rvu->mbox_lock);
 	} while (pfmap);
 }
 
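Both converted notifiers (the MCS handler earlier and the CGX link notifier above) now follow the same sequence under the new rvu->mbox_lock: serialize, allocate, drain the previous doorbell, then ring. Condensed (our sketch, mirroring the code above rather than quoting it)::

    mutex_lock(&rvu->mbox_lock);
    msg = otx2_mbox_alloc_msg_cgx_link_event(rvu, pfid);
    if (!msg) {
            mutex_unlock(&rvu->mbox_lock);
            return;
    }
    /* ... fill msg ... */
    otx2_mbox_wait_for_zero(&rvu->afpf_wq_info.mbox_up, pfid); /* drain prior doorbell */
    otx2_mbox_msg_send_up(&rvu->afpf_wq_info.mbox_up, pfid);
    mutex_unlock(&rvu->mbox_lock);

Waiting for the trigger register to reach zero before ringing again is what keeps an UP message from overwriting one the peer has not consumed yet.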
@@ -1592,7 +1592,7 @@ int otx2_detach_resources(struct mbox *mbox)
 	detach->partial = false;
 
 	/* Send detach request to AF */
-	otx2_mbox_msg_send(&mbox->mbox, 0);
+	otx2_sync_mbox_msg(mbox);
 	mutex_unlock(&mbox->lock);
 	return 0;
 }
@@ -815,7 +815,7 @@ static inline int otx2_sync_mbox_up_msg(struct mbox *mbox, int devid)
 
 	if (!otx2_mbox_nonempty(&mbox->mbox_up, devid))
 		return 0;
-	otx2_mbox_msg_send(&mbox->mbox_up, devid);
+	otx2_mbox_msg_send_up(&mbox->mbox_up, devid);
 	err = otx2_mbox_wait_for_rsp(&mbox->mbox_up, devid);
 	if (err)
 		return err;
@@ -292,8 +292,8 @@ static int otx2_pf_flr_init(struct otx2_nic *pf, int num_vfs)
 	return 0;
 }
 
-static void otx2_queue_work(struct mbox *mw, struct workqueue_struct *mbox_wq,
-			    int first, int mdevs, u64 intr, int type)
+static void otx2_queue_vf_work(struct mbox *mw, struct workqueue_struct *mbox_wq,
+			       int first, int mdevs, u64 intr)
 {
 	struct otx2_mbox_dev *mdev;
 	struct otx2_mbox *mbox;
@@ -307,40 +307,26 @@ static void otx2_queue_work(struct mbox *mw, struct workqueue_struct *mbox_wq,
 
 		mbox = &mw->mbox;
 		mdev = &mbox->dev[i];
-		if (type == TYPE_PFAF)
-			otx2_sync_mbox_bbuf(mbox, i);
 		hdr = mdev->mbase + mbox->rx_start;
 		/* The hdr->num_msgs is set to zero immediately in the interrupt
-		 * handler to  ensure that it holds a correct value next time
-		 * when the interrupt handler is called.
-		 * pf->mbox.num_msgs holds the data for use in pfaf_mbox_handler
-		 * pf>mbox.up_num_msgs holds the data for use in
-		 * pfaf_mbox_up_handler.
+		 * handler to ensure that it holds a correct value next time
+		 * when the interrupt handler is called. pf->mw[i].num_msgs
+		 * holds the data for use in otx2_pfvf_mbox_handler and
+		 * pf->mw[i].up_num_msgs holds the data for use in
+		 * otx2_pfvf_mbox_up_handler.
 		 */
 		if (hdr->num_msgs) {
 			mw[i].num_msgs = hdr->num_msgs;
 			hdr->num_msgs = 0;
-			if (type == TYPE_PFAF)
-				memset(mbox->hwbase + mbox->rx_start, 0,
-				       ALIGN(sizeof(struct mbox_hdr),
-					     sizeof(u64)));
-
 			queue_work(mbox_wq, &mw[i].mbox_wrk);
 		}
 
 		mbox = &mw->mbox_up;
 		mdev = &mbox->dev[i];
-		if (type == TYPE_PFAF)
-			otx2_sync_mbox_bbuf(mbox, i);
 		hdr = mdev->mbase + mbox->rx_start;
 		if (hdr->num_msgs) {
 			mw[i].up_num_msgs = hdr->num_msgs;
 			hdr->num_msgs = 0;
-			if (type == TYPE_PFAF)
-				memset(mbox->hwbase + mbox->rx_start, 0,
-				       ALIGN(sizeof(struct mbox_hdr),
-					     sizeof(u64)));
-
 			queue_work(mbox_wq, &mw[i].mbox_up_wrk);
 		}
 	}
@@ -356,8 +342,10 @@ static void otx2_forward_msg_pfvf(struct otx2_mbox_dev *mdev,
 	/* Msgs are already copied, trigger VF's mbox irq */
 	smp_wmb();
 
+	otx2_mbox_wait_for_zero(pfvf_mbox, devid);
+
 	offset = pfvf_mbox->trigger | (devid << pfvf_mbox->tr_shift);
-	writeq(1, (void __iomem *)pfvf_mbox->reg_base + offset);
+	writeq(MBOX_DOWN_MSG, (void __iomem *)pfvf_mbox->reg_base + offset);
 
 	/* Restore VF's mbox bounce buffer region address */
 	src_mdev->mbase = bbuf_base;
@@ -547,7 +535,7 @@ static void otx2_pfvf_mbox_up_handler(struct work_struct *work)
 end:
 		offset = mbox->rx_start + msg->next_msgoff;
 		if (mdev->msgs_acked == (vf_mbox->up_num_msgs - 1))
-			__otx2_mbox_reset(mbox, 0);
+			__otx2_mbox_reset(mbox, vf_idx);
 		mdev->msgs_acked++;
 	}
 }
@@ -564,8 +552,7 @@ static irqreturn_t otx2_pfvf_mbox_intr_handler(int irq, void *pf_irq)
 	if (vfs > 64) {
 		intr = otx2_read64(pf, RVU_PF_VFPF_MBOX_INTX(1));
 		otx2_write64(pf, RVU_PF_VFPF_MBOX_INTX(1), intr);
-		otx2_queue_work(mbox, pf->mbox_pfvf_wq, 64, vfs, intr,
-				TYPE_PFVF);
+		otx2_queue_vf_work(mbox, pf->mbox_pfvf_wq, 64, vfs, intr);
 		if (intr)
 			trace_otx2_msg_interrupt(mbox->mbox.pdev, "VF(s) to PF", intr);
 		vfs = 64;
@@ -574,7 +561,7 @@ static irqreturn_t otx2_pfvf_mbox_intr_handler(int irq, void *pf_irq)
 	intr = otx2_read64(pf, RVU_PF_VFPF_MBOX_INTX(0));
 	otx2_write64(pf, RVU_PF_VFPF_MBOX_INTX(0), intr);
 
-	otx2_queue_work(mbox, pf->mbox_pfvf_wq, 0, vfs, intr, TYPE_PFVF);
+	otx2_queue_vf_work(mbox, pf->mbox_pfvf_wq, 0, vfs, intr);
 
 	if (intr)
 		trace_otx2_msg_interrupt(mbox->mbox.pdev, "VF(s) to PF", intr);
@@ -597,8 +584,9 @@ static int otx2_pfvf_mbox_init(struct otx2_nic *pf, int numvfs)
 	if (!pf->mbox_pfvf)
 		return -ENOMEM;
 
-	pf->mbox_pfvf_wq = alloc_ordered_workqueue("otx2_pfvf_mailbox",
-						   WQ_HIGHPRI | WQ_MEM_RECLAIM);
+	pf->mbox_pfvf_wq = alloc_workqueue("otx2_pfvf_mailbox",
+					   WQ_UNBOUND | WQ_HIGHPRI |
+					   WQ_MEM_RECLAIM, 0);
 	if (!pf->mbox_pfvf_wq)
 		return -ENOMEM;
 
@@ -821,20 +809,22 @@ static void otx2_pfaf_mbox_handler(struct work_struct *work)
 	struct mbox *af_mbox;
 	struct otx2_nic *pf;
 	int offset, id;
+	u16 num_msgs;
 
 	af_mbox = container_of(work, struct mbox, mbox_wrk);
 	mbox = &af_mbox->mbox;
 	mdev = &mbox->dev[0];
 	rsp_hdr = (struct mbox_hdr *)(mdev->mbase + mbox->rx_start);
+	num_msgs = rsp_hdr->num_msgs;
 
 	offset = mbox->rx_start + ALIGN(sizeof(*rsp_hdr), MBOX_MSG_ALIGN);
 	pf = af_mbox->pfvf;
 
-	for (id = 0; id < af_mbox->num_msgs; id++) {
+	for (id = 0; id < num_msgs; id++) {
 		msg = (struct mbox_msghdr *)(mdev->mbase + offset);
 		otx2_process_pfaf_mbox_msg(pf, msg);
 		offset = mbox->rx_start + msg->next_msgoff;
-		if (mdev->msgs_acked == (af_mbox->num_msgs - 1))
+		if (mdev->msgs_acked == (num_msgs - 1))
 			__otx2_mbox_reset(mbox, 0);
 		mdev->msgs_acked++;
 	}
@@ -945,12 +935,14 @@ static void otx2_pfaf_mbox_up_handler(struct work_struct *work)
 	int offset, id, devid = 0;
 	struct mbox_hdr *rsp_hdr;
 	struct mbox_msghdr *msg;
+	u16 num_msgs;
 
 	rsp_hdr = (struct mbox_hdr *)(mdev->mbase + mbox->rx_start);
+	num_msgs = rsp_hdr->num_msgs;
 
 	offset = mbox->rx_start + ALIGN(sizeof(*rsp_hdr), MBOX_MSG_ALIGN);
 
-	for (id = 0; id < af_mbox->up_num_msgs; id++) {
+	for (id = 0; id < num_msgs; id++) {
 		msg = (struct mbox_msghdr *)(mdev->mbase + offset);
 
 		devid = msg->pcifunc & RVU_PFVF_FUNC_MASK;
@@ -959,10 +951,11 @@ static void otx2_pfaf_mbox_up_handler(struct work_struct *work)
 			otx2_process_mbox_msg_up(pf, msg);
 		offset = mbox->rx_start + msg->next_msgoff;
 	}
-	if (devid) {
+	/* Forward to VF iff VFs are really present */
+	if (devid && pci_num_vf(pf->pdev)) {
 		otx2_forward_vf_mbox_msgs(pf, &pf->mbox.mbox_up,
 					  MBOX_DIR_PFVF_UP, devid - 1,
-					  af_mbox->up_num_msgs);
+					  num_msgs);
 		return;
 	}
 
@@ -972,16 +965,49 @@ static irqreturn_t otx2_pfaf_mbox_intr_handler(int irq, void *pf_irq)
 {
 	struct otx2_nic *pf = (struct otx2_nic *)pf_irq;
-	struct mbox *mbox;
+	struct mbox *mw = &pf->mbox;
+	struct otx2_mbox_dev *mdev;
+	struct otx2_mbox *mbox;
+	struct mbox_hdr *hdr;
+	u64 mbox_data;
 
 	/* Clear the IRQ */
 	otx2_write64(pf, RVU_PF_INT, BIT_ULL(0));
 
-	mbox = &pf->mbox;
-
-	trace_otx2_msg_interrupt(mbox->mbox.pdev, "AF to PF", BIT_ULL(0));
-
-	otx2_queue_work(mbox, pf->mbox_wq, 0, 1, 1, TYPE_PFAF);
+	mbox_data = otx2_read64(pf, RVU_PF_PFAF_MBOX0);
+
+	if (mbox_data & MBOX_UP_MSG) {
+		mbox_data &= ~MBOX_UP_MSG;
+		otx2_write64(pf, RVU_PF_PFAF_MBOX0, mbox_data);
+
+		mbox = &mw->mbox_up;
+		mdev = &mbox->dev[0];
+		otx2_sync_mbox_bbuf(mbox, 0);
+
+		hdr = (struct mbox_hdr *)(mdev->mbase + mbox->rx_start);
+		if (hdr->num_msgs)
+			queue_work(pf->mbox_wq, &mw->mbox_up_wrk);
+
+		trace_otx2_msg_interrupt(pf->pdev, "UP message from AF to PF",
+					 BIT_ULL(0));
+	}
+
+	if (mbox_data & MBOX_DOWN_MSG) {
+		mbox_data &= ~MBOX_DOWN_MSG;
+		otx2_write64(pf, RVU_PF_PFAF_MBOX0, mbox_data);
+
+		mbox = &mw->mbox;
+		mdev = &mbox->dev[0];
+		otx2_sync_mbox_bbuf(mbox, 0);
+
+		hdr = (struct mbox_hdr *)(mdev->mbase + mbox->rx_start);
+		if (hdr->num_msgs)
+			queue_work(pf->mbox_wq, &mw->mbox_wrk);
+
+		trace_otx2_msg_interrupt(pf->pdev, "DOWN reply from AF to PF",
+					 BIT_ULL(0));
+	}
 
 	return IRQ_HANDLED;
 }
@@ -3087,6 +3113,7 @@ static void otx2_vf_link_event_task(struct work_struct *work)
 	struct otx2_vf_config *config;
 	struct cgx_link_info_msg *req;
 	struct mbox_msghdr *msghdr;
+	struct delayed_work *dwork;
 	struct otx2_nic *pf;
 	int vf_idx;
 
@@ -3095,10 +3122,24 @@ static void otx2_vf_link_event_task(struct work_struct *work)
 	vf_idx = config - config->pf->vf_configs;
 	pf = config->pf;
 
+	if (config->intf_down)
+		return;
+
+	mutex_lock(&pf->mbox.lock);
+
+	dwork = &config->link_event_work;
+
+	if (!otx2_mbox_wait_for_zero(&pf->mbox_pfvf[0].mbox_up, vf_idx)) {
+		schedule_delayed_work(dwork, msecs_to_jiffies(100));
+		mutex_unlock(&pf->mbox.lock);
+		return;
+	}
+
 	msghdr = otx2_mbox_alloc_msg_rsp(&pf->mbox_pfvf[0].mbox_up, vf_idx,
 					 sizeof(*req), sizeof(struct msg_rsp));
 	if (!msghdr) {
 		dev_err(pf->dev, "Failed to create VF%d link event\n", vf_idx);
+		mutex_unlock(&pf->mbox.lock);
 		return;
 	}
 
@@ -3107,7 +3148,11 @@ static void otx2_vf_link_event_task(struct work_struct *work)
 	req->hdr.sig = OTX2_MBOX_REQ_SIG;
 	memcpy(&req->link_info, &pf->linfo, sizeof(req->link_info));
 
+	otx2_mbox_wait_for_zero(&pf->mbox_pfvf[0].mbox_up, vf_idx);
+
 	otx2_sync_mbox_up_msg(&pf->mbox_pfvf[0], vf_idx);
+
+	mutex_unlock(&pf->mbox.lock);
 }
 
 static int otx2_sriov_enable(struct pci_dev *pdev, int numvfs)
@@ -89,16 +89,20 @@ static void otx2vf_vfaf_mbox_handler(struct work_struct *work)
 	struct otx2_mbox *mbox;
 	struct mbox *af_mbox;
 	int offset, id;
+	u16 num_msgs;
 
 	af_mbox = container_of(work, struct mbox, mbox_wrk);
 	mbox = &af_mbox->mbox;
 	mdev = &mbox->dev[0];
 	rsp_hdr = (struct mbox_hdr *)(mdev->mbase + mbox->rx_start);
-	if (af_mbox->num_msgs == 0)
+	num_msgs = rsp_hdr->num_msgs;
+
+	if (num_msgs == 0)
 		return;
+
 	offset = mbox->rx_start + ALIGN(sizeof(*rsp_hdr), MBOX_MSG_ALIGN);
 
-	for (id = 0; id < af_mbox->num_msgs; id++) {
+	for (id = 0; id < num_msgs; id++) {
 		msg = (struct mbox_msghdr *)(mdev->mbase + offset);
 		otx2vf_process_vfaf_mbox_msg(af_mbox->pfvf, msg);
 		offset = mbox->rx_start + msg->next_msgoff;
@@ -151,6 +155,7 @@ static void otx2vf_vfaf_mbox_up_handler(struct work_struct *work)
 	struct mbox *vf_mbox;
 	struct otx2_nic *vf;
 	int offset, id;
+	u16 num_msgs;
 
 	vf_mbox = container_of(work, struct mbox, mbox_up_wrk);
 	vf = vf_mbox->pfvf;
@@ -158,12 +163,14 @@ static void otx2vf_vfaf_mbox_up_handler(struct work_struct *work)
 	mdev = &mbox->dev[0];
 
 	rsp_hdr = (struct mbox_hdr *)(mdev->mbase + mbox->rx_start);
-	if (vf_mbox->up_num_msgs == 0)
+	num_msgs = rsp_hdr->num_msgs;
+
+	if (num_msgs == 0)
 		return;
 
 	offset = mbox->rx_start + ALIGN(sizeof(*rsp_hdr), MBOX_MSG_ALIGN);
 
-	for (id = 0; id < vf_mbox->up_num_msgs; id++) {
+	for (id = 0; id < num_msgs; id++) {
 		msg = (struct mbox_msghdr *)(mdev->mbase + offset);
 		otx2vf_process_mbox_msg_up(vf, msg);
 		offset = mbox->rx_start + msg->next_msgoff;
@@ -178,40 +185,48 @@ static irqreturn_t otx2vf_vfaf_mbox_intr_handler(int irq, void *vf_irq)
 	struct otx2_mbox_dev *mdev;
 	struct otx2_mbox *mbox;
 	struct mbox_hdr *hdr;
+	u64 mbox_data;
 
 	/* Clear the IRQ */
 	otx2_write64(vf, RVU_VF_INT, BIT_ULL(0));
 
+	mbox_data = otx2_read64(vf, RVU_VF_VFPF_MBOX0);
+
 	/* Read latest mbox data */
 	smp_rmb();
 
-	/* Check for PF => VF response messages */
-	mbox = &vf->mbox.mbox;
-	mdev = &mbox->dev[0];
-	otx2_sync_mbox_bbuf(mbox, 0);
+	if (mbox_data & MBOX_DOWN_MSG) {
+		mbox_data &= ~MBOX_DOWN_MSG;
+		otx2_write64(vf, RVU_VF_VFPF_MBOX0, mbox_data);
 
-	trace_otx2_msg_interrupt(mbox->pdev, "PF to VF", BIT_ULL(0));
+		/* Check for PF => VF response messages */
+		mbox = &vf->mbox.mbox;
+		mdev = &mbox->dev[0];
+		otx2_sync_mbox_bbuf(mbox, 0);
 
-	hdr = (struct mbox_hdr *)(mdev->mbase + mbox->rx_start);
-	if (hdr->num_msgs) {
-		vf->mbox.num_msgs = hdr->num_msgs;
-		hdr->num_msgs = 0;
-		memset(mbox->hwbase + mbox->rx_start, 0,
-		       ALIGN(sizeof(struct mbox_hdr), sizeof(u64)));
-		queue_work(vf->mbox_wq, &vf->mbox.mbox_wrk);
+		hdr = (struct mbox_hdr *)(mdev->mbase + mbox->rx_start);
+		if (hdr->num_msgs)
+			queue_work(vf->mbox_wq, &vf->mbox.mbox_wrk);
+
+		trace_otx2_msg_interrupt(mbox->pdev, "DOWN reply from PF to VF",
+					 BIT_ULL(0));
 	}
-	/* Check for PF => VF notification messages */
-	mbox = &vf->mbox.mbox_up;
-	mdev = &mbox->dev[0];
-	otx2_sync_mbox_bbuf(mbox, 0);
 
-	hdr = (struct mbox_hdr *)(mdev->mbase + mbox->rx_start);
-	if (hdr->num_msgs) {
-		vf->mbox.up_num_msgs = hdr->num_msgs;
-		hdr->num_msgs = 0;
-		memset(mbox->hwbase + mbox->rx_start, 0,
-		       ALIGN(sizeof(struct mbox_hdr), sizeof(u64)));
-		queue_work(vf->mbox_wq, &vf->mbox.mbox_up_wrk);
+	if (mbox_data & MBOX_UP_MSG) {
+		mbox_data &= ~MBOX_UP_MSG;
+		otx2_write64(vf, RVU_VF_VFPF_MBOX0, mbox_data);
+
+		/* Check for PF => VF notification messages */
+		mbox = &vf->mbox.mbox_up;
+		mdev = &mbox->dev[0];
+		otx2_sync_mbox_bbuf(mbox, 0);
+
+		hdr = (struct mbox_hdr *)(mdev->mbase + mbox->rx_start);
+		if (hdr->num_msgs)
+			queue_work(vf->mbox_wq, &vf->mbox.mbox_up_wrk);
+
+		trace_otx2_msg_interrupt(mbox->pdev, "UP message from PF to VF",
+					 BIT_ULL(0));
 	}
 
 	return IRQ_HANDLED;
@@ -760,8 +775,8 @@ static void otx2vf_remove(struct pci_dev *pdev)
 	otx2_mcam_flow_del(vf);
 	otx2_shutdown_tc(vf);
 	otx2_shutdown_qos(vf);
-	otx2vf_disable_mbox_intr(vf);
 	otx2_detach_resources(&vf->mbox);
+	otx2vf_disable_mbox_intr(vf);
 	free_percpu(vf->hw.lmt_info);
 	if (test_bit(CN10K_LMTST, &vf->hw.cap_flag))
 		qmem_free(vf->dev, vf->dync_lmt);
@@ -677,8 +677,7 @@ static int mtk_mac_finish(struct phylink_config *config, unsigned int mode,
 	mcr_cur = mtk_r32(mac->hw, MTK_MAC_MCR(mac->id));
 	mcr_new = mcr_cur;
 	mcr_new |= MAC_MCR_IPG_CFG | MAC_MCR_FORCE_MODE |
-		   MAC_MCR_BACKOFF_EN | MAC_MCR_BACKPR_EN | MAC_MCR_FORCE_LINK |
-		   MAC_MCR_RX_FIFO_CLR_DIS;
+		   MAC_MCR_BACKOFF_EN | MAC_MCR_BACKPR_EN | MAC_MCR_RX_FIFO_CLR_DIS;
 
 	/* Only update control register when needed! */
 	if (mcr_new != mcr_cur)
@@ -694,7 +693,7 @@ static void mtk_mac_link_down(struct phylink_config *config, unsigned int mode,
 					   phylink_config);
 	u32 mcr = mtk_r32(mac->hw, MTK_MAC_MCR(mac->id));
 
-	mcr &= ~(MAC_MCR_TX_EN | MAC_MCR_RX_EN);
+	mcr &= ~(MAC_MCR_TX_EN | MAC_MCR_RX_EN | MAC_MCR_FORCE_LINK);
 	mtk_w32(mac->hw, mcr, MTK_MAC_MCR(mac->id));
 }
 
@@ -803,7 +802,7 @@ static void mtk_mac_link_up(struct phylink_config *config,
 	if (rx_pause)
 		mcr |= MAC_MCR_FORCE_RX_FC;
 
-	mcr |= MAC_MCR_TX_EN | MAC_MCR_RX_EN;
+	mcr |= MAC_MCR_TX_EN | MAC_MCR_RX_EN | MAC_MCR_FORCE_LINK;
 	mtk_w32(mac->hw, mcr, MTK_MAC_MCR(mac->id));
 }
 
@@ -994,7 +994,7 @@ void mtk_ppe_start(struct mtk_ppe *ppe)
 			 MTK_PPE_KEEPALIVE_DISABLE) |
 	      FIELD_PREP(MTK_PPE_TB_CFG_HASH_MODE, 1) |
 	      FIELD_PREP(MTK_PPE_TB_CFG_SCAN_MODE,
-			 MTK_PPE_SCAN_MODE_KEEPALIVE_AGE) |
+			 MTK_PPE_SCAN_MODE_CHECK_AGE) |
 	      FIELD_PREP(MTK_PPE_TB_CFG_ENTRY_NUM,
 			 MTK_PPE_ENTRIES_SHIFT);
 	if (mtk_is_netsys_v2_or_greater(ppe->eth))
@@ -1090,17 +1090,21 @@ int mtk_ppe_stop(struct mtk_ppe *ppe)
 
 	mtk_ppe_cache_enable(ppe, false);
 
-	/* disable offload engine */
-	ppe_clear(ppe, MTK_PPE_GLO_CFG, MTK_PPE_GLO_CFG_EN);
-	ppe_w32(ppe, MTK_PPE_FLOW_CFG, 0);
-
 	/* disable aging */
 	val = MTK_PPE_TB_CFG_AGE_NON_L4 |
 	      MTK_PPE_TB_CFG_AGE_UNBIND |
 	      MTK_PPE_TB_CFG_AGE_TCP |
 	      MTK_PPE_TB_CFG_AGE_UDP |
-	      MTK_PPE_TB_CFG_AGE_TCP_FIN;
+	      MTK_PPE_TB_CFG_AGE_TCP_FIN |
+	      MTK_PPE_TB_CFG_SCAN_MODE;
 	ppe_clear(ppe, MTK_PPE_TB_CFG, val);
 
-	return mtk_ppe_wait_busy(ppe);
+	if (mtk_ppe_wait_busy(ppe))
+		return -ETIMEDOUT;
+
+	/* disable offload engine */
+	ppe_clear(ppe, MTK_PPE_GLO_CFG, MTK_PPE_GLO_CFG_EN);
+	ppe_w32(ppe, MTK_PPE_FLOW_CFG, 0);
+
+	return 0;
+}
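The reordering in mtk_ppe_stop() amounts to a standard hardware quiesce pattern: stop queuing new work (clear the aging and scan-mode bits), wait for the block to go idle, and only then switch it off, reporting a stuck busy flag instead of ignoring it. Schematically (our sketch of the control flow above)::

    ppe_clear(ppe, MTK_PPE_TB_CFG, val);	/* no new aging/scan work */

    if (mtk_ppe_wait_busy(ppe))
            return -ETIMEDOUT;			/* still busy: don't yank the engine */

    ppe_clear(ppe, MTK_PPE_GLO_CFG, MTK_PPE_GLO_CFG_EN);	/* now safe to disable */
    ppe_w32(ppe, MTK_PPE_FLOW_CFG, 0);
    return 0;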
@@ -571,7 +571,7 @@ static int txgbe_clock_register(struct txgbe *txgbe)
 	char clk_name[32];
 	struct clk *clk;
 
-	snprintf(clk_name, sizeof(clk_name), "i2c_designware.%d",
+	snprintf(clk_name, sizeof(clk_name), "i2c_dw.%d",
 		 pci_dev_id(pdev));
 
 	clk = clk_register_fixed_rate(NULL, clk_name, NULL, 0, 156250000);
@@ -2831,8 +2831,8 @@ EXPORT_SYMBOL(genphy_resume);
 int genphy_loopback(struct phy_device *phydev, bool enable)
 {
 	if (enable) {
-		u16 val, ctl = BMCR_LOOPBACK;
-		int ret;
+		u16 ctl = BMCR_LOOPBACK;
+		int ret, val;
 
 		ctl |= mii_bmcr_encode_fixed(phydev->speed, phydev->duplex);
 
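The swapped declarations fix a subtle bug: MDIO accessors return int, either a negative errno or the 16-bit register value, so storing the result in a u16 silently discards errors. The usual pattern (our sketch, not the function body above)::

    #include <linux/phy.h>
    #include <linux/mii.h>

    int val = phy_read(phydev, MII_BMSR);

    if (val < 0)
            return val;	/* a u16 'val' would have destroyed the errno */

    if (val & BMSR_LSTATUS)
            phydev->link = 1;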
@@ -1464,8 +1464,6 @@ static netdev_features_t veth_fix_features(struct net_device *dev,
 		if (peer_priv->_xdp_prog)
 			features &= ~NETIF_F_GSO_SOFTWARE;
 	}
-	if (priv->_xdp_prog)
-		features |= NETIF_F_GRO;
 
 	return features;
 }
@@ -1569,14 +1567,6 @@ static int veth_xdp_set(struct net_device *dev, struct bpf_prog *prog,
 		}
 
 		if (!old_prog) {
-			if (!veth_gro_requested(dev)) {
-				/* user-space did not require GRO, but adding
-				 * XDP is supposed to get GRO working
-				 */
-				dev->features |= NETIF_F_GRO;
-				netdev_features_change(dev);
-			}
-
 			peer->hw_features &= ~NETIF_F_GSO_SOFTWARE;
 			peer->max_mtu = max_mtu;
 		}
@@ -1592,14 +1582,6 @@ static int veth_xdp_set(struct net_device *dev, struct bpf_prog *prog,
 			if (dev->flags & IFF_UP)
 				veth_disable_xdp(dev);
 
-			/* if user-space did not require GRO, since adding XDP
-			 * enabled it, clear it now
-			 */
-			if (!veth_gro_requested(dev)) {
-				dev->features &= ~NETIF_F_GRO;
-				netdev_features_change(dev);
-			}
-
 			if (peer) {
 				peer->hw_features |= NETIF_F_GSO_SOFTWARE;
 				peer->max_mtu = ETH_MAX_MTU;
@@ -382,12 +382,12 @@ vmxnet3_process_xdp(struct vmxnet3_adapter *adapter,
 	page = rbi->page;
 	dma_sync_single_for_cpu(&adapter->pdev->dev,
 				page_pool_get_dma_addr(page) +
-				rq->page_pool->p.offset, rcd->len,
+				rq->page_pool->p.offset, rbi->len,
 				page_pool_get_dma_dir(rq->page_pool));
 
-	xdp_init_buff(&xdp, rbi->len, &rq->xdp_rxq);
+	xdp_init_buff(&xdp, PAGE_SIZE, &rq->xdp_rxq);
 	xdp_prepare_buff(&xdp, page_address(page), rq->page_pool->p.offset,
-			 rcd->len, false);
+			 rbi->len, false);
 	xdp_buff_clear_frags_flag(&xdp);
 
 	xdp_prog = rcu_dereference(rq->adapter->xdp_bpf_prog);
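The vmxnet3 fix separates two sizes that had been conflated: xdp_init_buff() must receive the full buffer size (here PAGE_SIZE) so tailroom for struct skb_shared_info is reserved, while only the frame data length goes to xdp_prepare_buff(). The general shape (our sketch; frame_len and headroom are placeholders for the driver fields used above)::

    #include <net/xdp.h>

    struct xdp_buff xdp;

    xdp_init_buff(&xdp, PAGE_SIZE, &rq->xdp_rxq);	/* whole buffer, incl. tailroom */
    xdp_prepare_buff(&xdp, page_address(page),
                     headroom,		/* rq->page_pool->p.offset above */
                     frame_len,		/* rbi->len above, not rcd->len */
                     false);

Passing the frame length as the frame size is what left the "reserved tailroom" missing in the pull summary's vmxnet3 entry.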
@@ -780,7 +780,7 @@ static const struct of_device_id qmc_hdlc_id_table[] = {
 	{ .compatible = "fsl,qmc-hdlc" },
 	{} /* sentinel */
 };
-MODULE_DEVICE_TABLE(of, qmc_hdlc_driver);
+MODULE_DEVICE_TABLE(of, qmc_hdlc_id_table);
 
 static struct platform_driver qmc_hdlc_driver = {
 	.driver = {
@@ -237,7 +237,6 @@ static const struct net_device_ops netdev_ops = {
 	.ndo_open		= wg_open,
 	.ndo_stop		= wg_stop,
 	.ndo_start_xmit		= wg_xmit,
-	.ndo_get_stats64	= dev_get_tstats64
 };
 
 static void wg_destruct(struct net_device *dev)
@@ -262,7 +261,6 @@ static void wg_destruct(struct net_device *dev)
 	rcu_barrier(); /* Wait for all the peers to be actually freed. */
 	wg_ratelimiter_uninit();
 	memzero_explicit(&wg->static_identity, sizeof(wg->static_identity));
-	free_percpu(dev->tstats);
 	kvfree(wg->index_hashtable);
 	kvfree(wg->peer_hashtable);
 	mutex_unlock(&wg->device_update_lock);
@@ -297,6 +295,7 @@ static void wg_setup(struct net_device *dev)
 	dev->hw_enc_features |= WG_NETDEV_FEATURES;
 	dev->mtu = ETH_DATA_LEN - overhead;
 	dev->max_mtu = round_down(INT_MAX, MESSAGE_PADDING_MULTIPLE) - overhead;
+	dev->pcpu_stat_type = NETDEV_PCPU_STAT_TSTATS;
 
 	SET_NETDEV_DEVTYPE(dev, &device_type);
 
@@ -331,14 +330,10 @@ static int wg_newlink(struct net *src_net, struct net_device *dev,
 	if (!wg->index_hashtable)
 		goto err_free_peer_hashtable;
 
-	dev->tstats = netdev_alloc_pcpu_stats(struct pcpu_sw_netstats);
-	if (!dev->tstats)
-		goto err_free_index_hashtable;
-
 	wg->handshake_receive_wq = alloc_workqueue("wg-kex-%s",
 			WQ_CPU_INTENSIVE | WQ_FREEZABLE, 0, dev->name);
 	if (!wg->handshake_receive_wq)
-		goto err_free_tstats;
+		goto err_free_index_hashtable;
 
 	wg->handshake_send_wq = alloc_workqueue("wg-kex-%s",
 			WQ_UNBOUND | WQ_FREEZABLE, 0, dev->name);
@@ -397,8 +392,6 @@ static int wg_newlink(struct net *src_net, struct net_device *dev,
 	destroy_workqueue(wg->handshake_send_wq);
err_destroy_handshake_receive:
 	destroy_workqueue(wg->handshake_receive_wq);
-err_free_tstats:
-	free_percpu(dev->tstats);
 err_free_index_hashtable:
 	kvfree(wg->index_hashtable);
 err_free_peer_hashtable:
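The wireguard change adopts the core-managed per-CPU stats mechanism: setting dev->pcpu_stat_type before registration makes the core allocate dev->tstats in register_netdevice() and free it on unregistration, so the driver's manual netdev_alloc_pcpu_stats()/free_percpu() calls and the explicit ndo_get_stats64 hook can all go. Minimal shape of the pattern (our sketch with hypothetical names)::

    static void my_setup(struct net_device *dev)
    {
            /* core allocates and frees dev->tstats for us */
            dev->pcpu_stat_type = NETDEV_PCPU_STAT_TSTATS;
    }

    /* the datapath keeps updating the counters as before, e.g. on TX: */
    dev_sw_netstats_tx_add(dev, 1, skb->len);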
|  | @ -164,8 +164,8 @@ get_peer(struct wg_peer *peer, struct sk_buff *skb, struct dump_ctx *ctx) | |||
| 	if (!allowedips_node) | ||||
| 		goto no_allowedips; | ||||
| 	if (!ctx->allowedips_seq) | ||||
| 		ctx->allowedips_seq = peer->device->peer_allowedips.seq; | ||||
| 	else if (ctx->allowedips_seq != peer->device->peer_allowedips.seq) | ||||
| 		ctx->allowedips_seq = ctx->wg->peer_allowedips.seq; | ||||
| 	else if (ctx->allowedips_seq != ctx->wg->peer_allowedips.seq) | ||||
| 		goto no_allowedips; | ||||
| 
 | ||||
| 	allowedips_nest = nla_nest_start(skb, WGPEER_A_ALLOWEDIPS); | ||||
|  | @ -255,17 +255,17 @@ static int wg_get_device_dump(struct sk_buff *skb, struct netlink_callback *cb) | |||
| 	if (!peers_nest) | ||||
| 		goto out; | ||||
| 	ret = 0; | ||||
| -	/* If the last cursor was removed via list_del_init in peer_remove, then | ||||
| +	lockdep_assert_held(&wg->device_update_lock); | ||||
| +	/* If the last cursor was removed in peer_remove or peer_remove_all, then | ||||
| 	 * we just treat this the same as there being no more peers left. The | ||||
| 	 * reason is that seq_nr should indicate to userspace that this isn't a | ||||
| 	 * coherent dump anyway, so they'll try again. | ||||
| 	 */ | ||||
| 	if (list_empty(&wg->peer_list) || | ||||
| -	    (ctx->next_peer && list_empty(&ctx->next_peer->peer_list))) { | ||||
| +	    (ctx->next_peer && ctx->next_peer->is_dead)) { | ||||
| 		nla_nest_cancel(skb, peers_nest); | ||||
| 		goto out; | ||||
| 	} | ||||
| -	lockdep_assert_held(&wg->device_update_lock); | ||||
| 	peer = list_prepare_entry(ctx->next_peer, &wg->peer_list, peer_list); | ||||
| 	list_for_each_entry_continue(peer, &wg->peer_list, peer_list) { | ||||
| 		if (get_peer(peer, skb, ctx)) { | ||||
|  |  | |||
|  | @ -251,7 +251,7 @@ static bool decrypt_packet(struct sk_buff *skb, struct noise_keypair *keypair) | |||
| 
 | ||||
| 	if (unlikely(!READ_ONCE(keypair->receiving.is_valid) || | ||||
| 		  wg_birthdate_has_expired(keypair->receiving.birthdate, REJECT_AFTER_TIME) || | ||||
| -		  keypair->receiving_counter.counter >= REJECT_AFTER_MESSAGES)) { | ||||
| +		  READ_ONCE(keypair->receiving_counter.counter) >= REJECT_AFTER_MESSAGES)) { | ||||
| 		WRITE_ONCE(keypair->receiving.is_valid, false); | ||||
| 		return false; | ||||
| 	} | ||||
|  | @ -318,7 +318,7 @@ static bool counter_validate(struct noise_replay_counter *counter, u64 their_cou | |||
| 		for (i = 1; i <= top; ++i) | ||||
| 			counter->backtrack[(i + index_current) & | ||||
| 				((COUNTER_BITS_TOTAL / BITS_PER_LONG) - 1)] = 0; | ||||
| -		counter->counter = their_counter; | ||||
| +		WRITE_ONCE(counter->counter, their_counter); | ||||
| 	} | ||||
| 
 | ||||
| 	index &= (COUNTER_BITS_TOTAL / BITS_PER_LONG) - 1; | ||||
|  | @ -463,7 +463,7 @@ int wg_packet_rx_poll(struct napi_struct *napi, int budget) | |||
| 			net_dbg_ratelimited("%s: Packet has invalid nonce %llu (max %llu)\n", | ||||
| 					    peer->device->dev->name, | ||||
| 					    PACKET_CB(skb)->nonce, | ||||
| -				    keypair->receiving_counter.counter); | ||||
| +				    READ_ONCE(keypair->receiving_counter.counter)); | ||||
| 			goto next; | ||||
| 		} | ||||
| 
 | ||||
|  |  | |||
|  | @ -991,7 +991,7 @@ struct qman_portal { | |||
| 	/* linked-list of CSCN handlers. */ | ||||
| 	struct list_head cgr_cbs; | ||||
| 	/* list lock */ | ||||
| -	spinlock_t cgr_lock; | ||||
| +	raw_spinlock_t cgr_lock; | ||||
| 	struct work_struct congestion_work; | ||||
| 	struct work_struct mr_work; | ||||
| 	char irqname[MAX_IRQNAME]; | ||||
|  | @ -1281,7 +1281,7 @@ static int qman_create_portal(struct qman_portal *portal, | |||
| 		/* if the given mask is NULL, assume all CGRs can be seen */ | ||||
| 		qman_cgrs_fill(&portal->cgrs[0]); | ||||
| 	INIT_LIST_HEAD(&portal->cgr_cbs); | ||||
| -	spin_lock_init(&portal->cgr_lock); | ||||
| +	raw_spin_lock_init(&portal->cgr_lock); | ||||
| 	INIT_WORK(&portal->congestion_work, qm_congestion_task); | ||||
| 	INIT_WORK(&portal->mr_work, qm_mr_process_task); | ||||
| 	portal->bits = 0; | ||||
|  | @ -1456,11 +1456,14 @@ static void qm_congestion_task(struct work_struct *work) | |||
| 	union qm_mc_result *mcr; | ||||
| 	struct qman_cgr *cgr; | ||||
| 
 | ||||
| -	spin_lock(&p->cgr_lock); | ||||
| +	/* | ||||
| +	 * FIXME: QM_MCR_TIMEOUT is 10ms, which is too long for a raw spinlock! | ||||
| +	 */ | ||||
| +	raw_spin_lock_irq(&p->cgr_lock); | ||||
| 	qm_mc_start(&p->p); | ||||
| 	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYCONGESTION); | ||||
| 	if (!qm_mc_result_timeout(&p->p, &mcr)) { | ||||
| -		spin_unlock(&p->cgr_lock); | ||||
| +		raw_spin_unlock_irq(&p->cgr_lock); | ||||
| 		dev_crit(p->config->dev, "QUERYCONGESTION timeout\n"); | ||||
| 		qman_p_irqsource_add(p, QM_PIRQ_CSCI); | ||||
| 		return; | ||||
|  | @ -1476,7 +1479,7 @@ static void qm_congestion_task(struct work_struct *work) | |||
| 	list_for_each_entry(cgr, &p->cgr_cbs, node) | ||||
| 		if (cgr->cb && qman_cgrs_get(&c, cgr->cgrid)) | ||||
| 			cgr->cb(p, cgr, qman_cgrs_get(&rr, cgr->cgrid)); | ||||
| -	spin_unlock(&p->cgr_lock); | ||||
| +	raw_spin_unlock_irq(&p->cgr_lock); | ||||
| 	qman_p_irqsource_add(p, QM_PIRQ_CSCI); | ||||
| } | ||||
| 
 | ||||
|  | @ -2440,7 +2443,7 @@ int qman_create_cgr(struct qman_cgr *cgr, u32 flags, | |||
| 	preempt_enable(); | ||||
| 
 | ||||
| 	cgr->chan = p->config->channel; | ||||
| -	spin_lock(&p->cgr_lock); | ||||
| +	raw_spin_lock_irq(&p->cgr_lock); | ||||
| 
 | ||||
| 	if (opts) { | ||||
| 		struct qm_mcc_initcgr local_opts = *opts; | ||||
|  | @ -2477,7 +2480,7 @@ int qman_create_cgr(struct qman_cgr *cgr, u32 flags, | |||
| 	    qman_cgrs_get(&p->cgrs[1], cgr->cgrid)) | ||||
| 		cgr->cb(p, cgr, 1); | ||||
| out: | ||||
| -	spin_unlock(&p->cgr_lock); | ||||
| +	raw_spin_unlock_irq(&p->cgr_lock); | ||||
| 	put_affine_portal(); | ||||
| 	return ret; | ||||
| } | ||||
|  | @ -2512,7 +2515,7 @@ int qman_delete_cgr(struct qman_cgr *cgr) | |||
| 		return -EINVAL; | ||||
| 
 | ||||
| 	memset(&local_opts, 0, sizeof(struct qm_mcc_initcgr)); | ||||
| -	spin_lock_irqsave(&p->cgr_lock, irqflags); | ||||
| +	raw_spin_lock_irqsave(&p->cgr_lock, irqflags); | ||||
| 	list_del(&cgr->node); | ||||
| 	/*
 | ||||
| 	 * If there are no other CGR objects for this CGRID in the list, | ||||
|  | @ -2537,7 +2540,7 @@ int qman_delete_cgr(struct qman_cgr *cgr) | |||
| 		/* add back to the list */ | ||||
| 		list_add(&cgr->node, &p->cgr_cbs); | ||||
| release_lock: | ||||
| -	spin_unlock_irqrestore(&p->cgr_lock, irqflags); | ||||
| +	raw_spin_unlock_irqrestore(&p->cgr_lock, irqflags); | ||||
| 	put_affine_portal(); | ||||
| 	return ret; | ||||
| } | ||||
|  | @ -2577,9 +2580,9 @@ static int qman_update_cgr(struct qman_cgr *cgr, struct qm_mcc_initcgr *opts) | |||
| 	if (!p) | ||||
| 		return -EINVAL; | ||||
| 
 | ||||
| -	spin_lock_irqsave(&p->cgr_lock, irqflags); | ||||
| +	raw_spin_lock_irqsave(&p->cgr_lock, irqflags); | ||||
| 	ret = qm_modify_cgr(cgr, 0, opts); | ||||
| -	spin_unlock_irqrestore(&p->cgr_lock, irqflags); | ||||
| +	raw_spin_unlock_irqrestore(&p->cgr_lock, irqflags); | ||||
| 	put_affine_portal(); | ||||
| 	return ret; | ||||
| } | ||||
|  |  | |||
|  | @ -522,17 +522,18 @@ static inline void bitmap_replace(unsigned long *dst, | |||
|  * | ||||
|  * (Bits 0, 1, 2, 3, 4, 5 are copied to the bits 0, 1, 4, 8, 9, 12) | ||||
|  * | ||||
| - * A more 'visual' description of the operation: | ||||
| - * src:  0000000001011010 | ||||
| - *                 |||||| | ||||
| - *          +------+||||| | ||||
| - *          |  +----+|||| | ||||
| - *          |  |+----+||| | ||||
| - *          |  ||   +-+|| | ||||
| - *          |  ||   |  || | ||||
| - * mask: ...v..vv...v..vv | ||||
| - *       ...0..11...0..10 | ||||
| - * dst:  0000001100000010 | ||||
| + * A more 'visual' description of the operation:: | ||||
| + * | ||||
| + *	src:  0000000001011010 | ||||
| + *	                |||||| | ||||
| + *	         +------+||||| | ||||
| + *	         |  +----+|||| | ||||
| + *	         |  |+----+||| | ||||
| + *	         |  ||   +-+|| | ||||
| + *	         |  ||   |  || | ||||
| + *	mask: ...v..vv...v..vv | ||||
| + *	      ...0..11...0..10 | ||||
| + *	dst:  0000001100000010 | ||||
|  * | ||||
|  * A relationship exists between bitmap_scatter() and bitmap_gather(). | ||||
|  * bitmap_gather() can be seen as the 'reverse' bitmap_scatter() operation. | ||||
|  | @ -568,16 +569,17 @@ static inline void bitmap_scatter(unsigned long *dst, const unsigned long *src, | |||
|  * | ||||
|  * (Bits 0, 1, 4, 8, 9, 12 are copied to the bits 0, 1, 2, 3, 4, 5) | ||||
|  * | ||||
| - * A more 'visual' description of the operation: | ||||
| - * mask: ...v..vv...v..vv | ||||
| - * src:  0000001100000010 | ||||
| - *          ^  ^^   ^   0 | ||||
| - *          |  ||   |  10 | ||||
| - *          |  ||   > 010 | ||||
| - *          |  |+--> 1010 | ||||
| - *          |  +--> 11010 | ||||
| - *          +----> 011010 | ||||
| - * dst:  0000000000011010 | ||||
| + * A more 'visual' description of the operation:: | ||||
| + * | ||||
| + *	mask: ...v..vv...v..vv | ||||
| + *	src:  0000001100000010 | ||||
| + *	         ^  ^^   ^   0 | ||||
| + *	         |  ||   |  10 | ||||
| + *	         |  ||   > 010 | ||||
| + *	         |  |+--> 1010 | ||||
| + *	         |  +--> 11010 | ||||
| + *	         +----> 011010 | ||||
| + *	dst:  0000000000011010 | ||||
|  * | ||||
|  * A relationship exists between bitmap_gather() and bitmap_scatter(). See | ||||
|  * bitmap_scatter() for the bitmap scatter detailed operations. | ||||
|  |  | |||
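The scatter/gather kernel-doc above implies a round-trip property: gathering a scattered bitmap with the same mask recovers the low-order bits of the original source. A minimal sketch of that property, assuming a 16-bit bitmap and the 0x1313 mask used in the doc examples (variable names here are illustrative, not kernel code):

	DECLARE_BITMAP(src, 16);
	DECLARE_BITMAP(scattered, 16);
	DECLARE_BITMAP(back, 16);
	unsigned long mask = 0x1313UL;	/* ...v..vv...v..vv, weight 6 */

	bitmap_zero(src, 16);
	bitmap_set(src, 0, 3);				/* src = 0b0000000000000111 */

	bitmap_scatter(scattered, src, &mask, 16);	/* scattered = 0b0000000000010011 */
	bitmap_gather(back, scattered, &mask, 16);	/* back == src again */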
|  | @ -2072,6 +2072,7 @@ struct net_device { | |||
| 		struct pcpu_sw_netstats __percpu	*tstats; | ||||
| 		struct pcpu_dstats __percpu		*dstats; | ||||
| 	}; | ||||
| +	unsigned long		state; | ||||
| 	unsigned int		flags; | ||||
| 	unsigned short		hard_header_len; | ||||
| 	netdev_features_t	features; | ||||
|  | @ -2117,7 +2118,6 @@ struct net_device { | |||
| 	 *	part of the usual set specified in Space.c. | ||||
| 	 */ | ||||
| 
 | ||||
| -	unsigned long		state; | ||||
| 
 | ||||
| 	struct list_head	dev_list; | ||||
| 	struct list_head	napi_list; | ||||
|  |  | |||
|  | @ -247,6 +247,37 @@ do { \ | |||
| 	cond_resched(); \ | ||||
| } while (0) | ||||
| 
 | ||||
| /**
 | ||||
|  * rcu_softirq_qs_periodic - Report RCU and RCU-Tasks quiescent states | ||||
|  * @old_ts: jiffies at start of processing. | ||||
|  * | ||||
|  * This helper is for long-running softirq handlers, such as NAPI threads in | ||||
|  * networking. The caller should initialize the variable passed in as @old_ts | ||||
|  * at the beginning of the softirq handler. When invoked frequently, this macro | ||||
|  * will invoke rcu_softirq_qs() every 100 milliseconds thereafter, which will | ||||
|  * provide both RCU and RCU-Tasks quiescent states. Note that this macro | ||||
|  * modifies its old_ts argument. | ||||
|  * | ||||
|  * Because regions of code that have disabled softirq act as RCU read-side | ||||
|  * critical sections, this macro should be invoked with softirq (and | ||||
|  * preemption) enabled. | ||||
|  * | ||||
|  * The macro is not needed when CONFIG_PREEMPT_RT is defined. RT kernels would | ||||
|  * have more chance to invoke schedule() calls and provide necessary quiescent | ||||
|  * states. As a contrast, calling cond_resched() only won't achieve the same | ||||
|  * effect because cond_resched() does not provide RCU-Tasks quiescent states. | ||||
|  */ | ||||
| #define rcu_softirq_qs_periodic(old_ts) \ | ||||
| do { \ | ||||
| 	if (!IS_ENABLED(CONFIG_PREEMPT_RT) && \ | ||||
| 	    time_after(jiffies, (old_ts) + HZ / 10)) { \ | ||||
| 		preempt_disable(); \ | ||||
| 		rcu_softirq_qs(); \ | ||||
| 		preempt_enable(); \ | ||||
| 		(old_ts) = jiffies; \ | ||||
| 	} \ | ||||
| } while (0) | ||||
| 
 | ||||
| /*
 | ||||
|  * Infrastructure to implement the synchronize_() primitives in | ||||
|  * TREE_RCU and rcu_barrier_() primitives in TINY_RCU. | ||||
|  |  | |||
|  | @ -822,9 +822,9 @@ typedef unsigned char *sk_buff_data_t; | |||
|  *	@decrypted: Decrypted SKB | ||||
|  *	@slow_gro: state present at GRO time, slower prepare step required | ||||
|  *	@mono_delivery_time: When set, skb->tstamp has the | ||||
| - *		delivery_time in mono clock base (i.e., EDT) or a clock base chosen | ||||
| - *		by SO_TXTIME. If zero, skb->tstamp has the (rcv) timestamp at | ||||
| - *		ingress. | ||||
| + *		delivery_time in mono clock base (i.e. EDT).  Otherwise, the | ||||
| + *		skb->tstamp has the (rcv) timestamp at ingress and | ||||
| + *		delivery_time at egress. | ||||
|  *	@napi_id: id of the NAPI struct this skb came from | ||||
|  *	@sender_cpu: (aka @napi_id) source CPU in XPS | ||||
|  *	@alloc_cpu: CPU which did the skb allocation. | ||||
|  | @ -3523,6 +3523,16 @@ int skb_cow_data_for_xdp(struct page_pool *pool, struct sk_buff **pskb, | |||
| 			 struct bpf_prog *prog); | ||||
| bool napi_pp_put_page(struct page *page, bool napi_safe); | ||||
| 
 | ||||
| static inline void | ||||
| skb_page_unref(const struct sk_buff *skb, struct page *page, bool napi_safe) | ||||
| { | ||||
| #ifdef CONFIG_PAGE_POOL | ||||
| 	if (skb->pp_recycle && napi_pp_put_page(page, napi_safe)) | ||||
| 		return; | ||||
| #endif | ||||
| 	put_page(page); | ||||
| } | ||||
| 
 | ||||
| static inline void | ||||
| napi_frag_unref(skb_frag_t *frag, bool recycle, bool napi_safe) | ||||
| { | ||||
|  |  | |||
|  | @ -422,13 +422,6 @@ extern long __sys_recvmsg_sock(struct socket *sock, struct msghdr *msg, | |||
| 			       struct user_msghdr __user *umsg, | ||||
| 			       struct sockaddr __user *uaddr, | ||||
| 			       unsigned int flags); | ||||
| -extern int sendmsg_copy_msghdr(struct msghdr *msg, | ||||
| -			       struct user_msghdr __user *umsg, unsigned flags, | ||||
| -			       struct iovec **iov); | ||||
| -extern int recvmsg_copy_msghdr(struct msghdr *msg, | ||||
| -			       struct user_msghdr __user *umsg, unsigned flags, | ||||
| -			       struct sockaddr __user **uaddr, | ||||
| -			       struct iovec **iov); | ||||
| extern int __copy_msghdr(struct msghdr *kmsg, | ||||
| 			 struct user_msghdr *umsg, | ||||
| 			 struct sockaddr __user **save_addr); | ||||
|  |  | |||
|  | @ -61,7 +61,11 @@ struct request_sock { | |||
| 	struct request_sock		*dl_next; | ||||
| 	u16				mss; | ||||
| 	u8				num_retrans; /* number of retransmits */ | ||||
| -	u8				syncookie:1; /* syncookie: encode tcpopts in timestamp */ | ||||
| +	u8				syncookie:1; /* True if | ||||
| +						      * 1) tcpopts needs to be encoded in | ||||
| +						      *    TS of SYN+ACK | ||||
| +						      * 2) ACK is validated by BPF kfunc. | ||||
| +						      */ | ||||
| 	u8				num_timeout:7; /* number of timeouts */ | ||||
| 	u32				ts_recent; | ||||
| 	struct timer_list		rsk_timer; | ||||
|  | @ -144,6 +148,7 @@ reqsk_alloc(const struct request_sock_ops *ops, struct sock *sk_listener, | |||
| 	sk_node_init(&req_to_sk(req)->sk_node); | ||||
| 	sk_tx_queue_clear(req_to_sk(req)); | ||||
| 	req->saved_syn = NULL; | ||||
| 	req->syncookie = 0; | ||||
| 	req->timeout = 0; | ||||
| 	req->num_timeout = 0; | ||||
| 	req->num_retrans = 0; | ||||
|  |  | |||
|  | @ -263,6 +263,7 @@ static int cpu_map_bpf_prog_run(struct bpf_cpu_map_entry *rcpu, void **frames, | |||
| static int cpu_map_kthread_run(void *data) | ||||
| { | ||||
| 	struct bpf_cpu_map_entry *rcpu = data; | ||||
| 	unsigned long last_qs = jiffies; | ||||
| 
 | ||||
| 	complete(&rcpu->kthread_running); | ||||
| 	set_current_state(TASK_INTERRUPTIBLE); | ||||
|  | @ -288,10 +289,12 @@ static int cpu_map_kthread_run(void *data) | |||
| 			if (__ptr_ring_empty(rcpu->queue)) { | ||||
| 				schedule(); | ||||
| 				sched = 1; | ||||
| 				last_qs = jiffies; | ||||
| 			} else { | ||||
| 				__set_current_state(TASK_RUNNING); | ||||
| 			} | ||||
| 		} else { | ||||
| 			rcu_softirq_qs_periodic(last_qs); | ||||
| 			sched = cond_resched(); | ||||
| 		} | ||||
| 
 | ||||
|  |  | |||
|  | @ -2245,7 +2245,7 @@ void dev_queue_xmit_nit(struct sk_buff *skb, struct net_device *dev) | |||
| 	rcu_read_lock(); | ||||
| again: | ||||
| 	list_for_each_entry_rcu(ptype, ptype_list, list) { | ||||
| -		if (ptype->ignore_outgoing) | ||||
| +		if (READ_ONCE(ptype->ignore_outgoing)) | ||||
| 			continue; | ||||
| 
 | ||||
| 		/* Never send packets back to the socket
 | ||||
|  | @ -6743,6 +6743,8 @@ static int napi_threaded_poll(void *data) | |||
| 	void *have; | ||||
| 
 | ||||
| 	while (!napi_thread_wait(napi)) { | ||||
| 		unsigned long last_qs = jiffies; | ||||
| 
 | ||||
| 		for (;;) { | ||||
| 			bool repoll = false; | ||||
| 
 | ||||
|  | @ -6767,6 +6769,7 @@ static int napi_threaded_poll(void *data) | |||
| 			if (!repoll) | ||||
| 				break; | ||||
| 
 | ||||
| 			rcu_softirq_qs_periodic(last_qs); | ||||
| 			cond_resched(); | ||||
| 		} | ||||
| 	} | ||||
|  | @ -11665,11 +11668,12 @@ static void __init net_dev_struct_check(void) | |||
| 
 | ||||
| 	/* TXRX read-mostly hotpath */ | ||||
| 	CACHELINE_ASSERT_GROUP_MEMBER(struct net_device, net_device_read_txrx, lstats); | ||||
| 	CACHELINE_ASSERT_GROUP_MEMBER(struct net_device, net_device_read_txrx, state); | ||||
| 	CACHELINE_ASSERT_GROUP_MEMBER(struct net_device, net_device_read_txrx, flags); | ||||
| 	CACHELINE_ASSERT_GROUP_MEMBER(struct net_device, net_device_read_txrx, hard_header_len); | ||||
| 	CACHELINE_ASSERT_GROUP_MEMBER(struct net_device, net_device_read_txrx, features); | ||||
| 	CACHELINE_ASSERT_GROUP_MEMBER(struct net_device, net_device_read_txrx, ip6_ptr); | ||||
| -	CACHELINE_ASSERT_GROUP_SIZE(struct net_device, net_device_read_txrx, 38); | ||||
| +	CACHELINE_ASSERT_GROUP_SIZE(struct net_device, net_device_read_txrx, 46); | ||||
| 
 | ||||
| 	/* RX read-mostly hotpath */ | ||||
| 	CACHELINE_ASSERT_GROUP_MEMBER(struct net_device, net_device_read_rx, ptype_specific); | ||||
|  |  | |||
|  | @ -193,12 +193,13 @@ devlink_get_from_attrs_lock(struct net *net, struct nlattr **attrs, | |||
| 	devname = nla_data(attrs[DEVLINK_ATTR_DEV_NAME]); | ||||
| 
 | ||||
| 	devlinks_xa_for_each_registered_get(net, index, devlink) { | ||||
| -		devl_dev_lock(devlink, dev_lock); | ||||
| -		if (devl_is_registered(devlink) && | ||||
| -		    strcmp(devlink->dev->bus->name, busname) == 0 && | ||||
| -		    strcmp(dev_name(devlink->dev), devname) == 0) | ||||
| -			return devlink; | ||||
| -		devl_dev_unlock(devlink, dev_lock); | ||||
| +		if (strcmp(devlink->dev->bus->name, busname) == 0 && | ||||
| +		    strcmp(dev_name(devlink->dev), devname) == 0) { | ||||
| +			devl_dev_lock(devlink, dev_lock); | ||||
| +			if (devl_is_registered(devlink)) | ||||
| +				return devlink; | ||||
| +			devl_dev_unlock(devlink, dev_lock); | ||||
| +		} | ||||
| 		devlink_put(devlink); | ||||
| 	} | ||||
| 
 | ||||
|  |  | |||
|  | @ -889,7 +889,7 @@ int devlink_nl_port_new_doit(struct sk_buff *skb, struct genl_info *info) | |||
| 		err = -ENOMEM; | ||||
| 		goto err_out_port_del; | ||||
| 	} | ||||
| -	err = devlink_nl_port_fill(msg, devlink_port, DEVLINK_CMD_NEW, | ||||
| +	err = devlink_nl_port_fill(msg, devlink_port, DEVLINK_CMD_PORT_NEW, | ||||
| 				   info->snd_portid, info->snd_seq, 0, NULL); | ||||
| 	if (WARN_ON_ONCE(err)) | ||||
| 		goto err_out_msg_free; | ||||
|  |  | |||
|  | @ -228,6 +228,10 @@ struct hsr_node *hsr_get_node(struct hsr_port *port, struct list_head *node_db, | |||
| 	 */ | ||||
| 	if (ethhdr->h_proto == htons(ETH_P_PRP) || | ||||
| 	    ethhdr->h_proto == htons(ETH_P_HSR)) { | ||||
| 		/* Check if skb contains hsr_ethhdr */ | ||||
| 		if (skb->mac_len < sizeof(struct hsr_ethhdr)) | ||||
| 			return NULL; | ||||
| 
 | ||||
| 		/* Use the existing sequence_nr from the tag as starting point
 | ||||
| 		 * for filtering duplicate frames. | ||||
| 		 */ | ||||
|  |  | |||
|  | @ -148,14 +148,21 @@ static struct notifier_block hsr_nb = { | |||
| 
 | ||||
| static int __init hsr_init(void) | ||||
| { | ||||
| -	int res; | ||||
| +	int err; | ||||
| 
 | ||||
| 	BUILD_BUG_ON(sizeof(struct hsr_tag) != HSR_HLEN); | ||||
| 
 | ||||
| -	register_netdevice_notifier(&hsr_nb); | ||||
| -	res = hsr_netlink_init(); | ||||
| +	err = register_netdevice_notifier(&hsr_nb); | ||||
| +	if (err) | ||||
| +		return err; | ||||
|  | ||||
| -	return res; | ||||
| +	err = hsr_netlink_init(); | ||||
| +	if (err) { | ||||
| +		unregister_netdevice_notifier(&hsr_nb); | ||||
| +		return err; | ||||
| +	} | ||||
| + | ||||
| +	return 0; | ||||
| } | ||||
| 
 | ||||
| static void __exit hsr_exit(void) | ||||
|  |  | |||
|  | @ -95,7 +95,7 @@ static inline struct scatterlist *esp_req_sg(struct crypto_aead *aead, | |||
| 			     __alignof__(struct scatterlist)); | ||||
| } | ||||
| 
 | ||||
| -static void esp_ssg_unref(struct xfrm_state *x, void *tmp) | ||||
| +static void esp_ssg_unref(struct xfrm_state *x, void *tmp, struct sk_buff *skb) | ||||
| { | ||||
| 	struct crypto_aead *aead = x->data; | ||||
| 	int extralen = 0; | ||||
|  | @ -114,7 +114,7 @@ static void esp_ssg_unref(struct xfrm_state *x, void *tmp) | |||
| 	 */ | ||||
| 	if (req->src != req->dst) | ||||
| 		for (sg = sg_next(req->src); sg; sg = sg_next(sg)) | ||||
| -			put_page(sg_page(sg)); | ||||
| +			skb_page_unref(skb, sg_page(sg), false); | ||||
| } | ||||
| 
 | ||||
| #ifdef CONFIG_INET_ESPINTCP | ||||
|  | @ -260,7 +260,7 @@ static void esp_output_done(void *data, int err) | |||
| 	} | ||||
| 
 | ||||
| 	tmp = ESP_SKB_CB(skb)->tmp; | ||||
| -	esp_ssg_unref(x, tmp); | ||||
| +	esp_ssg_unref(x, tmp, skb); | ||||
| 	kfree(tmp); | ||||
| 
 | ||||
| 	if (xo && (xo->flags & XFRM_DEV_RESUME)) { | ||||
|  | @ -639,7 +639,7 @@ int esp_output_tail(struct xfrm_state *x, struct sk_buff *skb, struct esp_info * | |||
| 	} | ||||
| 
 | ||||
| 	if (sg != dsg) | ||||
| -		esp_ssg_unref(x, tmp); | ||||
| +		esp_ssg_unref(x, tmp, skb); | ||||
| 
 | ||||
| 	if (!err && x->encap && x->encap->encap_type == TCP_ENCAP_ESPINTCP) | ||||
| 		err = esp_output_tail_tcp(x, skb); | ||||
|  |  | |||
|  | @ -1135,7 +1135,7 @@ int __inet_hash_connect(struct inet_timewait_death_row *death_row, | |||
| 		sock_prot_inuse_add(net, sk->sk_prot, -1); | ||||
| 
 | ||||
| 		spin_lock(lock); | ||||
| -		sk_nulls_del_node_init_rcu(sk); | ||||
| +		__sk_nulls_del_node_init_rcu(sk); | ||||
| 		spin_unlock(lock); | ||||
| 
 | ||||
| 		sk->sk_hash = 0; | ||||
|  |  | |||
|  | @ -263,12 +263,12 @@ void __inet_twsk_schedule(struct inet_timewait_sock *tw, int timeo, bool rearm) | |||
| } | ||||
| EXPORT_SYMBOL_GPL(__inet_twsk_schedule); | ||||
| 
 | ||||
| /* Remove all non full sockets (TIME_WAIT and NEW_SYN_RECV) for dead netns */ | ||||
| void inet_twsk_purge(struct inet_hashinfo *hashinfo, int family) | ||||
| { | ||||
| -	struct inet_timewait_sock *tw; | ||||
| -	struct sock *sk; | ||||
| 	struct hlist_nulls_node *node; | ||||
| 	unsigned int slot; | ||||
| +	struct sock *sk; | ||||
| 
 | ||||
| 	for (slot = 0; slot <= hashinfo->ehash_mask; slot++) { | ||||
| 		struct inet_ehash_bucket *head = &hashinfo->ehash[slot]; | ||||
|  | @ -277,38 +277,35 @@ void inet_twsk_purge(struct inet_hashinfo *hashinfo, int family) | |||
| 		rcu_read_lock(); | ||||
| restart: | ||||
| 		sk_nulls_for_each_rcu(sk, node, &head->chain) { | ||||
| -			if (sk->sk_state != TCP_TIME_WAIT) { | ||||
| -				/* A kernel listener socket might not hold refcnt for net, | ||||
| -				 * so reqsk_timer_handler() could be fired after net is | ||||
| -				 * freed.  Userspace listener and reqsk never exist here. | ||||
| -				 */ | ||||
| -				if (unlikely(sk->sk_state == TCP_NEW_SYN_RECV && | ||||
| -					     hashinfo->pernet)) { | ||||
| -					struct request_sock *req = inet_reqsk(sk); | ||||
| - | ||||
| -					inet_csk_reqsk_queue_drop_and_put(req->rsk_listener, req); | ||||
| -				} | ||||
| - | ||||
| -				continue; | ||||
| -			} | ||||
| - | ||||
| -			tw = inet_twsk(sk); | ||||
| -			if ((tw->tw_family != family) || | ||||
| -				refcount_read(&twsk_net(tw)->ns.count)) | ||||
| +			int state = inet_sk_state_load(sk); | ||||
| + | ||||
| +			if ((1 << state) & ~(TCPF_TIME_WAIT | | ||||
| +					     TCPF_NEW_SYN_RECV)) | ||||
| 				continue; | ||||
|  | ||||
| -			if (unlikely(!refcount_inc_not_zero(&tw->tw_refcnt))) | ||||
| +			if (sk->sk_family != family || | ||||
| +			    refcount_read(&sock_net(sk)->ns.count)) | ||||
| 				continue; | ||||
|  | ||||
| -			if (unlikely((tw->tw_family != family) || | ||||
| -				     refcount_read(&twsk_net(tw)->ns.count))) { | ||||
| -				inet_twsk_put(tw); | ||||
| +			if (unlikely(!refcount_inc_not_zero(&sk->sk_refcnt))) | ||||
| 				continue; | ||||
| + | ||||
| +			if (unlikely(sk->sk_family != family || | ||||
| +				     refcount_read(&sock_net(sk)->ns.count))) { | ||||
| +				sock_gen_put(sk); | ||||
| 				goto restart; | ||||
| 			} | ||||
|  | ||||
| 			rcu_read_unlock(); | ||||
| 			local_bh_disable(); | ||||
| -			inet_twsk_deschedule_put(tw); | ||||
| +			if (state == TCP_TIME_WAIT) { | ||||
| +				inet_twsk_deschedule_put(inet_twsk(sk)); | ||||
| +			} else { | ||||
| +				struct request_sock *req = inet_reqsk(sk); | ||||
| + | ||||
| +				inet_csk_reqsk_queue_drop_and_put(req->rsk_listener, | ||||
| +								  req); | ||||
| +			} | ||||
| 			local_bh_enable(); | ||||
| 			goto restart_rcu; | ||||
| 		} | ||||
|  |  | |||
|  | @ -1458,7 +1458,6 @@ struct sk_buff *__ip_make_skb(struct sock *sk, | |||
| 	skb->priority = (cork->tos != -1) ? cork->priority: READ_ONCE(sk->sk_priority); | ||||
| 	skb->mark = cork->mark; | ||||
| 	skb->tstamp = cork->transmit_time; | ||||
| -	skb->mono_delivery_time = !!skb->tstamp; | ||||
| 	/*
 | ||||
| 	 * Steal rt from cork.dst to avoid a pair of atomic_inc/atomic_dec | ||||
| 	 * on dst refcount | ||||
|  |  | |||
|  | @ -357,10 +357,10 @@ static int raw_send_hdrinc(struct sock *sk, struct flowi4 *fl4, | |||
| 		goto error; | ||||
| 	skb_reserve(skb, hlen); | ||||
| 
 | ||||
| +	skb->protocol = htons(ETH_P_IP); | ||||
| 	skb->priority = READ_ONCE(sk->sk_priority); | ||||
| 	skb->mark = sockc->mark; | ||||
| 	skb->tstamp = sockc->transmit_time; | ||||
| -	skb->mono_delivery_time = !!skb->tstamp; | ||||
| 	skb_dst_set(skb, &rt->dst); | ||||
| 	*rtp = NULL; | ||||
| 
 | ||||
|  |  | |||
|  | @ -474,6 +474,9 @@ struct sock *cookie_v4_check(struct sock *sk, struct sk_buff *skb) | |||
| 				  ireq->wscale_ok, &rcv_wscale, | ||||
| 				  dst_metric(&rt->dst, RTAX_INITRWND)); | ||||
| 
 | ||||
| 	/* req->syncookie is set true only if ACK is validated
 | ||||
| 	 * by BPF kfunc, then, rcv_wscale is already configured. | ||||
| 	 */ | ||||
| 	if (!req->syncookie) | ||||
| 		ireq->rcv_wscale = rcv_wscale; | ||||
| 	ireq->ecn_ok &= cookie_ecn_ok(net, &rt->dst); | ||||
|  |  | |||
|  | @ -398,10 +398,6 @@ void tcp_twsk_purge(struct list_head *net_exit_list, int family) | |||
| 			/* Even if tw_refcount == 1, we must clean up kernel reqsk */ | ||||
| 			inet_twsk_purge(net->ipv4.tcp_death_row.hashinfo, family); | ||||
| 		} else if (!purged_once) { | ||||
| -			/* The last refcount is decremented in tcp_sk_exit_batch() */ | ||||
| -			if (refcount_read(&net->ipv4.tcp_death_row.tw_refcount) == 1) | ||||
| -				continue; | ||||
| 
 | ||||
| 			inet_twsk_purge(&tcp_hashinfo, family); | ||||
| 			purged_once = true; | ||||
| 		} | ||||
|  |  | |||
|  | @ -112,7 +112,7 @@ static inline struct scatterlist *esp_req_sg(struct crypto_aead *aead, | |||
| 			     __alignof__(struct scatterlist)); | ||||
| } | ||||
| 
 | ||||
| -static void esp_ssg_unref(struct xfrm_state *x, void *tmp) | ||||
| +static void esp_ssg_unref(struct xfrm_state *x, void *tmp, struct sk_buff *skb) | ||||
| { | ||||
| 	struct crypto_aead *aead = x->data; | ||||
| 	int extralen = 0; | ||||
|  | @ -131,7 +131,7 @@ static void esp_ssg_unref(struct xfrm_state *x, void *tmp) | |||
| 	 */ | ||||
| 	if (req->src != req->dst) | ||||
| 		for (sg = sg_next(req->src); sg; sg = sg_next(sg)) | ||||
| -			put_page(sg_page(sg)); | ||||
| +			skb_page_unref(skb, sg_page(sg), false); | ||||
| } | ||||
| 
 | ||||
| #ifdef CONFIG_INET6_ESPINTCP | ||||
|  | @ -294,7 +294,7 @@ static void esp_output_done(void *data, int err) | |||
| 	} | ||||
| 
 | ||||
| 	tmp = ESP_SKB_CB(skb)->tmp; | ||||
| -	esp_ssg_unref(x, tmp); | ||||
| +	esp_ssg_unref(x, tmp, skb); | ||||
| 	kfree(tmp); | ||||
| 
 | ||||
| 	esp_output_encap_csum(skb); | ||||
|  | @ -677,7 +677,7 @@ int esp6_output_tail(struct xfrm_state *x, struct sk_buff *skb, struct esp_info | |||
| 	} | ||||
| 
 | ||||
| 	if (sg != dsg) | ||||
| -		esp_ssg_unref(x, tmp); | ||||
| +		esp_ssg_unref(x, tmp, skb); | ||||
| 
 | ||||
| 	if (!err && x->encap && x->encap->encap_type == TCP_ENCAP_ESPINTCP) | ||||
| 		err = esp_output_tail_tcp(x, skb); | ||||
|  |  | |||
|  | @ -1925,7 +1925,7 @@ struct sk_buff *__ip6_make_skb(struct sock *sk, | |||
| 	skb->priority = READ_ONCE(sk->sk_priority); | ||||
| 	skb->mark = cork->base.mark; | ||||
| 	skb->tstamp = cork->base.transmit_time; | ||||
| -	skb->mono_delivery_time = !!skb->tstamp; | ||||
| 
 | ||||
| 	ip6_cork_steal_dst(skb, cork); | ||||
| 	IP6_INC_STATS(net, rt->rt6i_idev, IPSTATS_MIB_OUTREQUESTS); | ||||
| 	if (proto == IPPROTO_ICMPV6) { | ||||
|  |  | |||
|  | @ -622,7 +622,7 @@ static int rawv6_send_hdrinc(struct sock *sk, struct msghdr *msg, int length, | |||
| 	skb->priority = READ_ONCE(sk->sk_priority); | ||||
| 	skb->mark = sockc->mark; | ||||
| 	skb->tstamp = sockc->transmit_time; | ||||
| -	skb->mono_delivery_time = !!skb->tstamp; | ||||
| 
 | ||||
| 	skb_put(skb, length); | ||||
| 	skb_reset_network_header(skb); | ||||
| 	iph = ipv6_hdr(skb); | ||||
|  |  | |||
|  | @ -258,6 +258,9 @@ struct sock *cookie_v6_check(struct sock *sk, struct sk_buff *skb) | |||
| 				  ireq->wscale_ok, &rcv_wscale, | ||||
| 				  dst_metric(dst, RTAX_INITRWND)); | ||||
| 
 | ||||
| 	/* req->syncookie is set true only if ACK is validated
 | ||||
| 	 * by BPF kfunc, then, rcv_wscale is already configured. | ||||
| 	 */ | ||||
| 	if (!req->syncookie) | ||||
| 		ireq->rcv_wscale = rcv_wscale; | ||||
| 	ireq->ecn_ok &= cookie_ecn_ok(net, dst); | ||||
|  |  | |||
|  | @ -1213,7 +1213,7 @@ static int nf_tables_updtable(struct nft_ctx *ctx) | |||
| 	if (flags & ~NFT_TABLE_F_MASK) | ||||
| 		return -EOPNOTSUPP; | ||||
| 
 | ||||
| -	if (flags == ctx->table->flags) | ||||
| +	if (flags == (ctx->table->flags & NFT_TABLE_F_MASK)) | ||||
| 		return 0; | ||||
| 
 | ||||
| 	if ((nft_table_has_owner(ctx->table) && | ||||
|  | @ -2631,19 +2631,6 @@ static int nf_tables_updchain(struct nft_ctx *ctx, u8 genmask, u8 policy, | |||
| 		} | ||||
| 	} | ||||
| 
 | ||||
| -	if (nla[NFTA_CHAIN_COUNTERS]) { | ||||
| -		if (!nft_is_base_chain(chain)) { | ||||
| -			err = -EOPNOTSUPP; | ||||
| -			goto err_hooks; | ||||
| -		} | ||||
| - | ||||
| -		stats = nft_stats_alloc(nla[NFTA_CHAIN_COUNTERS]); | ||||
| -		if (IS_ERR(stats)) { | ||||
| -			err = PTR_ERR(stats); | ||||
| -			goto err_hooks; | ||||
| -		} | ||||
| -	} | ||||
| 
 | ||||
| 	if (!(table->flags & NFT_TABLE_F_DORMANT) && | ||||
| 	    nft_is_base_chain(chain) && | ||||
| 	    !list_empty(&hook.list)) { | ||||
|  | @ -2658,6 +2645,20 @@ static int nf_tables_updchain(struct nft_ctx *ctx, u8 genmask, u8 policy, | |||
| 	} | ||||
| 
 | ||||
| 	unregister = true; | ||||
| 
 | ||||
| +	if (nla[NFTA_CHAIN_COUNTERS]) { | ||||
| +		if (!nft_is_base_chain(chain)) { | ||||
| +			err = -EOPNOTSUPP; | ||||
| +			goto err_hooks; | ||||
| +		} | ||||
| + | ||||
| +		stats = nft_stats_alloc(nla[NFTA_CHAIN_COUNTERS]); | ||||
| +		if (IS_ERR(stats)) { | ||||
| +			err = PTR_ERR(stats); | ||||
| +			goto err_hooks; | ||||
| +		} | ||||
| +	} | ||||
| 
 | ||||
| 	err = -ENOMEM; | ||||
| 	trans = nft_trans_alloc(ctx, NFT_MSG_NEWCHAIN, | ||||
| 				sizeof(struct nft_trans_chain)); | ||||
|  |  | |||
|  | @ -2329,8 +2329,6 @@ static void nft_pipapo_destroy(const struct nft_ctx *ctx, | |||
| 	if (m) { | ||||
| 		rcu_barrier(); | ||||
| 
 | ||||
| -		nft_set_pipapo_match_destroy(ctx, set, m); | ||||
| 
 | ||||
| 		for_each_possible_cpu(cpu) | ||||
| 			pipapo_free_scratch(m, cpu); | ||||
| 		free_percpu(m->scratch); | ||||
|  | @ -2342,8 +2340,7 @@ static void nft_pipapo_destroy(const struct nft_ctx *ctx, | |||
| 	if (priv->clone) { | ||||
| 		m = priv->clone; | ||||
| 
 | ||||
| -		if (priv->dirty) | ||||
| -			nft_set_pipapo_match_destroy(ctx, set, m); | ||||
| +		nft_set_pipapo_match_destroy(ctx, set, m); | ||||
| 
 | ||||
| 		for_each_possible_cpu(cpu) | ||||
| 			pipapo_free_scratch(priv->clone, cpu); | ||||
|  |  | |||
|  | @ -2057,7 +2057,7 @@ static int packet_sendmsg_spkt(struct socket *sock, struct msghdr *msg, | |||
| 	skb->priority = READ_ONCE(sk->sk_priority); | ||||
| 	skb->mark = READ_ONCE(sk->sk_mark); | ||||
| 	skb->tstamp = sockc.transmit_time; | ||||
| -	skb->mono_delivery_time = !!skb->tstamp; | ||||
| 
 | ||||
| 	skb_setup_tx_timestamp(skb, sockc.tsflags); | ||||
| 
 | ||||
| 	if (unlikely(extra_len == 4)) | ||||
|  | @ -2586,7 +2586,6 @@ static int tpacket_fill_skb(struct packet_sock *po, struct sk_buff *skb, | |||
| 	skb->priority = READ_ONCE(po->sk.sk_priority); | ||||
| 	skb->mark = READ_ONCE(po->sk.sk_mark); | ||||
| 	skb->tstamp = sockc->transmit_time; | ||||
| -	skb->mono_delivery_time = !!skb->tstamp; | ||||
| 	skb_setup_tx_timestamp(skb, sockc->tsflags); | ||||
| 	skb_zcopy_set_nouarg(skb, ph.raw); | ||||
| 
 | ||||
|  | @ -3065,7 +3064,6 @@ static int packet_snd(struct socket *sock, struct msghdr *msg, size_t len) | |||
| 	skb->priority = READ_ONCE(sk->sk_priority); | ||||
| 	skb->mark = sockc.mark; | ||||
| 	skb->tstamp = sockc.transmit_time; | ||||
| -	skb->mono_delivery_time = !!skb->tstamp; | ||||
| 
 | ||||
| 	if (unlikely(extra_len == 4)) | ||||
| 		skb->no_fcs = 1; | ||||
|  | @ -4000,7 +3998,7 @@ packet_setsockopt(struct socket *sock, int level, int optname, sockptr_t optval, | |||
| 		if (val < 0 || val > 1) | ||||
| 			return -EINVAL; | ||||
| 
 | ||||
| -		po->prot_hook.ignore_outgoing = !!val; | ||||
| +		WRITE_ONCE(po->prot_hook.ignore_outgoing, !!val); | ||||
| 		return 0; | ||||
| 	} | ||||
| 	case PACKET_TX_HAS_OFF: | ||||
|  | @ -4134,7 +4132,7 @@ static int packet_getsockopt(struct socket *sock, int level, int optname, | |||
| 		       0); | ||||
| 		break; | ||||
| 	case PACKET_IGNORE_OUTGOING: | ||||
| -		val = po->prot_hook.ignore_outgoing; | ||||
| +		val = READ_ONCE(po->prot_hook.ignore_outgoing); | ||||
| 		break; | ||||
| 	case PACKET_ROLLOVER_STATS: | ||||
| 		if (!po->rollover) | ||||
|  |  | |||
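The READ_ONCE()/WRITE_ONCE() pair above lets PACKET_IGNORE_OUTGOING be toggled without a lock against the lockless fast-path reader. The userspace-visible option itself is unchanged; a minimal sketch of enabling it on a packet socket (error handling elided; CAP_NET_RAW is required to open the socket):

	#include <sys/socket.h>
	#include <arpa/inet.h>
	#include <linux/if_ether.h>
	#include <linux/if_packet.h>

	int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
	int one = 1;

	/* Do not loop our own transmitted frames back to this socket. */
	setsockopt(fd, SOL_PACKET, PACKET_IGNORE_OUTGOING, &one, sizeof(one));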
|  | @ -103,13 +103,12 @@ EXPORT_SYMBOL_GPL(rds_send_path_reset); | |||
| 
 | ||||
| static int acquire_in_xmit(struct rds_conn_path *cp) | ||||
| { | ||||
| -	return test_and_set_bit(RDS_IN_XMIT, &cp->cp_flags) == 0; | ||||
| +	return test_and_set_bit_lock(RDS_IN_XMIT, &cp->cp_flags) == 0; | ||||
| } | ||||
| 
 | ||||
| static void release_in_xmit(struct rds_conn_path *cp) | ||||
| { | ||||
| -	clear_bit(RDS_IN_XMIT, &cp->cp_flags); | ||||
| -	smp_mb__after_atomic(); | ||||
| +	clear_bit_unlock(RDS_IN_XMIT, &cp->cp_flags); | ||||
| 	/*
 | ||||
| 	 * We don't use wait_on_bit()/wake_up_bit() because our waking is in a | ||||
| 	 * hot path and finding waiters is very rare.  We don't want to walk | ||||
|  |  | |||
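The RDS change above works because test_and_set_bit_lock() has acquire semantics and clear_bit_unlock() has release semantics, so the explicit smp_mb__after_atomic() is no longer needed. A generic sketch of the pattern, with illustrative names rather than the RDS code:

	#include <linux/bitops.h>

	#define MY_BUSY	0
	static unsigned long my_flags;

	/* Returns true if we took ownership; acquire ordering on success. */
	static bool my_try_acquire(void)
	{
		return !test_and_set_bit_lock(MY_BUSY, &my_flags);
	}

	/* Release ordering; no extra memory barrier required. */
	static void my_release(void)
	{
		clear_bit_unlock(MY_BUSY, &my_flags);
	}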
|  | @ -349,8 +349,8 @@ static int rxrpc_send_data(struct rxrpc_sock *rx, | |||
| 			 */ | ||||
| 			remain = more ? INT_MAX : msg_data_left(msg); | ||||
| 			txb = call->conn->security->alloc_txbuf(call, remain, sk->sk_allocation); | ||||
| -			if (IS_ERR(txb)) { | ||||
| -				ret = PTR_ERR(txb); | ||||
| +			if (!txb) { | ||||
| +				ret = -ENOMEM; | ||||
| 				goto maybe_error; | ||||
| 			} | ||||
| 		} | ||||
|  |  | |||
|  | @ -33,8 +33,8 @@ struct rxrpc_txbuf *rxrpc_alloc_data_txbuf(struct rxrpc_call *call, size_t data_ | |||
| 	total = hoff + sizeof(*whdr) + data_size; | ||||
| 
 | ||||
| 	mutex_lock(&call->conn->tx_data_alloc_lock); | ||||
| -	buf = page_frag_alloc_align(&call->conn->tx_data_alloc, total, gfp, | ||||
| -				    ~(data_align - 1) & ~(L1_CACHE_BYTES - 1)); | ||||
| +	buf = __page_frag_alloc_align(&call->conn->tx_data_alloc, total, gfp, | ||||
| +				      ~(data_align - 1) & ~(L1_CACHE_BYTES - 1)); | ||||
| 	mutex_unlock(&call->conn->tx_data_alloc_lock); | ||||
| 	if (!buf) { | ||||
| 		kfree(txb); | ||||
|  |  | |||
|  | @ -10,6 +10,7 @@ | |||
|  */ | ||||
| 
 | ||||
| #include <linux/jhash.h> | ||||
| #include <linux/module.h> | ||||
| #include <linux/sizes.h> | ||||
| #include <linux/vmalloc.h> | ||||
| #include <net/pkt_cls.h> | ||||
|  | @ -563,6 +564,7 @@ static struct Qdisc_ops fq_pie_qdisc_ops __read_mostly = { | |||
| 	.dump_stats	= fq_pie_dump_stats, | ||||
| 	.owner		= THIS_MODULE, | ||||
| }; | ||||
| MODULE_ALIAS_NET_SCH("fq_pie"); | ||||
| 
 | ||||
| static int __init fq_pie_module_init(void) | ||||
| { | ||||
|  |  | |||
|  | @ -997,7 +997,8 @@ static const struct nla_policy entry_policy[TCA_TAPRIO_SCHED_ENTRY_MAX + 1] = { | |||
| }; | ||||
| 
 | ||||
| static const struct nla_policy taprio_tc_policy[TCA_TAPRIO_TC_ENTRY_MAX + 1] = { | ||||
| -	[TCA_TAPRIO_TC_ENTRY_INDEX]	   = { .type = NLA_U32 }, | ||||
| +	[TCA_TAPRIO_TC_ENTRY_INDEX]	   = NLA_POLICY_MAX(NLA_U32, | ||||
| +							    TC_QOPT_MAX_QUEUE), | ||||
| 	[TCA_TAPRIO_TC_ENTRY_MAX_SDU]	   = { .type = NLA_U32 }, | ||||
| 	[TCA_TAPRIO_TC_ENTRY_FP]	   = NLA_POLICY_RANGE(NLA_U32, | ||||
| 							      TC_FP_EXPRESS, | ||||
|  |  | |||
							
								
								
									
net/socket.c
							|  | @ -2600,9 +2600,9 @@ static int ____sys_sendmsg(struct socket *sock, struct msghdr *msg_sys, | |||
| 	return err; | ||||
| } | ||||
| 
 | ||||
| -int sendmsg_copy_msghdr(struct msghdr *msg, | ||||
| -			struct user_msghdr __user *umsg, unsigned flags, | ||||
| -			struct iovec **iov) | ||||
| +static int sendmsg_copy_msghdr(struct msghdr *msg, | ||||
| +			       struct user_msghdr __user *umsg, unsigned flags, | ||||
| +			       struct iovec **iov) | ||||
| { | ||||
| 	int err; | ||||
| 
 | ||||
|  | @ -2753,10 +2753,10 @@ SYSCALL_DEFINE4(sendmmsg, int, fd, struct mmsghdr __user *, mmsg, | |||
| 	return __sys_sendmmsg(fd, mmsg, vlen, flags, true); | ||||
| } | ||||
| 
 | ||||
| -int recvmsg_copy_msghdr(struct msghdr *msg, | ||||
| -			struct user_msghdr __user *umsg, unsigned flags, | ||||
| -			struct sockaddr __user **uaddr, | ||||
| -			struct iovec **iov) | ||||
| +static int recvmsg_copy_msghdr(struct msghdr *msg, | ||||
| +			       struct user_msghdr __user *umsg, unsigned flags, | ||||
| +			       struct sockaddr __user **uaddr, | ||||
| +			       struct iovec **iov) | ||||
| { | ||||
| 	ssize_t err; | ||||
| 
 | ||||
|  |  | |||
|  | @ -407,7 +407,8 @@ bool xfrm_dev_offload_ok(struct sk_buff *skb, struct xfrm_state *x) | |||
| 	struct xfrm_dst *xdst = (struct xfrm_dst *)dst; | ||||
| 	struct net_device *dev = x->xso.dev; | ||||
| 
 | ||||
| -	if (!x->type_offload) | ||||
| +	if (!x->type_offload || | ||||
| +	    (x->xso.type == XFRM_DEV_OFFLOAD_UNSPECIFIED && x->encap)) | ||||
| 		return false; | ||||
| 
 | ||||
| 	if (x->xso.type == XFRM_DEV_OFFLOAD_PACKET || | ||||
|  |  | |||
|  | @ -20,6 +20,7 @@ CFLAGS_ethtool:=$(call get_hdr_inc,_LINUX_ETHTOOL_NETLINK_H_,ethtool_netlink.h) | |||
| CFLAGS_handshake:=$(call get_hdr_inc,_LINUX_HANDSHAKE_H,handshake.h) | ||||
| CFLAGS_mptcp_pm:=$(call get_hdr_inc,_LINUX_MPTCP_PM_H,mptcp_pm.h) | ||||
| CFLAGS_netdev:=$(call get_hdr_inc,_LINUX_NETDEV_H,netdev.h) | ||||
| CFLAGS_nlctrl:=$(call get_hdr_inc,__LINUX_GENERIC_NETLINK_H,genetlink.h) | ||||
| CFLAGS_nfsd:=$(call get_hdr_inc,_LINUX_NFSD_NETLINK_H,nfsd_netlink.h) | ||||
| CFLAGS_ovs_datapath:=$(call get_hdr_inc,__LINUX_OPENVSWITCH_H,openvswitch.h) | ||||
| CFLAGS_ovs_flow:=$(call get_hdr_inc,__LINUX_OPENVSWITCH_H,openvswitch.h) | ||||
|  |  | |||
|  | @ -354,7 +354,7 @@ __ping_ipv4() | |||
| 
 | ||||
| 	# Send 100 packets and verify that at least 100 packets hit the rule, | ||||
| 	# to overcome ARP noise. | ||||
| -	PING_COUNT=100 PING_TIMEOUT=11 ping_do $dev $dst_ip | ||||
| +	PING_COUNT=100 PING_TIMEOUT=20 ping_do $dev $dst_ip | ||||
| 	check_err $? "Ping failed" | ||||
| 
 | ||||
| 	tc_check_at_least_x_packets "dev $rp1 egress" 101 10 100 | ||||
|  | @ -410,7 +410,7 @@ __ping_ipv6() | |||
| 
 | ||||
| 	# Send 100 packets and verify that at least 100 packets hit the rule, | ||||
| 	# to overcome neighbor discovery noise. | ||||
| -	PING_COUNT=100 PING_TIMEOUT=11 ping6_do $dev $dst_ip | ||||
| +	PING_COUNT=100 PING_TIMEOUT=20 ping6_do $dev $dst_ip | ||||
| 	check_err $? "Ping failed" | ||||
| 
 | ||||
| 	tc_check_at_least_x_packets "dev $rp1 egress" 101 100 | ||||
|  |  | |||
|  | @ -457,7 +457,7 @@ __ping_ipv4() | |||
| 
 | ||||
| 	# Send 100 packets and verify that at least 100 packets hit the rule, | ||||
| 	# to overcome ARP noise. | ||||
| -	PING_COUNT=100 PING_TIMEOUT=11 ping_do $dev $dst_ip | ||||
| +	PING_COUNT=100 PING_TIMEOUT=20 ping_do $dev $dst_ip | ||||
| 	check_err $? "Ping failed" | ||||
| 
 | ||||
| 	tc_check_at_least_x_packets "dev $rp1 egress" 101 10 100 | ||||
|  | @ -522,7 +522,7 @@ __ping_ipv6() | |||
| 
 | ||||
| 	# Send 100 packets and verify that at least 100 packets hit the rule, | ||||
| 	# to overcome neighbor discovery noise. | ||||
| -	PING_COUNT=100 PING_TIMEOUT=11 ping6_do $dev $dst_ip | ||||
| +	PING_COUNT=100 PING_TIMEOUT=20 ping6_do $dev $dst_ip | ||||
| 	check_err $? "Ping failed" | ||||
| 
 | ||||
| 	tc_check_at_least_x_packets "dev $rp1 egress" 101 100 | ||||
|  |  | |||
|  | @ -217,6 +217,7 @@ for family in 4 6; do | |||
| 	cleanup | ||||
| 
 | ||||
| 	create_ns | ||||
| 	ip netns exec $NS_DST ethtool -K veth$DST generic-receive-offload on | ||||
| 	ip netns exec $NS_DST ethtool -K veth$DST rx-gro-list on | ||||
| 	run_test "GRO frag list" $BM_NET$DST 1 0 | ||||
| 	cleanup | ||||
|  | @ -227,6 +228,7 @@ for family in 4 6; do | |||
| 	# use NAT to circumvent GRO FWD check | ||||
| 	create_ns | ||||
| 	ip -n $NS_DST addr add dev veth$DST $BM_NET$DST_NAT/$SUFFIX | ||||
| 	ip netns exec $NS_DST ethtool -K veth$DST generic-receive-offload on | ||||
| 	ip netns exec $NS_DST ethtool -K veth$DST rx-udp-gro-forwarding on | ||||
| 	ip netns exec $NS_DST $IPT -t nat -I PREROUTING -d $BM_NET$DST_NAT \ | ||||
| 					-j DNAT --to-destination $BM_NET$DST | ||||
|  | @ -240,6 +242,7 @@ for family in 4 6; do | |||
| 	cleanup | ||||
| 
 | ||||
| 	create_vxlan_pair | ||||
| 	ip netns exec $NS_DST ethtool -K veth$DST generic-receive-offload on | ||||
| 	ip netns exec $NS_DST ethtool -K veth$DST rx-gro-list on | ||||
| 	run_test "GRO frag list over UDP tunnel" $OL_NET$DST 1 1 | ||||
| 	cleanup | ||||
|  | @ -247,6 +250,7 @@ for family in 4 6; do | |||
| 	# use NAT to circumvent GRO FWD check | ||||
| 	create_vxlan_pair | ||||
| 	ip -n $NS_DST addr add dev $VXDEV$DST $OL_NET$DST_NAT/$SUFFIX | ||||
| 	ip netns exec $NS_DST ethtool -K veth$DST generic-receive-offload on | ||||
| 	ip netns exec $NS_DST ethtool -K veth$DST rx-udp-gro-forwarding on | ||||
| 	ip netns exec $NS_DST $IPT -t nat -I PREROUTING -d $OL_NET$DST_NAT \ | ||||
| 					-j DNAT --to-destination $OL_NET$DST | ||||
|  |  | |||
|  | @ -249,9 +249,9 @@ cleanup | |||
| create_ns | ||||
| ip -n $NS_DST link set dev veth$DST up | ||||
| ip -n $NS_DST link set dev veth$DST xdp object ${BPF_FILE} section xdp | ||||
| chk_gro_flag "gro vs xdp while down - gro flag on" $DST on | ||||
| chk_gro_flag "gro vs xdp while down - gro flag off" $DST off | ||||
| ip -n $NS_DST link set dev veth$DST down | ||||
| chk_gro_flag "                      - after down" $DST on | ||||
| chk_gro_flag "                      - after down" $DST off | ||||
| ip -n $NS_DST link set dev veth$DST xdp off | ||||
| chk_gro_flag "                      - after xdp off" $DST off | ||||
| ip -n $NS_DST link set dev veth$DST up | ||||
|  | @ -260,6 +260,21 @@ ip -n $NS_SRC link set dev veth$SRC xdp object ${BPF_FILE} section xdp | |||
| chk_gro_flag "                      - after peer xdp" $DST off | ||||
| cleanup | ||||
| 
 | ||||
| create_ns | ||||
| ip -n $NS_DST link set dev veth$DST up | ||||
| ip -n $NS_DST link set dev veth$DST xdp object ${BPF_FILE} section xdp | ||||
| ip netns exec $NS_DST ethtool -K veth$DST generic-receive-offload on | ||||
| chk_gro_flag "gro vs xdp while down - gro flag on" $DST on | ||||
| ip -n $NS_DST link set dev veth$DST down | ||||
| chk_gro_flag "                      - after down" $DST on | ||||
| ip -n $NS_DST link set dev veth$DST xdp off | ||||
| chk_gro_flag "                      - after xdp off" $DST on | ||||
| ip -n $NS_DST link set dev veth$DST up | ||||
| chk_gro_flag "                      - after up" $DST on | ||||
| ip -n $NS_SRC link set dev veth$SRC xdp object ${BPF_FILE} section xdp | ||||
| chk_gro_flag "                      - after peer xdp" $DST on | ||||
| cleanup | ||||
| 
 | ||||
| create_ns | ||||
| chk_channels "default channels" $DST 1 1 | ||||
| 
 | ||||
|  | @ -327,11 +342,14 @@ if [ $CPUS -gt 2 ]; then | |||
| fi | ||||
| 
 | ||||
| ip -n $NS_DST link set dev veth$DST xdp object ${BPF_FILE} section xdp 2>/dev/null | ||||
| chk_gro_flag "with xdp attached - gro flag" $DST on | ||||
| chk_gro_flag "with xdp attached - gro flag" $DST off | ||||
| chk_gro_flag "        - peer gro flag" $SRC off | ||||
| chk_tso_flag "        - tso flag" $SRC off | ||||
| chk_tso_flag "        - peer tso flag" $DST on | ||||
| ip netns exec $NS_DST ethtool -K veth$DST rx-udp-gro-forwarding on | ||||
| chk_gro "        - no aggregation" 10 | ||||
| ip netns exec $NS_DST ethtool -K veth$DST generic-receive-offload on | ||||
| chk_gro_flag "        - gro flag with GRO on" $DST on | ||||
| chk_gro "        - aggregation" 1 | ||||
| 
 | ||||
| 
 | ||||
|  |  | |||
|  | @ -3,6 +3,7 @@ CONFIG_ARCH_RV32I=y | |||
| CONFIG_MMU=y | ||||
| CONFIG_FPU=y | ||||
| CONFIG_SOC_VIRT=y | ||||
| CONFIG_RISCV_ISA_FALLBACK=y | ||||
| CONFIG_SERIAL_8250=y | ||||
| CONFIG_SERIAL_8250_CONSOLE=y | ||||
| CONFIG_SERIAL_OF_PLATFORM=y | ||||
|  |  | |||
|  | @ -2,6 +2,7 @@ CONFIG_ARCH_RV64I=y | |||
| CONFIG_MMU=y | ||||
| CONFIG_FPU=y | ||||
| CONFIG_SOC_VIRT=y | ||||
| CONFIG_RISCV_ISA_FALLBACK=y | ||||
| CONFIG_SERIAL_8250=y | ||||
| CONFIG_SERIAL_8250_CONSOLE=y | ||||
| CONFIG_SERIAL_OF_PLATFORM=y | ||||
|  |  | |||