USB/Thunderbolt changes for 6.18-rc1

Here is the big set of USB and thunderbolt drivers for 6.18-rc1.  It was
 another normal development cycle, with lots of the usual drivers getting
 updates:
   - Thunderbolt driver cleanups and additions
   - dwc3 driver updates
   - dwc2 driver updates
   - typec driver updates
   - xhci driver updates and additions
   - USB offload engine updates for better power management
   - unused tracepoint removals
   - usb gadget fixes and updates as more users start to rely on these
     drivers instead of the "old" function gadget drivers
   - new USB device ids
   - other minor USB driver updates
   - new USB I/O driver framework and driver additions
 
 The last item, the usb i/o driver, has an i2c and gpio driver added
 through this tree.  Those drivers were acked by the respective subsystem
 maintainers, but you will get a merge conflict with the i2c tree where
 new drivers were added in the same places in a Kconfig and Makefile.
 The merge conflict is simple; just take both sides.
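
 For illustration only, the "take both sides" resolution of
 drivers/i2c/busses/Kconfig just keeps the two new entries next to each
 other; the second entry below is a made-up placeholder (only I2C_USBIO is
 part of this pull), and the Makefile likewise keeps both new obj- lines:

   config I2C_USBIO
           tristate "Intel USBIO I2C Adapter support"
           depends on USB_USBIO

   # placeholder for whatever entry the i2c tree added in the same spot
   config I2C_EXAMPLE_NEWDRIVER
           tristate "..."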
 
 All of these have been in linux-next for a while, with the only issue
 being the i2c tree merge conflicts.
 
 Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
 -----BEGIN PGP SIGNATURE-----
 
 iG0EABECAC0WIQT0tgzFv3jCIUoxPcsxR9QN2y37KQUCaOEo8Q8cZ3JlZ0Brcm9h
 aC5jb20ACgkQMUfUDdst+ynpOQCgkenJzjsGVHhl/tm447z3pQ8NtvQAn2GfxMF9
 4jQlUtr6McyzCLVPOZRD
 =pPei
 -----END PGP SIGNATURE-----

Merge tag 'usb-6.18-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb

Pull USB / Thunderbolt updates from Greg KH:
 "Here is the big set of USB and thunderbolt drivers for 6.18-rc1. It
  was another normal development cycle, with lots of the usual drivers
  getting updates:

   - Thunderbolt driver cleanups and additions

   - dwc3 driver updates

   - dwc2 driver updates

   - typec driver updates

   - xhci driver updates and additions

   - USB offload engine updates for better power management

   - unused tracepoint removals

   - usb gadget fixes and updates as more users start to rely on these
     drivers instead of the "old" function gadget drivers

   - new USB device ids

   - other minor USB driver updates

   - new USB I/O driver framework and driver additions

  The last item, the usb i/o driver, has an i2c and gpio driver added
  through this tree. Those drivers were acked by the respective
  subsystem maintainers.

  All of these have been in linux-next for a while"

* tag 'usb-6.18-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb: (132 commits)
  usb: vhci-hcd: Prevent suspending virtually attached devices
  USB: serial: option: add SIMCom 8230C compositions
  thunderbolt: Fix use-after-free in tb_dp_dprx_work
  usb: xhci: align PORTSC trace with one-based port numbering
  usb: xhci: correct indentation for PORTSC tracing function
  usb: xhci: improve TR Dequeue Pointer mask
  usb: xhci-pci: add support for hosts with zero USB3 ports
  usb: xhci: Update a comment about Stop Endpoint retries
  Revert "usb: xhci: Avoid Stop Endpoint retry loop if the endpoint seems Running"
  usb: gadget: f_rndis: Refactor bind path to use __free()
  usb: gadget: f_ecm: Refactor bind path to use __free()
  usb: gadget: f_acm: Refactor bind path to use __free()
  usb: gadget: f_ncm: Refactor bind path to use __free()
  usb: gadget: Introduce free_usb_request helper
  usb: gadget: Store endpoint pointer in usb_request
  usb: host: xhci-rcar: Add Renesas RZ/G3E USB3 Host driver support
  usb: host: xhci-plat: Add .post_resume_quirk for struct xhci_plat_priv
  usb: host: xhci-rcar: Move R-Car reg definitions
  dt-bindings: usb: Document Renesas RZ/G3E USB3HOST
  usb: gadget: f_fs: Fix epfile null pointer access after ep enable.
  ...
Merged by Linus Torvalds, 2025-10-04 16:07:08 -07:00
commit c6006b8ca1
131 changed files with 4793 additions and 1226 deletions


@ -1890,6 +1890,11 @@ S: Reading
S: RG6 2NU
S: United Kingdom
N: Michael Jamet
E: michael.jamet@intel.com
D: Thunderbolt/USB4 driver maintainer
D: Thunderbolt/USB4 networking driver maintainer
N: Dave Jeffery
E: dhjeffery@gmail.com
D: SCSI hacks and IBM ServeRAID RAID driver maintenance


@ -0,0 +1,39 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/usb/intel,ixp4xx-udc.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Intel IXP4xx SoC USB Device Controller (UDC)
description: The IXP4xx SoCs has a full-speed USB Device
Controller with 16 endpoints and a built-in transceiver.
maintainers:
- Linus Walleij <linus.walleij@linaro.org>
properties:
compatible:
const: intel,ixp4xx-udc
reg:
maxItems: 1
interrupts:
maxItems: 1
required:
- compatible
- reg
- interrupts
additionalProperties: false
examples:
- |
#include <dt-bindings/interrupt-controller/irq.h>
usb@c800b000 {
compatible = "intel,ixp4xx-udc";
reg = <0xc800b000 0x1000>;
interrupts = <12 IRQ_TYPE_LEVEL_HIGH>;
};


@ -1,23 +0,0 @@
Tegra SOC USB controllers
The device node for a USB controller that is part of a Tegra
SOC is as described in the document "Open Firmware Recommended
Practice : Universal Serial Bus" with the following modifications
and additions :
Required properties :
- compatible : For Tegra20, must contain "nvidia,tegra20-ehci".
For Tegra30, must contain "nvidia,tegra30-ehci". Otherwise, must contain
"nvidia,<chip>-ehci" plus at least one of the above, where <chip> is
tegra114, tegra124, tegra132, or tegra210.
- nvidia,phy : phandle of the PHY that the controller is connected to.
- clocks : Must contain one entry, for the module clock.
See ../clocks/clock-bindings.txt for details.
- resets : Must contain an entry for each entry in reset-names.
See ../reset/reset.txt for details.
- reset-names : Must include the following entries:
- usb
Optional properties:
- nvidia,needs-double-reset : boolean is to be set for some of the Tegra20
USB ports, which need reset twice due to hardware issues.


@ -0,0 +1,87 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/usb/renesas,rzg3e-xhci.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Renesas RZ/G3E USB 3.2 Gen2 Host controller
maintainers:
- Biju Das <biju.das.jz@bp.renesas.com>
properties:
compatible:
const: renesas,r9a09g047-xhci
reg:
maxItems: 1
interrupts:
items:
- description: Logical OR of all interrupt signals.
- description: System management interrupt
- description: Host system error interrupt
- description: Power management event interrupt
- description: xHC interrupt
interrupt-names:
items:
- const: all
- const: smi
- const: hse
- const: pme
- const: xhc
clocks:
maxItems: 1
phys:
maxItems: 2
phy-names:
items:
- const: usb2-phy
- const: usb3-phy
power-domains:
maxItems: 1
resets:
maxItems: 1
required:
- compatible
- reg
- interrupts
- interrupt-names
- clocks
- power-domains
- resets
- phys
- phy-names
allOf:
- $ref: usb-xhci.yaml
additionalProperties: false
examples:
- |
#include <dt-bindings/clock/renesas,r9a09g047-cpg.h>
#include <dt-bindings/interrupt-controller/arm-gic.h>
usb@15850000 {
compatible = "renesas,r9a09g047-xhci";
reg = <0x15850000 0x10000>;
interrupts = <GIC_SPI 759 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 758 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 757 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 756 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 755 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "all", "smi", "hse", "pme", "xhc";
clocks = <&cpg CPG_MOD 0xaf>;
power-domains = <&cpg>;
resets = <&cpg 0xaa>;
phys = <&usb3_phy>, <&usb3_phy>;
phy-names = "usb2-phy", "usb3-phy";
};


@ -59,6 +59,12 @@ properties:
- renesas,usbhs-r8a77995 # R-Car D3
- const: renesas,rcar-gen3-usbhs
- const: renesas,usbhs-r9a09g077 # RZ/T2H
- items:
- const: renesas,usbhs-r9a09g087 # RZ/N2H
- const: renesas,usbhs-r9a09g077 # RZ/T2H
reg:
maxItems: 1
@ -141,9 +147,25 @@ allOf:
required:
- resets
else:
properties:
interrupts:
maxItems: 1
if:
properties:
compatible:
contains:
const: renesas,usbhs-r9a09g077
then:
properties:
resets: false
clocks:
maxItems: 1
interrupts:
items:
- description: USB function interrupt USB_FI
- description: USB function DMA0 transmit completion interrupt USB_FDMA0
- description: USB function DMA1 transmit completion interrupt USB_FDMA1
else:
properties:
interrupts:
maxItems: 1
additionalProperties: false


@ -1,22 +0,0 @@
Samsung S3C2410 and compatible SoC USB controller
OHCI
Required properties:
- compatible: should be "samsung,s3c2410-ohci" for USB host controller
- reg: address and length of the controller memory mapped region
- interrupts: interrupt number for the USB OHCI controller
- clocks: Should reference the bus and host clocks
- clock-names: Should contain two strings
"usb-bus-host" for the USB bus clock
"usb-host" for the USB host clock
Example:
usb0: ohci@49000000 {
compatible = "samsung,s3c2410-ohci";
reg = <0x49000000 0x100>;
interrupts = <0 0 26 3>;
clocks = <&clocks UCLK>, <&clocks HCLK_USBH>;
clock-names = "usb-bus-host", "usb-host";
};


@ -0,0 +1,121 @@
# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/usb/spacemit,k1-dwc3.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: SpacemiT K1 SuperSpeed DWC3 USB SoC Controller
maintainers:
- Ze Huang <huang.ze@linux.dev>
description: |
The SpacemiT K1 embeds a DWC3 USB IP Core which supports Host functions
for USB 3.0 and DRD for USB 2.0.
Key features:
- USB3.0 SuperSpeed and USB2.0 High/Full/Low-Speed support
- Supports low-power modes (USB2.0 suspend, USB3.0 U1/U2/U3)
- Internal DMA controller and flexible endpoint FIFO sizing
Communication Interface:
- Use of PIPE3 (125MHz) interface for USB3.0 PHY
- Use of UTMI+ (30/60MHz) interface for USB2.0 PHY
allOf:
- $ref: snps,dwc3-common.yaml#
properties:
compatible:
const: spacemit,k1-dwc3
reg:
maxItems: 1
clocks:
maxItems: 1
clock-names:
const: usbdrd30
interrupts:
maxItems: 1
phys:
items:
- description: phandle to USB2/HS PHY
- description: phandle to USB3/SS PHY
phy-names:
items:
- const: usb2-phy
- const: usb3-phy
resets:
items:
- description: USB3.0 AHB reset
- description: USB3.0 VCC reset
- description: USB3.0 PHY reset
reset-names:
items:
- const: ahb
- const: vcc
- const: phy
reset-delay:
$ref: /schemas/types.yaml#/definitions/uint32
default: 2
description: delay after reset sequence [us]
vbus-supply:
description: A phandle to the regulator supplying the VBUS voltage.
required:
- compatible
- reg
- clocks
- clock-names
- interrupts
- phys
- phy-names
- resets
- reset-names
unevaluatedProperties: false
examples:
- |
usb@c0a00000 {
compatible = "spacemit,k1-dwc3";
reg = <0xc0a00000 0x10000>;
clocks = <&syscon_apmu 16>;
clock-names = "usbdrd30";
interrupts = <125>;
phys = <&usb2phy>, <&usb3phy>;
phy-names = "usb2-phy", "usb3-phy";
resets = <&syscon_apmu 8>,
<&syscon_apmu 9>,
<&syscon_apmu 10>;
reset-names = "ahb", "vcc", "phy";
reset-delay = <2>;
vbus-supply = <&usb3_vbus>;
#address-cells = <1>;
#size-cells = <0>;
hub_2_0: hub@1 {
compatible = "usb2109,2817";
reg = <1>;
vdd-supply = <&usb3_vhub>;
peer-hub = <&hub_3_0>;
reset-gpios = <&gpio 3 28 1>;
};
hub_3_0: hub@2 {
compatible = "usb2109,817";
reg = <2>;
vdd-supply = <&usb3_vhub>;
peer-hub = <&hub_2_0>;
reset-gpios = <&gpio 3 28 1>;
};
};


@ -0,0 +1,74 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/usb/ti,twl4030-usb.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Texas Instruments TWL4030 USB PHY and Comparator
maintainers:
- Peter Ujfalusi <peter.ujfalusi@gmail.com>
description:
Bindings for the USB PHY and comparator module found within the
TWL4030 family of companion chips. If a sibling node is compatible with
"ti,twl4030-bci", the driver for that node will query this device for
USB power status.
properties:
compatible:
const: ti,twl4030-usb
interrupts:
minItems: 1
items:
- description: OTG interrupt number for ID events.
- description: USB interrupt number for VBUS events.
usb1v5-supply:
description: Phandle to the vusb1v5 regulator.
usb1v8-supply:
description: Phandle to the vusb1v8 regulator.
usb3v1-supply:
description: Phandle to the vusb3v1 regulator.
usb_mode:
description: |
The mode used by the PHY to connect to the controller:
1: ULPI mode
2: CEA2011_3PIN mode
$ref: /schemas/types.yaml#/definitions/uint32
enum: [1, 2]
'#phy-cells':
const: 0
required:
- compatible
- interrupts
- usb1v5-supply
- usb1v8-supply
- usb3v1-supply
- usb_mode
additionalProperties: false
examples:
- |
#include <dt-bindings/interrupt-controller/irq.h>
usb-phy {
compatible = "ti,twl4030-usb";
interrupts = <10 IRQ_TYPE_LEVEL_HIGH>;
interrupt-parent = <&gic>;
usb1v5-supply = <&reg_vusb1v5>;
usb1v8-supply = <&reg_vusb1v8>;
usb3v1-supply = <&reg_vusb3v1>;
usb_mode = <1>;
#phy-cells = <0>;
};


@ -0,0 +1,48 @@
# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
%YAML 1.2
---
$id: http://devicetree.org/schemas/usb/ti,twl6030-usb.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#
title: Texas Instruments TWL6030 USB Comparator
maintainers:
- Peter Ujfalusi <peter.ujfalusi@gmail.com>
description:
Bindings for the USB comparator module found within the TWL6030
family of companion chips.
properties:
compatible:
const: ti,twl6030-usb
interrupts:
items:
- description: OTG for ID events in host mode
- description: USB device mode for VBUS events
usb-supply:
description:
Phandle to the VUSB regulator. For TWL6030, this should be the 'vusb'
regulator. For TWL6032 subclass, it should be the 'ldousb' regulator.
required:
- compatible
- interrupts
- usb-supply
additionalProperties: false
examples:
- |
#include <dt-bindings/interrupt-controller/irq.h>
usb {
compatible = "ti,twl6030-usb";
interrupts = <4 IRQ_TYPE_LEVEL_HIGH>, <10 IRQ_TYPE_LEVEL_HIGH>;
interrupt-parent = <&gic>;
usb-supply = <&reg_vusb>;
};


@ -1,43 +0,0 @@
USB COMPARATOR OF TWL CHIPS
TWL6030 USB COMPARATOR
- compatible : Should be "ti,twl6030-usb"
- interrupts : Two interrupt numbers to the cpu should be specified. First
interrupt number is the otg interrupt number that raises ID interrupts when
the controller has to act as host and the second interrupt number is the
usb interrupt number that raises VBUS interrupts when the controller has to
act as device
- usb-supply : phandle to the regulator device tree node. It should be vusb
if it is twl6030 or ldousb if it is twl6032 subclass.
twl6030-usb {
compatible = "ti,twl6030-usb";
interrupts = < 4 10 >;
};
Board specific device node entry
&twl6030-usb {
usb-supply = <&vusb>;
};
TWL4030 USB PHY AND COMPARATOR
- compatible : Should be "ti,twl4030-usb"
- interrupts : The interrupt numbers to the cpu should be specified. First
interrupt number is the otg interrupt number that raises ID interrupts
and VBUS interrupts. The second interrupt number is optional.
- <supply-name>-supply : phandle to the regulator device tree node.
<supply-name> should be vusb1v5, vusb1v8 and vusb3v1
- usb_mode : The mode used by the phy to connect to the controller. "1"
specifies "ULPI" mode and "2" specifies "CEA2011_3PIN" mode.
If a sibling node is compatible "ti,twl4030-bci", then it will find
this device and query it for USB power status.
twl4030-usb {
compatible = "ti,twl4030-usb";
interrupts = < 10 4 >;
usb1v5-supply = <&vusb1v5>;
usb1v8-supply = <&vusb1v8>;
usb3v1-supply = <&vusb3v1>;
usb_mode = <1>;
};


@ -240,7 +240,6 @@ additionalProperties: false
required:
- compatible
- reg
examples:
- |
@ -269,3 +268,11 @@ examples:
swap-dx-lanes = <1 2>;
};
};
- |
#include <dt-bindings/gpio/gpio.h>
usb-hub {
/* I2C is not connected */
compatible = "microchip,usb2512b";
reset-gpios = <&porta 8 GPIO_ACTIVE_LOW>;
};


@ -12875,6 +12875,16 @@ S: Maintained
F: Documentation/admin-guide/pm/intel_uncore_frequency_scaling.rst
F: drivers/platform/x86/intel/uncore-frequency/
INTEL USBIO USB I/O EXPANDER DRIVERS
M: Israel Cepeda <israel.a.cepeda.lopez@intel.com>
M: Hans de Goede <hansg@kernel.org>
R: Sakari Ailus <sakari.ailus@linux.intel.com>
S: Maintained
F: drivers/gpio/gpio-usbio.c
F: drivers/i2c/busses/i2c-usbio.c
F: drivers/usb/misc/usbio.c
F: include/linux/usb/usbio.h
INTEL VENDOR SPECIFIC EXTENDED CAPABILITIES DRIVER
M: David E. Box <david.e.box@linux.intel.com>
S: Supported
@ -25492,7 +25502,6 @@ F: drivers/thunderbolt/dma_test.c
THUNDERBOLT DRIVER
M: Andreas Noever <andreas.noever@gmail.com>
M: Michael Jamet <michael.jamet@intel.com>
M: Mika Westerberg <westeri@kernel.org>
M: Yehezkel Bernat <YehezkelShB@gmail.com>
L: linux-usb@vger.kernel.org
@ -25503,7 +25512,6 @@ F: drivers/thunderbolt/
F: include/linux/thunderbolt.h
THUNDERBOLT NETWORK DRIVER
M: Michael Jamet <michael.jamet@intel.com>
M: Mika Westerberg <westeri@kernel.org>
M: Yehezkel Bernat <YehezkelShB@gmail.com>
L: netdev@vger.kernel.org


@ -1951,6 +1951,17 @@ config GPIO_MPSSE
GPIO driver for FTDI's MPSSE interface. These can do input and
output. Each MPSSE provides 16 IO pins.
config GPIO_USBIO
tristate "Intel USBIO GPIO support"
depends on USB_USBIO
default USB_USBIO
help
Select this option to enable GPIO driver for the INTEL
USBIO driver stack.
This driver can also be built as a module. If so, the module
will be called gpio_usbio.
endmenu
menu "Virtual GPIO drivers"


@ -194,6 +194,7 @@ obj-$(CONFIG_GPIO_TS5500) += gpio-ts5500.o
obj-$(CONFIG_GPIO_TWL4030) += gpio-twl4030.o
obj-$(CONFIG_GPIO_TWL6040) += gpio-twl6040.o
obj-$(CONFIG_GPIO_UNIPHIER) += gpio-uniphier.o
obj-$(CONFIG_GPIO_USBIO) += gpio-usbio.o
obj-$(CONFIG_GPIO_VF610) += gpio-vf610.o
obj-$(CONFIG_GPIO_VIPERBOARD) += gpio-viperboard.o
obj-$(CONFIG_GPIO_VIRTUSER) += gpio-virtuser.o

drivers/gpio/gpio-usbio.c (new file, 247 lines)

@ -0,0 +1,247 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (c) 2025 Intel Corporation.
* Copyright (c) 2025 Red Hat, Inc.
*/
#include <linux/acpi.h>
#include <linux/auxiliary_bus.h>
#include <linux/cleanup.h>
#include <linux/device.h>
#include <linux/gpio/driver.h>
#include <linux/mutex.h>
#include <linux/types.h>
#include <linux/usb/usbio.h>
struct usbio_gpio_bank {
u8 config[USBIO_GPIOSPERBANK];
u32 bitmap;
};
struct usbio_gpio {
struct mutex config_mutex; /* Protects banks[x].config */
struct usbio_gpio_bank banks[USBIO_MAX_GPIOBANKS];
struct gpio_chip gc;
struct auxiliary_device *adev;
};
static const struct acpi_device_id usbio_gpio_acpi_hids[] = {
{ "INTC1007" }, /* MTL */
{ "INTC10B2" }, /* ARL */
{ "INTC10B5" }, /* LNL */
{ "INTC10E2" }, /* PTL */
{ }
};
static void usbio_gpio_get_bank_and_pin(struct gpio_chip *gc, unsigned int offset,
struct usbio_gpio_bank **bank_ret,
unsigned int *pin_ret)
{
struct usbio_gpio *gpio = gpiochip_get_data(gc);
struct device *dev = &gpio->adev->dev;
struct usbio_gpio_bank *bank;
unsigned int pin;
bank = &gpio->banks[offset / USBIO_GPIOSPERBANK];
pin = offset % USBIO_GPIOSPERBANK;
if (~bank->bitmap & BIT(pin)) {
/* The FW bitmap sometimes is invalid, warn and continue */
dev_warn_once(dev, FW_BUG "GPIO %u is not in FW pins bitmap\n", offset);
}
*bank_ret = bank;
*pin_ret = pin;
}
static int usbio_gpio_get_direction(struct gpio_chip *gc, unsigned int offset)
{
struct usbio_gpio_bank *bank;
unsigned int pin;
u8 cfg;
usbio_gpio_get_bank_and_pin(gc, offset, &bank, &pin);
cfg = bank->config[pin] & USBIO_GPIO_PINMOD_MASK;
return (cfg == USBIO_GPIO_PINMOD_OUTPUT) ?
GPIO_LINE_DIRECTION_OUT : GPIO_LINE_DIRECTION_IN;
}
static int usbio_gpio_get(struct gpio_chip *gc, unsigned int offset)
{
struct usbio_gpio *gpio = gpiochip_get_data(gc);
struct usbio_gpio_bank *bank;
struct usbio_gpio_rw gbuf;
unsigned int pin;
int ret;
usbio_gpio_get_bank_and_pin(gc, offset, &bank, &pin);
gbuf.bankid = offset / USBIO_GPIOSPERBANK;
gbuf.pincount = 1;
gbuf.pin = pin;
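/* The READ request omits the value field (hence sizeof(gbuf) - sizeof(gbuf.value));
 * the device is expected to reply with the full structure, value filled in.
 */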
ret = usbio_control_msg(gpio->adev, USBIO_PKTTYPE_GPIO, USBIO_GPIOCMD_READ,
&gbuf, sizeof(gbuf) - sizeof(gbuf.value),
&gbuf, sizeof(gbuf));
if (ret != sizeof(gbuf))
return (ret < 0) ? ret : -EPROTO;
return (le32_to_cpu(gbuf.value) >> pin) & 1;
}
static int usbio_gpio_set(struct gpio_chip *gc, unsigned int offset, int value)
{
struct usbio_gpio *gpio = gpiochip_get_data(gc);
struct usbio_gpio_bank *bank;
struct usbio_gpio_rw gbuf;
unsigned int pin;
usbio_gpio_get_bank_and_pin(gc, offset, &bank, &pin);
gbuf.bankid = offset / USBIO_GPIOSPERBANK;
gbuf.pincount = 1;
gbuf.pin = pin;
gbuf.value = cpu_to_le32(value << pin);
return usbio_control_msg(gpio->adev, USBIO_PKTTYPE_GPIO, USBIO_GPIOCMD_WRITE,
&gbuf, sizeof(gbuf), NULL, 0);
}
static int usbio_gpio_update_config(struct gpio_chip *gc, unsigned int offset,
u8 mask, u8 value)
{
struct usbio_gpio *gpio = gpiochip_get_data(gc);
struct usbio_gpio_bank *bank;
struct usbio_gpio_init gbuf;
unsigned int pin;
usbio_gpio_get_bank_and_pin(gc, offset, &bank, &pin);
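/* Scoped lock: config_mutex is dropped automatically on every return path below */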
guard(mutex)(&gpio->config_mutex);
bank->config[pin] &= ~mask;
bank->config[pin] |= value;
gbuf.bankid = offset / USBIO_GPIOSPERBANK;
gbuf.config = bank->config[pin];
gbuf.pincount = 1;
gbuf.pin = pin;
return usbio_control_msg(gpio->adev, USBIO_PKTTYPE_GPIO, USBIO_GPIOCMD_INIT,
&gbuf, sizeof(gbuf), NULL, 0);
}
static int usbio_gpio_direction_input(struct gpio_chip *gc, unsigned int offset)
{
return usbio_gpio_update_config(gc, offset, USBIO_GPIO_PINMOD_MASK,
USBIO_GPIO_SET_PINMOD(USBIO_GPIO_PINMOD_INPUT));
}
static int usbio_gpio_direction_output(struct gpio_chip *gc,
unsigned int offset, int value)
{
int ret;
ret = usbio_gpio_update_config(gc, offset, USBIO_GPIO_PINMOD_MASK,
USBIO_GPIO_SET_PINMOD(USBIO_GPIO_PINMOD_OUTPUT));
if (ret)
return ret;
return usbio_gpio_set(gc, offset, value);
}
static int usbio_gpio_set_config(struct gpio_chip *gc, unsigned int offset,
unsigned long config)
{
u8 value;
switch (pinconf_to_config_param(config)) {
case PIN_CONFIG_BIAS_PULL_PIN_DEFAULT:
value = USBIO_GPIO_SET_PINCFG(USBIO_GPIO_PINCFG_DEFAULT);
break;
case PIN_CONFIG_BIAS_PULL_UP:
value = USBIO_GPIO_SET_PINCFG(USBIO_GPIO_PINCFG_PULLUP);
break;
case PIN_CONFIG_BIAS_PULL_DOWN:
value = USBIO_GPIO_SET_PINCFG(USBIO_GPIO_PINCFG_PULLDOWN);
break;
case PIN_CONFIG_DRIVE_PUSH_PULL:
value = USBIO_GPIO_SET_PINCFG(USBIO_GPIO_PINCFG_PUSHPULL);
break;
default:
return -ENOTSUPP;
}
return usbio_gpio_update_config(gc, offset, USBIO_GPIO_PINCFG_MASK, value);
}
static int usbio_gpio_probe(struct auxiliary_device *adev,
const struct auxiliary_device_id *adev_id)
{
struct usbio_gpio_bank_desc *bank_desc;
struct device *dev = &adev->dev;
struct usbio_gpio *gpio;
int bank, ret;
bank_desc = dev_get_platdata(dev);
if (!bank_desc)
return -EINVAL;
gpio = devm_kzalloc(dev, sizeof(*gpio), GFP_KERNEL);
if (!gpio)
return -ENOMEM;
ret = devm_mutex_init(dev, &gpio->config_mutex);
if (ret)
return ret;
gpio->adev = adev;
usbio_acpi_bind(gpio->adev, usbio_gpio_acpi_hids);
for (bank = 0; bank < USBIO_MAX_GPIOBANKS && bank_desc[bank].bmap; bank++)
gpio->banks[bank].bitmap = le32_to_cpu(bank_desc[bank].bmap);
gpio->gc.label = ACPI_COMPANION(dev) ?
acpi_dev_name(ACPI_COMPANION(dev)) : dev_name(dev);
gpio->gc.parent = dev;
gpio->gc.owner = THIS_MODULE;
gpio->gc.get_direction = usbio_gpio_get_direction;
gpio->gc.direction_input = usbio_gpio_direction_input;
gpio->gc.direction_output = usbio_gpio_direction_output;
gpio->gc.get = usbio_gpio_get;
gpio->gc.set = usbio_gpio_set;
gpio->gc.set_config = usbio_gpio_set_config;
gpio->gc.base = -1;
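/* 'bank' is left by the loop above at the number of populated banks */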
gpio->gc.ngpio = bank * USBIO_GPIOSPERBANK;
gpio->gc.can_sleep = true;
ret = devm_gpiochip_add_data(dev, &gpio->gc, gpio);
if (ret)
return ret;
if (has_acpi_companion(dev))
acpi_dev_clear_dependencies(ACPI_COMPANION(dev));
return 0;
}
static const struct auxiliary_device_id usbio_gpio_id_table[] = {
{ "usbio.usbio-gpio" },
{ }
};
MODULE_DEVICE_TABLE(auxiliary, usbio_gpio_id_table);
static struct auxiliary_driver usbio_gpio_driver = {
.name = USBIO_GPIO_CLIENT,
.probe = usbio_gpio_probe,
.id_table = usbio_gpio_id_table
};
module_auxiliary_driver(usbio_gpio_driver);
MODULE_DESCRIPTION("Intel USBIO GPIO driver");
MODULE_AUTHOR("Israel Cepeda <israel.a.cepeda.lopez@intel.com>");
MODULE_AUTHOR("Hans de Goede <hansg@kernel.org>");
MODULE_LICENSE("GPL");
MODULE_IMPORT_NS("USBIO");


@ -1368,6 +1368,17 @@ config I2C_NCT6694
This driver can also be built as a module. If so, the module will
be called i2c-nct6694.
config I2C_USBIO
tristate "Intel USBIO I2C Adapter support"
depends on USB_USBIO
default USB_USBIO
help
Select this option to enable I2C driver for the INTEL
USBIO driver stack.
This driver can also be built as a module. If so, the module
will be called i2c_usbio.
config I2C_CP2615
tristate "Silicon Labs CP2615 USB sound card and I2C adapter"
depends on USB


@ -136,6 +136,7 @@ obj-$(CONFIG_I2C_DIOLAN_U2C) += i2c-diolan-u2c.o
obj-$(CONFIG_I2C_DLN2) += i2c-dln2.o
obj-$(CONFIG_I2C_LJCA) += i2c-ljca.o
obj-$(CONFIG_I2C_NCT6694) += i2c-nct6694.o
obj-$(CONFIG_I2C_USBIO) += i2c-usbio.o
obj-$(CONFIG_I2C_CP2615) += i2c-cp2615.o
obj-$(CONFIG_I2C_PARPORT) += i2c-parport.o
obj-$(CONFIG_I2C_PCI1XXXX) += i2c-mchp-pci1xxxx.o


@ -0,0 +1,320 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (c) 2025 Intel Corporation.
* Copyright (c) 2025 Red Hat, Inc.
*/
#include <linux/auxiliary_bus.h>
#include <linux/dev_printk.h>
#include <linux/device.h>
#include <linux/i2c.h>
#include <linux/types.h>
#include <linux/usb/usbio.h>
#define I2C_RW_OVERHEAD (sizeof(struct usbio_bulk_packet) + sizeof(struct usbio_i2c_rw))
struct usbio_i2c {
struct i2c_adapter adap;
struct auxiliary_device *adev;
struct usbio_i2c_rw *rwbuf;
unsigned long quirks;
u32 speed;
u16 txbuf_len;
u16 rxbuf_len;
};
static const struct acpi_device_id usbio_i2c_acpi_hids[] = {
{ "INTC1008" }, /* MTL */
{ "INTC10B3" }, /* ARL */
{ "INTC10B6" }, /* LNL */
{ "INTC10E3" }, /* PTL */
{ }
};
static const u32 usbio_i2c_speeds[] = {
I2C_MAX_STANDARD_MODE_FREQ,
I2C_MAX_FAST_MODE_FREQ,
I2C_MAX_FAST_MODE_PLUS_FREQ,
I2C_MAX_HIGH_SPEED_MODE_FREQ
};
static void usbio_i2c_uninit(struct i2c_adapter *adap, struct i2c_msg *msg)
{
struct usbio_i2c *i2c = i2c_get_adapdata(adap);
struct usbio_i2c_uninit ubuf;
ubuf.busid = i2c->adev->id;
ubuf.config = cpu_to_le16(msg->addr);
usbio_bulk_msg(i2c->adev, USBIO_PKTTYPE_I2C, USBIO_I2CCMD_UNINIT, true,
&ubuf, sizeof(ubuf), NULL, 0);
}
static int usbio_i2c_init(struct i2c_adapter *adap, struct i2c_msg *msg)
{
struct usbio_i2c *i2c = i2c_get_adapdata(adap);
struct usbio_i2c_init ibuf;
void *reply_buf;
u16 reply_len;
int ret;
ibuf.busid = i2c->adev->id;
ibuf.config = cpu_to_le16(msg->addr);
ibuf.speed = cpu_to_le32(i2c->speed);
if (i2c->quirks & USBIO_QUIRK_I2C_NO_INIT_ACK) {
reply_buf = NULL;
reply_len = 0;
} else {
reply_buf = &ibuf;
reply_len = sizeof(ibuf);
}
ret = usbio_bulk_msg(i2c->adev, USBIO_PKTTYPE_I2C, USBIO_I2CCMD_INIT, true,
&ibuf, sizeof(ibuf), reply_buf, reply_len);
if (ret != sizeof(ibuf))
return (ret < 0) ? ret : -EIO;
return 0;
}
static int usbio_i2c_read(struct i2c_adapter *adap, struct i2c_msg *msg)
{
struct usbio_i2c *i2c = i2c_get_adapdata(adap);
u16 rxchunk = i2c->rxbuf_len - I2C_RW_OVERHEAD;
struct usbio_i2c_rw *rbuf = i2c->rwbuf;
int ret;
rbuf->busid = i2c->adev->id;
rbuf->config = cpu_to_le16(msg->addr);
rbuf->size = cpu_to_le16(msg->len);
if (msg->len > rxchunk) {
/* Need to split the input buffer */
u16 len = 0;
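/* Only the first pass sends the READ request (non-zero request length);
 * later passes just pull the remaining reply data in rxchunk-sized pieces.
 */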
do {
if (msg->len - len < rxchunk)
rxchunk = msg->len - len;
ret = usbio_bulk_msg(i2c->adev, USBIO_PKTTYPE_I2C,
USBIO_I2CCMD_READ, true,
rbuf, len == 0 ? sizeof(*rbuf) : 0,
rbuf, sizeof(*rbuf) + rxchunk);
if (ret < 0)
return ret;
memcpy(&msg->buf[len], rbuf->data, rxchunk);
len += rxchunk;
} while (msg->len > len);
return 0;
}
ret = usbio_bulk_msg(i2c->adev, USBIO_PKTTYPE_I2C, USBIO_I2CCMD_READ, true,
rbuf, sizeof(*rbuf), rbuf, sizeof(*rbuf) + msg->len);
if (ret != sizeof(*rbuf) + msg->len)
return (ret < 0) ? ret : -EIO;
memcpy(msg->buf, rbuf->data, msg->len);
return 0;
}
static int usbio_i2c_write(struct i2c_adapter *adap, struct i2c_msg *msg)
{
struct usbio_i2c *i2c = i2c_get_adapdata(adap);
u16 txchunk = i2c->txbuf_len - I2C_RW_OVERHEAD;
struct usbio_i2c_rw *wbuf = i2c->rwbuf;
int ret;
if (msg->len > txchunk) {
/* Need to split the output buffer */
u16 len = 0;
do {
wbuf->busid = i2c->adev->id;
wbuf->config = cpu_to_le16(msg->addr);
if (i2c->quirks & USBIO_QUIRK_I2C_USE_CHUNK_LEN)
wbuf->size = cpu_to_le16(txchunk);
else
wbuf->size = cpu_to_le16(msg->len);
memcpy(wbuf->data, &msg->buf[len], txchunk);
len += txchunk;
ret = usbio_bulk_msg(i2c->adev, USBIO_PKTTYPE_I2C,
USBIO_I2CCMD_WRITE, msg->len == len,
wbuf, sizeof(*wbuf) + txchunk,
wbuf, sizeof(*wbuf));
if (ret < 0)
return ret;
if (msg->len - len < txchunk)
txchunk = msg->len - len;
} while (msg->len > len);
return 0;
}
wbuf->busid = i2c->adev->id;
wbuf->config = cpu_to_le16(msg->addr);
wbuf->size = cpu_to_le16(msg->len);
memcpy(wbuf->data, msg->buf, msg->len);
ret = usbio_bulk_msg(i2c->adev, USBIO_PKTTYPE_I2C, USBIO_I2CCMD_WRITE, true,
wbuf, sizeof(*wbuf) + msg->len, wbuf, sizeof(*wbuf));
if (ret != sizeof(*wbuf) || le16_to_cpu(wbuf->size) != msg->len)
return (ret < 0) ? ret : -EIO;
return 0;
}
static int usbio_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs, int num)
{
struct usbio_i2c *i2c = i2c_get_adapdata(adap);
int ret;
usbio_acquire(i2c->adev);
ret = usbio_i2c_init(adap, msgs);
if (ret)
goto out_release;
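/* The loop increment keeps ret at the number of messages completed so far
 * (ret = ++i), so a fully successful transfer returns num as .master_xfer
 * expects; a failed message breaks out with ret holding the negative errno.
 */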
for (int i = 0; i < num; ret = ++i) {
if (msgs[i].flags & I2C_M_RD)
ret = usbio_i2c_read(adap, &msgs[i]);
else
ret = usbio_i2c_write(adap, &msgs[i]);
if (ret)
break;
}
usbio_i2c_uninit(adap, msgs);
out_release:
usbio_release(i2c->adev);
return ret;
}
static u32 usbio_i2c_func(struct i2c_adapter *adap)
{
return I2C_FUNC_I2C | I2C_FUNC_SMBUS_EMUL;
}
static const struct i2c_adapter_quirks usbio_i2c_quirks = {
.flags = I2C_AQ_NO_ZERO_LEN | I2C_AQ_NO_REP_START,
.max_read_len = SZ_4K,
.max_write_len = SZ_4K,
};
static const struct i2c_adapter_quirks usbio_i2c_quirks_max_rw_len52 = {
.flags = I2C_AQ_NO_ZERO_LEN | I2C_AQ_NO_REP_START,
.max_read_len = 52,
.max_write_len = 52,
};
static const struct i2c_algorithm usbio_i2c_algo = {
.master_xfer = usbio_i2c_xfer,
.functionality = usbio_i2c_func,
};
static int usbio_i2c_probe(struct auxiliary_device *adev,
const struct auxiliary_device_id *adev_id)
{
struct usbio_i2c_bus_desc *i2c_desc;
struct device *dev = &adev->dev;
struct usbio_i2c *i2c;
u32 max_speed;
int ret;
i2c_desc = dev_get_platdata(dev);
if (!i2c_desc)
return -EINVAL;
i2c = devm_kzalloc(dev, sizeof(*i2c), GFP_KERNEL);
if (!i2c)
return -ENOMEM;
i2c->adev = adev;
usbio_acpi_bind(i2c->adev, usbio_i2c_acpi_hids);
usbio_get_txrxbuf_len(i2c->adev, &i2c->txbuf_len, &i2c->rxbuf_len);
i2c->rwbuf = devm_kzalloc(dev, max(i2c->txbuf_len, i2c->rxbuf_len), GFP_KERNEL);
if (!i2c->rwbuf)
return -ENOMEM;
i2c->quirks = usbio_get_quirks(i2c->adev);
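/* The bus-mode capability bits index the speed table above; the
 * ALLOW_400KHZ quirk lifts the reported cap to Fast mode.
 */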
max_speed = usbio_i2c_speeds[i2c_desc->caps & USBIO_I2C_BUS_MODE_CAP_MASK];
if (max_speed < I2C_MAX_FAST_MODE_FREQ &&
(i2c->quirks & USBIO_QUIRK_I2C_ALLOW_400KHZ))
max_speed = I2C_MAX_FAST_MODE_FREQ;
i2c->speed = i2c_acpi_find_bus_speed(dev);
if (!i2c->speed)
i2c->speed = I2C_MAX_STANDARD_MODE_FREQ;
else if (i2c->speed > max_speed) {
dev_warn(dev, "Invalid speed %u adjusting to bus max %u\n",
i2c->speed, max_speed);
i2c->speed = max_speed;
}
i2c->adap.owner = THIS_MODULE;
i2c->adap.class = I2C_CLASS_HWMON;
i2c->adap.dev.parent = dev;
i2c->adap.algo = &usbio_i2c_algo;
if (i2c->quirks & USBIO_QUIRK_I2C_MAX_RW_LEN_52)
i2c->adap.quirks = &usbio_i2c_quirks_max_rw_len52;
else
i2c->adap.quirks = &usbio_i2c_quirks;
snprintf(i2c->adap.name, sizeof(i2c->adap.name), "%s.%d",
USBIO_I2C_CLIENT, i2c->adev->id);
device_set_node(&i2c->adap.dev, dev_fwnode(&adev->dev));
auxiliary_set_drvdata(adev, i2c);
i2c_set_adapdata(&i2c->adap, i2c);
ret = i2c_add_adapter(&i2c->adap);
if (ret)
return ret;
if (has_acpi_companion(&i2c->adap.dev))
acpi_dev_clear_dependencies(ACPI_COMPANION(&i2c->adap.dev));
return 0;
}
static void usbio_i2c_remove(struct auxiliary_device *adev)
{
struct usbio_i2c *i2c = auxiliary_get_drvdata(adev);
i2c_del_adapter(&i2c->adap);
}
static const struct auxiliary_device_id usbio_i2c_id_table[] = {
{ "usbio.usbio-i2c" },
{ }
};
MODULE_DEVICE_TABLE(auxiliary, usbio_i2c_id_table);
static struct auxiliary_driver usbio_i2c_driver = {
.name = USBIO_I2C_CLIENT,
.probe = usbio_i2c_probe,
.remove = usbio_i2c_remove,
.id_table = usbio_i2c_id_table
};
module_auxiliary_driver(usbio_i2c_driver);
MODULE_DESCRIPTION("Intel USBIO I2C driver");
MODULE_AUTHOR("Israel Cepeda <israel.a.cepeda.lopez@intel.com>");
MODULE_AUTHOR("Hans de Goede <hansg@kernel.org>");
MODULE_LICENSE("GPL");
MODULE_IMPORT_NS("USBIO");


@ -538,7 +538,7 @@ static int uvc_parse_streaming(struct uvc_device *dev,
unsigned int nformats = 0, nframes = 0, nintervals = 0;
unsigned int size, i, n, p;
u32 *interval;
u16 psize;
u32 psize;
int ret = -EINVAL;
if (intf->cur_altsetting->desc.bInterfaceSubClass
@ -774,7 +774,7 @@ static int uvc_parse_streaming(struct uvc_device *dev,
streaming->header.bEndpointAddress);
if (ep == NULL)
continue;
psize = uvc_endpoint_max_bpi(dev->udev, ep);
psize = usb_endpoint_max_periodic_payload(dev->udev, ep);
if (psize > streaming->maxpsize)
streaming->maxpsize = psize;
}


@ -1869,24 +1869,6 @@ static void uvc_video_stop_transfer(struct uvc_streaming *stream,
uvc_free_urb_buffers(stream);
}
/*
* Compute the maximum number of bytes per interval for an endpoint.
*/
u16 uvc_endpoint_max_bpi(struct usb_device *dev, struct usb_host_endpoint *ep)
{
u16 psize;
switch (dev->speed) {
case USB_SPEED_SUPER:
case USB_SPEED_SUPER_PLUS:
return le16_to_cpu(ep->ss_ep_comp.wBytesPerInterval);
default:
psize = usb_endpoint_maxp(&ep->desc);
psize *= usb_endpoint_maxp_mult(&ep->desc);
return psize;
}
}
/*
* Initialize isochronous URBs and allocate transfer buffers. The packet size
* is given by the endpoint.
@ -1897,10 +1879,10 @@ static int uvc_init_video_isoc(struct uvc_streaming *stream,
struct urb *urb;
struct uvc_urb *uvc_urb;
unsigned int npackets, i;
u16 psize;
u32 psize;
u32 size;
psize = uvc_endpoint_max_bpi(stream->dev->udev, ep);
psize = usb_endpoint_max_periodic_payload(stream->dev->udev, ep);
size = stream->ctrl.dwMaxVideoFrameSize;
npackets = uvc_alloc_urb_buffers(stream, size, psize, gfp_flags);
@ -2043,7 +2025,7 @@ static int uvc_video_start_transfer(struct uvc_streaming *stream,
continue;
/* Check if the bandwidth is high enough. */
psize = uvc_endpoint_max_bpi(stream->dev->udev, ep);
psize = usb_endpoint_max_periodic_payload(stream->dev->udev, ep);
if (psize >= bandwidth && psize < best_psize) {
altsetting = alts->desc.bAlternateSetting;
best_psize = psize;


@ -458,7 +458,7 @@ struct uvc_streaming {
struct usb_interface *intf;
int intfnum;
u16 maxpsize;
u32 maxpsize;
struct uvc_streaming_header header;
enum v4l2_buf_type type;
@ -797,8 +797,6 @@ void uvc_ctrl_cleanup_fh(struct uvc_fh *handle);
/* Utility functions */
struct usb_host_endpoint *uvc_find_endpoint(struct usb_host_interface *alts,
u8 epaddr);
u16 uvc_endpoint_max_bpi(struct usb_device *dev, struct usb_host_endpoint *ep);
/* Quirks support */
void uvc_video_decode_isight(struct uvc_urb *uvc_urb,
struct uvc_buffer *buf,


@ -3829,7 +3829,7 @@ DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MELLANOX, 0xcf80, quirk_no_pm_reset);
*/
static void quirk_thunderbolt_hotplug_msi(struct pci_dev *pdev)
{
if (pdev->is_hotplug_bridge &&
if (pdev->is_pciehp &&
(pdev->device != PCI_DEVICE_ID_INTEL_CACTUS_RIDGE_4C ||
pdev->revision <= 1))
pdev->no_msi = 1;


@ -4,8 +4,8 @@ menuconfig USB4
depends on PCI
select APPLE_PROPERTIES if EFI_STUB && X86
select CRC32
select CRYPTO
select CRYPTO_HASH
select CRYPTO_LIB_SHA256
select CRYPTO_LIB_UTILS
select NVMEM
help
USB4 and Thunderbolt driver. USB4 is the public specification


@ -86,7 +86,7 @@ static acpi_status tb_acpi_add_link(acpi_handle handle, u32 level, void *data,
* @nhi ACPI node. For each reference a device link is added. The link
* is automatically removed by the driver core.
*
* Returns %true if at least one link was created.
* Returns %true if at least one link was created, %false otherwise.
*/
bool tb_acpi_add_links(struct tb_nhi *nhi)
{
@ -113,8 +113,10 @@ bool tb_acpi_add_links(struct tb_nhi *nhi)
/**
* tb_acpi_is_native() - Did the platform grant native TBT/USB4 control
*
* Returns %true if the platform granted OS native control over
* TBT/USB4. In this case software based connection manager can be used,
* Return: %true if the platform granted OS native control over
* TBT/USB4, %false otherwise.
*
* When returned %true, software based connection manager can be used,
* otherwise there is firmware based connection manager running.
*/
bool tb_acpi_is_native(void)
@ -126,8 +128,8 @@ bool tb_acpi_is_native(void)
/**
* tb_acpi_may_tunnel_usb3() - Is USB3 tunneling allowed by the platform
*
* When software based connection manager is used, this function
* returns %true if platform allows native USB3 tunneling.
* Return: %true if software based connection manager is used and
* platform allows native USB 3.x tunneling, %false otherwise.
*/
bool tb_acpi_may_tunnel_usb3(void)
{
@ -139,8 +141,8 @@ bool tb_acpi_may_tunnel_usb3(void)
/**
* tb_acpi_may_tunnel_dp() - Is DisplayPort tunneling allowed by the platform
*
* When software based connection manager is used, this function
* returns %true if platform allows native DP tunneling.
* Return: %true if software based connection manager is used and
* platform allows native DP tunneling, %false otherwise.
*/
bool tb_acpi_may_tunnel_dp(void)
{
@ -152,8 +154,8 @@ bool tb_acpi_may_tunnel_dp(void)
/**
* tb_acpi_may_tunnel_pcie() - Is PCIe tunneling allowed by the platform
*
* When software based connection manager is used, this function
* returns %true if platform allows native PCIe tunneling.
* Return: %true if software based connection manager is used and
* platform allows native PCIe tunneling, %false otherwise.
*/
bool tb_acpi_may_tunnel_pcie(void)
{
@ -165,8 +167,8 @@ bool tb_acpi_may_tunnel_pcie(void)
/**
* tb_acpi_is_xdomain_allowed() - Are XDomain connections allowed
*
* When software based connection manager is used, this function
* returns %true if platform allows XDomain connections.
* Return: %true if software based connection manager is used and
* platform allows XDomain tunneling, %false otherwise.
*/
bool tb_acpi_is_xdomain_allowed(void)
{
@ -256,7 +258,7 @@ static int tb_acpi_retimer_set_power(struct tb_port *port, bool power)
*
* This should only be called if the USB4/TBT link is not up.
*
* Returns %0 on success.
* Return: %0 on success, negative errno otherwise.
*/
int tb_acpi_power_on_retimers(struct tb_port *port)
{
@ -270,7 +272,7 @@ int tb_acpi_power_on_retimers(struct tb_port *port)
* This is the opposite of tb_acpi_power_on_retimers(). After returning
* successfully the normal operations with the @port can continue.
*
* Returns %0 on success.
* Return: %0 on success, negative errno otherwise.
*/
int tb_acpi_power_off_retimers(struct tb_port *port)
{


@ -64,10 +64,14 @@ static void tb_port_dummy_read(struct tb_port *port)
* @port: Port to find the capability for
* @offset: Previous capability offset (%0 for start)
*
* Returns dword offset of the next capability in port config space
* capability list and returns it. Passing %0 returns the first entry in
* the capability list. If no next capability is found returns %0. In case
* of failure returns negative errno.
* Finds dword offset of the next capability in port config space
* capability list. When passed %0 in @offset parameter, first entry
* will be returned, if it exists.
*
* Return:
* * Double word offset of the first or next capability - On success.
* * %0 - If no next capability is found.
* * Negative errno - Another error occurred.
*/
int tb_port_next_cap(struct tb_port *port, unsigned int offset)
{
@ -112,9 +116,10 @@ static int __tb_port_find_cap(struct tb_port *port, enum tb_port_cap cap)
* @port: Port to find the capability for
* @cap: Capability to look
*
* Returns offset to start of capability or %-ENOENT if no such
* capability was found. Negative errno is returned if there was an
* error.
* Return:
* * Offset to the start of capability - On success.
* * %-ENOENT - If no such capability was found.
* * Negative errno - Another error occurred.
*/
int tb_port_find_cap(struct tb_port *port, enum tb_port_cap cap)
{
@ -137,10 +142,14 @@ int tb_port_find_cap(struct tb_port *port, enum tb_port_cap cap)
* @sw: Switch to find the capability for
* @offset: Previous capability offset (%0 for start)
*
* Finds dword offset of the next capability in router config space
* capability list and returns it. Passing %0 returns the first entry in
* the capability list. If no next capability is found returns %0. In case
* of failure returns negative errno.
* Finds dword offset of the next capability in port config space
* capability list. When passed %0 in @offset parameter, first entry
* will be returned, if it exists.
*
* Return:
* * Double word offset of the first or next capability - On success.
* * %0 - If no next capability is found.
* * Negative errno - Another error occurred.
*/
int tb_switch_next_cap(struct tb_switch *sw, unsigned int offset)
{
@ -181,9 +190,10 @@ int tb_switch_next_cap(struct tb_switch *sw, unsigned int offset)
* @sw: Switch to find the capability for
* @cap: Capability to look
*
* Returns offset to start of capability or %-ENOENT if no such
* capability was found. Negative errno is returned if there was an
* error.
* Return:
* * Offset to the start of capability - On success.
* * %-ENOENT - If no such capability was found.
* * Negative errno - Another error occurred.
*/
int tb_switch_find_cap(struct tb_switch *sw, enum tb_switch_cap cap)
{
@ -213,10 +223,13 @@ int tb_switch_find_cap(struct tb_switch *sw, enum tb_switch_cap cap)
* @sw: Switch to find the capability for
* @vsec: Vendor specific capability to look
*
* Functions enumerates vendor specific capabilities (VSEC) of a switch
* and returns offset when capability matching @vsec is found. If no
* such capability is found returns %-ENOENT. In case of error returns
* negative errno.
* This function enumerates vendor specific capabilities (VSEC) of a
* switch and returns offset when capability matching @vsec is found.
*
* Return:
* * Offset of capability - On success.
* * %-ENOENT - If capability was not found.
* * Negative errno - Another error occurred.
*/
int tb_switch_find_vse_cap(struct tb_switch *sw, enum tb_switch_vse_cap vsec)
{


@ -167,7 +167,8 @@ static int tb_port_clx(struct tb_port *port)
* @port: USB4 port to check
* @clx: Mask of CL states to check
*
* Returns true if any of the given CL states is enabled for @port.
* Return: %true if any of the given CL states is enabled for @port,
* %false otherwise.
*/
bool tb_port_clx_is_enabled(struct tb_port *port, unsigned int clx)
{
@ -177,6 +178,8 @@ bool tb_port_clx_is_enabled(struct tb_port *port, unsigned int clx)
/**
* tb_switch_clx_is_supported() - Is CLx supported on this type of router
* @sw: The router to check CLx support for
*
* Return: %true if CLx is supported, %false otherwise.
*/
static bool tb_switch_clx_is_supported(const struct tb_switch *sw)
{
@ -203,7 +206,7 @@ static bool tb_switch_clx_is_supported(const struct tb_switch *sw)
* Can be called for any router. Initializes the current CL state by
* reading it from the hardware.
*
* Returns %0 in case of success and negative errno in case of failure.
* Return: %0 on success, negative errno otherwise.
*/
int tb_switch_clx_init(struct tb_switch *sw)
{
@ -313,7 +316,7 @@ static bool validate_mask(unsigned int clx)
* is not inter-domain link. The complete set of conditions is described in CM
* Guide 1.0 section 8.1.
*
* Returns %0 on success or an error code on failure.
* Return: %0 on success, negative errno otherwise.
*/
int tb_switch_clx_enable(struct tb_switch *sw, unsigned int clx)
{
@ -390,8 +393,7 @@ int tb_switch_clx_enable(struct tb_switch *sw, unsigned int clx)
* Disables all CL states of the given router. Can be called on any
* router and if the states were not enabled already does nothing.
*
* Returns the CL states that were disabled or negative errno in case of
* failure.
* Return: CL states that were disabled or negative errno otherwise.
*/
int tb_switch_clx_disable(struct tb_switch *sw)
{


@ -82,6 +82,8 @@ static DEFINE_MUTEX(tb_cfg_request_lock);
*
* This is refcounted object so when you are done with this, call
* tb_cfg_request_put() to it.
*
* Return: &struct tb_cfg_request on success, %NULL otherwise.
*/
struct tb_cfg_request *tb_cfg_request_alloc(void)
{
@ -359,7 +361,7 @@ static void tb_ctl_tx_callback(struct tb_ring *ring, struct ring_frame *frame,
*
* len must be a multiple of four.
*
* Return: Returns 0 on success or an error code on failure.
* Return: %0 on success, negative errno otherwise.
*/
static int tb_ctl_tx(struct tb_ctl *ctl, const void *data, size_t len,
enum tb_cfg_pkg_type type)
@ -539,6 +541,8 @@ static void tb_cfg_request_work(struct work_struct *work)
*
* This queues @req on the given control channel without waiting for it
* to complete. When the request completes @callback is called.
*
* Return: %0 on success, negative errno otherwise.
*/
int tb_cfg_request(struct tb_ctl *ctl, struct tb_cfg_request *req,
void (*callback)(void *), void *callback_data)
@ -605,6 +609,9 @@ static void tb_cfg_request_complete(void *data)
* triggers the request is canceled before function returns. Note the
* caller needs to make sure only one message for given switch is active
* at a time.
*
* Return: &struct tb_cfg_result with non-zero @err field if error
* has occurred.
*/
struct tb_cfg_result tb_cfg_request_sync(struct tb_ctl *ctl,
struct tb_cfg_request *req,
@ -641,7 +648,7 @@ struct tb_cfg_result tb_cfg_request_sync(struct tb_ctl *ctl,
*
* cb will be invoked once for every hot plug event.
*
* Return: Returns a pointer on success or NULL on failure.
* Return: Pointer to &struct tb_ctl, %NULL on failure.
*/
struct tb_ctl *tb_ctl_alloc(struct tb_nhi *nhi, int index, int timeout_msec,
event_cb cb, void *cb_data)
@ -764,8 +771,9 @@ void tb_ctl_stop(struct tb_ctl *ctl)
* @route: Router that originated the event
* @error: Pointer to the notification package
*
* Call this as response for non-plug notification to ack it. Returns
* %0 on success or an error code on failure.
* Call this as a response for non-plug notification to ack it.
*
* Return: %0 on success, negative errno otherwise.
*/
int tb_cfg_ack_notification(struct tb_ctl *ctl, u64 route,
const struct cfg_error_pkg *error)
@ -827,8 +835,9 @@ int tb_cfg_ack_notification(struct tb_ctl *ctl, u64 route,
* @port: Port where the hot plug/unplug happened
* @unplug: Ack hot plug or unplug
*
* Call this as response for hot plug/unplug event to ack it.
* Returns %0 on success or an error code on failure.
* Call this as a response for hot plug/unplug event to ack it.
*
* Return: %0 on success, negative errno otherwise.
*/
int tb_cfg_ack_plug(struct tb_ctl *ctl, u64 route, u32 port, bool unplug)
{
@ -895,6 +904,9 @@ static bool tb_cfg_copy(struct tb_cfg_request *req, const struct ctl_pkg *pkg)
* If the switch at route is incorrectly configured then we will not receive a
* reply (even though the switch will reset). The caller should check for
* -ETIMEDOUT and attempt to reconfigure the switch.
*
* Return: &struct tb_cfg_result with non-zero @err field if error
* has occurred.
*/
struct tb_cfg_result tb_cfg_reset(struct tb_ctl *ctl, u64 route)
{
@ -937,6 +949,9 @@ struct tb_cfg_result tb_cfg_reset(struct tb_ctl *ctl, u64 route)
* @timeout_msec: Timeout in ms how long to wait for the response
*
* Reads from router config space without translating the possible error.
*
* Return: &struct tb_cfg_result with non-zero @err field if error
* has occurred.
*/
struct tb_cfg_result tb_cfg_read_raw(struct tb_ctl *ctl, void *buffer,
u64 route, u32 port, enum tb_cfg_space space,
@ -1008,6 +1023,9 @@ struct tb_cfg_result tb_cfg_read_raw(struct tb_ctl *ctl, void *buffer,
* @timeout_msec: Timeout in ms how long to wait for the response
*
* Writes to router config space without translating the possible error.
*
* Return: &struct tb_cfg_result with non-zero @err field if error
* has occurred.
*/
struct tb_cfg_result tb_cfg_write_raw(struct tb_ctl *ctl, const void *buffer,
u64 route, u32 port, enum tb_cfg_space space,
@ -1150,8 +1168,7 @@ int tb_cfg_write(struct tb_ctl *ctl, const void *buffer, u64 route, u32 port,
* Reads the first dword from the switches TB_CFG_SWITCH config area and
* returns the port number from which the reply originated.
*
* Return: Returns the upstream port number on success or an error code on
* failure.
* Return: Upstream port number on success or negative error code on failure.
*/
int tb_cfg_get_upstream_port(struct tb_ctl *ctl, u64 route)
{


@ -54,6 +54,7 @@ struct ctl_pkg {
* @kref: Reference count
* @ctl: Pointer to the control channel structure. Only set when the
* request is queued.
* @request: Request is stored here
* @request_size: Size of the request packet (in bytes)
* @request_type: Type of the request packet
* @response: Response is stored here


@ -12,6 +12,7 @@
#include <linux/debugfs.h>
#include <linux/delay.h>
#include <linux/pm_runtime.h>
#include <linux/string_choices.h>
#include <linux/uaccess.h>
#include "tb.h"
@ -691,7 +692,7 @@ static int margining_caps_show(struct seq_file *s, void *not_used)
seq_printf(s, "0x%08x\n", margining->caps[i]);
seq_printf(s, "# software margining: %s\n",
supports_software(margining) ? "yes" : "no");
str_yes_no(supports_software(margining)));
if (supports_hardware(margining)) {
seq_puts(s, "# hardware margining: yes\n");
seq_puts(s, "# minimum BER level contour: ");


@ -197,6 +197,8 @@ static int dma_find_port(struct tb_switch *sw)
*
* The DMA control port is functional also when the switch is in safe
* mode.
*
* Return: &struct tb_dma_port on success, %NULL otherwise.
*/
struct tb_dma_port *dma_port_alloc(struct tb_switch *sw)
{
@ -354,6 +356,8 @@ static int dma_port_flash_write_block(void *data, unsigned int dwaddress,
* @address: Address relative to the start of active region
* @buf: Buffer where the data is read
* @size: Size of the buffer
*
* Return: %0 on success, negative errno otherwise.
*/
int dma_port_flash_read(struct tb_dma_port *dma, unsigned int address,
void *buf, size_t size)
@ -372,6 +376,8 @@ int dma_port_flash_read(struct tb_dma_port *dma, unsigned int address,
* Writes block of data to the non-active flash region of the switch. If
* the address is given as %DMA_PORT_CSS_ADDRESS the block is written
* using CSS command.
*
* Return: %0 on success, negative errno otherwise.
*/
int dma_port_flash_write(struct tb_dma_port *dma, unsigned int address,
const void *buf, size_t size)
@ -393,6 +399,8 @@ int dma_port_flash_write(struct tb_dma_port *dma, unsigned int address,
* dma_port_flash_update_auth_status() to get status of this command.
* This is because if the switch in question is root switch the
* thunderbolt host controller gets reset as well.
*
* Return: %0 on success, negative errno otherwise.
*/
int dma_port_flash_update_auth(struct tb_dma_port *dma)
{
@ -410,12 +418,13 @@ int dma_port_flash_update_auth(struct tb_dma_port *dma)
* @status: Status code of the operation
*
* The function checks if there is status available from the last update
* auth command. Returns %0 if there is no status and no further
* action is required. If there is status, %1 is returned instead and
* @status holds the failure code.
* auth command.
*
* Negative return means there was an error reading status from the
* switch.
* Return:
* * %0 - If there is no status and no further action is required.
* * %1 - If there is some status. @status holds the failure code.
* * Negative errno - An error occurred when reading status from the
* switch.
*/
int dma_port_flash_update_auth_status(struct tb_dma_port *dma, u32 *status)
{
@ -446,6 +455,8 @@ int dma_port_flash_update_auth_status(struct tb_dma_port *dma, u32 *status)
* @dma: DMA control port
*
* Triggers power cycle to the switch.
*
* Return: %0 on success, negative errno otherwise.
*/
int dma_port_power_cycle(struct tb_dma_port *dma)
{


@ -12,7 +12,8 @@
#include <linux/pm_runtime.h>
#include <linux/slab.h>
#include <linux/random.h>
#include <crypto/hash.h>
#include <crypto/sha2.h>
#include <crypto/utils.h>
#include "tb.h"
@ -368,7 +369,7 @@ static bool tb_domain_event_cb(void *data, enum tb_cfg_pkg_type type,
* Call tb_domain_put() to release the domain before it has been added
* to the system.
*
* Return: allocated domain structure on %NULL in case of error
* Return: Pointer to &struct tb or %NULL in case of error.
*/
struct tb *tb_domain_alloc(struct tb_nhi *nhi, int timeout_msec, size_t privsize)
{
@ -430,7 +431,7 @@ struct tb *tb_domain_alloc(struct tb_nhi *nhi, int timeout_msec, size_t privsize
* and release the domain after this function has been called, call
* tb_domain_remove().
*
* Return: %0 in case of success and negative errno in case of error
* Return: %0 on success, negative errno otherwise.
*/
int tb_domain_add(struct tb *tb, bool reset)
{
@ -518,6 +519,8 @@ void tb_domain_remove(struct tb *tb)
* @tb: Domain to suspend
*
* Suspends all devices in the domain and stops the control channel.
*
* Return: %0 on success, negative errno otherwise.
*/
int tb_domain_suspend_noirq(struct tb *tb)
{
@ -544,6 +547,8 @@ int tb_domain_suspend_noirq(struct tb *tb)
*
* Re-starts the control channel, and resumes all devices connected to
* the domain.
*
* Return: %0 on success, negative errno otherwise.
*/
int tb_domain_resume_noirq(struct tb *tb)
{
@ -643,6 +648,8 @@ int tb_domain_disapprove_switch(struct tb *tb, struct tb_switch *sw)
* This will approve switch by connection manager specific means. In
* case of success the connection manager will create PCIe tunnel from
* parent to @sw.
*
* Return: %0 on success, negative errno otherwise.
*/
int tb_domain_approve_switch(struct tb *tb, struct tb_switch *sw)
{
@ -708,8 +715,6 @@ int tb_domain_challenge_switch_key(struct tb *tb, struct tb_switch *sw)
u8 response[TB_SWITCH_KEY_SIZE];
u8 hmac[TB_SWITCH_KEY_SIZE];
struct tb_switch *parent_sw;
struct crypto_shash *tfm;
struct shash_desc *shash;
int ret;
if (!tb->cm_ops->approve_switch || !tb->cm_ops->challenge_switch_key)
@ -725,45 +730,15 @@ int tb_domain_challenge_switch_key(struct tb *tb, struct tb_switch *sw)
if (ret)
return ret;
tfm = crypto_alloc_shash("hmac(sha256)", 0, 0);
if (IS_ERR(tfm))
return PTR_ERR(tfm);
ret = crypto_shash_setkey(tfm, sw->key, TB_SWITCH_KEY_SIZE);
if (ret)
goto err_free_tfm;
shash = kzalloc(sizeof(*shash) + crypto_shash_descsize(tfm),
GFP_KERNEL);
if (!shash) {
ret = -ENOMEM;
goto err_free_tfm;
}
shash->tfm = tfm;
memset(hmac, 0, sizeof(hmac));
ret = crypto_shash_digest(shash, challenge, sizeof(hmac), hmac);
if (ret)
goto err_free_shash;
static_assert(sizeof(hmac) == SHA256_DIGEST_SIZE);
hmac_sha256_usingrawkey(sw->key, TB_SWITCH_KEY_SIZE,
challenge, sizeof(challenge), hmac);
/* The returned HMAC must match the one we calculated */
if (memcmp(response, hmac, sizeof(hmac))) {
ret = -EKEYREJECTED;
goto err_free_shash;
}
crypto_free_shash(tfm);
kfree(shash);
if (crypto_memneq(response, hmac, sizeof(hmac)))
return -EKEYREJECTED;
return tb->cm_ops->approve_switch(tb, sw);
err_free_shash:
kfree(shash);
err_free_tfm:
crypto_free_shash(tfm);
return ret;
}
/**
@ -773,7 +748,7 @@ int tb_domain_challenge_switch_key(struct tb *tb, struct tb_switch *sw)
* This needs to be called in preparation for NVM upgrade of the host
* controller. Makes sure all PCIe paths are disconnected.
*
* Return %0 on success and negative errno in case of error.
* Return: %0 on success and negative errno in case of error.
*/
int tb_domain_disconnect_pcie_paths(struct tb *tb)
{
@ -795,9 +770,11 @@ int tb_domain_disconnect_pcie_paths(struct tb *tb)
* Calls connection manager specific method to enable DMA paths to the
* XDomain in question.
*
* Return: 0% in case of success and negative errno otherwise. In
* particular returns %-ENOTSUPP if the connection manager
* implementation does not support XDomains.
* Return:
* * %0 - On success.
* * %-ENOTSUPP - If the connection manager implementation does not support
* XDomains.
* * Negative errno - An error occurred.
*/
int tb_domain_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd,
int transmit_path, int transmit_ring,
@ -822,9 +799,11 @@ int tb_domain_approve_xdomain_paths(struct tb *tb, struct tb_xdomain *xd,
* Calls connection manager specific method to disconnect DMA paths to
* the XDomain in question.
*
* Return: 0% in case of success and negative errno otherwise. In
* particular returns %-ENOTSUPP if the connection manager
* implementation does not support XDomains.
* Return:
* * %0 - On success.
* * %-ENOTSUPP - If the connection manager implementation does not support
* XDomains.
* * Negative errno - An error occurred.
*/
int tb_domain_disconnect_xdomain_paths(struct tb *tb, struct tb_xdomain *xd,
int transmit_path, int transmit_ring,


@ -298,6 +298,8 @@ struct tb_drom_entry_desc {
*
* Does not use the cached copy in sw->drom. Used during resume to check switch
* identity.
*
* Return: %0 on success, negative errno otherwise.
*/
int tb_drom_read_uid_only(struct tb_switch *sw, u64 *uid)
{
@ -709,7 +711,7 @@ static int tb_drom_device_read(struct tb_switch *sw)
* populates the fields in @sw accordingly. Can be called for any router
* generation.
*
* Returns %0 in case of success and negative errno otherwise.
* Return: %0 on success, negative errno otherwise.
*/
int tb_drom_read(struct tb_switch *sw)
{


@ -14,6 +14,8 @@
* tb_lc_read_uuid() - Read switch UUID from link controller common register
* @sw: Switch whose UUID is read
* @uuid: UUID is placed here
*
* Return: %0 on success, negative errno otherwise.
*/
int tb_lc_read_uuid(struct tb_switch *sw, u32 *uuid)
{
@ -52,9 +54,10 @@ static int find_port_lc_cap(struct tb_port *port)
* @port: Port that is reset
*
* Triggers downstream port reset through link controller registers.
* Returns %0 in case of success negative errno otherwise. Only supports
* non-USB4 routers with link controller (that's Thunderbolt 2 and
* Thunderbolt 3).
* Only supports non-USB4 routers with link controller (that's
* Thunderbolt 2 and Thunderbolt 3).
*
* Return: %0 on success, negative errno otherwise.
*/
int tb_lc_reset_port(struct tb_port *port)
{
@ -132,6 +135,8 @@ static int tb_lc_set_port_configured(struct tb_port *port, bool configured)
* @port: Port that is set as configured
*
* Sets the port configured for power management purposes.
*
* Return: %0 on success, negative errno otherwise.
*/
int tb_lc_configure_port(struct tb_port *port)
{
@ -143,6 +148,8 @@ int tb_lc_configure_port(struct tb_port *port)
* @port: Port that is set as configured
*
* Sets the port unconfigured for power management purposes.
*
* Return: %0 on success, negative errno otherwise.
*/
void tb_lc_unconfigure_port(struct tb_port *port)
{
@ -184,8 +191,10 @@ static int tb_lc_set_xdomain_configured(struct tb_port *port, bool configure)
* tb_lc_configure_xdomain() - Inform LC that the link is XDomain
* @port: Switch downstream port connected to another host
*
* Sets the lane configured for XDomain accordingly so that the LC knows
* about this. Returns %0 in success and negative errno in failure.
* Sets the lane configured for XDomain accordingly so that LC knows
* about this.
*
* Return: %0 on success, negative errno otherwise.
*/
int tb_lc_configure_xdomain(struct tb_port *port)
{
@ -211,7 +220,7 @@ void tb_lc_unconfigure_xdomain(struct tb_port *port)
* sleep. Should be called for those downstream lane adapters that were
* not connected (tb_lc_configure_port() was not called) before sleep.
*
* Returns %0 in success and negative errno in case of failure.
* Return: %0 on success, negative errno otherwise.
*/
int tb_lc_start_lane_initialization(struct tb_port *port)
{
@ -244,6 +253,8 @@ int tb_lc_start_lane_initialization(struct tb_port *port)
*
* TB_LC_LINK_ATTR_CPS bit reflects if the link supports CLx including
* active cables (if connected on the link).
*
* Return: %true if CLx is supported, %false otherwise.
*/
bool tb_lc_is_clx_supported(struct tb_port *port)
{
@ -266,7 +277,8 @@ bool tb_lc_is_clx_supported(struct tb_port *port)
* tb_lc_is_usb_plugged() - Is there USB device connected to port
* @port: Device router lane 0 adapter
*
* Returns true if the @port has USB type-C device connected.
* Return: %true if the @port has USB Type-C device connected, %false
* otherwise.
*/
bool tb_lc_is_usb_plugged(struct tb_port *port)
{
@ -292,7 +304,8 @@ bool tb_lc_is_usb_plugged(struct tb_port *port)
* tb_lc_is_xhci_connected() - Is the internal xHCI connected
* @port: Device router lane 0 adapter
*
* Returns true if the internal xHCI has been connected to @port.
* Return: %true if the internal xHCI has been connected to
* @port, %false otherwise.
*/
bool tb_lc_is_xhci_connected(struct tb_port *port)
{
@ -343,9 +356,10 @@ static int __tb_lc_xhci_connect(struct tb_port *port, bool connect)
* tb_lc_xhci_connect() - Connect internal xHCI
* @port: Device router lane 0 adapter
*
* Tells LC to connect the internal xHCI to @port. Returns %0 on success
* and negative errno in case of failure. Can be called for Thunderbolt 3
* routers only.
* Tells LC to connect the internal xHCI to @port. Can be called for
* Thunderbolt 3 routers only.
*
* Return: %0 on success, negative errno otherwise.
*/
int tb_lc_xhci_connect(struct tb_port *port)
{
@ -408,6 +422,8 @@ static int tb_lc_set_wake_one(struct tb_switch *sw, unsigned int offset,
* @flags: Wakeup flags (%0 to disable)
*
* For each LC sets wake bits accordingly.
*
* Return: %0 on success, negative errno otherwise.
*/
int tb_lc_set_wake(struct tb_switch *sw, unsigned int flags)
{
@ -447,6 +463,8 @@ int tb_lc_set_wake(struct tb_switch *sw, unsigned int flags)
*
* Let the switch link controllers know that the switch is going to
* sleep.
*
* Return: %0 on success, negative errno otherwise.
*/
int tb_lc_set_sleep(struct tb_switch *sw)
{
@ -491,6 +509,8 @@ int tb_lc_set_sleep(struct tb_switch *sw)
*
* Checks whether conditions for lane bonding from parent to @sw are
* possible.
*
* Return: %true if lane bonding is possible, %false otherwise.
*/
bool tb_lc_lane_bonding_possible(struct tb_switch *sw)
{
@ -562,6 +582,8 @@ static int tb_lc_dp_sink_available(struct tb_switch *sw, int sink)
*
* Queries through LC SNK_ALLOCATION registers whether DP sink is available
* for the given DP IN port or not.
*
* Return: %true if DP sink is available, %false otherwise.
*/
bool tb_lc_dp_sink_query(struct tb_switch *sw, struct tb_port *in)
{
@ -586,10 +608,12 @@ bool tb_lc_dp_sink_query(struct tb_switch *sw, struct tb_port *in)
* @sw: Switch whose DP sink is allocated
* @in: DP IN port the DP sink is allocated for
*
* Allocate DP sink for @in via LC SNK_ALLOCATION registers. If the
* resource is available and allocation is successful returns %0. In all
* other cases returs negative errno. In particular %-EBUSY is returned if
* the resource was not available.
* Allocate DP sink for @in via LC SNK_ALLOCATION registers.
*
* Return:
* * %0 - If the resource is available and allocation is successful.
* * %-EBUSY - If resource is not available.
* * Negative errno - Another error occurred.
*/
int tb_lc_dp_sink_alloc(struct tb_switch *sw, struct tb_port *in)
{
@ -637,6 +661,8 @@ int tb_lc_dp_sink_alloc(struct tb_switch *sw, struct tb_port *in)
* @in: DP IN port whose DP sink is de-allocated
*
* De-allocate DP sink from @in using LC SNK_ALLOCATION registers.
*
* Return: %0 on success, negative errno otherwise.
*/
int tb_lc_dp_sink_dealloc(struct tb_switch *sw, struct tb_port *in)
{
@ -680,6 +706,8 @@ int tb_lc_dp_sink_dealloc(struct tb_switch *sw, struct tb_port *in)
*
* This is useful to let authentication cycle pass even without
* a Thunderbolt link present.
*
* Return: %0 on success, negative errno otherwise.
*/
int tb_lc_force_power(struct tb_switch *sw)
{


@ -19,6 +19,7 @@
#include <linux/module.h>
#include <linux/delay.h>
#include <linux/property.h>
#include <linux/string_choices.h>
#include <linux/string_helpers.h>
#include "nhi.h"
@ -146,7 +147,7 @@ static void ring_interrupt_active(struct tb_ring *ring, bool active)
dev_WARN(&ring->nhi->pdev->dev,
"interrupt for %s %d is already %s\n",
RING_TYPE(ring), ring->hop,
active ? "enabled" : "disabled");
str_enabled_disabled(active));
if (active)
iowrite32(new, ring->nhi->iobase + reg);
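This hunk replaces the open-coded ternary with the str_enabled_disabled() helper made available by the <linux/string_choices.h> include added above. A small sketch of the helper in use; the function and message below are made up:

#include <linux/device.h>
#include <linux/string_choices.h>

static void report_irq_state(struct device *dev, bool active)
{
        /* str_enabled_disabled() returns "enabled" for true, "disabled" for false */
        dev_info(dev, "interrupt is %s\n", str_enabled_disabled(active));
}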
@ -343,8 +344,10 @@ EXPORT_SYMBOL_GPL(__tb_ring_enqueue);
*
* This function can be called when @start_poll callback of the @ring
* has been called. It will read one completed frame from the ring and
* return it to the caller. Returns %NULL if there is no more completed
* frames.
* return it to the caller.
*
* Return: Pointer to &struct ring_frame, %NULL if there is no more
* completed frames.
*/
struct ring_frame *tb_ring_poll(struct tb_ring *ring)
{
@ -639,6 +642,8 @@ static struct tb_ring *tb_ring_alloc(struct tb_nhi *nhi, u32 hop, int size,
* @hop: HopID (ring) to allocate
* @size: Number of entries in the ring
* @flags: Flags for the ring
*
* Return: Pointer to &struct tb_ring, %NULL otherwise.
*/
struct tb_ring *tb_ring_alloc_tx(struct tb_nhi *nhi, int hop, int size,
unsigned int flags)
@ -660,6 +665,8 @@ EXPORT_SYMBOL_GPL(tb_ring_alloc_tx);
* interrupt is triggered and masked, instead of callback
* in each Rx frame.
* @poll_data: Optional data passed to @start_poll
*
* Return: Pointer to &struct tb_ring, %NULL otherwise.
*/
struct tb_ring *tb_ring_alloc_rx(struct tb_nhi *nhi, int hop, int size,
unsigned int flags, int e2e_tx_hop,
@ -853,8 +860,9 @@ EXPORT_SYMBOL_GPL(tb_ring_free);
* @cmd: Command to send
* @data: Data to be send with the command
*
* Sends mailbox command to the firmware running on NHI. Returns %0 in
* case of success and negative errno in case of failure.
* Sends mailbox command to the firmware running on NHI.
*
* Return: %0 on success, negative errno otherwise.
*/
int nhi_mailbox_cmd(struct tb_nhi *nhi, enum nhi_mailbox_cmd cmd, u32 data)
{
@ -890,6 +898,8 @@ int nhi_mailbox_cmd(struct tb_nhi *nhi, enum nhi_mailbox_cmd cmd, u32 data)
*
* The function reads current firmware operation mode using NHI mailbox
* registers and returns it to the caller.
*
* Return: &enum nhi_fw_mode.
*/
enum nhi_fw_mode nhi_mailbox_mode(struct tb_nhi *nhi)
{
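The ring allocation and polling helpers documented in this file are part of the exported NHI API in <linux/thunderbolt.h>. A hedged sketch of allocating and starting a TX ring; the HopID, ring size and flags below are illustrative values only:

#include <linux/thunderbolt.h>

static struct tb_ring *example_open_tx_ring(struct tb_nhi *nhi)
{
        struct tb_ring *ring;

        /* HopID 0, 16 descriptors, frame mode; values chosen for illustration */
        ring = tb_ring_alloc_tx(nhi, 0, 16, RING_FLAG_FRAME);
        if (!ring)
                return NULL;

        tb_ring_start(ring);    /* ring is now ready for tb_ring_tx() */
        return ring;
}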


@ -21,6 +21,12 @@ enum ring_flags {
/**
* struct ring_desc - TX/RX ring entry
* @phys: DMA mapped address of the frame
* @length: Size of the ring
* @eof: End of frame protocol defined field
* @sof: Start of frame protocol defined field
* @flags: Ring descriptor flags
* @time: Fill with zero
*
* For TX set length/eof/sof.
* For RX length/eof/sof are set by the NHI.


@ -278,9 +278,13 @@ static const struct tb_nvm_vendor retimer_nvm_vendors[] = {
* tb_nvm_alloc() - Allocate new NVM structure
* @dev: Device owning the NVM
*
* Allocates new NVM structure with unique @id and returns it. In case
* of error returns ERR_PTR(). Specifically returns %-EOPNOTSUPP if the
* NVM format of the @dev is not known by the kernel.
* Allocates new NVM structure with unique @id and returns it.
*
* Return:
* * Pointer to &struct tb_nvm - On success.
* * %-EOPNOTSUPP - If the NVM format of the @dev is not known by the
* kernel.
* * %ERR_PTR - In case of failure.
*/
struct tb_nvm *tb_nvm_alloc(struct device *dev)
{
@ -347,9 +351,10 @@ struct tb_nvm *tb_nvm_alloc(struct device *dev)
* tb_nvm_read_version() - Read and populate NVM version
* @nvm: NVM structure
*
* Uses vendor specific means to read out and fill in the existing
* active NVM version. Returns %0 in case of success and negative errno
* otherwise.
* Uses vendor specific means to read and fill out the existing
* active NVM version.
*
* Return: %0 on success, negative errno otherwise.
*/
int tb_nvm_read_version(struct tb_nvm *nvm)
{
@ -365,12 +370,11 @@ int tb_nvm_read_version(struct tb_nvm *nvm)
* tb_nvm_validate() - Validate new NVM image
* @nvm: NVM structure
*
* Runs vendor specific validation over the new NVM image and if all
* checks pass returns %0. As side effect updates @nvm->buf_data_start
* and @nvm->buf_data_size fields to match the actual data to be written
* to the NVM.
* Runs vendor specific validation over the new NVM image. As a
* side effect, updates @nvm->buf_data_start and @nvm->buf_data_size
* fields to match the actual data to be written to the NVM.
*
* If the validation does not pass then returns negative errno.
* Return: %0 on successful validation, negative errno otherwise.
*/
int tb_nvm_validate(struct tb_nvm *nvm)
{
@ -405,7 +409,7 @@ int tb_nvm_validate(struct tb_nvm *nvm)
* the image, this function does that. Can be called even if the device
* does not need this.
*
* Returns %0 in case of success and negative errno otherwise.
* Return: %0 on success, negative errno otherwise.
*/
int tb_nvm_write_headers(struct tb_nvm *nvm)
{
@ -423,7 +427,8 @@ int tb_nvm_write_headers(struct tb_nvm *nvm)
* Registers new active NVmem device for @nvm. The @reg_read is called
* directly from NVMem so it must handle possible concurrent access if
* needed. The first parameter passed to @reg_read is @nvm structure.
* Returns %0 in success and negative errno otherwise.
*
* Return: %0 on success, negative errno otherwise.
*/
int tb_nvm_add_active(struct tb_nvm *nvm, nvmem_reg_read_t reg_read)
{
@ -461,6 +466,11 @@ int tb_nvm_add_active(struct tb_nvm *nvm, nvmem_reg_read_t reg_read)
* Helper function to cache the new NVM image before it is actually
* written to the flash. Copies @bytes from @val to @nvm->buf starting
* from @offset.
*
* Return:
* * %0 - On success.
* * %-ENOMEM - If buffer allocation failed.
* * Negative errno - Another error occurred.
*/
int tb_nvm_write_buf(struct tb_nvm *nvm, unsigned int offset, void *val,
size_t bytes)
@ -488,7 +498,7 @@ int tb_nvm_write_buf(struct tb_nvm *nvm, unsigned int offset, void *val,
* needed. The first parameter passed to @reg_write is @nvm structure.
* The size of the NVMem device is set to %NVM_MAX_SIZE.
*
* Returns %0 in success and negative errno otherwise.
* Return: %0 on success, negative errno otherwise.
*/
int tb_nvm_add_non_active(struct tb_nvm *nvm, nvmem_reg_write_t reg_write)
{
@ -545,7 +555,7 @@ void tb_nvm_free(struct tb_nvm *nvm)
* This is a generic function that reads data from NVM or NVM like
* device.
*
* Returns %0 on success and negative errno otherwise.
* Return: %0 on success, negative errno otherwise.
*/
int tb_nvm_read_data(unsigned int address, void *buf, size_t size,
unsigned int retries, read_block_fn read_block,
@ -592,7 +602,7 @@ int tb_nvm_read_data(unsigned int address, void *buf, size_t size,
*
* This is generic function that writes data to NVM or NVM like device.
*
* Returns %0 on success and negative errno otherwise.
* Return: %0 on success, negative errno otherwise.
*/
int tb_nvm_write_data(unsigned int address, const void *buf, size_t size,
unsigned int retries, write_block_fn write_block,


@ -96,7 +96,7 @@ static int tb_path_find_src_hopid(struct tb_port *src,
* that the @dst port is the expected one. If it is not, the path can be
* cleaned up by calling tb_path_deactivate() before tb_path_free().
*
* Return: Discovered path on success, %NULL in case of failure
* Return: Pointer to &struct tb_path, %NULL in case of failure.
*/
struct tb_path *tb_path_discover(struct tb_port *src, int src_hopid,
struct tb_port *dst, int dst_hopid,
@ -233,7 +233,7 @@ struct tb_path *tb_path_discover(struct tb_port *src, int src_hopid,
* links on the path, prioritizes using @link_nr but takes into account
* that the lanes may be bonded.
*
* Return: Returns a tb_path on success or NULL on failure.
* Return: Pointer to &struct tb_path, %NULL in case of failure.
*/
struct tb_path *tb_path_alloc(struct tb *tb, struct tb_port *src, int src_hopid,
struct tb_port *dst, int dst_hopid, int link_nr,
@ -452,7 +452,9 @@ static int __tb_path_deactivate_hop(struct tb_port *port, int hop_index,
* @hop_index: HopID of the path to be cleared
*
* This deactivates or clears a single path config space entry at
* @hop_index. Returns %0 in success and negative errno otherwise.
* @hop_index.
*
* Return: %0 on success, negative errno otherwise.
*/
int tb_path_deactivate_hop(struct tb_port *port, int hop_index)
{
@ -498,7 +500,7 @@ void tb_path_deactivate(struct tb_path *path)
* Activate a path starting with the last hop and iterating backwards. The
* caller must fill path->hops before calling tb_path_activate().
*
* Return: Returns 0 on success or an error code on failure.
* Return: %0 on success, negative errno otherwise.
*/
int tb_path_activate(struct tb_path *path)
{
@ -592,7 +594,7 @@ int tb_path_activate(struct tb_path *path)
* tb_path_is_invalid() - check whether any ports on the path are invalid
* @path: Path to check
*
* Return: Returns true if the path is invalid, false otherwise.
* Return: %true if the path is invalid, %false otherwise.
*/
bool tb_path_is_invalid(struct tb_path *path)
{
@ -613,6 +615,8 @@ bool tb_path_is_invalid(struct tb_path *path)
*
* Goes over all hops on path and checks if @port is any of them.
* Direction does not matter.
*
* Return: %true if port is on the path, %false otherwise.
*/
bool tb_path_port_on_path(const struct tb_path *path, const struct tb_port *port)
{


@ -211,11 +211,13 @@ static struct tb_property_dir *__tb_property_parse_dir(const u32 *block,
*
* This function parses the XDomain properties data block into format that
* can be traversed using the helper functions provided by this module.
* Upon success returns the parsed directory. In case of error returns
* %NULL. The resulting &struct tb_property_dir needs to be released by
*
* The resulting &struct tb_property_dir needs to be released by
* calling tb_property_free_dir() when not needed anymore.
*
* The @block is expected to be root directory.
*
* Return: Pointer to &struct tb_property_dir, %NULL in case of failure.
*/
struct tb_property_dir *tb_property_parse_dir(const u32 *block,
size_t block_len)
@ -238,6 +240,8 @@ struct tb_property_dir *tb_property_parse_dir(const u32 *block,
*
* Creates new, empty property directory. If @uuid is %NULL then the
* directory is assumed to be root directory.
*
* Return: Pointer to &struct tb_property_dir, %NULL in case of failure.
*/
struct tb_property_dir *tb_property_create_dir(const uuid_t *uuid)
{
@ -481,9 +485,11 @@ static ssize_t __tb_property_format_dir(const struct tb_property_dir *dir,
* @block_len: Length of the property block
*
* This function formats the directory to the packed format that can be
* then send over the thunderbolt fabric to receiving host. Returns %0 in
* case of success and negative errno on faulure. Passing %NULL in @block
* returns number of entries the block takes.
* then sent over the thunderbolt fabric to receiving host.
*
* Passing %NULL in @block returns number of entries the block takes.
*
* Return: %0 on success, negative errno otherwise.
*/
ssize_t tb_property_format_dir(const struct tb_property_dir *dir, u32 *block,
size_t block_len)
@ -505,9 +511,9 @@ ssize_t tb_property_format_dir(const struct tb_property_dir *dir, u32 *block,
* tb_property_copy_dir() - Take a deep copy of directory
* @dir: Directory to copy
*
* This function takes a deep copy of @dir and returns back the copy. In
* case of error returns %NULL. The resulting directory needs to be
* released by calling tb_property_free_dir().
* The resulting directory needs to be released by calling tb_property_free_dir().
*
* Return: Pointer to &struct tb_property_dir, %NULL in case of failure.
*/
struct tb_property_dir *tb_property_copy_dir(const struct tb_property_dir *dir)
{
@ -577,6 +583,8 @@ struct tb_property_dir *tb_property_copy_dir(const struct tb_property_dir *dir)
* @parent: Directory to add the property
* @key: Key for the property
* @value: Immediate value to store with the property
*
* Return: %0 on success, negative errno otherwise.
*/
int tb_property_add_immediate(struct tb_property_dir *parent, const char *key,
u32 value)
@ -606,6 +614,8 @@ EXPORT_SYMBOL_GPL(tb_property_add_immediate);
* @buflen: Number of bytes in the data buffer
*
* Function takes a copy of @buf and adds it to the directory.
*
* Return: %0 on success, negative errno otherwise.
*/
int tb_property_add_data(struct tb_property_dir *parent, const char *key,
const void *buf, size_t buflen)
@ -642,6 +652,8 @@ EXPORT_SYMBOL_GPL(tb_property_add_data);
* @text: String to add
*
* Function takes a copy of @text and adds it to the directory.
*
* Return: %0 on success, negative errno otherwise.
*/
int tb_property_add_text(struct tb_property_dir *parent, const char *key,
const char *text)
@ -676,6 +688,8 @@ EXPORT_SYMBOL_GPL(tb_property_add_text);
* @parent: Directory to add the property
* @key: Key for the property
* @dir: Directory to add
*
* Return: %0 on success, negative errno otherwise.
*/
int tb_property_add_dir(struct tb_property_dir *parent, const char *key,
struct tb_property_dir *dir)
@ -716,8 +730,10 @@ EXPORT_SYMBOL_GPL(tb_property_remove);
* @key: Key to look for
* @type: Type of the property
*
* Finds and returns property from the given directory. Does not recurse
* into sub-directories. Returns %NULL if the property was not found.
* Finds and returns property from the given directory. Does not
* recurse into sub-directories.
*
* Return: Pointer to &struct tb_property, %NULL if the property was not found.
*/
struct tb_property *tb_property_find(struct tb_property_dir *dir,
const char *key, enum tb_property_type type)
@ -737,6 +753,8 @@ EXPORT_SYMBOL_GPL(tb_property_find);
* tb_property_get_next() - Get next property from directory
* @dir: Directory holding properties
* @prev: Previous property in the directory (%NULL returns the first)
*
* Return: Pointer to &struct tb_property, %NULL if property was not found.
*/
struct tb_property *tb_property_get_next(struct tb_property_dir *dir,
struct tb_property *prev)
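The property helpers documented in this file are exported for XDomain service drivers. A hedged sketch of building a small root directory with them; the keys and values are made up for illustration:

#include <linux/thunderbolt.h>

static struct tb_property_dir *example_build_properties(void)
{
        struct tb_property_dir *dir;

        dir = tb_property_create_dir(NULL);     /* %NULL uuid -> root directory */
        if (!dir)
                return NULL;

        if (tb_property_add_immediate(dir, "prtcvers", 1) ||
            tb_property_add_text(dir, "deviceid", "example")) {
                tb_property_free_dir(dir);
                return NULL;
        }
        return dir;
}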


@ -27,8 +27,9 @@
* @buf: Data read from NVM is stored here
* @size: Number of bytes to read
*
* Reads retimer NVM and copies the contents to @buf. Returns %0 if the
* read was successful and negative errno in case of failure.
* Reads retimer NVM and copies the contents to @buf.
*
* Return: %0 if the read was successful, negative errno in case of failure.
*/
int tb_retimer_nvm_read(struct tb_retimer *rt, unsigned int address, void *buf,
size_t size)
@ -503,6 +504,8 @@ static struct tb_retimer *tb_port_find_retimer(struct tb_port *port, u8 index)
* Then Tries to enumerate on-board retimers connected to @port. Found
* retimers are registered as children of @port if @add is set. Does
* not scan for cable retimers for now.
*
* Return: %0 on success, negative errno otherwise.
*/
int tb_retimer_scan(struct tb_port *port, bool add)
{


@ -290,8 +290,9 @@ static int nvm_authenticate(struct tb_switch *sw, bool auth_only)
* @size: Size of the buffer in bytes
*
* Reads from router NVM and returns the requested data in @buf. Locking
* is up to the caller. Returns %0 in success and negative errno in case
* of failure.
* is up to the caller.
*
* Return: %0 on success, negative errno otherwise.
*/
int tb_switch_nvm_read(struct tb_switch *sw, unsigned int address, void *buf,
size_t size)
@ -464,7 +465,7 @@ static void tb_dump_port(struct tb *tb, const struct tb_port *port)
*
* The port must have a TB_CAP_PHY (i.e. it should be a real port).
*
* Return: Returns an enum tb_port_state on success or an error code on failure.
* Return: &enum tb_port_state or negative error code on failure.
*/
int tb_port_state(struct tb_port *port)
{
@ -491,9 +492,11 @@ int tb_port_state(struct tb_port *port)
* switch resume). Otherwise we only wait if a device is registered but the link
* has not yet been established.
*
* Return: Returns an error code on failure. Returns 0 if the port is not
* connected or failed to reach state TB_PORT_UP within one second. Returns 1
* if the port is connected and in state TB_PORT_UP.
* Return:
* * %0 - If the port is not connected or failed to reach
* state %TB_PORT_UP within one second.
* * %1 - If the port is connected and in state %TB_PORT_UP.
* * Negative errno - An error occurred.
*/
int tb_wait_for_port(struct tb_port *port, bool wait_if_unplugged)
{
@ -562,7 +565,7 @@ int tb_wait_for_port(struct tb_port *port, bool wait_if_unplugged)
* Change the number of NFC credits allocated to @port by @credits. To remove
* NFC credits pass a negative amount of credits.
*
* Return: Returns 0 on success or an error code on failure.
* Return: %0 on success, negative errno otherwise.
*/
int tb_port_add_nfc_credits(struct tb_port *port, int credits)
{
@ -599,7 +602,7 @@ int tb_port_add_nfc_credits(struct tb_port *port, int credits)
* @port: Port whose counters to clear
* @counter: Counter index to clear
*
* Return: Returns 0 on success or an error code on failure.
* Return: %0 on success, negative errno otherwise.
*/
int tb_port_clear_counter(struct tb_port *port, int counter)
{
@ -614,6 +617,8 @@ int tb_port_clear_counter(struct tb_port *port, int counter)
*
* Needed for USB4 but can be called for any CIO/USB4 ports. Makes the
* downstream router accessible for CM.
*
* Return: %0 on success, negative errno otherwise.
*/
int tb_port_unlock(struct tb_port *port)
{
@ -659,6 +664,8 @@ static int __tb_port_enable(struct tb_port *port, bool enable)
* @port: Port to enable (can be %NULL)
*
* This is used for lane 0 and 1 adapters to enable it.
*
* Return: %0 on success, negative errno otherwise.
*/
int tb_port_enable(struct tb_port *port)
{
@ -670,6 +677,8 @@ int tb_port_enable(struct tb_port *port)
* @port: Port to disable (can be %NULL)
*
* This is used for lane 0 and 1 adapters to disable it.
*
* Return: %0 on success, negative errno otherwise.
*/
int tb_port_disable(struct tb_port *port)
{
@ -689,7 +698,7 @@ static int tb_port_reset(struct tb_port *port)
* This is a helper method for tb_switch_alloc. Does not check or initialize
* any downstream switches.
*
* Return: Returns 0 on success or an error code on failure.
* Return: %0 on success, negative errno otherwise.
*/
static int tb_init_port(struct tb_port *port)
{
@ -847,9 +856,9 @@ static inline bool tb_switch_is_reachable(const struct tb_switch *parent,
* link port, the function follows that link and returns another end on
* that same link.
*
* If the @end port has been reached, return %NULL.
*
* Domain tb->lock must be held when this function is called.
*
* Return: Pointer to &struct tb_port, %NULL if the @end port has been reached.
*/
struct tb_port *tb_next_port_on_path(struct tb_port *start, struct tb_port *end,
struct tb_port *prev)
@ -894,7 +903,7 @@ struct tb_port *tb_next_port_on_path(struct tb_port *start, struct tb_port *end,
* tb_port_get_link_speed() - Get current link speed
* @port: Port to check (USB4 or CIO)
*
* Returns link speed in Gb/s or negative errno in case of failure.
* Return: Link speed in Gb/s or negative errno in case of failure.
*/
int tb_port_get_link_speed(struct tb_port *port)
{
@ -926,9 +935,11 @@ int tb_port_get_link_speed(struct tb_port *port)
* tb_port_get_link_generation() - Returns link generation
* @port: Lane adapter
*
* Returns link generation as number or negative errno in case of
* failure. Does not distinguish between Thunderbolt 1 and Thunderbolt 2
* links so for those always returns 2.
* Return: Link generation as a number or negative errno in case of
* failure.
*
* Does not distinguish between Thunderbolt 1 and Thunderbolt 2
* links so for those always returns %2.
*/
int tb_port_get_link_generation(struct tb_port *port)
{
@ -952,8 +963,8 @@ int tb_port_get_link_generation(struct tb_port *port)
* tb_port_get_link_width() - Get current link width
* @port: Port to check (USB4 or CIO)
*
* Returns link width. Return the link width as encoded in &enum
* tb_link_width or negative errno in case of failure.
* Return: Link width encoded in &enum tb_link_width or
* negative errno in case of failure.
*/
int tb_port_get_link_width(struct tb_port *port)
{
@ -979,7 +990,9 @@ int tb_port_get_link_width(struct tb_port *port)
* @width: Widths to check (bitmask)
*
* Can be called to any lane adapter. Checks if given @width is
* supported by the hardware and returns %true if it is.
* supported by the hardware.
*
* Return: %true if link width is supported, %false otherwise.
*/
bool tb_port_width_supported(struct tb_port *port, unsigned int width)
{
@ -1016,7 +1029,7 @@ bool tb_port_width_supported(struct tb_port *port, unsigned int width)
* Sets the target link width of the lane adapter to @width. Does not
* enable/disable lane bonding. For that call tb_port_set_lane_bonding().
*
* Return: %0 in case of success and negative errno in case of error
* Return: %0 on success, negative errno otherwise.
*/
int tb_port_set_link_width(struct tb_port *port, enum tb_link_width width)
{
@ -1070,7 +1083,7 @@ int tb_port_set_link_width(struct tb_port *port, enum tb_link_width width)
* cases one should use tb_port_lane_bonding_enable() instead to enable
* lane bonding.
*
* Return: %0 in case of success and negative errno in case of error
* Return: %0 on success, negative errno otherwise.
*/
static int tb_port_set_lane_bonding(struct tb_port *port, bool bonding)
{
@ -1104,7 +1117,7 @@ static int tb_port_set_lane_bonding(struct tb_port *port, bool bonding)
* tb_port_wait_for_link_width() before enabling any paths through the
* link to make sure the link is in expected state.
*
* Return: %0 in case of success and negative errno in case of error
* Return: %0 on success, negative errno otherwise.
*/
int tb_port_lane_bonding_enable(struct tb_port *port)
{
@ -1181,9 +1194,14 @@ void tb_port_lane_bonding_disable(struct tb_port *port)
*
* Should be used after both ends of the link have been bonded (or
* bonding has been disabled) to wait until the link actually reaches
* the expected state. Returns %-ETIMEDOUT if the width was not reached
* within the given timeout, %0 if it did. Can be passed a mask of
* expected widths and succeeds if any of the widths is reached.
* the expected state.
*
* Can be passed a mask of expected widths.
*
* Return:
* * %0 - If link reaches any of the specified widths.
* * %-ETIMEDOUT - If link does not reach specified width.
* * Negative errno - Another error occurred.
*/
int tb_port_wait_for_link_width(struct tb_port *port, unsigned int width,
int timeout_msec)
@ -1248,6 +1266,8 @@ static int tb_port_do_update_credits(struct tb_port *port)
* After the link is bonded (or bonding was disabled) the port total
* credits may change, so this function needs to be called to re-read
* the credits. Updates also the second lane adapter.
*
* Return: %0 on success, negative errno otherwise.
*/
int tb_port_update_credits(struct tb_port *port)
{
@ -1303,6 +1323,8 @@ static bool tb_port_resume(struct tb_port *port)
/**
* tb_port_is_enabled() - Is the adapter port enabled
* @port: Port to check
*
* Return: %true if port is enabled, %false otherwise.
*/
bool tb_port_is_enabled(struct tb_port *port)
{
@ -1327,6 +1349,8 @@ bool tb_port_is_enabled(struct tb_port *port)
/**
* tb_usb3_port_is_enabled() - Is the USB3 adapter port enabled
* @port: USB3 adapter port to check
*
* Return: %true if port is enabled, %false otherwise.
*/
bool tb_usb3_port_is_enabled(struct tb_port *port)
{
@ -1343,6 +1367,8 @@ bool tb_usb3_port_is_enabled(struct tb_port *port)
* tb_usb3_port_enable() - Enable USB3 adapter port
* @port: USB3 adapter port to enable
* @enable: Enable/disable the USB3 adapter
*
* Return: %0 on success, negative errno otherwise.
*/
int tb_usb3_port_enable(struct tb_port *port, bool enable)
{
@ -1358,6 +1384,8 @@ int tb_usb3_port_enable(struct tb_port *port, bool enable)
/**
* tb_pci_port_is_enabled() - Is the PCIe adapter port enabled
* @port: PCIe port to check
*
* Return: %true if port is enabled, %false otherwise.
*/
bool tb_pci_port_is_enabled(struct tb_port *port)
{
@ -1374,6 +1402,8 @@ bool tb_pci_port_is_enabled(struct tb_port *port)
* tb_pci_port_enable() - Enable PCIe adapter port
* @port: PCIe port to enable
* @enable: Enable/disable the PCIe adapter
*
* Return: %0 on success, negative errno otherwise.
*/
int tb_pci_port_enable(struct tb_port *port, bool enable)
{
@ -1389,6 +1419,8 @@ int tb_pci_port_enable(struct tb_port *port, bool enable)
* @port: DP out port to check
*
* Checks if the DP OUT adapter port has HPD bit already set.
*
* Return: %1 if HPD is active, %0 otherwise.
*/
int tb_dp_port_hpd_is_active(struct tb_port *port)
{
@ -1408,6 +1440,8 @@ int tb_dp_port_hpd_is_active(struct tb_port *port)
* @port: Port to clear HPD
*
* If the DP IN port has HPD set, this function can be used to clear it.
*
* Return: %0 on success, negative errno otherwise.
*/
int tb_dp_port_hpd_clear(struct tb_port *port)
{
@ -1434,6 +1468,8 @@ int tb_dp_port_hpd_clear(struct tb_port *port)
* Programs specified Hop IDs for DP IN/OUT port. Can be called for USB4
* router DP adapters too but does not program the values as the fields
* are read-only.
*
* Return: %0 on success, negative errno otherwise.
*/
int tb_dp_port_set_hops(struct tb_port *port, unsigned int video,
unsigned int aux_tx, unsigned int aux_rx)
@ -1466,6 +1502,8 @@ int tb_dp_port_set_hops(struct tb_port *port, unsigned int video,
/**
* tb_dp_port_is_enabled() - Is DP adapter port enabled
* @port: DP adapter port to check
*
* Return: %true if DP port is enabled, %false otherwise.
*/
bool tb_dp_port_is_enabled(struct tb_port *port)
{
@ -1485,6 +1523,8 @@ bool tb_dp_port_is_enabled(struct tb_port *port)
*
* Once Hop IDs are programmed DP paths can be enabled or disabled by
* calling this function.
*
* Return: %0 on success, negative errno otherwise.
*/
int tb_dp_port_enable(struct tb_port *port, bool enable)
{
@ -1634,7 +1674,7 @@ static bool tb_switch_enumerated(struct tb_switch *sw)
*
* If the router is not enumerated does nothing.
*
* Returns %0 on success or negative errno in case of failure.
* Return: %0 on success, negative errno otherwise.
*/
int tb_switch_reset(struct tb_switch *sw)
{
@ -1670,8 +1710,12 @@ int tb_switch_reset(struct tb_switch *sw)
* @timeout_msec: Timeout in ms how long to wait
*
* Wait till the specified bits in specified offset reach specified value.
* Returns %0 in case of success, %-ETIMEDOUT if the @value was not reached
* within the given timeout or a negative errno in case of failure.
*
* Return:
* * %0 - On success.
* * %-ETIMEDOUT - If the @value was not reached within
* the given timeout.
* * Negative errno - In case of failure.
*/
int tb_switch_wait_for_bit(struct tb_switch *sw, u32 offset, u32 bit,
u32 value, int timeout_msec)
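tb_switch_wait_for_bit() is the usual poll-with-timeout loop over router config space. A rough sketch of that pattern, assuming the driver-internal tb_sw_read() and TB_CFG_SWITCH helpers from "tb.h"; this is illustrative, not the exact implementation:

#include <linux/ktime.h>
#include <linux/delay.h>
#include "tb.h"

static int example_wait_for_bit(struct tb_switch *sw, u32 offset, u32 bit,
                                u32 value, int timeout_msec)
{
        ktime_t timeout = ktime_add_ms(ktime_get(), timeout_msec);

        do {
                u32 val;
                int ret;

                ret = tb_sw_read(sw, &val, TB_CFG_SWITCH, offset, 1);
                if (ret)
                        return ret;
                if ((val & bit) == value)
                        return 0;               /* requested state reached */

                usleep_range(50, 100);
        } while (ktime_before(ktime_get(), timeout));

        return -ETIMEDOUT;
}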
@ -1700,7 +1744,7 @@ int tb_switch_wait_for_bit(struct tb_switch *sw, u32 offset, u32 bit,
*
* Also configures a sane plug_events_delay of 255ms.
*
* Return: Returns 0 on success or an error code on failure.
* Return: %0 on success, negative errno otherwise.
*/
static int tb_plug_events_active(struct tb_switch *sw, bool active)
{
@ -2406,8 +2450,7 @@ static bool tb_switch_exceeds_max_depth(const struct tb_switch *sw, int depth)
* separately. The returned switch should be released by calling
* tb_switch_put().
*
* Return: Pointer to the allocated switch or ERR_PTR() in case of
* failure.
* Return: Pointer to &struct tb_switch or ERR_PTR() in case of failure.
*/
struct tb_switch *tb_switch_alloc(struct tb *tb, struct device *parent,
u64 route)
@ -2526,7 +2569,7 @@ struct tb_switch *tb_switch_alloc(struct tb *tb, struct device *parent,
*
* The returned switch must be released by calling tb_switch_put().
*
* Return: Pointer to the allocated switch or ERR_PTR() in case of failure
* Return: Pointer to &struct tb_switch or ERR_PTR() in case of failure.
*/
struct tb_switch *
tb_switch_alloc_safe_mode(struct tb *tb, struct device *parent, u64 route)
@ -2562,7 +2605,7 @@ tb_switch_alloc_safe_mode(struct tb *tb, struct device *parent, u64 route)
* connection manager to use. Can be called to the switch again after
* resume from low power states to re-initialize it.
*
* Return: %0 in case of success and negative errno in case of failure
* Return: %0 on success, negative errno otherwise.
*/
int tb_switch_configure(struct tb_switch *sw)
{
@ -2625,7 +2668,7 @@ int tb_switch_configure(struct tb_switch *sw)
* Needs to be called before any tunnels can be setup through the
* router. Can be called to any router.
*
* Returns %0 in success and negative errno otherwise.
* Return: %0 on success, negative errno otherwise.
*/
int tb_switch_configuration_valid(struct tb_switch *sw)
{
@ -2900,6 +2943,8 @@ static void tb_switch_link_init(struct tb_switch *sw)
* Connection manager can call this function to enable lane bonding of a
* switch. If conditions are correct and both switches support the feature,
* lanes are bonded. It is safe to call this to any switch.
*
* Return: %0 on success, negative errno otherwise.
*/
static int tb_switch_lane_bonding_enable(struct tb_switch *sw)
{
@ -2950,6 +2995,8 @@ static int tb_switch_lane_bonding_enable(struct tb_switch *sw)
*
* Disables lane bonding between @sw and parent. This can be called even
* if lanes were not bonded originally.
*
* Return: %0 on success, negative errno otherwise.
*/
static int tb_switch_lane_bonding_disable(struct tb_switch *sw)
{
@ -3074,7 +3121,7 @@ static int tb_switch_asym_disable(struct tb_switch *sw)
*
* Does nothing for host router.
*
* Returns %0 in case of success, negative errno otherwise.
* Return: %0 on success, negative errno otherwise.
*/
int tb_switch_set_link_width(struct tb_switch *sw, enum tb_link_width width)
{
@ -3145,7 +3192,7 @@ int tb_switch_set_link_width(struct tb_switch *sw, enum tb_link_width width)
*
* It is recommended that this is called after lane bonding is enabled.
*
* Returns %0 on success and negative errno in case of error.
* Return: %0 on success and negative errno otherwise.
*/
int tb_switch_configure_link(struct tb_switch *sw)
{
@ -3245,7 +3292,7 @@ static int tb_switch_port_hotplug_enable(struct tb_switch *sw)
* exposed to the userspace when this function successfully returns. To
* remove and release the switch, call tb_switch_remove().
*
* Return: %0 in case of success and negative errno in case of failure
* Return: %0 on success, negative errno otherwise.
*/
int tb_switch_add(struct tb_switch *sw)
{
@ -3467,6 +3514,8 @@ static void tb_switch_check_wakes(struct tb_switch *sw)
* suspend. If this is resume from system sleep, notifies PM core about the
* wakes occurred during suspend. Disables all wakes, except USB4 wake of
* upstream port for USB4 routers that shall be always enabled.
*
* Return: %0 on success, negative errno otherwise.
*/
int tb_switch_resume(struct tb_switch *sw, bool runtime)
{
@ -3617,7 +3666,9 @@ void tb_switch_suspend(struct tb_switch *sw, bool runtime)
* @in: DP IN port
*
* Queries availability of DP resource for DP tunneling using switch
* specific means. Returns %true if resource is available.
* specific means.
*
* Return: %true if resource is available, %false otherwise.
*/
bool tb_switch_query_dp_resource(struct tb_switch *sw, struct tb_port *in)
{
@ -3633,7 +3684,8 @@ bool tb_switch_query_dp_resource(struct tb_switch *sw, struct tb_port *in)
*
* Allocates DP resource for DP tunneling. The resource must be
* available for this to succeed (see tb_switch_query_dp_resource()).
* Returns %0 in success and negative errno otherwise.
*
* Return: %0 on success, negative errno otherwise.
*/
int tb_switch_alloc_dp_resource(struct tb_switch *sw, struct tb_port *in)
{
@ -3718,6 +3770,8 @@ static int tb_switch_match(struct device *dev, const void *data)
*
* Returned switch has reference count increased so the caller needs to
* call tb_switch_put() when done with the switch.
*
* Return: Pointer to &struct tb_switch, %NULL if not found.
*/
struct tb_switch *tb_switch_find_by_link_depth(struct tb *tb, u8 link, u8 depth)
{
@ -3743,6 +3797,8 @@ struct tb_switch *tb_switch_find_by_link_depth(struct tb *tb, u8 link, u8 depth)
*
* Returned switch has reference count increased so the caller needs to
* call tb_switch_put() when done with the switch.
*
* Return: Pointer to &struct tb_switch, %NULL if not found.
*/
struct tb_switch *tb_switch_find_by_uuid(struct tb *tb, const uuid_t *uuid)
{
@ -3767,6 +3823,8 @@ struct tb_switch *tb_switch_find_by_uuid(struct tb *tb, const uuid_t *uuid)
*
* Returned switch has reference count increased so the caller needs to
* call tb_switch_put() when done with the switch.
*
* Return: Pointer to &struct tb_switch, %NULL if not found.
*/
struct tb_switch *tb_switch_find_by_route(struct tb *tb, u64 route)
{
@ -3791,6 +3849,8 @@ struct tb_switch *tb_switch_find_by_route(struct tb *tb, u64 route)
* tb_switch_find_port() - return the first port of @type on @sw or NULL
* @sw: Switch to find the port from
* @type: Port type to look for
*
* Return: Pointer to &struct tb_port, %NULL if not found.
*/
struct tb_port *tb_switch_find_port(struct tb_switch *sw,
enum tb_port_type type)
@ -3859,6 +3919,8 @@ static int tb_switch_pcie_bridge_write(struct tb_switch *sw, unsigned int bridge
* entry to PCIe L1 state. Shall be called after the upstream PCIe tunnel
* was configured. Due to Intel platforms limitation, shall be called only
* for first hop switch.
*
* Return: %0 on success, negative errno otherwise.
*/
int tb_switch_pcie_l1_enable(struct tb_switch *sw)
{
@ -3893,6 +3955,8 @@ int tb_switch_pcie_l1_enable(struct tb_switch *sw)
* connected to the type-C port. Call only after PCIe tunnel has been
* established. The function only does the connect if not done already
* so can be called several times for the same router.
*
* Return: %0 on success, negative errno otherwise.
*/
int tb_switch_xhci_connect(struct tb_switch *sw)
{


@ -225,14 +225,12 @@ static int tb_enable_clx(struct tb_switch *sw)
return ret == -EOPNOTSUPP ? 0 : ret;
}
/**
* tb_disable_clx() - Disable CL states up to host router
* @sw: Router to start
/*
* Disables CL states from @sw up to the host router.
*
* Disables CL states from @sw up to the host router. Returns true if
* any CL state were disabled. This can be used to figure out whether
* the link was setup by us or the boot firmware so we don't
* accidentally enable them if they were not enabled during discovery.
* This can be used to figure out whether the link was setup by us or the
* boot firmware so we don't accidentally enable them if they were not
* enabled during discovery.
*/
static bool tb_disable_clx(struct tb_switch *sw)
{
@ -456,10 +454,8 @@ static void tb_scan_xdomain(struct tb_port *port)
}
}
/**
* tb_find_unused_port() - return the first inactive port on @sw
* @sw: Switch to find the port on
* @type: Port type to look for
/*
* Returns the first inactive port on @sw.
*/
static struct tb_port *tb_find_unused_port(struct tb_switch *sw,
enum tb_port_type type)
@ -549,6 +545,8 @@ static struct tb_tunnel *tb_find_first_usb3_tunnel(struct tb *tb,
* from @src_port to @dst_port. Does not take USB3 tunnel starting from
* @src_port and ending on @src_port into account because that bandwidth is
* already included in as part of the "first hop" USB3 tunnel.
*
* Return: %0 on success, negative errno otherwise.
*/
static int tb_consumed_usb3_pcie_bandwidth(struct tb *tb,
struct tb_port *src_port,
@ -601,6 +599,8 @@ static int tb_consumed_usb3_pcie_bandwidth(struct tb *tb,
* If there is bandwidth reserved for any of the groups between
* @src_port and @dst_port (but not yet used) that is also taken into
* account in the returned consumed bandwidth.
*
* Return: %0 on success, negative errno otherwise.
*/
static int tb_consumed_dp_bandwidth(struct tb *tb,
struct tb_port *src_port,
@ -701,6 +701,8 @@ static bool tb_asym_supported(struct tb_port *src_port, struct tb_port *dst_port
* single link at @port. If @include_asym is set then includes the
* additional bandwidth if the links are transitioned into asymmetric to
* direction from @src_port to @dst_port.
*
* Return: %0 on success, negative errno otherwise.
*/
static int tb_maximum_bandwidth(struct tb *tb, struct tb_port *src_port,
struct tb_port *dst_port, struct tb_port *port,
@ -807,6 +809,8 @@ static int tb_maximum_bandwidth(struct tb *tb, struct tb_port *src_port,
* If @include_asym is true then includes also bandwidth that can be
* added when the links are transitioned into asymmetric (but does not
* transition the links).
*
* Return: %0 on success, negative errno otherwise.
*/
static int tb_available_bandwidth(struct tb *tb, struct tb_port *src_port,
struct tb_port *dst_port, int *available_up,
@ -1029,6 +1033,8 @@ static int tb_create_usb3_tunnels(struct tb_switch *sw)
* (requested + currently consumed) on that link exceed @asym_threshold.
*
* Must be called with available >= requested over all links.
*
* Return: %0 on success, negative errno otherwise.
*/
static int tb_configure_asym(struct tb *tb, struct tb_port *src_port,
struct tb_port *dst_port, int requested_up,
@ -1135,6 +1141,8 @@ static int tb_configure_asym(struct tb *tb, struct tb_port *src_port,
* Goes over each link from @src_port to @dst_port and tries to
* transition the link to symmetric if the currently consumed bandwidth
* allows and link asymmetric preference is ignored (if @keep_asym is %false).
*
* Return: %0 on success, negative errno otherwise.
*/
static int tb_configure_sym(struct tb *tb, struct tb_port *src_port,
struct tb_port *dst_port, bool keep_asym)
@ -3336,7 +3344,7 @@ static bool tb_apple_add_links(struct tb_nhi *nhi)
if (!pci_is_pcie(pdev))
continue;
if (pci_pcie_type(pdev) != PCI_EXP_TYPE_DOWNSTREAM ||
!pdev->is_hotplug_bridge)
!pdev->is_pciehp)
continue;
link = device_link_add(&pdev->dev, &nhi->pdev->dev,


@ -324,7 +324,7 @@ struct usb4_port {
};
/**
* tb_retimer: Thunderbolt retimer
* struct tb_retimer - Thunderbolt retimer
* @dev: Device for the retimer
* @tb: Pointer to the domain the retimer belongs to
* @index: Retimer index facing the router USB4 port
@ -552,13 +552,14 @@ static inline void *tb_priv(struct tb *tb)
/**
* tb_upstream_port() - return the upstream port of a switch
* @sw: Router
*
* Every switch has an upstream port (for the root switch it is the NHI).
*
* During switch alloc/init tb_upstream_port()->remote may be NULL, even for
* non root switches (on the NHI port remote is always NULL).
*
* Return: Returns the upstream port of the switch.
* Return: Pointer to &struct tb_port.
*/
static inline struct tb_port *tb_upstream_port(struct tb_switch *sw)
{
@ -569,8 +570,8 @@ static inline struct tb_port *tb_upstream_port(struct tb_switch *sw)
* tb_is_upstream_port() - Is the port upstream facing
* @port: Port to check
*
* Returns true if @port is upstream facing port. In case of dual link
* ports both return true.
* Return: %true if @port is upstream facing port. In case of dual link
* ports, both return %true.
*/
static inline bool tb_is_upstream_port(const struct tb_port *port)
{
@ -613,7 +614,7 @@ static inline const char *tb_width_name(enum tb_link_width width)
* tb_port_has_remote() - Does the port have switch connected downstream
* @port: Port to check
*
* Returns true only when the port is primary port and has remote set.
* Return: %true only when the port is primary port and has remote set.
*/
static inline bool tb_port_has_remote(const struct tb_port *port)
{
@ -905,8 +906,9 @@ static inline struct tb_switch *tb_switch_parent(struct tb_switch *sw)
* tb_switch_downstream_port() - Return downstream facing port of parent router
* @sw: Device router pointer
*
* Only call for device routers. Returns the downstream facing port of
* the parent router.
* Call only for device routers.
*
* Return: Pointer to &struct tb_port or %NULL in case of failure.
*/
static inline struct tb_port *tb_switch_downstream_port(struct tb_switch *sw)
{
@ -918,6 +920,8 @@ static inline struct tb_port *tb_switch_downstream_port(struct tb_switch *sw)
/**
* tb_switch_depth() - Returns depth of the connected router
* @sw: Router
*
* Return: Router depth level as a number.
*/
static inline int tb_switch_depth(const struct tb_switch *sw)
{
@ -1010,6 +1014,9 @@ static inline bool tb_switch_is_tiger_lake(const struct tb_switch *sw)
* is handling @sw this function can be called. It is valid to call this
* after tb_switch_alloc() and tb_switch_configure() has been called
* (latter only for SW CM case).
*
* Return: %true if switch is handled by ICM, %false if handled by
* software CM.
*/
static inline bool tb_switch_is_icm(const struct tb_switch *sw)
{
@ -1037,6 +1044,8 @@ int tb_switch_tmu_configure(struct tb_switch *sw, enum tb_switch_tmu_mode mode);
*
* Checks if given router TMU mode is configured to @mode. Note the
* router TMU might not be enabled to this mode.
*
* Return: %true if TMU mode is equal to @mode, %false otherwise.
*/
static inline bool tb_switch_tmu_is_configured(const struct tb_switch *sw,
enum tb_switch_tmu_mode mode)
@ -1048,8 +1057,8 @@ static inline bool tb_switch_tmu_is_configured(const struct tb_switch *sw,
* tb_switch_tmu_is_enabled() - Checks if the specified TMU mode is enabled
* @sw: Router whose TMU mode to check
*
* Return true if hardware TMU configuration matches the requested
* configuration (and is not %TB_SWITCH_TMU_MODE_OFF).
* Return: %true if hardware TMU configuration matches the requested
* configuration (and is not %TB_SWITCH_TMU_MODE_OFF), %false otherwise.
*/
static inline bool tb_switch_tmu_is_enabled(const struct tb_switch *sw)
{
@ -1069,9 +1078,10 @@ int tb_switch_clx_disable(struct tb_switch *sw);
* @clx: The CLx states to check for
*
* Checks if the specified CLx is enabled on the router upstream link.
* Returns true if any of the given states is enabled.
*
* Not applicable for a host router.
*
* Return: %true if any of the given states is enabled, %false otherwise.
*/
static inline bool tb_switch_clx_is_enabled(const struct tb_switch *sw,
unsigned int clx)
@ -1103,7 +1113,7 @@ struct tb_port *tb_next_port_on_path(struct tb_port *start, struct tb_port *end,
* @src: Source adapter
* @dst: Destination adapter
*
* Returns %true only if the specified path from source adapter (@src)
* Return: %true only if the specified path from source adapter (@src)
* to destination adapter (@dst) is directed downstream.
*/
static inline bool
@ -1232,10 +1242,11 @@ static inline int tb_route_length(u64 route)
/**
* tb_downstream_route() - get route to downstream switch
* @port: Port to check
*
* Port must not be the upstream port (otherwise a loop is created).
*
* Return: Returns a route to the switch behind @port.
* Return: Route to the switch behind @port.
*/
static inline u64 tb_downstream_route(struct tb_port *port)
{
@ -1263,7 +1274,7 @@ static inline struct tb_switch *tb_xdomain_parent(struct tb_xdomain *xd)
* tb_xdomain_downstream_port() - Return downstream facing port of parent router
* @xd: Xdomain pointer
*
* Returns the downstream port the XDomain is connected to.
* Return: Pointer to &struct tb_port or %NULL in case of failure.
*/
static inline struct tb_port *tb_xdomain_downstream_port(struct tb_xdomain *xd)
{
@ -1291,7 +1302,7 @@ static inline struct tb_retimer *tb_to_retimer(struct device *dev)
* usb4_switch_version() - Returns USB4 version of the router
* @sw: Router to check
*
* Returns major version of USB4 router (%1 for v1, %2 for v2 and so
* Return: Major version of USB4 router (%1 for v1, %2 for v2 and so
* on). Can be called to pre-USB4 router too and in that case returns %0.
*/
static inline unsigned int usb4_switch_version(const struct tb_switch *sw)
@ -1303,7 +1314,7 @@ static inline unsigned int usb4_switch_version(const struct tb_switch *sw)
* tb_switch_is_usb4() - Is the switch USB4 compliant
* @sw: Switch to check
*
* Returns true if the @sw is USB4 compliant router, false otherwise.
* Return: %true if the @sw is USB4 compliant router, %false otherwise.
*/
static inline bool tb_switch_is_usb4(const struct tb_switch *sw)
{
@ -1355,7 +1366,7 @@ int usb4_port_asym_set_link_width(struct tb_port *port, enum tb_link_width width
int usb4_port_asym_start(struct tb_port *port);
/**
* enum tb_sb_target - Sideband transaction target
* enum usb4_sb_target - Sideband transaction target
* @USB4_SB_TARGET_ROUTER: Target is the router itself
* @USB4_SB_TARGET_PARTNER: Target is partner
* @USB4_SB_TARGET_RETIMER: Target is retimer
@ -1400,6 +1411,8 @@ enum usb4_margining_lane {
* @voltage_time_offset: Offset for voltage / time for software margining
* @optional_voltage_offset_range: Enable optional extended voltage range
* @right_high: %false if left/low margin test is performed, %true if right/high
* @upper_eye: %true if margin test is done on upper eye, %false if done on
* lower eye
* @time: %true if time margining is used instead of voltage
*/
struct usb4_port_margining_params {


@ -405,6 +405,8 @@ static int tmu_mode_init(struct tb_switch *sw)
* This function must be called before other TMU related functions to
* makes the internal structures are filled in correctly. Does not
* change any hardware configuration.
*
* Return: %0 on success, negative errno otherwise.
*/
int tb_switch_tmu_init(struct tb_switch *sw)
{
@ -439,6 +441,8 @@ int tb_switch_tmu_init(struct tb_switch *sw)
* @sw: Switch whose time to update
*
* Updates switch local time using time posting procedure.
*
* Return: %0 on success, negative errno otherwise.
*/
int tb_switch_tmu_post_time(struct tb_switch *sw)
{
@ -555,6 +559,8 @@ static int disable_enhanced(struct tb_port *up, struct tb_port *down)
* @sw: Switch whose TMU to disable
*
* Turns off TMU of @sw if it is enabled. If not enabled does nothing.
*
* Return: %0 on success, negative errno otherwise.
*/
int tb_switch_tmu_disable(struct tb_switch *sw)
{
@ -938,6 +944,8 @@ static int tb_switch_tmu_change_mode(struct tb_switch *sw)
* Enables TMU of a router to be in uni-directional Normal/HiFi or
* bi-directional HiFi mode. Calling tb_switch_tmu_configure() is
* required before calling this function.
*
* Return: %0 on success, negative errno otherwise.
*/
int tb_switch_tmu_enable(struct tb_switch *sw)
{
@ -1017,9 +1025,11 @@ int tb_switch_tmu_enable(struct tb_switch *sw)
* Selects the TMU mode that is enabled when tb_switch_tmu_enable() is
* next called.
*
* Returns %0 in success and negative errno otherwise. Specifically
* returns %-EOPNOTSUPP if the requested mode is not possible (not
* supported by the router and/or topology).
* Return:
* * %0 - On success.
* * %-EOPNOTSUPP - If the requested mode is not possible (not supported by
* the router and/or topology).
* * Negative errno - Another error occurred.
*/
int tb_switch_tmu_configure(struct tb_switch *sw, enum tb_switch_tmu_mode mode)
{


@ -121,6 +121,8 @@ static inline unsigned int tb_usable_credits(const struct tb_port *port)
* @port: Lane adapter to check
* @max_dp_streams: If non-%NULL stores maximum number of simultaneous DP
* streams possible through this lane adapter
*
* Return: Number of available credits.
*/
static unsigned int tb_available_credits(const struct tb_port *port,
size_t *max_dp_streams)
@ -415,8 +417,9 @@ static int tb_pci_init_path(struct tb_path *path)
* @alloc_hopid: Allocate HopIDs from visited ports
*
* If @down adapter is active, follows the tunnel to the PCIe upstream
* adapter and back. Returns the discovered tunnel or %NULL if there was
* no tunnel.
* adapter and back.
*
* Return: Pointer to &struct tb_tunnel or %NULL if there was no tunnel.
*/
struct tb_tunnel *tb_tunnel_discover_pci(struct tb *tb, struct tb_port *down,
bool alloc_hopid)
@ -496,7 +499,7 @@ struct tb_tunnel *tb_tunnel_discover_pci(struct tb *tb, struct tb_port *down,
* Allocate a PCI tunnel. The ports must be of type TB_TYPE_PCIE_UP and
* TB_TYPE_PCIE_DOWN.
*
* Return: Returns a tb_tunnel on success or NULL on failure.
* Return: Pointer to &struct tb_tunnel or %NULL on failure.
*/
struct tb_tunnel *tb_tunnel_alloc_pci(struct tb *tb, struct tb_port *up,
struct tb_port *down)
@ -543,9 +546,12 @@ struct tb_tunnel *tb_tunnel_alloc_pci(struct tb *tb, struct tb_port *up,
*
* Can be called to any connected lane 0 adapter to find out how much
* bandwidth needs to be left in reserve for possible PCIe bulk traffic.
* Returns true if there is something to be reserved and writes the
* amount to @reserved_down/@reserved_up. Otherwise returns false and
* does not touch the parameters.
*
* Return:
* * %true - If there is something to be reserved. Writes the amount to
* @reserved_down/@reserved_up.
* * %false - Nothing to be reserved. Leaves @reserved_down/@reserved_up
* unmodified.
*/
bool tb_tunnel_reserved_pci(struct tb_port *port, int *reserved_up,
int *reserved_down)
@ -1073,6 +1079,7 @@ static void tb_dp_dprx_work(struct work_struct *work)
if (tunnel->callback)
tunnel->callback(tunnel, tunnel->callback_data);
tb_tunnel_put(tunnel);
}
static int tb_dp_dprx_start(struct tb_tunnel *tunnel)
@ -1100,8 +1107,8 @@ static void tb_dp_dprx_stop(struct tb_tunnel *tunnel)
if (tunnel->dprx_started) {
tunnel->dprx_started = false;
tunnel->dprx_canceled = true;
cancel_delayed_work(&tunnel->dprx_work);
tb_tunnel_put(tunnel);
if (cancel_delayed_work(&tunnel->dprx_work))
tb_tunnel_put(tunnel);
}
}
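The tb_dp_dprx_stop() change above pairs with the extra tb_tunnel_put() in tb_dp_dprx_work(): the reference taken when the delayed work was queued is dropped by whichever side actually ends the work, so the stop path only puts it when cancel_delayed_work() reports that a pending work item was cancelled. A generic sketch of that reference-counting rule, with made-up names:

#include <linux/workqueue.h>
#include <linux/kref.h>
#include <linux/jiffies.h>

struct example_obj {
        struct kref kref;
        struct delayed_work work;       /* set up with INIT_DELAYED_WORK() */
};

static void example_release(struct kref *kref) { /* kfree() the object here */ }

static void example_worker(struct work_struct *work)
{
        struct example_obj *obj =
                container_of(work, struct example_obj, work.work);

        /* ... do the work ... */
        kref_put(&obj->kref, example_release);  /* drop the queuing reference */
}

static void example_start(struct example_obj *obj)
{
        kref_get(&obj->kref);                   /* reference owned by the work */
        queue_delayed_work(system_wq, &obj->work, HZ);
}

static void example_stop(struct example_obj *obj)
{
        /* Drop the reference only if the work was still pending */
        if (cancel_delayed_work(&obj->work))
                kref_put(&obj->kref, example_release);
}

If the work has already started, the worker itself drops the reference, so putting it unconditionally on the stop path would underflow the count.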
@ -1151,7 +1158,8 @@ static int tb_dp_activate(struct tb_tunnel *tunnel, bool active)
* @tunnel: DP tunnel to check
* @max_bw_rounded: Maximum bandwidth in Mb/s rounded up to the next granularity
*
* Returns maximum possible bandwidth for this tunnel in Mb/s.
* Return: Maximum possible bandwidth for this tunnel in Mb/s, negative errno
* in case of failure.
*/
static int tb_dp_bandwidth_mode_maximum_bandwidth(struct tb_tunnel *tunnel,
int *max_bw_rounded)
@ -1547,7 +1555,7 @@ static void tb_dp_dump(struct tb_tunnel *tunnel)
* and back. Returns the discovered tunnel or %NULL if there was no
* tunnel.
*
* Return: DP tunnel or %NULL if no tunnel found.
* Return: Pointer to &struct tb_tunnel or %NULL if no tunnel found.
*/
struct tb_tunnel *tb_tunnel_discover_dp(struct tb *tb, struct tb_port *in,
bool alloc_hopid)
@ -1648,7 +1656,7 @@ struct tb_tunnel *tb_tunnel_discover_dp(struct tb *tb, struct tb_port *in,
* successful (or if it returns %false there was some sort of issue).
* The @callback is called without @tb->lock held.
*
* Return: Returns a tb_tunnel on success or &NULL on failure.
* Return: Pointer to &struct tb_tunnel or %NULL in case of failure.
*/
struct tb_tunnel *tb_tunnel_alloc_dp(struct tb *tb, struct tb_port *in,
struct tb_port *out, int link_nr,
@ -1861,7 +1869,7 @@ static void tb_dma_destroy(struct tb_tunnel *tunnel)
* @receive_ring: NHI ring number used to receive packets from the
* other domain. Set to %-1 if RX path is not needed.
*
* Return: Returns a tb_tunnel on success or NULL on failure.
* Return: Pointer to &struct tb_tunnel or %NULL in case of failure.
*/
struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi,
struct tb_port *dst, int transmit_path,
@ -1938,7 +1946,8 @@ struct tb_tunnel *tb_tunnel_alloc_dma(struct tb *tb, struct tb_port *nhi,
*
* This function can be used to match specific DMA tunnel, if there are
* multiple DMA tunnels going through the same XDomain connection.
* Returns true if there is match and false otherwise.
*
* Return: %true if there is a match, %false otherwise.
*/
bool tb_tunnel_match_dma(const struct tb_tunnel *tunnel, int transmit_path,
int transmit_ring, int receive_path, int receive_ring)
@ -2160,8 +2169,9 @@ static void tb_usb3_init_path(struct tb_path *path)
* @alloc_hopid: Allocate HopIDs from visited ports
*
* If @down adapter is active, follows the tunnel to the USB3 upstream
* adapter and back. Returns the discovered tunnel or %NULL if there was
* no tunnel.
* adapter and back.
*
* Return: Pointer to &struct tb_tunnel or %NULL if there was no tunnel.
*/
struct tb_tunnel *tb_tunnel_discover_usb3(struct tb *tb, struct tb_port *down,
bool alloc_hopid)
@ -2266,7 +2276,7 @@ struct tb_tunnel *tb_tunnel_discover_usb3(struct tb *tb, struct tb_port *down,
* Allocate an USB3 tunnel. The ports must be of type @TB_TYPE_USB3_UP and
* @TB_TYPE_USB3_DOWN.
*
* Return: Returns a tb_tunnel on success or %NULL on failure.
* Return: Pointer to &struct tb_tunnel or %NULL in case of failure.
*/
struct tb_tunnel *tb_tunnel_alloc_usb3(struct tb *tb, struct tb_port *up,
struct tb_port *down, int max_up,
@ -2337,6 +2347,8 @@ struct tb_tunnel *tb_tunnel_alloc_usb3(struct tb *tb, struct tb_port *up,
/**
* tb_tunnel_is_invalid - check whether an activated path is still valid
* @tunnel: Tunnel to check
*
* Return: %true if path is valid, %false otherwise.
*/
bool tb_tunnel_is_invalid(struct tb_tunnel *tunnel)
{
@ -2355,10 +2367,11 @@ bool tb_tunnel_is_invalid(struct tb_tunnel *tunnel)
* tb_tunnel_activate() - activate a tunnel
* @tunnel: Tunnel to activate
*
* Return: 0 on success and negative errno in case if failure.
* Specifically returns %-EINPROGRESS if the tunnel activation is still
* in progress (that's for DP tunnels to complete DPRX capabilities
* read).
* Return:
* * %0 - On success.
* * %-EINPROGRESS - If the tunnel activation is still in progress (that's
* for DP tunnels to complete DPRX capabilities read).
* * Negative errno - Another error occurred.
*/
int tb_tunnel_activate(struct tb_tunnel *tunnel)
{
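For callers the %-EINPROGRESS case above is not an error: a DP tunnel completes its DPRX capabilities read asynchronously and reports the final result through the callback passed at allocation time (the tb_tunnel_alloc_dp() comment earlier in this file notes the callback runs without @tb->lock held). A hedged sketch of how a caller might treat the return value; the wrapper name is made up and the error handling is illustrative only:

static int example_activate(struct tb_tunnel *tunnel)
{
        int ret;

        ret = tb_tunnel_activate(tunnel);
        if (ret && ret != -EINPROGRESS)
                return ret;     /* real failure */

        /* %0 or %-EINPROGRESS: active now, or completing via the callback */
        return 0;
}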
@ -2438,8 +2451,8 @@ void tb_tunnel_deactivate(struct tb_tunnel *tunnel)
* @tunnel: Tunnel to check
* @port: Port to check
*
* Returns true if @tunnel goes through @port (direction does not matter),
* false otherwise.
* Return: %true if @tunnel goes through @port (direction does not matter),
* %false otherwise.
*/
bool tb_tunnel_port_on_path(const struct tb_tunnel *tunnel,
const struct tb_port *port)
@ -2469,9 +2482,11 @@ static bool tb_tunnel_is_activated(const struct tb_tunnel *tunnel)
* @max_up: Maximum upstream bandwidth in Mb/s
* @max_down: Maximum downstream bandwidth in Mb/s
*
* Returns maximum possible bandwidth this tunnel can go if not limited
* by other bandwidth clients. If the tunnel does not support this
* returns %-EOPNOTSUPP.
* Return:
* * Maximum possible bandwidth this tunnel can support if not
* limited by other bandwidth clients.
* * %-EOPNOTSUPP - If the tunnel does not support this function.
* * %-ENOTCONN - If the tunnel is not active.
*/
int tb_tunnel_maximum_bandwidth(struct tb_tunnel *tunnel, int *max_up,
int *max_down)
@ -2491,8 +2506,12 @@ int tb_tunnel_maximum_bandwidth(struct tb_tunnel *tunnel, int *max_up,
* @allocated_down: Currently allocated downstream bandwidth in Mb/s is
* stored here
*
* Returns the bandwidth allocated for the tunnel. This may be higher
* than what the tunnel actually consumes.
* Return:
* * Bandwidth allocated for the tunnel. This may be higher than what the
* tunnel actually consumes.
* * %-EOPNOTSUPP - If the tunnel does not support this function.
* * %-ENOTCONN - If the tunnel is not active.
* * Negative errno - Another error occurred.
*/
int tb_tunnel_allocated_bandwidth(struct tb_tunnel *tunnel, int *allocated_up,
int *allocated_down)
@ -2512,10 +2531,12 @@ int tb_tunnel_allocated_bandwidth(struct tb_tunnel *tunnel, int *allocated_up,
* @alloc_up: New upstream bandwidth in Mb/s
* @alloc_down: New downstream bandwidth in Mb/s
*
* Tries to change tunnel bandwidth allocation. If succeeds returns %0
* and updates @alloc_up and @alloc_down to that was actually allocated
* (it may not be the same as passed originally). Returns negative errno
* in case of failure.
* Tries to change tunnel bandwidth allocation.
*
* Return:
* * %0 - On success. Updates @alloc_up and @alloc_down to values that were
* actually allocated (it may not be the same as passed originally).
* * Negative errno - In case of failure.
*/
int tb_tunnel_alloc_bandwidth(struct tb_tunnel *tunnel, int *alloc_up,
int *alloc_down)
@ -2546,8 +2567,9 @@ int tb_tunnel_alloc_bandwidth(struct tb_tunnel *tunnel, int *alloc_up,
* Can be %NULL.
*
* Stores the amount of isochronous bandwidth @tunnel consumes in
* @consumed_up and @consumed_down. In case of success returns %0,
* negative errno otherwise.
* @consumed_up and @consumed_down.
*
* Return: %0 on success, negative errno otherwise.
*/
int tb_tunnel_consumed_bandwidth(struct tb_tunnel *tunnel, int *consumed_up,
int *consumed_down)
@ -2585,7 +2607,7 @@ int tb_tunnel_consumed_bandwidth(struct tb_tunnel *tunnel, int *consumed_up,
* If tunnel supports dynamic bandwidth management (USB3 tunnels at the
* moment) this function makes it to release all the unused bandwidth.
*
* Returns %0 in case of success and negative errno otherwise.
* Return: %0 on success, negative errno otherwise.
*/
int tb_tunnel_release_unused_bandwidth(struct tb_tunnel *tunnel)
{


@ -142,10 +142,11 @@ void tb_tunnel_deactivate(struct tb_tunnel *tunnel);
* tb_tunnel_is_active() - Is tunnel fully activated
* @tunnel: Tunnel to check
*
* Returns %true if @tunnel is fully activated. For other than DP
* tunnels this is pretty much once tb_tunnel_activate() returns
* successfully. However, for DP tunnels this returns %true only once the
* DPRX capabilities read has been issued successfully.
* Return: %true if @tunnel is fully activated.
*
* Note that for DP tunnels this returns %true only once the DPRX capabilities
* read has been issued successfully. For other tunnels, this function
* returns %true pretty much once tb_tunnel_activate() returns successfully.
*/
static inline bool tb_tunnel_is_active(const struct tb_tunnel *tunnel)
{


@ -9,6 +9,7 @@
#include <linux/delay.h>
#include <linux/ktime.h>
#include <linux/string_choices.h>
#include <linux/units.h>
#include "sb_regs.h"
@ -172,8 +173,8 @@ void usb4_switch_check_wakes(struct tb_switch *sw)
return;
tb_sw_dbg(sw, "PCIe wake: %s, USB3 wake: %s\n",
(val & ROUTER_CS_6_WOPS) ? "yes" : "no",
(val & ROUTER_CS_6_WOUS) ? "yes" : "no");
str_yes_no(val & ROUTER_CS_6_WOPS),
str_yes_no(val & ROUTER_CS_6_WOUS));
wakeup = val & (ROUTER_CS_6_WOPS | ROUTER_CS_6_WOUS);
}
@ -191,9 +192,9 @@ void usb4_switch_check_wakes(struct tb_switch *sw)
break;
tb_port_dbg(port, "USB4 wake: %s, connection wake: %s, disconnection wake: %s\n",
(val & PORT_CS_18_WOU4S) ? "yes" : "no",
(val & PORT_CS_18_WOCS) ? "yes" : "no",
(val & PORT_CS_18_WODS) ? "yes" : "no");
str_yes_no(val & PORT_CS_18_WOU4S),
str_yes_no(val & PORT_CS_18_WOCS),
str_yes_no(val & PORT_CS_18_WODS));
wakeup_usb4 = val & (PORT_CS_18_WOU4S | PORT_CS_18_WOCS |
PORT_CS_18_WODS);
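str_yes_no() comes from <linux/string_choices.h>, pulled in by the include hunk above; it is purely a readability helper that maps a boolean to the string "yes" or "no". A trivial, hypothetical illustration of what the converted debug prints now do:

#include <linux/printk.h>
#include <linux/string_choices.h>

static void example_log_wake(bool wakeup)       /* made-up helper, for illustration */
{
        pr_debug("wake: %s\n", str_yes_no(wakeup));
        /* equivalent to the open-coded form it replaces: wakeup ? "yes" : "no" */
}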
@ -236,6 +237,8 @@ static bool link_is_usb4(struct tb_port *port)
*
* This does not set the configuration valid bit of the router. To do
* that call usb4_switch_configuration_valid().
*
* Return: %0 on success, negative errno otherwise.
*/
int usb4_switch_setup(struct tb_switch *sw)
{
@ -260,7 +263,7 @@ int usb4_switch_setup(struct tb_switch *sw)
tbt3 = !(val & ROUTER_CS_6_TNS);
tb_sw_dbg(sw, "TBT3 support: %s, xHCI: %s\n",
tbt3 ? "yes" : "no", xhci ? "yes" : "no");
str_yes_no(tbt3), str_yes_no(xhci));
ret = tb_sw_read(sw, &val, TB_CFG_SWITCH, ROUTER_CS_5, 1);
if (ret)
@ -303,7 +306,7 @@ int usb4_switch_setup(struct tb_switch *sw)
* usb4_switch_setup() has been called. Can be called to host and device
* routers (does nothing for the latter).
*
* Returns %0 in success and negative errno otherwise.
* Return: %0 on success, negative errno otherwise.
*/
int usb4_switch_configuration_valid(struct tb_switch *sw)
{
@ -333,6 +336,8 @@ int usb4_switch_configuration_valid(struct tb_switch *sw)
* @uid: UID is stored here
*
* Reads 64-bit UID from USB4 router config space.
*
* Return: %0 on success, negative errno otherwise.
*/
int usb4_switch_read_uid(struct tb_switch *sw, u64 *uid)
{
@ -370,6 +375,8 @@ static int usb4_switch_drom_read_block(void *data,
* Uses USB4 router operations to read router DROM. For devices this
* should always work but for hosts it may return %-EOPNOTSUPP in which
* case the host router does not have DROM.
*
* Return: %0 on success, negative errno otherwise.
*/
int usb4_switch_drom_read(struct tb_switch *sw, unsigned int address, void *buf,
size_t size)
@ -384,6 +391,8 @@ int usb4_switch_drom_read(struct tb_switch *sw, unsigned int address, void *buf,
*
* Checks whether conditions are met so that lane bonding can be
* established with the upstream router. Call only for device routers.
*
* Return: %true if lane bonding is possible, %false otherwise.
*/
bool usb4_switch_lane_bonding_possible(struct tb_switch *sw)
{
@ -406,6 +415,8 @@ bool usb4_switch_lane_bonding_possible(struct tb_switch *sw)
* @runtime: Wake is being programmed during system runtime
*
* Enables/disables router to wake up from sleep.
*
* Return: %0 on success, negative errno otherwise.
*/
int usb4_switch_set_wake(struct tb_switch *sw, unsigned int flags, bool runtime)
{
@ -483,8 +494,10 @@ int usb4_switch_set_wake(struct tb_switch *sw, unsigned int flags, bool runtime)
* usb4_switch_set_sleep() - Prepare the router to enter sleep
* @sw: USB4 router
*
* Sets sleep bit for the router. Returns when the router sleep ready
* Sets sleep bit for the router and waits until router sleep ready
* bit has been asserted.
*
* Return: %0 on success, negative errno otherwise.
*/
int usb4_switch_set_sleep(struct tb_switch *sw)
{
@ -510,9 +523,10 @@ int usb4_switch_set_sleep(struct tb_switch *sw)
* usb4_switch_nvm_sector_size() - Return router NVM sector size
* @sw: USB4 router
*
* If the router supports NVM operations this function returns the NVM
* sector size in bytes. If NVM operations are not supported returns
* %-EOPNOTSUPP.
* Return:
* * NVM sector size in bytes if router supports NVM operations.
* * %-EOPNOTSUPP - If router does not support NVM operations.
* * Negative errno - Another error occurred.
*/
int usb4_switch_nvm_sector_size(struct tb_switch *sw)
{
@ -559,8 +573,12 @@ static int usb4_switch_nvm_read_block(void *data,
* @buf: Read data is placed here
* @size: How many bytes to read
*
* Reads NVM contents of the router. If NVM is not supported returns
* %-EOPNOTSUPP.
* Reads NVM contents of the router.
*
* Return:
* * %0 - Read completed successfully.
* * %-EOPNOTSUPP - NVM not supported.
* * Negative errno - Another error occurred.
*/
int usb4_switch_nvm_read(struct tb_switch *sw, unsigned int address, void *buf,
size_t size)
@ -577,7 +595,7 @@ int usb4_switch_nvm_read(struct tb_switch *sw, unsigned int address, void *buf,
* Explicitly sets NVM write offset. Normally when writing to NVM this
* is done automatically by usb4_switch_nvm_write().
*
* Returns %0 in success and negative errno if there was a failure.
* Return: %0 on success, negative errno otherwise.
*/
int usb4_switch_nvm_set_offset(struct tb_switch *sw, unsigned int address)
{
@ -619,8 +637,12 @@ static int usb4_switch_nvm_write_next_block(void *data, unsigned int dwaddress,
* @buf: Pointer to the data to write
* @size: Size of @buf in bytes
*
* Writes @buf to the router NVM using USB4 router operations. If NVM
* write is not supported returns %-EOPNOTSUPP.
* Writes @buf to the router NVM using USB4 router operations.
*
* Return:
* * %0 - Write completed successfully.
* * %-EOPNOTSUPP - NVM write not supported.
* * Negative errno - Another error occurred.
*/
int usb4_switch_nvm_write(struct tb_switch *sw, unsigned int address,
const void *buf, size_t size)
@ -642,11 +664,13 @@ int usb4_switch_nvm_write(struct tb_switch *sw, unsigned int address,
* After the new NVM has been written via usb4_switch_nvm_write(), this
* function triggers NVM authentication process. The router gets power
* cycled and if the authentication is successful the new NVM starts
* running. In case of failure returns negative errno.
* running.
*
* The caller should call usb4_switch_nvm_authenticate_status() to read
* the status of the authentication after power cycle. It should be the
* first router operation to avoid the status being lost.
*
* Return: %0 on success, negative errno otherwise.
*/
int usb4_switch_nvm_authenticate(struct tb_switch *sw)
{
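The comments above describe a three-step NVM upgrade flow: write the image, trigger authentication (which power cycles the router on success), and read the authentication status as the first router operation once the router comes back. A rough sketch under those assumptions; the wrapper names are hypothetical and offset handling and validation are omitted:

static int example_nvm_upgrade(struct tb_switch *sw, const void *image,
                               size_t size)
{
        int ret;

        ret = usb4_switch_nvm_write(sw, 0, image, size);
        if (ret)
                return ret;

        /* Power cycles the router if authentication succeeds */
        return usb4_switch_nvm_authenticate(sw);
}

/* Later, once the router is back, this should be the first router operation: */
static int example_nvm_result(struct tb_switch *sw, u32 *status)
{
        return usb4_switch_nvm_authenticate_status(sw, status);
}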
@ -674,11 +698,13 @@ int usb4_switch_nvm_authenticate(struct tb_switch *sw)
* @status: Status code of the operation
*
* The function checks if there is status available from the last NVM
* authenticate router operation. If there is status then %0 is returned
* and the status code is placed in @status. Returns negative errno in case
* of failure.
* authenticate router operation.
*
* Must be called before any other router operation.
*
* Return:
* * %0 - If there is status. Status code is placed in @status.
* * Negative errno - Failure occurred.
*/
int usb4_switch_nvm_authenticate_status(struct tb_switch *sw, u32 *status)
{
@ -722,7 +748,7 @@ int usb4_switch_nvm_authenticate_status(struct tb_switch *sw, u32 *status)
* allocation fields accordingly. Specifically @sw->credits_allocation
* is set to %true if these parameters can be used in tunneling.
*
* Returns %0 on success and negative errno otherwise.
* Return: %0 on success, negative errno otherwise.
*/
int usb4_switch_credits_init(struct tb_switch *sw)
{
@ -861,8 +887,10 @@ int usb4_switch_credits_init(struct tb_switch *sw)
* @in: DP IN adapter
*
* For DP tunneling this function can be used to query availability of
* DP IN resource. Returns true if the resource is available for DP
* tunneling, false otherwise.
* DP IN resource.
*
* Return: %true if the resource is available for DP tunneling, %false
* otherwise.
*/
bool usb4_switch_query_dp_resource(struct tb_switch *sw, struct tb_port *in)
{
@ -890,9 +918,12 @@ bool usb4_switch_query_dp_resource(struct tb_switch *sw, struct tb_port *in)
* @in: DP IN adapter
*
* Allocates DP IN resource for DP tunneling using USB4 router
* operations. If the resource was allocated returns %0. Otherwise
* returns negative errno, in particular %-EBUSY if the resource is
* already allocated.
* operations.
*
* Return:
* * %0 - Resource allocated successfully.
* * %-EBUSY - Resource is already allocated.
* * Negative errno - Other failure occurred.
*/
int usb4_switch_alloc_dp_resource(struct tb_switch *sw, struct tb_port *in)
{
@ -916,6 +947,8 @@ int usb4_switch_alloc_dp_resource(struct tb_switch *sw, struct tb_port *in)
* @in: DP IN adapter
*
* Releases the previously allocated DP IN resource.
*
* Return: %0 on success, negative errno otherwise.
*/
int usb4_switch_dealloc_dp_resource(struct tb_switch *sw, struct tb_port *in)
{
@ -971,6 +1004,8 @@ int usb4_port_index(const struct tb_switch *sw, const struct tb_port *port)
* downstream adapters where the PCIe topology is extended. This
* function returns the corresponding downstream PCIe adapter or %NULL
* if no such mapping was possible.
*
* Return: Pointer to &struct tb_port or %NULL if not found.
*/
struct tb_port *usb4_switch_map_pcie_down(struct tb_switch *sw,
const struct tb_port *port)
@ -1002,6 +1037,8 @@ struct tb_port *usb4_switch_map_pcie_down(struct tb_switch *sw,
* downstream adapters where the USB 3.x topology is extended. This
* function returns the corresponding downstream USB 3.x adapter or
* %NULL if no such mapping was possible.
*
* Return: Pointer to &struct tb_port or %NULL if not found.
*/
struct tb_port *usb4_switch_map_usb3_down(struct tb_switch *sw,
const struct tb_port *port)
@ -1031,7 +1068,7 @@ struct tb_port *usb4_switch_map_usb3_down(struct tb_switch *sw,
* For USB4 router finds all USB4 ports and registers devices for each.
* Can be called to any router.
*
* Return %0 in case of success and negative errno in case of failure.
* Return: %0 on success, negative errno otherwise.
*/
int usb4_switch_add_ports(struct tb_switch *sw)
{
@ -1084,6 +1121,8 @@ void usb4_switch_remove_ports(struct tb_switch *sw)
*
* Unlocks USB4 downstream port so that the connection manager can
* access the router below this port.
*
* Return: %0 on success, negative errno otherwise.
*/
int usb4_port_unlock(struct tb_port *port)
{
@ -1104,6 +1143,8 @@ int usb4_port_unlock(struct tb_port *port)
*
* Enables hot plug events on a given port. This is only intended
* to be used on lane, DP-IN, and DP-OUT adapters.
*
* Return: %0 on success, negative errno otherwise.
*/
int usb4_port_hotplug_enable(struct tb_port *port)
{
@ -1123,6 +1164,8 @@ int usb4_port_hotplug_enable(struct tb_port *port)
* @port: USB4 port to reset
*
* Issues downstream port reset to @port.
*
* Return: %0 on success, negative errno otherwise.
*/
int usb4_port_reset(struct tb_port *port)
{
@ -1184,6 +1227,8 @@ static int usb4_port_set_configured(struct tb_port *port, bool configured)
* @port: USB4 router
*
* Sets the USB4 link to be configured for power management purposes.
*
* Return: %0 on success, negative errno otherwise.
*/
int usb4_port_configure(struct tb_port *port)
{
@ -1195,6 +1240,8 @@ int usb4_port_configure(struct tb_port *port)
* @port: USB4 router
*
* Sets the USB4 link to be unconfigured for power management purposes.
*
* Return: %0 on success, negative errno otherwise.
*/
void usb4_port_unconfigure(struct tb_port *port)
{
@ -1229,7 +1276,9 @@ static int usb4_set_xdomain_configured(struct tb_port *port, bool configured)
* @xd: XDomain that is connected to the port
*
* Marks the USB4 port as being connected to another host and updates
* the link type. Returns %0 in success and negative errno in failure.
* the link type.
*
* Return: %0 on success, negative errno otherwise.
*/
int usb4_port_configure_xdomain(struct tb_port *port, struct tb_xdomain *xd)
{
@ -1299,7 +1348,8 @@ static int usb4_port_write_data(struct tb_port *port, const void *data,
* @size: Size of @buf
*
* Reads data from sideband register @reg and copies it into @buf.
* Returns %0 in case of success and negative errno in case of failure.
*
* Return: %0 on success, negative errno otherwise.
*/
int usb4_port_sb_read(struct tb_port *port, enum usb4_sb_target target, u8 index,
u8 reg, void *buf, u8 size)
@ -1350,8 +1400,9 @@ int usb4_port_sb_read(struct tb_port *port, enum usb4_sb_target target, u8 index
* @buf: Data to write
* @size: Size of @buf
*
* Writes @buf to sideband register @reg. Returns %0 in case of success
* and negative errno in case of failure.
* Writes @buf to sideband register @reg.
*
* Return: %0 on success, negative errno otherwise.
*/
int usb4_port_sb_write(struct tb_port *port, enum usb4_sb_target target,
u8 index, u8 reg, const void *buf, u8 size)
@ -1468,8 +1519,7 @@ static int usb4_port_set_router_offline(struct tb_port *port, bool offline)
* port does not react on hotplug events anymore. This needs to be
* called before retimer access is done when the USB4 links is not up.
*
* Returns %0 in case of success and negative errno if there was an
* error.
* Return: %0 on success, negative errno otherwise.
*/
int usb4_port_router_offline(struct tb_port *port)
{
@ -1481,6 +1531,8 @@ int usb4_port_router_offline(struct tb_port *port)
* @port: USB4 port
*
* Makes the USB4 port functional again.
*
* Return: %0 on success, negative errno otherwise.
*/
int usb4_port_router_online(struct tb_port *port)
{
@ -1492,8 +1544,9 @@ int usb4_port_router_online(struct tb_port *port)
* @port: USB4 port
*
* This forces the USB4 port to send broadcast RT transaction which
* makes the retimers on the link to assign index to themselves. Returns
* %0 in case of success and negative errno if there was an error.
* makes the retimers on the link assign index to themselves.
*
* Return: %0 on success, negative errno otherwise.
*/
int usb4_port_enumerate_retimers(struct tb_port *port)
{
@ -1510,6 +1563,8 @@ int usb4_port_enumerate_retimers(struct tb_port *port)
*
* PORT_CS_18_CPS bit reflects if the link supports CLx including
* active cables (if connected on the link).
*
* Return: %true if Clx is supported, %false otherwise.
*/
bool usb4_port_clx_supported(struct tb_port *port)
{
@ -1528,8 +1583,9 @@ bool usb4_port_clx_supported(struct tb_port *port)
* usb4_port_asym_supported() - If the port supports asymmetric link
* @port: USB4 port
*
* Checks if the port and the cable supports asymmetric link and returns
* %true in that case.
* Checks if the port and the cable support asymmetric link.
*
* Return: %true if asymmetric link is supported, %false otherwise.
*/
bool usb4_port_asym_supported(struct tb_port *port)
{
@ -1551,6 +1607,8 @@ bool usb4_port_asym_supported(struct tb_port *port)
*
* Sets USB4 port link width to @width. Can be called for widths where
* usb4_port_asym_width_supported() returned @true.
*
* Return: %0 on success, negative errno otherwise.
*/
int usb4_port_asym_set_link_width(struct tb_port *port, enum tb_link_width width)
{
@ -1595,8 +1653,10 @@ int usb4_port_asym_set_link_width(struct tb_port *port, enum tb_link_width width
* (according to what was previously set in tb_port_set_link_width().
* Wait for completion of the change.
*
* Returns %0 in case of success, %-ETIMEDOUT if case of timeout or
* a negative errno in case of a failure.
* Return:
* * %0 - Symmetry change was successful.
* * %-ETIMEDOUT - Timeout occurred.
* * Negative errno - Other failure occurred.
*/
int usb4_port_asym_start(struct tb_port *port)
{
@ -1640,6 +1700,8 @@ int usb4_port_asym_start(struct tb_port *port)
* @ncaps: Number of elements in the caps array
*
* Reads the USB4 port lane margining capabilities into @caps.
*
* Return: %0 on success, negative errno otherwise.
*/
int usb4_port_margining_caps(struct tb_port *port, enum usb4_sb_target target,
u8 index, u32 *caps, size_t ncaps)
@ -1666,6 +1728,8 @@ int usb4_port_margining_caps(struct tb_port *port, enum usb4_sb_target target,
*
* Runs hardware lane margining on USB4 port and returns the result in
* @results.
*
* Return: %0 on success, negative errno otherwise.
*/
int usb4_port_hw_margin(struct tb_port *port, enum usb4_sb_target target,
u8 index, const struct usb4_port_margining_params *params,
@ -1710,8 +1774,9 @@ int usb4_port_hw_margin(struct tb_port *port, enum usb4_sb_target target,
* @results: Data word for the operation completion data
*
* Runs software lane margining on USB4 port. Read back the error
* counters by calling usb4_port_sw_margin_errors(). Returns %0 in
* success and negative errno otherwise.
* counters by calling usb4_port_sw_margin_errors().
*
* Return: %0 on success, negative errno otherwise.
*/
int usb4_port_sw_margin(struct tb_port *port, enum usb4_sb_target target,
u8 index, const struct usb4_port_margining_params *params,
@ -1758,7 +1823,8 @@ int usb4_port_sw_margin(struct tb_port *port, enum usb4_sb_target target,
* @errors: Error metadata is copied here.
*
* This reads back the software margining error counters from the port.
* Returns %0 in success and negative errno otherwise.
*
* Return: %0 on success, negative errno otherwise.
*/
int usb4_port_sw_margin_errors(struct tb_port *port, enum usb4_sb_target target,
u8 index, u32 *errors)
@ -1789,6 +1855,8 @@ static inline int usb4_port_retimer_op(struct tb_port *port, u8 index,
*
* Enables sideband channel transactions on SBTX. Can be used when USB4
* link does not go up, for example if there is no device connected.
*
* Return: %0 on success, negative errno otherwise.
*/
int usb4_port_retimer_set_inbound_sbtx(struct tb_port *port, u8 index)
{
@ -1816,6 +1884,8 @@ int usb4_port_retimer_set_inbound_sbtx(struct tb_port *port, u8 index)
*
* Disables sideband channel transactions on SBTX. The reverse of
* usb4_port_retimer_set_inbound_sbtx().
*
* Return: %0 on success, negative errno otherwise.
*/
int usb4_port_retimer_unset_inbound_sbtx(struct tb_port *port, u8 index)
{
@ -1828,10 +1898,12 @@ int usb4_port_retimer_unset_inbound_sbtx(struct tb_port *port, u8 index)
* @port: USB4 port
* @index: Retimer index
*
* If the retimer at @index is last one (connected directly to the
* Type-C port) this function returns %1. If it is not returns %0. If
* the retimer is not present returns %-ENODEV. Otherwise returns
* negative errno.
* Return:
* * %1 - Retimer at @index is the last one (connected directly to the
* Type-C port).
* * %0 - Retimer at @index is not the last one.
* * %-ENODEV - Retimer is not present.
* * Negative errno - Other failure occurred.
*/
int usb4_port_retimer_is_last(struct tb_port *port, u8 index)
{
@ -1853,9 +1925,11 @@ int usb4_port_retimer_is_last(struct tb_port *port, u8 index)
* @port: USB4 port
* @index: Retimer index
*
* If the retimer at @index is last cable retimer this function returns
* %1 and %0 if it is on-board retimer. In case a retimer is not present
* at @index returns %-ENODEV. Otherwise returns negative errno.
* Return:
* * %1 - Retimer at @index is the last cable retimer.
* * %0 - Retimer at @index is on-board retimer.
* * %-ENODEV - Retimer is not present.
* * Negative errno - Other failure occurred.
*/
int usb4_port_retimer_is_cable(struct tb_port *port, u8 index)
{
@ -1879,9 +1953,12 @@ int usb4_port_retimer_is_cable(struct tb_port *port, u8 index)
*
* Reads NVM sector size (in bytes) of a retimer at @index. This
* operation can be used to determine whether the retimer supports NVM
* upgrade for example. Returns sector size in bytes or negative errno
* in case of error. Specifically returns %-ENODEV if there is no
* retimer at @index.
* upgrade for example.
*
* Return:
* * Sector size in bytes.
* * %-ENODEV - If there is no retimer at @index.
* * Negative errno - In case of an error.
*/
int usb4_port_retimer_nvm_sector_size(struct tb_port *port, u8 index)
{
@ -1907,7 +1984,7 @@ int usb4_port_retimer_nvm_sector_size(struct tb_port *port, u8 index)
* Explicitly sets NVM write offset. Normally when writing to NVM this is
* done automatically by usb4_port_retimer_nvm_write().
*
* Returns %0 in success and negative errno if there was a failure.
* Return: %0 on success, negative errno otherwise.
*/
int usb4_port_retimer_nvm_set_offset(struct tb_port *port, u8 index,
unsigned int address)
@ -1960,9 +2037,12 @@ static int usb4_port_retimer_nvm_write_next_block(void *data,
* @size: Size in bytes how much to write
*
* Writes @size bytes from @buf to the retimer NVM. Used for NVM
* upgrade. Returns %0 if the data was written successfully and negative
* errno in case of failure. Specifically returns %-ENODEV if there is
* no retimer at @index.
* upgrade.
*
* Return:
* * %0 - If the data was written successfully.
* * %-ENODEV - If there is no retimer at @index.
* * Negative errno - In case of an error.
*/
int usb4_port_retimer_nvm_write(struct tb_port *port, u8 index, unsigned int address,
const void *buf, size_t size)
@ -1988,6 +2068,8 @@ int usb4_port_retimer_nvm_write(struct tb_port *port, u8 index, unsigned int add
* successful the retimer restarts with the new NVM and may not have the
* index set so one needs to call usb4_port_enumerate_retimers() to
* force index to be assigned.
*
* Return: %0 on success, negative errno otherwise.
*/
int usb4_port_retimer_nvm_authenticate(struct tb_port *port, u8 index)
{
@ -2012,9 +2094,9 @@ int usb4_port_retimer_nvm_authenticate(struct tb_port *port, u8 index)
* This can be called after usb4_port_retimer_nvm_authenticate() and
* usb4_port_enumerate_retimers() to fetch status of the NVM upgrade.
*
* Returns %0 if the authentication status was successfully read. The
* Return: %0 if the authentication status was successfully read. The
* completion metadata (the result) is then stored into @status. If
* reading the status fails, returns negative errno.
* the status read fails, returns negative errno.
*/
int usb4_port_retimer_nvm_authenticate_status(struct tb_port *port, u8 index,
u32 *status)
@ -2082,9 +2164,12 @@ static int usb4_port_retimer_nvm_read_block(void *data, unsigned int dwaddress,
* @buf: Data read from NVM is stored here
* @size: Number of bytes to read
*
* Reads retimer NVM and copies the contents to @buf. Returns %0 if the
* read was successful and negative errno in case of failure.
* Specifically returns %-ENODEV if there is no retimer at @index.
* Reads retimer NVM and copies the contents to @buf.
*
* Return:
* * %0 - If the read was successful.
* * %-ENODEV - If there is no retimer at @index.
* * Negative errno - In case of an error.
*/
int usb4_port_retimer_nvm_read(struct tb_port *port, u8 index,
unsigned int address, void *buf, size_t size)
@ -2108,8 +2193,8 @@ usb4_usb3_port_max_bandwidth(const struct tb_port *port, unsigned int bw)
* usb4_usb3_port_max_link_rate() - Maximum support USB3 link rate
* @port: USB3 adapter port
*
* Return maximum supported link rate of a USB3 adapter in Mb/s.
* Negative errno in case of error.
* Return: Maximum supported link rate of a USB3 adapter in Mb/s.
* Negative errno in case of an error.
*/
int usb4_usb3_port_max_link_rate(struct tb_port *port)
{
@ -2227,8 +2312,9 @@ static int usb4_usb3_port_read_allocated_bandwidth(struct tb_port *port,
* @downstream_bw: Allocated downstream bandwidth is stored here
*
* Stores currently allocated USB3 bandwidth into @upstream_bw and
* @downstream_bw in Mb/s. Returns %0 in case of success and negative
* errno in failure.
* @downstream_bw in Mb/s.
*
* Return: %0 on success, negative errno otherwise.
*/
int usb4_usb3_port_allocated_bandwidth(struct tb_port *port, int *upstream_bw,
int *downstream_bw)
@ -2330,8 +2416,7 @@ static int usb4_usb3_port_write_allocated_bandwidth(struct tb_port *port,
* cannot be taken away by CM). The actual new values are returned in
* @upstream_bw and @downstream_bw.
*
* Returns %0 in case of success and negative errno if there was a
* failure.
* Return: %0 on success, negative errno otherwise.
*/
int usb4_usb3_port_allocate_bandwidth(struct tb_port *port, int *upstream_bw,
int *downstream_bw)
@ -2373,7 +2458,7 @@ int usb4_usb3_port_allocate_bandwidth(struct tb_port *port, int *upstream_bw,
* Releases USB3 allocated bandwidth down to what is actually consumed.
* The new bandwidth is returned in @upstream_bw and @downstream_bw.
*
* Returns 0% in success and negative errno in case of failure.
* Return: %0 on success, negative errno otherwise.
*/
int usb4_usb3_port_release_bandwidth(struct tb_port *port, int *upstream_bw,
int *downstream_bw)
@ -2425,9 +2510,12 @@ static bool is_usb4_dpin(const struct tb_port *port)
* @port: DP IN adapter
* @cm_id: CM ID to assign
*
* Sets CM ID for the @port. Returns %0 on success and negative errno
* otherwise. Speficially returns %-EOPNOTSUPP if the @port does not
* support this.
* Sets CM ID for the @port.
*
* Return:
* * %0 - On success.
* * %-EOPNOTSUPP - If the @port does not support this.
* * Negative errno - Another error occurred.
*/
int usb4_dp_port_set_cm_id(struct tb_port *port, int cm_id)
{
@ -2454,8 +2542,10 @@ int usb4_dp_port_set_cm_id(struct tb_port *port, int cm_id)
* supported
* @port: DP IN adapter to check
*
* Can be called to any DP IN adapter. Returns true if the adapter
* supports USB4 bandwidth allocation mode, false otherwise.
* Can be called to any DP IN adapter.
*
* Return: %true if the adapter supports USB4 bandwidth allocation mode,
* %false otherwise.
*/
bool usb4_dp_port_bandwidth_mode_supported(struct tb_port *port)
{
@ -2478,8 +2568,10 @@ bool usb4_dp_port_bandwidth_mode_supported(struct tb_port *port)
* enabled
* @port: DP IN adapter to check
*
* Can be called to any DP IN adapter. Returns true if the bandwidth
* allocation mode has been enabled, false otherwise.
* Can be called to any DP IN adapter.
*
* Return: %true if the bandwidth allocation mode has been enabled,
* %false otherwise.
*/
bool usb4_dp_port_bandwidth_mode_enabled(struct tb_port *port)
{
@ -2504,9 +2596,12 @@ bool usb4_dp_port_bandwidth_mode_enabled(struct tb_port *port)
* @supported: Does the CM support bandwidth allocation mode
*
* Can be called to any DP IN adapter. Sets or clears the CM support bit
* of the DP IN adapter. Returns %0 in success and negative errno
* otherwise. Specifically returns %-OPNOTSUPP if the passed in adapter
* does not support this.
* of the DP IN adapter.
*
* Return:
* * %0 - On success.
* * %-EOPNOTSUPP - If the passed in adapter does not support this.
* * Negative errno - Another error occurred.
*/
int usb4_dp_port_set_cm_bandwidth_mode_supported(struct tb_port *port,
bool supported)
@ -2536,8 +2631,12 @@ int usb4_dp_port_set_cm_bandwidth_mode_supported(struct tb_port *port,
* @port: DP IN adapter
*
* Reads bandwidth allocation Group ID from the DP IN adapter and
* returns it. If the adapter does not support setting Group_ID
* %-EOPNOTSUPP is returned.
* returns it.
*
* Return:
* * Group ID assigned to adapter @port.
* * %-EOPNOTSUPP - If adapter does not support setting GROUP_ID.
* * Negative errno - Another error occurred.
*/
int usb4_dp_port_group_id(struct tb_port *port)
{
@ -2561,9 +2660,11 @@ int usb4_dp_port_group_id(struct tb_port *port)
* @group_id: Group ID for the adapter
*
* Sets bandwidth allocation mode Group ID for the DP IN adapter.
* Returns %0 in case of success and negative errno otherwise.
* Specifically returns %-EOPNOTSUPP if the adapter does not support
* this.
*
* Return:
* * %0 - On success.
* * %-EOPNOTSUPP - If the adapter does not support this.
* * Negative errno - Another error occurred.
*/
int usb4_dp_port_set_group_id(struct tb_port *port, int group_id)
{
@ -2591,9 +2692,12 @@ int usb4_dp_port_set_group_id(struct tb_port *port, int group_id)
* @rate: Non-reduced rate in Mb/s is placed here
* @lanes: Non-reduced lanes are placed here
*
* Reads the non-reduced rate and lanes from the DP IN adapter. Returns
* %0 in success and negative errno otherwise. Specifically returns
* %-EOPNOTSUPP if the adapter does not support this.
* Reads the non-reduced rate and lanes from the DP IN adapter.
*
* Return:
* * %0 - On success.
* * %-EOPNOTSUPP - If the adapter does not support this.
* * Negative errno - Another error occurred.
*/
int usb4_dp_port_nrd(struct tb_port *port, int *rate, int *lanes)
{
@ -2646,10 +2750,13 @@ int usb4_dp_port_nrd(struct tb_port *port, int *rate, int *lanes)
* @rate: Non-reduced rate in Mb/s
* @lanes: Non-reduced lanes
*
* Before the capabilities reduction this function can be used to set
* the non-reduced values for the DP IN adapter. Returns %0 in success
* and negative errno otherwise. If the adapter does not support this
* %-EOPNOTSUPP is returned.
* Before the capabilities reduction, this function can be used to set
* the non-reduced values for the DP IN adapter.
*
* Return:
* * %0 - On success.
* * %-EOPNOTSUPP - If the adapter does not support this.
* * Negative errno - Another error occurred.
*/
int usb4_dp_port_set_nrd(struct tb_port *port, int rate, int lanes)
{
@ -2708,9 +2815,13 @@ int usb4_dp_port_set_nrd(struct tb_port *port, int rate, int lanes)
* usb4_dp_port_granularity() - Return granularity for the bandwidth values
* @port: DP IN adapter
*
* Reads the programmed granularity from @port. If the DP IN adapter does
* not support bandwidth allocation mode returns %-EOPNOTSUPP and negative
* errno in other error cases.
* Reads the programmed granularity from @port.
*
* Return:
* * Granularity value of a @port.
* * %-EOPNOTSUPP - If the DP IN adapter does not support bandwidth
* allocation mode.
* * Negative errno - Another error occurred.
*/
int usb4_dp_port_granularity(struct tb_port *port)
{
@ -2746,8 +2857,12 @@ int usb4_dp_port_granularity(struct tb_port *port)
* @granularity: Granularity in Mb/s. Supported values: 1000, 500 and 250.
*
* Sets the granularity used with the estimated, allocated and requested
* bandwidth. Returns %0 in success and negative errno otherwise. If the
* adapter does not support this %-EOPNOTSUPP is returned.
* bandwidth.
*
* Return:
* * %0 - On success.
* * %-EOPNOTSUPP - If the adapter does not support this.
* * Negative errno - Another error occurred.
*/
int usb4_dp_port_set_granularity(struct tb_port *port, int granularity)
{
@ -2788,10 +2903,13 @@ int usb4_dp_port_set_granularity(struct tb_port *port, int granularity)
* @bw: Estimated bandwidth in Mb/s.
*
* Sets the estimated bandwidth to @bw. Set the granularity by calling
* usb4_dp_port_set_granularity() before calling this. The @bw is round
* down to the closest granularity multiplier. Returns %0 in success
* and negative errno otherwise. Specifically returns %-EOPNOTSUPP if
* the adapter does not support this.
* usb4_dp_port_set_granularity() before calling this. The @bw is rounded
* down to the closest granularity multiplier.
*
* Return:
* * %0 - On success.
* * %-EOPNOTSUPP - If the adapter does not support this.
* * Negative errno - Another error occurred.
*/
int usb4_dp_port_set_estimated_bandwidth(struct tb_port *port, int bw)
{
@ -2822,9 +2940,10 @@ int usb4_dp_port_set_estimated_bandwidth(struct tb_port *port, int bw)
* usb4_dp_port_allocated_bandwidth() - Return allocated bandwidth
* @port: DP IN adapter
*
* Reads and returns allocated bandwidth for @port in Mb/s (taking into
* account the programmed granularity). Returns negative errno in case
* of error.
* Reads the allocated bandwidth for @port in Mb/s (taking into account
* the programmed granularity).
*
* Return: Allocated bandwidth in Mb/s or negative errno in case of an error.
*/
int usb4_dp_port_allocated_bandwidth(struct tb_port *port)
{
@ -2919,8 +3038,9 @@ static int usb4_dp_port_wait_and_clear_cm_ack(struct tb_port *port,
* @bw: New allocated bandwidth in Mb/s
*
* Communicates the new allocated bandwidth with the DPCD (graphics
* driver). Takes into account the programmed granularity. Returns %0 in
* success and negative errno in case of error.
* driver). Takes into account the programmed granularity.
*
* Return: %0 on success, negative errno otherwise.
*/
int usb4_dp_port_allocate_bandwidth(struct tb_port *port, int bw)
{
@ -2960,10 +3080,15 @@ int usb4_dp_port_allocate_bandwidth(struct tb_port *port, int bw)
* @port: DP IN adapter
*
* Reads the DPCD (graphics driver) requested bandwidth and returns it
* in Mb/s. Takes the programmed granularity into account. In case of
* error returns negative errno. Specifically returns %-EOPNOTSUPP if
* the adapter does not support bandwidth allocation mode, and %ENODATA
* if there is no active bandwidth request from the graphics driver.
* in Mb/s. Takes the programmed granularity into account.
*
* Return:
* * Requested bandwidth in Mb/s - On success.
* * %-EOPNOTSUPP - If the adapter does not support bandwidth allocation
* mode.
* * %-ENODATA - If there is no active bandwidth request from the graphics
* driver.
* * Negative errno - On failure.
*/
int usb4_dp_port_requested_bandwidth(struct tb_port *port)
{
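Taken together, the bandwidth allocation mode helpers documented in this file form a small protocol between the connection manager and the DP IN adapter. A rough, hypothetical sketch of the granting side; the function name and flow are illustrative, and availability checks and error handling are trimmed:

static int example_grant_dp_bandwidth(struct tb_port *in)
{
        int req;

        if (!usb4_dp_port_bandwidth_mode_enabled(in))
                return -EOPNOTSUPP;

        req = usb4_dp_port_requested_bandwidth(in);     /* Mb/s, granularity applied */
        if (req == -ENODATA)
                return 0;                               /* no pending request */
        if (req < 0)
                return req;

        /* ... verify the topology can actually carry req Mb/s ... */

        return usb4_dp_port_allocate_bandwidth(in, req);
}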
@ -2995,8 +3120,9 @@ int usb4_dp_port_requested_bandwidth(struct tb_port *port)
* @enable: Enable/disable extended encapsulation
*
* Enables or disables extended encapsulation used in PCIe tunneling. Caller
* needs to make sure both adapters support this before enabling. Returns %0 on
* success and negative errno otherwise.
* needs to make sure both adapters support this before enabling.
*
* Return: %0 on success, negative errno otherwise.
*/
int usb4_pci_port_set_ext_encapsulation(struct tb_port *port, bool enable)
{


@ -296,8 +296,9 @@ const struct device_type usb4_port_device_type = {
* usb4_port_device_add() - Add USB4 port device
* @port: Lane 0 adapter port to add the USB4 port
*
* Creates and registers a USB4 port device for @port. Returns the new
* USB4 port device pointer or ERR_PTR() in case of error.
* Creates and registers a USB4 port device for @port.
*
* Return: Pointer to &struct usb4_port or ERR_PTR() in case of an error.
*/
struct usb4_port *usb4_port_device_add(struct tb_port *port)
{
@ -356,6 +357,8 @@ void usb4_port_device_remove(struct usb4_port *usb4)
* @usb4: USB4 port device
*
* Used to resume USB4 port device after sleep state.
*
* Return: %0 on success, negative errno otherwise.
*/
int usb4_port_device_resume(struct usb4_port *usb4)
{

View file

@ -160,7 +160,7 @@ static int __tb_xdomain_response(struct tb_ctl *ctl, const void *response,
* This can be used to send a XDomain response message to the other
* domain. No response for the message is expected.
*
* Return: %0 in case of success and negative errno in case of failure
* Return: %0 on success, negative errno otherwise.
*/
int tb_xdomain_response(struct tb_xdomain *xd, const void *response,
size_t size, enum tb_cfg_pkg_type type)
@ -212,7 +212,7 @@ static int __tb_xdomain_request(struct tb_ctl *ctl, const void *request,
* the other domain. The function waits until the response is received
* or when timeout triggers. Whichever comes first.
*
* Return: %0 in case of success and negative errno in case of failure
* Return: %0 on success, negative errno otherwise.
*/
int tb_xdomain_request(struct tb_xdomain *xd, const void *request,
size_t request_size, enum tb_cfg_pkg_type request_type,
@ -613,6 +613,8 @@ static int tb_xdp_link_state_change_response(struct tb_ctl *ctl, u64 route,
* messages. After this function is called the service driver needs to
* be able to handle calls to callback whenever a package with the
* registered protocol is received.
*
* Return: %0 on success, negative errno otherwise.
*/
int tb_register_protocol_handler(struct tb_protocol_handler *handler)
{
@ -877,6 +879,8 @@ tb_xdp_schedule_request(struct tb *tb, const struct tb_xdp_header *hdr,
* @drv: Driver to register
*
* Registers new service driver from @drv to the bus.
*
* Return: %0 on success, negative errno otherwise.
*/
int tb_register_service_driver(struct tb_service_driver *drv)
{
@ -1955,6 +1959,8 @@ static void tb_xdomain_link_exit(struct tb_xdomain *xd)
*
* Allocates new XDomain structure and returns pointer to that. The
* object must be released by calling tb_xdomain_put().
*
* Return: Pointer to &struct tb_xdomain, %NULL in case of failure.
*/
struct tb_xdomain *tb_xdomain_alloc(struct tb *tb, struct device *parent,
u64 route, const uuid_t *local_uuid,
@ -2091,7 +2097,7 @@ void tb_xdomain_remove(struct tb_xdomain *xd)
* to enable bonding by first enabling the port and waiting for the CL0
* state.
*
* Return: %0 in case of success and negative errno in case of error.
* Return: %0 on success, negative errno otherwise.
*/
int tb_xdomain_lane_bonding_enable(struct tb_xdomain *xd)
{
@ -2171,10 +2177,14 @@ EXPORT_SYMBOL_GPL(tb_xdomain_lane_bonding_disable);
* @xd: XDomain connection
* @hopid: Preferred HopID or %-1 for next available
*
* Returns allocated HopID or negative errno. Specifically returns
* %-ENOSPC if there are no more available HopIDs. Returned HopID is
* guaranteed to be within range supported by the input lane adapter.
* Returned HopID is guaranteed to be within range supported by the input
* lane adapter.
* Call tb_xdomain_release_in_hopid() to release the allocated HopID.
*
* Return:
* * Allocated HopID - On success.
* * %-ENOSPC - If there are no more available HopIDs.
* * Negative errno - Another error occurred.
*/
int tb_xdomain_alloc_in_hopid(struct tb_xdomain *xd, int hopid)
{
@ -2193,10 +2203,14 @@ EXPORT_SYMBOL_GPL(tb_xdomain_alloc_in_hopid);
* @xd: XDomain connection
* @hopid: Preferred HopID or %-1 for next available
*
* Returns allocated HopID or negative errno. Specifically returns
* %-ENOSPC if there are no more available HopIDs. Returned HopID is
* guaranteed to be within range supported by the output lane adapter.
* Call tb_xdomain_release_in_hopid() to release the allocated HopID.
* Returned HopID is guaranteed to be within range supported by the
* output lane adapter.
* Call tb_xdomain_release_out_hopid() to release the allocated HopID.
*
* Return:
* * Allocated HopID - On success.
* * %-ENOSPC - If there are no more available HopIDs.
* * Negative errno - Another error occurred.
*/
int tb_xdomain_alloc_out_hopid(struct tb_xdomain *xd, int hopid)
{
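The alloc/release pairing documented above is symmetric for the input and output sides. A hedged sketch of a service driver reserving both HopIDs before enabling DMA paths; the wrapper is hypothetical and ring setup is omitted:

static int example_alloc_hopids(struct tb_xdomain *xd)
{
        int in_hopid, out_hopid;

        in_hopid = tb_xdomain_alloc_in_hopid(xd, -1);   /* -1: next available */
        if (in_hopid < 0)
                return in_hopid;

        out_hopid = tb_xdomain_alloc_out_hopid(xd, -1);
        if (out_hopid < 0) {
                tb_xdomain_release_in_hopid(xd, in_hopid);
                return out_hopid;
        }

        /* ... pass the HopIDs and NHI ring numbers to tb_xdomain_enable_paths() ... */
        return 0;
}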
@ -2245,7 +2259,7 @@ EXPORT_SYMBOL_GPL(tb_xdomain_release_out_hopid);
* path. If a transmit or receive path is not needed, pass %-1 for those
* parameters.
*
* Return: %0 in case of success and negative errno in case of error
* Return: %0 on success, negative errno otherwise.
*/
int tb_xdomain_enable_paths(struct tb_xdomain *xd, int transmit_path,
int transmit_ring, int receive_path,
@ -2270,7 +2284,7 @@ EXPORT_SYMBOL_GPL(tb_xdomain_enable_paths);
* as path/ring parameter means don't care. Normally the callers should
* pass the same values here as they do when paths are enabled.
*
* Return: %0 in case of success and negative errno in case of error
* Return: %0 on success, negative errno otherwise.
*/
int tb_xdomain_disable_paths(struct tb_xdomain *xd, int transmit_path,
int transmit_ring, int receive_path,
@ -2335,6 +2349,8 @@ static struct tb_xdomain *switch_find_xdomain(struct tb_switch *sw,
* to the bus (handshake is still in progress).
*
* The caller needs to hold @tb->lock.
*
* Return: Pointer to &struct tb_xdomain or %NULL if not found.
*/
struct tb_xdomain *tb_xdomain_find_by_uuid(struct tb *tb, const uuid_t *uuid)
{
@ -2364,6 +2380,8 @@ EXPORT_SYMBOL_GPL(tb_xdomain_find_by_uuid);
* to the bus (handshake is still in progress).
*
* The caller needs to hold @tb->lock.
*
* Return: Pointer to &struct tb_xdomain or %NULL if not found.
*/
struct tb_xdomain *tb_xdomain_find_by_link_depth(struct tb *tb, u8 link,
u8 depth)
@ -2393,6 +2411,8 @@ struct tb_xdomain *tb_xdomain_find_by_link_depth(struct tb *tb, u8 link,
* to the bus (handshake is still in progress).
*
* The caller needs to hold @tb->lock.
*
* Return: Pointer to &struct tb_xdomain or %NULL if not found.
*/
struct tb_xdomain *tb_xdomain_find_by_route(struct tb *tb, u64 route)
{
@ -2491,7 +2511,7 @@ static bool remove_directory(const char *key, const struct tb_property_dir *dir)
* notified so they can re-read properties of this host if they are
* interested.
*
* Return: %0 on success and negative errno on failure
* Return: %0 on success, negative errno otherwise.
*/
int tb_register_property_dir(const char *key, struct tb_property_dir *dir)
{
@ -2562,10 +2582,9 @@ int tb_xdomain_init(void)
* Rest of the properties are filled dynamically based on these
* when the P2P connection is made.
*/
tb_property_add_immediate(xdomain_property_dir, "vendorid",
PCI_VENDOR_ID_INTEL);
tb_property_add_text(xdomain_property_dir, "vendorid", "Intel Corp.");
tb_property_add_immediate(xdomain_property_dir, "deviceid", 0x1);
tb_property_add_immediate(xdomain_property_dir, "vendorid", 0x1d6b);
tb_property_add_text(xdomain_property_dir, "vendorid", "Linux");
tb_property_add_immediate(xdomain_property_dir, "deviceid", 0x0004);
tb_property_add_immediate(xdomain_property_dir, "devicerv", 0x80000100);
xdomain_property_block_gen = get_random_u32();


@ -283,39 +283,6 @@ TRACE_EVENT(cdns3_ep0_queue,
__entry->length)
);
DECLARE_EVENT_CLASS(cdns3_stream_split_transfer_len,
TP_PROTO(struct cdns3_request *req),
TP_ARGS(req),
TP_STRUCT__entry(
__string(name, req->priv_ep->name)
__field(struct cdns3_request *, req)
__field(unsigned int, length)
__field(unsigned int, actual)
__field(unsigned int, stream_id)
),
TP_fast_assign(
__assign_str(name);
__entry->req = req;
__entry->actual = req->request.length;
__entry->length = req->request.actual;
__entry->stream_id = req->request.stream_id;
),
TP_printk("%s: req: %p,request length: %u actual length: %u SID: %u",
__get_str(name), __entry->req, __entry->length,
__entry->actual, __entry->stream_id)
);
DEFINE_EVENT(cdns3_stream_split_transfer_len, cdns3_stream_transfer_split,
TP_PROTO(struct cdns3_request *req),
TP_ARGS(req)
);
DEFINE_EVENT(cdns3_stream_split_transfer_len,
cdns3_stream_transfer_split_next_part,
TP_PROTO(struct cdns3_request *req),
TP_ARGS(req)
);
DECLARE_EVENT_CLASS(cdns3_log_aligned_request,
TP_PROTO(struct cdns3_request *priv_req),
TP_ARGS(priv_req),
@ -354,34 +321,6 @@ DEFINE_EVENT(cdns3_log_aligned_request, cdns3_prepare_aligned_request,
TP_ARGS(req)
);
DECLARE_EVENT_CLASS(cdns3_log_map_request,
TP_PROTO(struct cdns3_request *priv_req),
TP_ARGS(priv_req),
TP_STRUCT__entry(
__string(name, priv_req->priv_ep->name)
__field(struct usb_request *, req)
__field(void *, buf)
__field(dma_addr_t, dma)
),
TP_fast_assign(
__assign_str(name);
__entry->req = &priv_req->request;
__entry->buf = priv_req->request.buf;
__entry->dma = priv_req->request.dma;
),
TP_printk("%s: req: %p, req buf %p, dma %p",
__get_str(name), __entry->req, __entry->buf, &__entry->dma
)
);
DEFINE_EVENT(cdns3_log_map_request, cdns3_map_request,
TP_PROTO(struct cdns3_request *req),
TP_ARGS(req)
);
DEFINE_EVENT(cdns3_log_map_request, cdns3_mapped_request,
TP_PROTO(struct cdns3_request *req),
TP_ARGS(req)
);
DECLARE_EVENT_CLASS(cdns3_log_trb,
TP_PROTO(struct cdns3_endpoint *priv_ep, struct cdns3_trb *trb),
TP_ARGS(priv_ep, trb),


@ -1976,7 +1976,10 @@ static int __cdnsp_gadget_init(struct cdns *cdns)
return 0;
del_gadget:
usb_del_gadget_udc(&pdev->gadget);
usb_del_gadget(&pdev->gadget);
cdnsp_gadget_free_endpoints(pdev);
usb_put_gadget(&pdev->gadget);
goto halt_pdev;
free_endpoints:
cdnsp_gadget_free_endpoints(pdev);
halt_pdev:
@ -1998,8 +2001,9 @@ static void cdnsp_gadget_exit(struct cdns *cdns)
devm_free_irq(pdev->dev, cdns->dev_irq, pdev);
pm_runtime_mark_last_busy(cdns->dev);
pm_runtime_put_autosuspend(cdns->dev);
usb_del_gadget_udc(&pdev->gadget);
usb_del_gadget(&pdev->gadget);
cdnsp_gadget_free_endpoints(pdev);
usb_put_gadget(&pdev->gadget);
cdnsp_mem_cleanup(pdev);
kfree(pdev);
cdns->gadget_dev = NULL;


@ -85,7 +85,7 @@ static int cdnsp_pci_probe(struct pci_dev *pdev,
cdnsp = kzalloc(sizeof(*cdnsp), GFP_KERNEL);
if (!cdnsp) {
ret = -ENOMEM;
goto disable_pci;
goto put_pci;
}
}
@ -168,9 +168,6 @@ static int cdnsp_pci_probe(struct pci_dev *pdev,
if (!pci_is_enabled(func))
kfree(cdnsp);
disable_pci:
pci_disable_device(pdev);
put_pci:
pci_dev_put(func);


@ -178,11 +178,6 @@ DEFINE_EVENT(cdnsp_log_simple, cdnsp_ep0_set_config,
TP_ARGS(msg)
);
DEFINE_EVENT(cdnsp_log_simple, cdnsp_ep0_halted,
TP_PROTO(char *msg),
TP_ARGS(msg)
);
DEFINE_EVENT(cdnsp_log_simple, cdnsp_ep_halt,
TP_PROTO(char *msg),
TP_ARGS(msg)
@ -399,11 +394,6 @@ DEFINE_EVENT(cdnsp_log_trb, cdnsp_cmd_timeout,
TP_ARGS(ring, trb)
);
DEFINE_EVENT(cdnsp_log_trb, cdnsp_defered_event,
TP_PROTO(struct cdnsp_ring *ring, struct cdnsp_generic_trb *trb),
TP_ARGS(ring, trb)
);
DECLARE_EVENT_CLASS(cdnsp_log_pdev,
TP_PROTO(struct cdnsp_device *pdev),
TP_ARGS(pdev),
@ -433,16 +423,6 @@ DEFINE_EVENT(cdnsp_log_pdev, cdnsp_alloc_priv_device,
TP_ARGS(vdev)
);
DEFINE_EVENT(cdnsp_log_pdev, cdnsp_free_priv_device,
TP_PROTO(struct cdnsp_device *vdev),
TP_ARGS(vdev)
);
DEFINE_EVENT(cdnsp_log_pdev, cdnsp_setup_device,
TP_PROTO(struct cdnsp_device *vdev),
TP_ARGS(vdev)
);
DEFINE_EVENT(cdnsp_log_pdev, cdnsp_setup_addressable_priv_device,
TP_PROTO(struct cdnsp_device *vdev),
TP_ARGS(vdev)
@ -575,11 +555,6 @@ DEFINE_EVENT(cdnsp_log_ep_ctx, cdnsp_handle_cmd_stop_ep,
TP_ARGS(ctx)
);
DEFINE_EVENT(cdnsp_log_ep_ctx, cdnsp_handle_cmd_flush_ep,
TP_PROTO(struct cdnsp_ep_ctx *ctx),
TP_ARGS(ctx)
);
DEFINE_EVENT(cdnsp_log_ep_ctx, cdnsp_handle_cmd_set_deq_ep,
TP_PROTO(struct cdnsp_ep_ctx *ctx),
TP_ARGS(ctx)


@ -34,6 +34,7 @@
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/minmax.h>
#include <linux/sched/signal.h>
#include <linux/signal.h>
#include <linux/poll.h>
@ -871,7 +872,7 @@ static ssize_t usblp_read(struct file *file, char __user *buffer, size_t len, lo
goto done;
}
count = len < avail - usblp->readcount ? len : avail - usblp->readcount;
count = min_t(ssize_t, len, avail - usblp->readcount);
if (count != 0 &&
copy_to_user(buffer, usblp->readbuf + usblp->readcount, count)) {
count = -EFAULT;


@ -9,6 +9,7 @@ usbcore-y += devio.o notify.o generic.o quirks.o devices.o
usbcore-y += phy.o port.o
usbcore-$(CONFIG_OF) += of.o
usbcore-$(CONFIG_USB_XHCI_SIDEBAND) += offload.o
usbcore-$(CONFIG_USB_PCI) += hcd-pci.o
usbcore-$(CONFIG_ACPI) += usb-acpi.o


@ -507,8 +507,8 @@ static int usb_parse_endpoint(struct device *ddev, int cfgno,
}
/* Parse a possible eUSB2 periodic endpoint companion descriptor */
if (bcdUSB == 0x0220 && d->wMaxPacketSize == 0 &&
(usb_endpoint_xfer_isoc(d) || usb_endpoint_xfer_int(d)))
if (udev->speed == USB_SPEED_HIGH && bcdUSB == 0x0220 &&
!le16_to_cpu(d->wMaxPacketSize) && usb_endpoint_is_isoc_in(d))
usb_parse_eusb2_isoc_endpoint_companion(ddev, cfgno, inum, asnum,
endpoint, buffer, size);


@ -332,10 +332,10 @@ static int usb_probe_interface(struct device *dev)
return error;
if (udev->authorized == 0) {
dev_err(&intf->dev, "Device is not authorized for usage\n");
dev_info(&intf->dev, "Device is not authorized for usage\n");
return error;
} else if (intf->authorized == 0) {
dev_err(&intf->dev, "Interface %d is not authorized for usage\n",
dev_info(&intf->dev, "Interface %d is not authorized for usage\n",
intf->altsetting->desc.bInterfaceNumber);
return error;
}
@ -1420,11 +1420,28 @@ static int usb_suspend_both(struct usb_device *udev, pm_message_t msg)
udev->state == USB_STATE_SUSPENDED)
goto done;
if (msg.event == PM_EVENT_SUSPEND && usb_offload_check(udev)) {
dev_dbg(&udev->dev, "device offloaded, skip suspend.\n");
udev->offload_at_suspend = 1;
}
/* Suspend all the interfaces and then udev itself */
if (udev->actconfig) {
n = udev->actconfig->desc.bNumInterfaces;
for (i = n - 1; i >= 0; --i) {
intf = udev->actconfig->interface[i];
/*
* Don't suspend interfaces with remote wakeup while
* the controller is active. This preserves pending
* interrupt urbs, allowing interrupt events to be
* handled during system suspend.
*/
if (udev->offload_at_suspend &&
intf->needs_remote_wakeup) {
dev_dbg(&intf->dev,
"device offloaded, skip suspend.\n");
continue;
}
status = usb_suspend_interface(udev, intf, msg);
/* Ignore errors during system sleep transitions */
@ -1435,7 +1452,8 @@ static int usb_suspend_both(struct usb_device *udev, pm_message_t msg)
}
}
if (status == 0) {
status = usb_suspend_device(udev, msg);
if (!udev->offload_at_suspend)
status = usb_suspend_device(udev, msg);
/*
* Ignore errors from non-root-hub devices during
@ -1480,9 +1498,11 @@ static int usb_suspend_both(struct usb_device *udev, pm_message_t msg)
*/
} else {
udev->can_submit = 0;
for (i = 0; i < 16; ++i) {
usb_hcd_flush_endpoint(udev, udev->ep_out[i]);
usb_hcd_flush_endpoint(udev, udev->ep_in[i]);
if (!udev->offload_at_suspend) {
for (i = 0; i < 16; ++i) {
usb_hcd_flush_endpoint(udev, udev->ep_out[i]);
usb_hcd_flush_endpoint(udev, udev->ep_in[i]);
}
}
}
@ -1524,17 +1544,35 @@ static int usb_resume_both(struct usb_device *udev, pm_message_t msg)
udev->can_submit = 1;
/* Resume the device */
if (udev->state == USB_STATE_SUSPENDED || udev->reset_resume)
status = usb_resume_device(udev, msg);
if (udev->state == USB_STATE_SUSPENDED || udev->reset_resume) {
if (!udev->offload_at_suspend)
status = usb_resume_device(udev, msg);
else
dev_dbg(&udev->dev,
"device offloaded, skip resume.\n");
}
/* Resume the interfaces */
if (status == 0 && udev->actconfig) {
for (i = 0; i < udev->actconfig->desc.bNumInterfaces; i++) {
intf = udev->actconfig->interface[i];
/*
* Interfaces with remote wakeup aren't suspended
* while the controller is active. This preserves
* pending interrupt urbs, allowing interrupt events
* to be handled during system suspend.
*/
if (udev->offload_at_suspend &&
intf->needs_remote_wakeup) {
dev_dbg(&intf->dev,
"device offloaded, skip resume.\n");
continue;
}
usb_resume_interface(udev, intf, msg,
udev->reset_resume);
}
}
udev->offload_at_suspend = 0;
usb_mark_last_busy(udev);
done:
@ -1723,8 +1761,6 @@ int usb_autoresume_device(struct usb_device *udev)
dev_vdbg(&udev->dev, "%s: cnt %d -> %d\n",
__func__, atomic_read(&udev->dev.power.usage_count),
status);
if (status > 0)
status = 0;
return status;
}
@ -1829,8 +1865,6 @@ int usb_autopm_get_interface(struct usb_interface *intf)
dev_vdbg(&intf->dev, "%s: cnt %d -> %d\n",
__func__, atomic_read(&intf->dev.power.usage_count),
status);
if (status > 0)
status = 0;
return status;
}
EXPORT_SYMBOL_GPL(usb_autopm_get_interface);


@ -243,7 +243,7 @@ int usb_generic_driver_probe(struct usb_device *udev)
* with the driver core and lets interface drivers bind to them.
*/
if (udev->authorized == 0)
dev_err(&udev->dev, "Device is not authorized for usage\n");
dev_info(&udev->dev, "Device is not authorized for usage\n");
else {
c = usb_choose_configuration(udev);
if (c >= 0) {

drivers/usb/core/offload.c (new file, 136 lines)

@ -0,0 +1,136 @@
// SPDX-License-Identifier: GPL-2.0
/*
* offload.c - USB offload related functions
*
* Copyright (c) 2025, Google LLC.
*
* Author: Guan-Yu Lin
*/
#include <linux/usb.h>
#include "usb.h"
/**
* usb_offload_get - increment the offload_usage of a USB device
* @udev: the USB device to increment its offload_usage
*
* Incrementing the offload_usage of a usb_device indicates that offload is
* enabled on this usb_device; that is, another entity is actively handling USB
* transfers. This information allows the USB driver to adjust its power
* management policy based on offload activity.
*
* Return: 0 on success. A negative error code otherwise.
*/
int usb_offload_get(struct usb_device *udev)
{
int ret;
usb_lock_device(udev);
if (udev->state == USB_STATE_NOTATTACHED) {
usb_unlock_device(udev);
return -ENODEV;
}
if (udev->state == USB_STATE_SUSPENDED ||
udev->offload_at_suspend) {
usb_unlock_device(udev);
return -EBUSY;
}
/*
* offload_usage can only be modified while the device is active, since
* it alters the suspend flow of the device.
*/
ret = usb_autoresume_device(udev);
if (ret < 0) {
usb_unlock_device(udev);
return ret;
}
udev->offload_usage++;
usb_autosuspend_device(udev);
usb_unlock_device(udev);
return ret;
}
EXPORT_SYMBOL_GPL(usb_offload_get);
/**
* usb_offload_put - drop the offload_usage of a USB device
* @udev: the USB device to drop its offload_usage
*
* The inverse operation of usb_offload_get, which drops the offload_usage of
* a USB device. This information allows the USB driver to adjust its power
* management policy based on offload activity.
*
* Return: 0 on success. A negative error code otherwise.
*/
int usb_offload_put(struct usb_device *udev)
{
int ret;
usb_lock_device(udev);
if (udev->state == USB_STATE_NOTATTACHED) {
usb_unlock_device(udev);
return -ENODEV;
}
if (udev->state == USB_STATE_SUSPENDED ||
udev->offload_at_suspend) {
usb_unlock_device(udev);
return -EBUSY;
}
/*
* offload_usage can only be modified while the device is active, since
* it alters the suspend flow of the device.
*/
ret = usb_autoresume_device(udev);
if (ret < 0) {
usb_unlock_device(udev);
return ret;
}
/* Drop the count if it isn't already 0, ignore the operation otherwise. */
if (udev->offload_usage)
udev->offload_usage--;
usb_autosuspend_device(udev);
usb_unlock_device(udev);
return ret;
}
EXPORT_SYMBOL_GPL(usb_offload_put);
/**
* usb_offload_check - check offload activities on a USB device
* @udev: the USB device to check its offload activity.
*
* Check if there is any offload activity on the USB device right now. This
* information could be used for power management or other forms of resource
* management.
*
* The caller must hold @udev's device lock. In addition, the caller should
* make sure all downstream usb devices are either suspended or marked as
* "offload_at_suspend" so that the return value is correct.
*
* Returns true on any offload activity, false otherwise.
*/
bool usb_offload_check(struct usb_device *udev) __must_hold(&udev->dev->mutex)
{
struct usb_device *child;
bool active;
int port1;
usb_hub_for_each_child(udev, port1, child) {
usb_lock_device(child);
active = usb_offload_check(child);
usb_unlock_device(child);
if (active)
return true;
}
return !!udev->offload_usage;
}
EXPORT_SYMBOL_GPL(usb_offload_check);
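
For reference, a minimal sketch of how an offload-capable class driver might pair these calls around an offloaded stream; the structure, function names and my_* identifiers below are hypothetical and only illustrate the intended usage, not an in-tree user.

#include <linux/usb.h>

struct my_offload_stream {
	struct usb_device *udev;
	bool offloaded;
};

/*
 * Mark the device as offloaded while a sideband engine owns its stream,
 * so the core PM changes above skip suspending/resuming it.
 */
static int my_stream_start_offload(struct my_offload_stream *stream)
{
	/* Fails with -EBUSY if the device is already suspended. */
	int ret = usb_offload_get(stream->udev);

	if (ret)
		return ret;

	stream->offloaded = true;
	return 0;
}

static void my_stream_stop_offload(struct my_offload_stream *stream)
{
	if (!stream->offloaded)
		return;

	usb_offload_put(stream->udev);
	stream->offloaded = false;
}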


@ -372,6 +372,7 @@ int usb_submit_urb(struct urb *urb, gfp_t mem_flags)
struct usb_host_endpoint *ep;
int is_out;
unsigned int allowed;
bool is_eusb2_isoch_double;
if (!urb || !urb->complete)
return -EINVAL;
@ -434,7 +435,8 @@ int usb_submit_urb(struct urb *urb, gfp_t mem_flags)
return -ENODEV;
max = usb_endpoint_maxp(&ep->desc);
if (max <= 0) {
is_eusb2_isoch_double = usb_endpoint_is_hs_isoc_double(dev, ep);
if (!max && !is_eusb2_isoch_double) {
dev_dbg(&dev->dev,
"bogus endpoint ep%d%s in %s (bad maxpacket %d)\n",
usb_endpoint_num(&ep->desc), is_out ? "out" : "in",
@ -467,9 +469,13 @@ int usb_submit_urb(struct urb *urb, gfp_t mem_flags)
max = le32_to_cpu(isoc_ep_comp->dwBytesPerInterval);
}
/* "high bandwidth" mode, 1-3 packets/uframe? */
if (dev->speed == USB_SPEED_HIGH)
max *= usb_endpoint_maxp_mult(&ep->desc);
/* High speed, 1-3 packets/uframe, max 6 for eUSB2 double bw */
if (dev->speed == USB_SPEED_HIGH) {
if (is_eusb2_isoch_double)
max = le32_to_cpu(ep->eusb2_isoc_ep_comp.dwBytesPerInterval);
else
max *= usb_endpoint_maxp_mult(&ep->desc);
}
if (urb->number_of_packets <= 0)
return -EINVAL;


@ -670,6 +670,7 @@ struct usb_device *usb_alloc_dev(struct usb_device *parent,
set_dev_node(&dev->dev, dev_to_node(bus->sysdev));
dev->state = USB_STATE_ATTACHED;
dev->lpm_disable_count = 1;
dev->offload_usage = 0;
atomic_set(&dev->urbnum, 0);
INIT_LIST_HEAD(&dev->ep0.urb_list);
@ -1110,6 +1111,56 @@ void usb_free_noncoherent(struct usb_device *dev, size_t size,
}
EXPORT_SYMBOL_GPL(usb_free_noncoherent);
/**
* usb_endpoint_max_periodic_payload - Get maximum payload bytes per service
* interval
* @udev: The USB device
* @ep: The endpoint
*
* Returns: the maximum number of bytes an isochronous or interrupt endpoint
* @ep can transfer during a service interval, or 0 for other endpoints.
*/
u32 usb_endpoint_max_periodic_payload(struct usb_device *udev,
const struct usb_host_endpoint *ep)
{
if (!usb_endpoint_xfer_isoc(&ep->desc) &&
!usb_endpoint_xfer_int(&ep->desc))
return 0;
switch (udev->speed) {
case USB_SPEED_SUPER_PLUS:
if (USB_SS_SSP_ISOC_COMP(ep->ss_ep_comp.bmAttributes))
return le32_to_cpu(ep->ssp_isoc_ep_comp.dwBytesPerInterval);
fallthrough;
case USB_SPEED_SUPER:
return le16_to_cpu(ep->ss_ep_comp.wBytesPerInterval);
default:
if (usb_endpoint_is_hs_isoc_double(udev, ep))
return le32_to_cpu(ep->eusb2_isoc_ep_comp.dwBytesPerInterval);
return usb_endpoint_maxp(&ep->desc) * usb_endpoint_maxp_mult(&ep->desc);
}
}
EXPORT_SYMBOL_GPL(usb_endpoint_max_periodic_payload);
/**
* usb_endpoint_is_hs_isoc_double - Tell whether an endpoint uses USB 2
* Isochronous Double IN Bandwidth
* @udev: The USB device
* @ep: The endpoint
*
* Returns: true if endpoint @ep conforms to the USB 2 Isochronous Double IN
* Bandwidth ECN, false otherwise.
*/
bool usb_endpoint_is_hs_isoc_double(struct usb_device *udev,
const struct usb_host_endpoint *ep)
{
return ep->eusb2_isoc_ep_comp.bDescriptorType &&
le16_to_cpu(udev->descriptor.bcdUSB) == 0x220 &&
usb_endpoint_is_isoc_in(&ep->desc) &&
!le16_to_cpu(ep->desc.wMaxPacketSize);
}
EXPORT_SYMBOL_GPL(usb_endpoint_is_hs_isoc_double);
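
As a rough illustration of the new helpers (the function below is hypothetical, not part of this series): a regular high-speed high-bandwidth isoc endpoint with a 1024-byte maxpacket and 3 transactions per microframe yields 3 * 1024 = 3072 bytes per service interval, while an eUSB2 Double IN Bandwidth endpoint reports its (up to doubled) budget through dwBytesPerInterval instead.

#include <linux/usb.h>

static void my_log_periodic_budget(struct usb_device *udev,
				   struct usb_host_endpoint *ep)
{
	u32 payload = usb_endpoint_max_periodic_payload(udev, ep);

	if (!payload)
		return;		/* not a periodic endpoint */

	dev_dbg(&udev->dev, "ep%d%s: %u bytes per service interval%s\n",
		usb_endpoint_num(&ep->desc),
		usb_endpoint_dir_in(&ep->desc) ? "in" : "out",
		payload,
		usb_endpoint_is_hs_isoc_double(udev, ep) ?
			" (eUSB2 double bandwidth)" : "");
}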
/*
* Notifications of device and interface registration
*/


@ -1029,11 +1029,33 @@ int dwc2_get_hwparams(struct dwc2_hsotg *hsotg)
return 0;
}
static int dwc2_limit_speed(struct dwc2_hsotg *hsotg)
{
enum usb_device_speed usb_speed;
usb_speed = usb_get_maximum_speed(hsotg->dev);
switch (usb_speed) {
case USB_SPEED_LOW:
dev_err(hsotg->dev, "Maximum speed cannot be forced to low-speed\n");
return -EINVAL;
case USB_SPEED_FULL:
if (hsotg->params.speed == DWC2_SPEED_PARAM_LOW)
break;
hsotg->params.speed = DWC2_SPEED_PARAM_FULL;
break;
default:
break;
}
return 0;
}
typedef void (*set_params_cb)(struct dwc2_hsotg *data);
int dwc2_init_params(struct dwc2_hsotg *hsotg)
{
set_params_cb set_params;
int ret;
dwc2_set_default_params(hsotg);
dwc2_get_device_properties(hsotg);
@ -1051,6 +1073,10 @@ int dwc2_init_params(struct dwc2_hsotg *hsotg)
}
}
ret = dwc2_limit_speed(hsotg);
if (ret)
return ret;
dwc2_check_params(hsotg);
return 0;


@ -189,4 +189,15 @@ config USB_DWC3_RTK
or dual-role mode.
Say 'Y' or 'M' if you have such device.
config USB_DWC3_GENERIC_PLAT
tristate "DWC3 Generic Platform Driver"
depends on OF && COMMON_CLK
default USB_DWC3
help
Support USB3 functionality in simple SoC integrations.
Currently supports SpacemiT DWC USB3. Platforms using
dwc3-of-simple can easily switch to dwc3-generic by flattening
the dwc3 child node in the device tree.
Say 'Y' or 'M' here if your platform integrates DWC3 in a similar way.
endif


@ -57,3 +57,4 @@ obj-$(CONFIG_USB_DWC3_IMX8MP) += dwc3-imx8mp.o
obj-$(CONFIG_USB_DWC3_XILINX) += dwc3-xilinx.o
obj-$(CONFIG_USB_DWC3_OCTEON) += dwc3-octeon.o
obj-$(CONFIG_USB_DWC3_RTK) += dwc3-rtk.o
obj-$(CONFIG_USB_DWC3_GENERIC_PLAT) += dwc3-generic-plat.o


@ -156,6 +156,7 @@ void dwc3_set_prtcap(struct dwc3 *dwc, u32 mode, bool ignore_susphy)
dwc3_writel(dwc->regs, DWC3_GCTL, reg);
dwc->current_dr_role = mode;
trace_dwc3_set_prtcap(mode);
}
static void __dwc3_set_mode(struct work_struct *work)
@ -2351,6 +2352,7 @@ static int dwc3_probe(struct platform_device *pdev)
return -ENOMEM;
dwc->dev = &pdev->dev;
dwc->glue_ops = NULL;
probe_data.dwc = dwc;
probe_data.res = res;


@ -992,6 +992,17 @@ struct dwc3_scratchpad_array {
__le64 dma_adr[DWC3_MAX_HIBER_SCRATCHBUFS];
};
/**
* struct dwc3_glue_ops - Notifications the core passes on to the glue layer
* @pre_set_role: notify the glue layer ahead of a role switch
* @pre_run_stop: notify the glue layer before run/stop is enabled or disabled
*/
struct dwc3_glue_ops {
void (*pre_set_role)(struct dwc3 *dwc, enum usb_role role);
void (*pre_run_stop)(struct dwc3 *dwc, bool is_on);
};
/**
* struct dwc3 - representation of our controller
* @drd_work: workqueue used for role swapping
@ -1012,6 +1023,7 @@ struct dwc3_scratchpad_array {
* @eps: endpoint array
* @gadget: device side representation of the peripheral controller
* @gadget_driver: pointer to the gadget driver
* @glue_ops: Vendor callbacks for flattened device implementations.
* @bus_clk: clock for accessing the registers
* @ref_clk: reference clock
* @susp_clk: clock used when the SS phy is in low power (S3) state
@ -1197,6 +1209,8 @@ struct dwc3 {
struct usb_gadget *gadget;
struct usb_gadget_driver *gadget_driver;
const struct dwc3_glue_ops *glue_ops;
struct clk *bus_clk;
struct clk *ref_clk;
struct clk *susp_clk;
@ -1614,6 +1628,18 @@ void dwc3_event_buffers_cleanup(struct dwc3 *dwc);
int dwc3_core_soft_reset(struct dwc3 *dwc);
void dwc3_enable_susphy(struct dwc3 *dwc, bool enable);
static inline void dwc3_pre_set_role(struct dwc3 *dwc, enum usb_role role)
{
if (dwc->glue_ops && dwc->glue_ops->pre_set_role)
dwc->glue_ops->pre_set_role(dwc, role);
}
static inline void dwc3_pre_run_stop(struct dwc3 *dwc, bool is_on)
{
if (dwc->glue_ops && dwc->glue_ops->pre_run_stop)
dwc->glue_ops->pre_run_stop(dwc, is_on);
}
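
Schematically, a glue driver opts in by filling a dwc3_glue_ops and pointing dwc->glue_ops at it before dwc3_core_probe(); the dwc3-qcom changes further down in this pull are the in-tree user, and the names below are only a sketch.

static void my_glue_pre_set_role(struct dwc3 *dwc, enum usb_role role)
{
	/* e.g. flip a vendor VBUS/ID override ahead of the role switch */
}

static void my_glue_pre_run_stop(struct dwc3 *dwc, bool is_on)
{
	/* e.g. re-assert VBUS valid before the run/stop bit is written */
}

static const struct dwc3_glue_ops my_glue_ops = {
	.pre_set_role	= my_glue_pre_set_role,
	.pre_run_stop	= my_glue_pre_run_stop,
};

/* in the glue's probe(), before dwc3_core_probe(): dwc->glue_ops = &my_glue_ops; */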
#if IS_ENABLED(CONFIG_USB_DWC3_HOST) || IS_ENABLED(CONFIG_USB_DWC3_DUAL_ROLE)
int dwc3_host_init(struct dwc3 *dwc);
void dwc3_host_exit(struct dwc3 *dwc);


@ -13,6 +13,24 @@
#include "core.h"
/**
* dwc3_mode_string - returns mode name
* @mode: GCTL.PrtCapDir value
*/
static inline const char *dwc3_mode_string(u32 mode)
{
switch (mode) {
case DWC3_GCTL_PRTCAP_HOST:
return "host";
case DWC3_GCTL_PRTCAP_DEVICE:
return "device";
case DWC3_GCTL_PRTCAP_OTG:
return "otg";
default:
return "UNKNOWN";
}
}
/**
* dwc3_gadget_ep_cmd_string - returns endpoint command string
* @cmd: command code


@ -402,6 +402,7 @@ static int dwc3_mode_show(struct seq_file *s, void *unused)
struct dwc3 *dwc = s->private;
unsigned long flags;
u32 reg;
u32 mode;
int ret;
ret = pm_runtime_resume_and_get(dwc->dev);
@ -412,18 +413,15 @@ static int dwc3_mode_show(struct seq_file *s, void *unused)
reg = dwc3_readl(dwc->regs, DWC3_GCTL);
spin_unlock_irqrestore(&dwc->lock, flags);
switch (DWC3_GCTL_PRTCAP(reg)) {
mode = DWC3_GCTL_PRTCAP(reg);
switch (mode) {
case DWC3_GCTL_PRTCAP_HOST:
seq_puts(s, "host\n");
break;
case DWC3_GCTL_PRTCAP_DEVICE:
seq_puts(s, "device\n");
break;
case DWC3_GCTL_PRTCAP_OTG:
seq_puts(s, "otg\n");
seq_printf(s, "%s\n", dwc3_mode_string(mode));
break;
default:
seq_printf(s, "UNKNOWN %08x\n", DWC3_GCTL_PRTCAP(reg));
seq_printf(s, "UNKNOWN %08x\n", mode);
}
pm_runtime_put_sync(dwc->dev);


@ -464,6 +464,7 @@ static int dwc3_usb_role_switch_set(struct usb_role_switch *sw,
break;
}
dwc3_pre_set_role(dwc, role);
dwc3_set_mode(dwc, mode);
return 0;
}


@ -0,0 +1,166 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* dwc3-generic-plat.c - DesignWare USB3 generic platform driver
*
* Copyright (C) 2025 Ze Huang <huang.ze@linux.dev>
*
* Inspired by dwc3-qcom.c and dwc3-of-simple.c
*/
#include <linux/clk.h>
#include <linux/platform_device.h>
#include <linux/reset.h>
#include "glue.h"
struct dwc3_generic {
struct device *dev;
struct dwc3 dwc;
struct clk_bulk_data *clks;
int num_clocks;
struct reset_control *resets;
};
#define to_dwc3_generic(d) container_of((d), struct dwc3_generic, dwc)
static void dwc3_generic_reset_control_assert(void *data)
{
reset_control_assert(data);
}
static int dwc3_generic_probe(struct platform_device *pdev)
{
struct dwc3_probe_data probe_data = {};
struct device *dev = &pdev->dev;
struct dwc3_generic *dwc3g;
struct resource *res;
int ret;
dwc3g = devm_kzalloc(dev, sizeof(*dwc3g), GFP_KERNEL);
if (!dwc3g)
return -ENOMEM;
dwc3g->dev = dev;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!res) {
dev_err(&pdev->dev, "missing memory resource\n");
return -ENODEV;
}
dwc3g->resets = devm_reset_control_array_get_optional_exclusive(dev);
if (IS_ERR(dwc3g->resets))
return dev_err_probe(dev, PTR_ERR(dwc3g->resets), "failed to get resets\n");
ret = reset_control_assert(dwc3g->resets);
if (ret)
return dev_err_probe(dev, ret, "failed to assert resets\n");
/* Not strict timing, just for safety */
udelay(2);
ret = reset_control_deassert(dwc3g->resets);
if (ret)
return dev_err_probe(dev, ret, "failed to deassert resets\n");
ret = devm_add_action_or_reset(dev, dwc3_generic_reset_control_assert, dwc3g->resets);
if (ret)
return ret;
ret = devm_clk_bulk_get_all_enabled(dwc3g->dev, &dwc3g->clks);
if (ret < 0)
return dev_err_probe(dev, ret, "failed to get clocks\n");
dwc3g->num_clocks = ret;
dwc3g->dwc.dev = dev;
probe_data.dwc = &dwc3g->dwc;
probe_data.res = res;
probe_data.ignore_clocks_and_resets = true;
ret = dwc3_core_probe(&probe_data);
if (ret)
return dev_err_probe(dev, ret, "failed to register DWC3 Core\n");
return 0;
}
static void dwc3_generic_remove(struct platform_device *pdev)
{
struct dwc3 *dwc = platform_get_drvdata(pdev);
struct dwc3_generic *dwc3g = to_dwc3_generic(dwc);
dwc3_core_remove(dwc);
clk_bulk_disable_unprepare(dwc3g->num_clocks, dwc3g->clks);
}
static int dwc3_generic_suspend(struct device *dev)
{
struct dwc3 *dwc = dev_get_drvdata(dev);
struct dwc3_generic *dwc3g = to_dwc3_generic(dwc);
int ret;
ret = dwc3_pm_suspend(dwc);
if (ret)
return ret;
clk_bulk_disable_unprepare(dwc3g->num_clocks, dwc3g->clks);
return 0;
}
static int dwc3_generic_resume(struct device *dev)
{
struct dwc3 *dwc = dev_get_drvdata(dev);
struct dwc3_generic *dwc3g = to_dwc3_generic(dwc);
int ret;
ret = clk_bulk_prepare_enable(dwc3g->num_clocks, dwc3g->clks);
if (ret)
return ret;
ret = dwc3_pm_resume(dwc);
if (ret)
return ret;
return 0;
}
static int dwc3_generic_runtime_suspend(struct device *dev)
{
return dwc3_runtime_suspend(dev_get_drvdata(dev));
}
static int dwc3_generic_runtime_resume(struct device *dev)
{
return dwc3_runtime_resume(dev_get_drvdata(dev));
}
static int dwc3_generic_runtime_idle(struct device *dev)
{
return dwc3_runtime_idle(dev_get_drvdata(dev));
}
static const struct dev_pm_ops dwc3_generic_dev_pm_ops = {
SYSTEM_SLEEP_PM_OPS(dwc3_generic_suspend, dwc3_generic_resume)
RUNTIME_PM_OPS(dwc3_generic_runtime_suspend, dwc3_generic_runtime_resume,
dwc3_generic_runtime_idle)
};
static const struct of_device_id dwc3_generic_of_match[] = {
{ .compatible = "spacemit,k1-dwc3", },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, dwc3_generic_of_match);
static struct platform_driver dwc3_generic_driver = {
.probe = dwc3_generic_probe,
.remove = dwc3_generic_remove,
.driver = {
.name = "dwc3-generic-plat",
.of_match_table = dwc3_generic_of_match,
.pm = pm_ptr(&dwc3_generic_dev_pm_ops),
},
};
module_platform_driver(dwc3_generic_driver);
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("DesignWare USB3 generic platform driver");


@ -11,7 +11,6 @@
#include <linux/of_clk.h>
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/extcon.h>
#include <linux/interconnect.h>
#include <linux/platform_device.h>
#include <linux/phy/phy.h>
@ -79,16 +78,13 @@ struct dwc3_qcom {
struct dwc3_qcom_port ports[DWC3_QCOM_MAX_PORTS];
u8 num_ports;
struct extcon_dev *edev;
struct extcon_dev *host_edev;
struct notifier_block vbus_nb;
struct notifier_block host_nb;
enum usb_dr_mode mode;
bool is_suspended;
bool pm_suspended;
struct icc_path *icc_path_ddr;
struct icc_path *icc_path_apps;
enum usb_role current_role;
};
#define to_dwc3_qcom(d) container_of((d), struct dwc3_qcom, dwc)
@ -117,11 +113,6 @@ static inline void dwc3_qcom_clrbits(void __iomem *base, u32 offset, u32 val)
readl(base + offset);
}
/*
* TODO: Make the in-core role switching code invoke dwc3_qcom_vbus_override_enable(),
* validate that the in-core extcon support is functional, and drop extcon
* handling from the glue
*/
static void dwc3_qcom_vbus_override_enable(struct dwc3_qcom *qcom, bool enable)
{
if (enable) {
@ -137,80 +128,6 @@ static void dwc3_qcom_vbus_override_enable(struct dwc3_qcom *qcom, bool enable)
}
}
static int dwc3_qcom_vbus_notifier(struct notifier_block *nb,
unsigned long event, void *ptr)
{
struct dwc3_qcom *qcom = container_of(nb, struct dwc3_qcom, vbus_nb);
/* enable vbus override for device mode */
dwc3_qcom_vbus_override_enable(qcom, event);
qcom->mode = event ? USB_DR_MODE_PERIPHERAL : USB_DR_MODE_HOST;
return NOTIFY_DONE;
}
static int dwc3_qcom_host_notifier(struct notifier_block *nb,
unsigned long event, void *ptr)
{
struct dwc3_qcom *qcom = container_of(nb, struct dwc3_qcom, host_nb);
/* disable vbus override in host mode */
dwc3_qcom_vbus_override_enable(qcom, !event);
qcom->mode = event ? USB_DR_MODE_HOST : USB_DR_MODE_PERIPHERAL;
return NOTIFY_DONE;
}
static int dwc3_qcom_register_extcon(struct dwc3_qcom *qcom)
{
struct device *dev = qcom->dev;
struct extcon_dev *host_edev;
int ret;
if (!of_property_present(dev->of_node, "extcon"))
return 0;
qcom->edev = extcon_get_edev_by_phandle(dev, 0);
if (IS_ERR(qcom->edev))
return dev_err_probe(dev, PTR_ERR(qcom->edev),
"Failed to get extcon\n");
qcom->vbus_nb.notifier_call = dwc3_qcom_vbus_notifier;
qcom->host_edev = extcon_get_edev_by_phandle(dev, 1);
if (IS_ERR(qcom->host_edev))
qcom->host_edev = NULL;
ret = devm_extcon_register_notifier(dev, qcom->edev, EXTCON_USB,
&qcom->vbus_nb);
if (ret < 0) {
dev_err(dev, "VBUS notifier register failed\n");
return ret;
}
if (qcom->host_edev)
host_edev = qcom->host_edev;
else
host_edev = qcom->edev;
qcom->host_nb.notifier_call = dwc3_qcom_host_notifier;
ret = devm_extcon_register_notifier(dev, host_edev, EXTCON_USB_HOST,
&qcom->host_nb);
if (ret < 0) {
dev_err(dev, "Host notifier register failed\n");
return ret;
}
/* Update initial VBUS override based on extcon state */
if (extcon_get_state(qcom->edev, EXTCON_USB) ||
!extcon_get_state(host_edev, EXTCON_USB_HOST))
dwc3_qcom_vbus_notifier(&qcom->vbus_nb, true, qcom->edev);
else
dwc3_qcom_vbus_notifier(&qcom->vbus_nb, false, qcom->edev);
return 0;
}
static int dwc3_qcom_interconnect_enable(struct dwc3_qcom *qcom)
{
int ret;
@ -641,6 +558,55 @@ static int dwc3_qcom_setup_irq(struct dwc3_qcom *qcom, struct platform_device *p
return 0;
}
static void dwc3_qcom_set_role_notifier(struct dwc3 *dwc, enum usb_role next_role)
{
struct dwc3_qcom *qcom = to_dwc3_qcom(dwc);
if (qcom->current_role == next_role)
return;
if (pm_runtime_resume_and_get(qcom->dev)) {
dev_dbg(qcom->dev, "Failed to resume device\n");
return;
}
if (qcom->current_role == USB_ROLE_DEVICE)
dwc3_qcom_vbus_override_enable(qcom, false);
else if (qcom->current_role != USB_ROLE_DEVICE)
dwc3_qcom_vbus_override_enable(qcom, true);
pm_runtime_mark_last_busy(qcom->dev);
pm_runtime_put_sync(qcom->dev);
/*
* The current role is only changed via the usb_role_switch_set_role
* callback, which is serialized internally by a mutex.
*/
qcom->current_role = next_role;
}
static void dwc3_qcom_run_stop_notifier(struct dwc3 *dwc, bool is_on)
{
struct dwc3_qcom *qcom = to_dwc3_qcom(dwc);
/*
* When autosuspend is enabled and the controller goes into suspend
* after the UDC is removed from userspace, the next UDC write needs
* QSCRATCH VBUS_VALID set to "1" to generate a connect done event.
*/
if (!is_on)
return;
dwc3_qcom_vbus_override_enable(qcom, true);
pm_runtime_mark_last_busy(qcom->dev);
}
struct dwc3_glue_ops dwc3_qcom_glue_ops = {
.pre_set_role = dwc3_qcom_set_role_notifier,
.pre_run_stop = dwc3_qcom_run_stop_notifier,
};
static int dwc3_qcom_probe(struct platform_device *pdev)
{
struct dwc3_probe_data probe_data = {};
@ -717,6 +683,23 @@ static int dwc3_qcom_probe(struct platform_device *pdev)
if (ignore_pipe_clk)
dwc3_qcom_select_utmi_clk(qcom);
qcom->mode = usb_get_dr_mode(dev);
if (qcom->mode == USB_DR_MODE_HOST) {
qcom->current_role = USB_ROLE_HOST;
} else if (qcom->mode == USB_DR_MODE_PERIPHERAL) {
qcom->current_role = USB_ROLE_DEVICE;
dwc3_qcom_vbus_override_enable(qcom, true);
} else {
if ((device_property_read_bool(dev, "usb-role-switch")) &&
(usb_get_role_switch_default_mode(dev) == USB_DR_MODE_HOST))
qcom->current_role = USB_ROLE_HOST;
else
qcom->current_role = USB_ROLE_DEVICE;
}
qcom->dwc.glue_ops = &dwc3_qcom_glue_ops;
qcom->dwc.dev = dev;
probe_data.dwc = &qcom->dwc;
probe_data.res = &res;
@ -731,17 +714,6 @@ static int dwc3_qcom_probe(struct platform_device *pdev)
if (ret)
goto remove_core;
qcom->mode = usb_get_dr_mode(dev);
/* enable vbus override for device mode */
if (qcom->mode != USB_DR_MODE_HOST)
dwc3_qcom_vbus_override_enable(qcom, true);
/* register extcon to override sw_vbus on Vbus change later */
ret = dwc3_qcom_register_extcon(qcom);
if (ret)
goto interconnect_exit;
wakeup_source = of_property_read_bool(dev->of_node, "wakeup-source");
device_init_wakeup(&pdev->dev, wakeup_source);
@ -749,8 +721,6 @@ static int dwc3_qcom_probe(struct platform_device *pdev)
return 0;
interconnect_exit:
dwc3_qcom_interconnect_exit(qcom);
remove_core:
dwc3_core_remove(&qcom->dwc);
clk_disable:
@ -764,11 +734,14 @@ static void dwc3_qcom_remove(struct platform_device *pdev)
struct dwc3 *dwc = platform_get_drvdata(pdev);
struct dwc3_qcom *qcom = to_dwc3_qcom(dwc);
if (pm_runtime_resume_and_get(qcom->dev) < 0)
return;
dwc3_core_remove(&qcom->dwc);
clk_bulk_disable_unprepare(qcom->num_clocks, qcom->clks);
dwc3_qcom_interconnect_exit(qcom);
pm_runtime_put_noidle(qcom->dev);
}
static int dwc3_qcom_pm_suspend(struct device *dev)
@ -873,6 +846,7 @@ MODULE_DEVICE_TABLE(of, dwc3_qcom_of_match);
static struct platform_driver dwc3_qcom_driver = {
.probe = dwc3_qcom_probe,
.remove = dwc3_qcom_remove,
.shutdown = dwc3_qcom_remove,
.driver = {
.name = "dwc3-qcom",
.pm = pm_ptr(&dwc3_qcom_dev_pm_ops),


@ -2662,6 +2662,7 @@ static int dwc3_gadget_run_stop(struct dwc3 *dwc, int is_on)
dwc->pullups_connected = false;
}
dwc3_pre_run_stop(dwc, is_on);
dwc3_gadget_dctl_write_safe(dwc, reg);
do {


@ -19,6 +19,23 @@
#include "core.h"
#include "debug.h"
DECLARE_EVENT_CLASS(dwc3_log_set_prtcap,
TP_PROTO(u32 mode),
TP_ARGS(mode),
TP_STRUCT__entry(
__field(u32, mode)
),
TP_fast_assign(
__entry->mode = mode;
),
TP_printk("mode %s", dwc3_mode_string(__entry->mode))
);
DEFINE_EVENT(dwc3_log_set_prtcap, dwc3_set_prtcap,
TP_PROTO(u32 mode),
TP_ARGS(mode)
);
DECLARE_EVENT_CLASS(dwc3_log_io,
TP_PROTO(void *base, u32 offset, u32 value),
TP_ARGS(base, offset, value),


@ -1750,6 +1750,8 @@ static int configfs_composite_bind(struct usb_gadget *gadget,
cdev->use_os_string = true;
cdev->b_vendor_code = gi->b_vendor_code;
memcpy(cdev->qw_sign, gi->qw_sign, OS_STRING_QW_SIGN_LEN);
} else {
cdev->use_os_string = false;
}
if (gadget_is_otg(gadget) && !otg_desc[0]) {


@ -11,12 +11,15 @@
/* #define VERBOSE_DEBUG */
#include <linux/cleanup.h>
#include <linux/slab.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/device.h>
#include <linux/err.h>
#include <linux/usb/gadget.h>
#include "u_serial.h"
@ -613,6 +616,7 @@ acm_bind(struct usb_configuration *c, struct usb_function *f)
struct usb_string *us;
int status;
struct usb_ep *ep;
struct usb_request *request __free(free_usb_request) = NULL;
/* REVISIT might want instance-specific strings to help
* distinguish instances ...
@ -630,7 +634,7 @@ acm_bind(struct usb_configuration *c, struct usb_function *f)
/* allocate instance-specific interface IDs, and patch descriptors */
status = usb_interface_id(c, f);
if (status < 0)
goto fail;
return status;
acm->ctrl_id = status;
acm_iad_descriptor.bFirstInterface = status;
@ -639,43 +643,41 @@ acm_bind(struct usb_configuration *c, struct usb_function *f)
status = usb_interface_id(c, f);
if (status < 0)
goto fail;
return status;
acm->data_id = status;
acm_data_interface_desc.bInterfaceNumber = status;
acm_union_desc.bSlaveInterface0 = status;
acm_call_mgmt_descriptor.bDataInterface = status;
status = -ENODEV;
/* allocate instance-specific endpoints */
ep = usb_ep_autoconfig(cdev->gadget, &acm_fs_in_desc);
if (!ep)
goto fail;
return -ENODEV;
acm->port.in = ep;
ep = usb_ep_autoconfig(cdev->gadget, &acm_fs_out_desc);
if (!ep)
goto fail;
return -ENODEV;
acm->port.out = ep;
ep = usb_ep_autoconfig(cdev->gadget, &acm_fs_notify_desc);
if (!ep)
goto fail;
return -ENODEV;
acm->notify = ep;
acm_iad_descriptor.bFunctionProtocol = acm->bInterfaceProtocol;
acm_control_interface_desc.bInterfaceProtocol = acm->bInterfaceProtocol;
/* allocate notification */
acm->notify_req = gs_alloc_req(ep,
sizeof(struct usb_cdc_notification) + 2,
GFP_KERNEL);
if (!acm->notify_req)
goto fail;
request = gs_alloc_req(ep,
sizeof(struct usb_cdc_notification) + 2,
GFP_KERNEL);
if (!request)
return -ENODEV;
acm->notify_req->complete = acm_cdc_notify_complete;
acm->notify_req->context = acm;
request->complete = acm_cdc_notify_complete;
request->context = acm;
/* support all relevant hardware speeds... we expect that when
* hardware is dual speed, all bulk-capable endpoints work at
@ -692,7 +694,9 @@ acm_bind(struct usb_configuration *c, struct usb_function *f)
status = usb_assign_descriptors(f, acm_fs_function, acm_hs_function,
acm_ss_function, acm_ss_function);
if (status)
goto fail;
return status;
acm->notify_req = no_free_ptr(request);
dev_dbg(&cdev->gadget->dev,
"acm ttyGS%d: IN/%s OUT/%s NOTIFY/%s\n",
@ -700,14 +704,6 @@ acm_bind(struct usb_configuration *c, struct usb_function *f)
acm->port.in->name, acm->port.out->name,
acm->notify->name);
return 0;
fail:
if (acm->notify_req)
gs_free_req(acm->notify, acm->notify_req);
ERROR(cdev, "%s/%p: can't bind, err %d\n", f->name, f, status);
return status;
}
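
The same shape repeats in the f_ecm/f_ncm/f_rndis conversions below: the request is owned by the __free(free_usb_request) cleanup (the helper introduced earlier in this series, relying on the ep pointer now stored in usb_request) until bind can no longer fail, at which point no_free_ptr() hands it over. A condensed, hypothetical sketch of the pattern:

#include <linux/cleanup.h>
#include <linux/slab.h>
#include <linux/usb/gadget.h>

/* my_func, MY_STATUS_BYTECOUNT and my_notify_complete are placeholders */
static int my_func_alloc_notify(struct my_func *func, struct usb_ep *ep)
{
	struct usb_request *request __free(free_usb_request) =
		usb_ep_alloc_request(ep, GFP_KERNEL);

	if (!request)
		return -ENOMEM;

	request->buf = kmalloc(MY_STATUS_BYTECOUNT, GFP_KERNEL);
	if (!request->buf)
		return -ENOMEM;		/* request is freed automatically */

	request->complete = my_notify_complete;
	request->context = func;

	func->notify_req = no_free_ptr(request);	/* transfer ownership */
	return 0;
}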
static void acm_unbind(struct usb_configuration *c, struct usb_function *f)


@ -8,6 +8,7 @@
/* #define VERBOSE_DEBUG */
#include <linux/cleanup.h>
#include <linux/slab.h>
#include <linux/kernel.h>
#include <linux/module.h>
@ -15,6 +16,8 @@
#include <linux/etherdevice.h>
#include <linux/string_choices.h>
#include <linux/usb/gadget.h>
#include "u_ether.h"
#include "u_ether_configfs.h"
#include "u_ecm.h"
@ -678,6 +681,7 @@ ecm_bind(struct usb_configuration *c, struct usb_function *f)
struct usb_ep *ep;
struct f_ecm_opts *ecm_opts;
struct usb_request *request __free(free_usb_request) = NULL;
if (!can_support_ecm(cdev->gadget))
return -EINVAL;
@ -711,7 +715,7 @@ ecm_bind(struct usb_configuration *c, struct usb_function *f)
/* allocate instance-specific interface IDs */
status = usb_interface_id(c, f);
if (status < 0)
goto fail;
return status;
ecm->ctrl_id = status;
ecm_iad_descriptor.bFirstInterface = status;
@ -720,24 +724,22 @@ ecm_bind(struct usb_configuration *c, struct usb_function *f)
status = usb_interface_id(c, f);
if (status < 0)
goto fail;
return status;
ecm->data_id = status;
ecm_data_nop_intf.bInterfaceNumber = status;
ecm_data_intf.bInterfaceNumber = status;
ecm_union_desc.bSlaveInterface0 = status;
status = -ENODEV;
/* allocate instance-specific endpoints */
ep = usb_ep_autoconfig(cdev->gadget, &fs_ecm_in_desc);
if (!ep)
goto fail;
return -ENODEV;
ecm->port.in_ep = ep;
ep = usb_ep_autoconfig(cdev->gadget, &fs_ecm_out_desc);
if (!ep)
goto fail;
return -ENODEV;
ecm->port.out_ep = ep;
/* NOTE: a status/notification endpoint is *OPTIONAL* but we
@ -746,20 +748,18 @@ ecm_bind(struct usb_configuration *c, struct usb_function *f)
*/
ep = usb_ep_autoconfig(cdev->gadget, &fs_ecm_notify_desc);
if (!ep)
goto fail;
return -ENODEV;
ecm->notify = ep;
status = -ENOMEM;
/* allocate notification request and buffer */
ecm->notify_req = usb_ep_alloc_request(ep, GFP_KERNEL);
if (!ecm->notify_req)
goto fail;
ecm->notify_req->buf = kmalloc(ECM_STATUS_BYTECOUNT, GFP_KERNEL);
if (!ecm->notify_req->buf)
goto fail;
ecm->notify_req->context = ecm;
ecm->notify_req->complete = ecm_notify_complete;
request = usb_ep_alloc_request(ep, GFP_KERNEL);
if (!request)
return -ENOMEM;
request->buf = kmalloc(ECM_STATUS_BYTECOUNT, GFP_KERNEL);
if (!request->buf)
return -ENOMEM;
request->context = ecm;
request->complete = ecm_notify_complete;
/* support all relevant hardware speeds... we expect that when
* hardware is dual speed, all bulk-capable endpoints work at
@ -778,7 +778,7 @@ ecm_bind(struct usb_configuration *c, struct usb_function *f)
status = usb_assign_descriptors(f, ecm_fs_function, ecm_hs_function,
ecm_ss_function, ecm_ss_function);
if (status)
goto fail;
return status;
/* NOTE: all that is done without knowing or caring about
* the network link ... which is unavailable to this code
@ -788,20 +788,12 @@ ecm_bind(struct usb_configuration *c, struct usb_function *f)
ecm->port.open = ecm_open;
ecm->port.close = ecm_close;
ecm->notify_req = no_free_ptr(request);
DBG(cdev, "CDC Ethernet: IN/%s OUT/%s NOTIFY/%s\n",
ecm->port.in_ep->name, ecm->port.out_ep->name,
ecm->notify->name);
return 0;
fail:
if (ecm->notify_req) {
kfree(ecm->notify_req->buf);
usb_ep_free_request(ecm->notify, ecm->notify_req);
}
ERROR(cdev, "%s: can't bind, err %d\n", f->name, status);
return status;
}
static inline struct f_ecm_opts *to_f_ecm_opts(struct config_item *item)


@ -2407,7 +2407,12 @@ static int ffs_func_eps_enable(struct ffs_function *func)
ep = func->eps;
epfile = ffs->epfiles;
count = ffs->eps_count;
while(count--) {
if (!epfile) {
ret = -ENOMEM;
goto done;
}
while (count--) {
ep->ep->driver_data = ep;
ret = config_ep_by_speed(func->gadget, &func->function, ep->ep);
@ -2431,6 +2436,7 @@ static int ffs_func_eps_enable(struct ffs_function *func)
}
wake_up_interruptible(&ffs->wait);
done:
spin_unlock_irqrestore(&func->ffs->eps_lock, flags);
return ret;


@ -511,7 +511,7 @@ static ssize_t f_hidg_write(struct file *file, const char __user *buffer,
}
req->status = 0;
req->zero = 0;
req->zero = 1;
req->length = count;
req->complete = f_hidg_req_complete;
req->context = hidg;
@ -967,7 +967,7 @@ static int hidg_setup(struct usb_function *f,
return -EOPNOTSUPP;
respond:
req->zero = 0;
req->zero = 1;
req->length = length;
status = usb_ep_queue(cdev->gadget->ep0, req, GFP_ATOMIC);
if (status < 0)


@ -11,6 +11,7 @@
* Copyright (C) 2008 Nokia Corporation
*/
#include <linux/cleanup.h>
#include <linux/kernel.h>
#include <linux/interrupt.h>
#include <linux/module.h>
@ -20,6 +21,7 @@
#include <linux/string_choices.h>
#include <linux/usb/cdc.h>
#include <linux/usb/gadget.h>
#include "u_ether.h"
#include "u_ether_configfs.h"
@ -1436,18 +1438,18 @@ static int ncm_bind(struct usb_configuration *c, struct usb_function *f)
struct usb_ep *ep;
struct f_ncm_opts *ncm_opts;
struct usb_os_desc_table *os_desc_table __free(kfree) = NULL;
struct usb_request *request __free(free_usb_request) = NULL;
if (!can_support_ecm(cdev->gadget))
return -EINVAL;
ncm_opts = container_of(f->fi, struct f_ncm_opts, func_inst);
if (cdev->use_os_string) {
f->os_desc_table = kzalloc(sizeof(*f->os_desc_table),
GFP_KERNEL);
if (!f->os_desc_table)
os_desc_table = kzalloc(sizeof(*os_desc_table), GFP_KERNEL);
if (!os_desc_table)
return -ENOMEM;
f->os_desc_n = 1;
f->os_desc_table[0].os_desc = &ncm_opts->ncm_os_desc;
}
mutex_lock(&ncm_opts->lock);
@ -1459,16 +1461,17 @@ static int ncm_bind(struct usb_configuration *c, struct usb_function *f)
mutex_unlock(&ncm_opts->lock);
if (status)
goto fail;
return status;
ncm_opts->bound = true;
ncm_string_defs[1].s = ncm->ethaddr;
us = usb_gstrings_attach(cdev, ncm_strings,
ARRAY_SIZE(ncm_string_defs));
if (IS_ERR(us)) {
status = PTR_ERR(us);
goto fail;
}
if (IS_ERR(us))
return PTR_ERR(us);
ncm_control_intf.iInterface = us[STRING_CTRL_IDX].id;
ncm_data_nop_intf.iInterface = us[STRING_DATA_IDX].id;
ncm_data_intf.iInterface = us[STRING_DATA_IDX].id;
@ -1478,20 +1481,16 @@ static int ncm_bind(struct usb_configuration *c, struct usb_function *f)
/* allocate instance-specific interface IDs */
status = usb_interface_id(c, f);
if (status < 0)
goto fail;
return status;
ncm->ctrl_id = status;
ncm_iad_desc.bFirstInterface = status;
ncm_control_intf.bInterfaceNumber = status;
ncm_union_desc.bMasterInterface0 = status;
if (cdev->use_os_string)
f->os_desc_table[0].if_id =
ncm_iad_desc.bFirstInterface;
status = usb_interface_id(c, f);
if (status < 0)
goto fail;
return status;
ncm->data_id = status;
ncm_data_nop_intf.bInterfaceNumber = status;
@ -1500,35 +1499,31 @@ static int ncm_bind(struct usb_configuration *c, struct usb_function *f)
ecm_desc.wMaxSegmentSize = cpu_to_le16(ncm_opts->max_segment_size);
status = -ENODEV;
/* allocate instance-specific endpoints */
ep = usb_ep_autoconfig(cdev->gadget, &fs_ncm_in_desc);
if (!ep)
goto fail;
return -ENODEV;
ncm->port.in_ep = ep;
ep = usb_ep_autoconfig(cdev->gadget, &fs_ncm_out_desc);
if (!ep)
goto fail;
return -ENODEV;
ncm->port.out_ep = ep;
ep = usb_ep_autoconfig(cdev->gadget, &fs_ncm_notify_desc);
if (!ep)
goto fail;
return -ENODEV;
ncm->notify = ep;
status = -ENOMEM;
/* allocate notification request and buffer */
ncm->notify_req = usb_ep_alloc_request(ep, GFP_KERNEL);
if (!ncm->notify_req)
goto fail;
ncm->notify_req->buf = kmalloc(NCM_STATUS_BYTECOUNT, GFP_KERNEL);
if (!ncm->notify_req->buf)
goto fail;
ncm->notify_req->context = ncm;
ncm->notify_req->complete = ncm_notify_complete;
request = usb_ep_alloc_request(ep, GFP_KERNEL);
if (!request)
return -ENOMEM;
request->buf = kmalloc(NCM_STATUS_BYTECOUNT, GFP_KERNEL);
if (!request->buf)
return -ENOMEM;
request->context = ncm;
request->complete = ncm_notify_complete;
/*
* support all relevant hardware speeds... we expect that when
@ -1548,7 +1543,7 @@ static int ncm_bind(struct usb_configuration *c, struct usb_function *f)
status = usb_assign_descriptors(f, ncm_fs_function, ncm_hs_function,
ncm_ss_function, ncm_ss_function);
if (status)
goto fail;
return status;
/*
* NOTE: all that is done without knowing or caring about
@ -1561,23 +1556,18 @@ static int ncm_bind(struct usb_configuration *c, struct usb_function *f)
hrtimer_setup(&ncm->task_timer, ncm_tx_timeout, CLOCK_MONOTONIC, HRTIMER_MODE_REL_SOFT);
if (cdev->use_os_string) {
os_desc_table[0].os_desc = &ncm_opts->ncm_os_desc;
os_desc_table[0].if_id = ncm_iad_desc.bFirstInterface;
f->os_desc_table = no_free_ptr(os_desc_table);
f->os_desc_n = 1;
}
ncm->notify_req = no_free_ptr(request);
DBG(cdev, "CDC Network: IN/%s OUT/%s NOTIFY/%s\n",
ncm->port.in_ep->name, ncm->port.out_ep->name,
ncm->notify->name);
return 0;
fail:
kfree(f->os_desc_table);
f->os_desc_n = 0;
if (ncm->notify_req) {
kfree(ncm->notify_req->buf);
usb_ep_free_request(ncm->notify, ncm->notify_req);
}
ERROR(cdev, "%s: can't bind, err %d\n", f->name, status);
return status;
}
static inline struct f_ncm_opts *to_f_ncm_opts(struct config_item *item)
@ -1771,7 +1761,6 @@ static struct usb_function *ncm_alloc(struct usb_function_instance *fi)
mutex_unlock(&opts->lock);
return ERR_PTR(-EINVAL);
}
ncm_string_defs[STRING_MAC_IDX].s = ncm->ethaddr;
spin_lock_init(&ncm->lock);
ncm_reset_values(ncm);


@ -19,6 +19,8 @@
#include <linux/atomic.h>
#include <linux/usb/gadget.h>
#include "u_ether.h"
#include "u_ether_configfs.h"
#include "u_rndis.h"
@ -662,6 +664,8 @@ rndis_bind(struct usb_configuration *c, struct usb_function *f)
struct usb_ep *ep;
struct f_rndis_opts *rndis_opts;
struct usb_os_desc_table *os_desc_table __free(kfree) = NULL;
struct usb_request *request __free(free_usb_request) = NULL;
if (!can_support_rndis(c))
return -EINVAL;
@ -669,12 +673,9 @@ rndis_bind(struct usb_configuration *c, struct usb_function *f)
rndis_opts = container_of(f->fi, struct f_rndis_opts, func_inst);
if (cdev->use_os_string) {
f->os_desc_table = kzalloc(sizeof(*f->os_desc_table),
GFP_KERNEL);
if (!f->os_desc_table)
os_desc_table = kzalloc(sizeof(*os_desc_table), GFP_KERNEL);
if (!os_desc_table)
return -ENOMEM;
f->os_desc_n = 1;
f->os_desc_table[0].os_desc = &rndis_opts->rndis_os_desc;
}
rndis_iad_descriptor.bFunctionClass = rndis_opts->class;
@ -692,16 +693,14 @@ rndis_bind(struct usb_configuration *c, struct usb_function *f)
gether_set_gadget(rndis_opts->net, cdev->gadget);
status = gether_register_netdev(rndis_opts->net);
if (status)
goto fail;
return status;
rndis_opts->bound = true;
}
us = usb_gstrings_attach(cdev, rndis_strings,
ARRAY_SIZE(rndis_string_defs));
if (IS_ERR(us)) {
status = PTR_ERR(us);
goto fail;
}
if (IS_ERR(us))
return PTR_ERR(us);
rndis_control_intf.iInterface = us[0].id;
rndis_data_intf.iInterface = us[1].id;
rndis_iad_descriptor.iFunction = us[2].id;
@ -709,36 +708,30 @@ rndis_bind(struct usb_configuration *c, struct usb_function *f)
/* allocate instance-specific interface IDs */
status = usb_interface_id(c, f);
if (status < 0)
goto fail;
return status;
rndis->ctrl_id = status;
rndis_iad_descriptor.bFirstInterface = status;
rndis_control_intf.bInterfaceNumber = status;
rndis_union_desc.bMasterInterface0 = status;
if (cdev->use_os_string)
f->os_desc_table[0].if_id =
rndis_iad_descriptor.bFirstInterface;
status = usb_interface_id(c, f);
if (status < 0)
goto fail;
return status;
rndis->data_id = status;
rndis_data_intf.bInterfaceNumber = status;
rndis_union_desc.bSlaveInterface0 = status;
status = -ENODEV;
/* allocate instance-specific endpoints */
ep = usb_ep_autoconfig(cdev->gadget, &fs_in_desc);
if (!ep)
goto fail;
return -ENODEV;
rndis->port.in_ep = ep;
ep = usb_ep_autoconfig(cdev->gadget, &fs_out_desc);
if (!ep)
goto fail;
return -ENODEV;
rndis->port.out_ep = ep;
/* NOTE: a status/notification endpoint is, strictly speaking,
@ -747,21 +740,19 @@ rndis_bind(struct usb_configuration *c, struct usb_function *f)
*/
ep = usb_ep_autoconfig(cdev->gadget, &fs_notify_desc);
if (!ep)
goto fail;
return -ENODEV;
rndis->notify = ep;
status = -ENOMEM;
/* allocate notification request and buffer */
rndis->notify_req = usb_ep_alloc_request(ep, GFP_KERNEL);
if (!rndis->notify_req)
goto fail;
rndis->notify_req->buf = kmalloc(STATUS_BYTECOUNT, GFP_KERNEL);
if (!rndis->notify_req->buf)
goto fail;
rndis->notify_req->length = STATUS_BYTECOUNT;
rndis->notify_req->context = rndis;
rndis->notify_req->complete = rndis_response_complete;
request = usb_ep_alloc_request(ep, GFP_KERNEL);
if (!request)
return -ENOMEM;
request->buf = kmalloc(STATUS_BYTECOUNT, GFP_KERNEL);
if (!request->buf)
return -ENOMEM;
request->length = STATUS_BYTECOUNT;
request->context = rndis;
request->complete = rndis_response_complete;
/* support all relevant hardware speeds... we expect that when
* hardware is dual speed, all bulk-capable endpoints work at
@ -778,7 +769,7 @@ rndis_bind(struct usb_configuration *c, struct usb_function *f)
status = usb_assign_descriptors(f, eth_fs_function, eth_hs_function,
eth_ss_function, eth_ss_function);
if (status)
goto fail;
return status;
rndis->port.open = rndis_open;
rndis->port.close = rndis_close;
@ -789,10 +780,19 @@ rndis_bind(struct usb_configuration *c, struct usb_function *f)
if (rndis->manufacturer && rndis->vendorID &&
rndis_set_param_vendor(rndis->params, rndis->vendorID,
rndis->manufacturer)) {
status = -EINVAL;
goto fail_free_descs;
usb_free_all_descriptors(f);
return -EINVAL;
}
if (cdev->use_os_string) {
os_desc_table[0].os_desc = &rndis_opts->rndis_os_desc;
os_desc_table[0].if_id = rndis_iad_descriptor.bFirstInterface;
f->os_desc_table = no_free_ptr(os_desc_table);
f->os_desc_n = 1;
}
rndis->notify_req = no_free_ptr(request);
/* NOTE: all that is done without knowing or caring about
* the network link ... which is unavailable to this code
* until we're activated via set_alt().
@ -802,21 +802,6 @@ rndis_bind(struct usb_configuration *c, struct usb_function *f)
rndis->port.in_ep->name, rndis->port.out_ep->name,
rndis->notify->name);
return 0;
fail_free_descs:
usb_free_all_descriptors(f);
fail:
kfree(f->os_desc_table);
f->os_desc_n = 0;
if (rndis->notify_req) {
kfree(rndis->notify_req->buf);
usb_ep_free_request(rndis->notify, rndis->notify_req);
}
ERROR(cdev, "%s: can't bind, err %d\n", f->name, status);
return status;
}
void rndis_borrow_net(struct usb_function_instance *f, struct net_device *net)


@ -47,16 +47,6 @@ DEFINE_EVENT(cdns2_log_enable_disable, cdns2_pullup,
TP_ARGS(set)
);
DEFINE_EVENT(cdns2_log_enable_disable, cdns2_lpm,
TP_PROTO(int set),
TP_ARGS(set)
);
DEFINE_EVENT(cdns2_log_enable_disable, cdns2_may_wakeup,
TP_PROTO(int set),
TP_ARGS(set)
);
DECLARE_EVENT_CLASS(cdns2_log_simple,
TP_PROTO(char *msg),
TP_ARGS(msg),
@ -79,11 +69,6 @@ DEFINE_EVENT(cdns2_log_simple, cdns2_ep0_status_stage,
TP_ARGS(msg)
);
DEFINE_EVENT(cdns2_log_simple, cdns2_ep0_set_config,
TP_PROTO(char *msg),
TP_ARGS(msg)
);
DEFINE_EVENT(cdns2_log_simple, cdns2_ep0_setup,
TP_PROTO(char *msg),
TP_ARGS(msg)
@ -340,11 +325,6 @@ DEFINE_EVENT(cdns2_log_request, cdns2_free_request,
TP_ARGS(preq)
);
DEFINE_EVENT(cdns2_log_request, cdns2_ep_queue,
TP_PROTO(struct cdns2_request *preq),
TP_ARGS(preq)
);
DEFINE_EVENT(cdns2_log_request, cdns2_request_dequeue,
TP_PROTO(struct cdns2_request *preq),
TP_ARGS(preq)
@ -355,50 +335,6 @@ DEFINE_EVENT(cdns2_log_request, cdns2_request_giveback,
TP_ARGS(preq)
);
TRACE_EVENT(cdns2_ep0_enqueue,
TP_PROTO(struct cdns2_device *dev_priv, struct usb_request *request),
TP_ARGS(dev_priv, request),
TP_STRUCT__entry(
__field(int, dir)
__field(int, length)
),
TP_fast_assign(
__entry->dir = dev_priv->eps[0].dir;
__entry->length = request->length;
),
TP_printk("Queue to ep0%s length: %u", __entry->dir ? "in" : "out",
__entry->length)
);
DECLARE_EVENT_CLASS(cdns2_log_map_request,
TP_PROTO(struct cdns2_request *priv_req),
TP_ARGS(priv_req),
TP_STRUCT__entry(
__string(name, priv_req->pep->name)
__field(struct usb_request *, req)
__field(void *, buf)
__field(dma_addr_t, dma)
),
TP_fast_assign(
__assign_str(name);
__entry->req = &priv_req->request;
__entry->buf = priv_req->request.buf;
__entry->dma = priv_req->request.dma;
),
TP_printk("%s: req: %p, req buf %p, dma %p",
__get_str(name), __entry->req, __entry->buf, &__entry->dma
)
);
DEFINE_EVENT(cdns2_log_map_request, cdns2_map_request,
TP_PROTO(struct cdns2_request *req),
TP_ARGS(req)
);
DEFINE_EVENT(cdns2_log_map_request, cdns2_mapped_request,
TP_PROTO(struct cdns2_request *req),
TP_ARGS(req)
);
DECLARE_EVENT_CLASS(cdns2_log_trb,
TP_PROTO(struct cdns2_endpoint *pep, struct cdns2_trb *trb),
TP_ARGS(pep, trb),
@ -507,11 +443,6 @@ DEFINE_EVENT(cdns2_log_ep, cdns2_gadget_ep_disable,
TP_ARGS(pep)
);
DEFINE_EVENT(cdns2_log_ep, cdns2_iso_out_ep_disable,
TP_PROTO(struct cdns2_endpoint *pep),
TP_ARGS(pep)
);
DEFINE_EVENT(cdns2_log_ep, cdns2_ep_busy_try_halt_again,
TP_PROTO(struct cdns2_endpoint *pep),
TP_ARGS(pep)


@ -194,6 +194,9 @@ struct usb_request *usb_ep_alloc_request(struct usb_ep *ep,
req = ep->ops->alloc_request(ep, gfp_flags);
if (req)
req->ep = ep;
trace_usb_ep_alloc_request(ep, req, req ? 0 : -ENOMEM);
return req;
@ -1125,6 +1128,7 @@ void usb_gadget_set_state(struct usb_gadget *gadget,
{
gadget->state = state;
schedule_work(&gadget->work);
trace_usb_gadget_set_state(gadget, 0);
}
EXPORT_SYMBOL_GPL(usb_gadget_set_state);


@ -812,8 +812,7 @@ static void tegra_xudc_update_data_role(struct tegra_xudc *xudc,
return;
}
xudc->device_mode = (usbphy->last_event == USB_EVENT_VBUS) ? true :
false;
xudc->device_mode = usbphy->last_event == USB_EVENT_VBUS;
phy_index = tegra_xudc_get_phy_index(xudc, usbphy);
dev_dbg(xudc->dev, "%s(): current phy index is %d\n", __func__,


@ -81,6 +81,11 @@ DECLARE_EVENT_CLASS(udc_log_gadget,
__entry->ret)
);
DEFINE_EVENT(udc_log_gadget, usb_gadget_set_state,
TP_PROTO(struct usb_gadget *g, int ret),
TP_ARGS(g, ret)
);
DEFINE_EVENT(udc_log_gadget, usb_gadget_frame_number,
TP_PROTO(struct usb_gadget *g, int ret),
TP_ARGS(g, ret)


@ -93,7 +93,7 @@ config USB_XHCI_RCAR
default ARCH_RENESAS
help
Say 'Y' to enable the support for the xHCI host controller
found in Renesas R-Car ARM SoCs.
found in Renesas R-Car and RZ/G3E ARM SoCs.
config USB_XHCI_RZV2M
bool "xHCI support for Renesas RZ/V2M SoC"


@ -1916,7 +1916,7 @@ max3421_probe(struct spi_device *spi)
if (hcd) {
kfree(max3421_hcd->tx);
kfree(max3421_hcd->rx);
if (max3421_hcd->spi_thread)
if (!IS_ERR_OR_NULL(max3421_hcd->spi_thread))
kthread_stop(max3421_hcd->spi_thread);
usb_put_hcd(hcd);
}


@ -448,13 +448,6 @@ static const struct dev_pm_ops ohci_hcd_s3c2410_pm_ops = {
.resume = ohci_hcd_s3c2410_drv_resume,
};
static const struct of_device_id ohci_hcd_s3c2410_dt_ids[] = {
{ .compatible = "samsung,s3c2410-ohci" },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, ohci_hcd_s3c2410_dt_ids);
static struct platform_driver ohci_hcd_s3c2410_driver = {
.probe = ohci_hcd_s3c2410_probe,
.remove = ohci_hcd_s3c2410_remove,
@ -462,7 +455,6 @@ static struct platform_driver ohci_hcd_s3c2410_driver = {
.driver = {
.name = "s3c2410-ohci",
.pm = &ohci_hcd_s3c2410_pm_ops,
.of_match_table = ohci_hcd_s3c2410_dt_ids,
},
};


@ -89,3 +89,5 @@
#define HCC2_GSC(p) ((p) & (1 << 8))
/* true: HC support Virtualization Based Trusted I/O Capability */
#define HCC2_VTC(p) ((p) & (1 << 9))
/* true: HC support Double BW on an eUSB2 HS ISOC EP */
#define HCC2_EUSB2_DIC(p) ((p) & (1 << 11))


@ -1330,18 +1330,33 @@ static unsigned int xhci_get_endpoint_interval(struct usb_device *udev,
return interval;
}
/* The "Mult" field in the endpoint context is only set for SuperSpeed isoc eps.
/*
* xHCs without LEC use the "Mult" field in the endpoint context for SuperSpeed
* isoc eps, and High speed isoc eps that support bandwidth doubling. Standard
* High speed endpoint descriptors can define "the number of additional
* transaction opportunities per microframe", but that goes in the Max Burst
* endpoint context field.
*/
static u32 xhci_get_endpoint_mult(struct usb_device *udev,
struct usb_host_endpoint *ep)
static u32 xhci_get_endpoint_mult(struct xhci_hcd *xhci,
struct usb_device *udev,
struct usb_host_endpoint *ep)
{
if (udev->speed < USB_SPEED_SUPER ||
!usb_endpoint_xfer_isoc(&ep->desc))
return 0;
return ep->ss_ep_comp.bmAttributes;
bool lec;
/* xHCI 1.1 with LEC set does not use mult field, except intel eUSB2 */
lec = xhci->hci_version > 0x100 && HCC2_LEC(xhci->hcc_params2);
/* eUSB2 double isoc bw devices are the only USB2 devices using mult */
if (usb_endpoint_is_hs_isoc_double(udev, ep) &&
(!lec || xhci->quirks & XHCI_INTEL_HOST))
return 1;
/* SuperSpeed isoc transfers on hosts without LEC use the mult field */
if (udev->speed >= USB_SPEED_SUPER &&
usb_endpoint_xfer_isoc(&ep->desc) && !lec)
return ep->ss_ep_comp.bmAttributes;
return 0;
}
static u32 xhci_get_endpoint_max_burst(struct usb_device *udev,
@ -1353,8 +1368,16 @@ static u32 xhci_get_endpoint_max_burst(struct usb_device *udev,
if (udev->speed == USB_SPEED_HIGH &&
(usb_endpoint_xfer_isoc(&ep->desc) ||
usb_endpoint_xfer_int(&ep->desc)))
usb_endpoint_xfer_int(&ep->desc))) {
/*
* The USB 2 Isochronous Double IN Bandwidth ECN uses a fixed burst
* size; the wMaxPacketSize multiplier bits 12:11 are not valid.
*/
if (usb_endpoint_is_hs_isoc_double(udev, ep))
return 2;
return usb_endpoint_maxp_mult(&ep->desc) - 1;
}
return 0;
}
@ -1378,36 +1401,6 @@ static u32 xhci_get_endpoint_type(struct usb_host_endpoint *ep)
return 0;
}
/* Return the maximum endpoint service interval time (ESIT) payload.
* Basically, this is the maxpacket size, multiplied by the burst size
* and mult size.
*/
static u32 xhci_get_max_esit_payload(struct usb_device *udev,
struct usb_host_endpoint *ep)
{
int max_burst;
int max_packet;
/* Only applies for interrupt or isochronous endpoints */
if (usb_endpoint_xfer_control(&ep->desc) ||
usb_endpoint_xfer_bulk(&ep->desc))
return 0;
/* SuperSpeedPlus Isoc ep sending over 48k per esit */
if ((udev->speed >= USB_SPEED_SUPER_PLUS) &&
USB_SS_SSP_ISOC_COMP(ep->ss_ep_comp.bmAttributes))
return le32_to_cpu(ep->ssp_isoc_ep_comp.dwBytesPerInterval);
/* SuperSpeed or SuperSpeedPlus Isoc ep with less than 48k per esit */
if (udev->speed >= USB_SPEED_SUPER)
return le16_to_cpu(ep->ss_ep_comp.wBytesPerInterval);
max_packet = usb_endpoint_maxp(&ep->desc);
max_burst = usb_endpoint_maxp_mult(&ep->desc);
/* A 0 in max burst means 1 transfer per ESIT */
return max_packet * max_burst;
}
/* Set up an endpoint with one ring segment. Do not allocate stream rings.
* Drivers will have to call usb_alloc_streams() to do that.
*/
@ -1439,13 +1432,20 @@ int xhci_endpoint_init(struct xhci_hcd *xhci,
ring_type = usb_endpoint_type(&ep->desc);
/* Ensure host supports double isoc bandwidth for eUSB2 devices */
if (usb_endpoint_is_hs_isoc_double(udev, ep) &&
!HCC2_EUSB2_DIC(xhci->hcc_params2)) {
dev_dbg(&udev->dev, "Double Isoc Bandwidth not supported by xhci\n");
return -EINVAL;
}
/*
* Get values to fill the endpoint context, mostly from ep descriptor.
* The average TRB buffer length for bulk endpoints is unclear as we
* have no clue on scatter gather list entry size. For Isoc and Int,
* set it to max available. See xHCI 1.1 spec 4.14.1.1 for details.
*/
max_esit_payload = xhci_get_max_esit_payload(udev, ep);
max_esit_payload = usb_endpoint_max_periodic_payload(udev, ep);
interval = xhci_get_endpoint_interval(udev, ep);
/* Periodic endpoint bInterval limit quirk */
@ -1462,8 +1462,8 @@ int xhci_endpoint_init(struct xhci_hcd *xhci,
}
}
mult = xhci_get_endpoint_mult(udev, ep);
max_packet = usb_endpoint_maxp(&ep->desc);
mult = xhci_get_endpoint_mult(xhci, udev, ep);
max_packet = xhci_usb_endpoint_maxp(udev, ep);
max_burst = xhci_get_endpoint_max_burst(udev, ep);
avg_trb_len = max_esit_payload;
@ -1484,9 +1484,6 @@ int xhci_endpoint_init(struct xhci_hcd *xhci,
/* xHCI 1.0 and 1.1 indicates that ctrl ep avg TRB Length should be 8 */
if (usb_endpoint_xfer_control(&ep->desc) && xhci->hci_version >= 0x100)
avg_trb_len = 8;
/* xhci 1.1 with LEC support doesn't use mult field, use RsvdZ */
if ((xhci->hci_version > 0x100) && HCC2_LEC(xhci->hcc_params2))
mult = 0;
/* Set up the endpoint ring */
virt_dev->eps[ep_index].new_ring =


@ -610,7 +610,7 @@ int xhci_pci_common_probe(struct pci_dev *dev, const struct pci_device_id *id)
{
int retval;
struct xhci_hcd *xhci;
struct usb_hcd *hcd;
struct usb_hcd *hcd, *usb3_hcd;
struct reset_control *reset;
reset = devm_reset_control_get_optional_exclusive(&dev->dev, NULL);
@ -636,26 +636,32 @@ int xhci_pci_common_probe(struct pci_dev *dev, const struct pci_device_id *id)
hcd = dev_get_drvdata(&dev->dev);
xhci = hcd_to_xhci(hcd);
xhci->reset = reset;
xhci->shared_hcd = usb_create_shared_hcd(&xhci_pci_hc_driver, &dev->dev,
pci_name(dev), hcd);
if (!xhci->shared_hcd) {
retval = -ENOMEM;
goto dealloc_usb2_hcd;
xhci->allow_single_roothub = 1;
if (!xhci_has_one_roothub(xhci)) {
xhci->shared_hcd = usb_create_shared_hcd(&xhci_pci_hc_driver, &dev->dev,
pci_name(dev), hcd);
if (!xhci->shared_hcd) {
retval = -ENOMEM;
goto dealloc_usb2_hcd;
}
retval = xhci_ext_cap_init(xhci);
if (retval)
goto put_usb3_hcd;
retval = usb_add_hcd(xhci->shared_hcd, dev->irq, IRQF_SHARED);
if (retval)
goto put_usb3_hcd;
} else {
retval = xhci_ext_cap_init(xhci);
if (retval)
goto dealloc_usb2_hcd;
}
retval = xhci_ext_cap_init(xhci);
if (retval)
goto put_usb3_hcd;
retval = usb_add_hcd(xhci->shared_hcd, dev->irq,
IRQF_SHARED);
if (retval)
goto put_usb3_hcd;
/* Roothub already marked as USB 3.0 speed */
if (!(xhci->quirks & XHCI_BROKEN_STREAMS) &&
HCC_MAX_PSA(xhci->hcc_params) >= 4)
xhci->shared_hcd->can_do_streams = 1;
usb3_hcd = xhci_get_usb3_hcd(xhci);
if (usb3_hcd && !(xhci->quirks & XHCI_BROKEN_STREAMS) && HCC_MAX_PSA(xhci->hcc_params) >= 4)
usb3_hcd->can_do_streams = 1;
/* USB-2 and USB-3 roothubs initialized, allow runtime pm suspend */
pm_runtime_put_noidle(&dev->dev);


@ -20,6 +20,7 @@
#include <linux/acpi.h>
#include <linux/usb/of.h>
#include <linux/reset.h>
#include <linux/usb/xhci-sideband.h>
#include "xhci.h"
#include "xhci-plat.h"
@ -74,6 +75,16 @@ static int xhci_priv_resume_quirk(struct usb_hcd *hcd)
return priv->resume_quirk(hcd);
}
static int xhci_priv_post_resume_quirk(struct usb_hcd *hcd)
{
struct xhci_plat_priv *priv = hcd_to_xhci_priv(hcd);
if (!priv->post_resume_quirk)
return 0;
return priv->post_resume_quirk(hcd);
}
static void xhci_plat_quirks(struct device *dev, struct xhci_hcd *xhci)
{
struct xhci_plat_priv *priv = xhci_to_priv(xhci);
@ -171,6 +182,7 @@ int xhci_plat_probe(struct platform_device *pdev, struct device *sysdev, const s
return ret;
pm_runtime_set_active(&pdev->dev);
pm_runtime_use_autosuspend(&pdev->dev);
pm_runtime_enable(&pdev->dev);
pm_runtime_get_noresume(&pdev->dev);
@ -454,7 +466,7 @@ void xhci_plat_remove(struct platform_device *dev)
}
EXPORT_SYMBOL_GPL(xhci_plat_remove);
static int xhci_plat_suspend(struct device *dev)
static int xhci_plat_suspend_common(struct device *dev)
{
struct usb_hcd *hcd = dev_get_drvdata(dev);
struct xhci_hcd *xhci = hcd_to_xhci(hcd);
@ -482,6 +494,25 @@ static int xhci_plat_suspend(struct device *dev)
return 0;
}
static int xhci_plat_suspend(struct device *dev)
{
struct usb_hcd *hcd = dev_get_drvdata(dev);
struct xhci_plat_priv *priv = hcd_to_xhci_priv(hcd);
if (xhci_sideband_check(hcd)) {
priv->sideband_at_suspend = 1;
dev_dbg(dev, "sideband instance active, skip suspend.\n");
return 0;
}
return xhci_plat_suspend_common(dev);
}
static int xhci_plat_freeze(struct device *dev)
{
return xhci_plat_suspend_common(dev);
}
static int xhci_plat_resume_common(struct device *dev, bool power_lost)
{
struct usb_hcd *hcd = dev_get_drvdata(dev);
@ -509,6 +540,10 @@ static int xhci_plat_resume_common(struct device *dev, bool power_lost)
if (ret)
goto disable_clks;
ret = xhci_priv_post_resume_quirk(hcd);
if (ret)
goto disable_clks;
pm_runtime_disable(dev);
pm_runtime_set_active(dev);
pm_runtime_enable(dev);
@ -525,6 +560,20 @@ static int xhci_plat_resume_common(struct device *dev, bool power_lost)
}
static int xhci_plat_resume(struct device *dev)
{
struct usb_hcd *hcd = dev_get_drvdata(dev);
struct xhci_plat_priv *priv = hcd_to_xhci_priv(hcd);
if (priv->sideband_at_suspend) {
priv->sideband_at_suspend = 0;
dev_dbg(dev, "sideband instance active, skip resume.\n");
return 0;
}
return xhci_plat_resume_common(dev, false);
}
static int xhci_plat_thaw(struct device *dev)
{
return xhci_plat_resume_common(dev, false);
}
@ -558,9 +607,9 @@ static int __maybe_unused xhci_plat_runtime_resume(struct device *dev)
const struct dev_pm_ops xhci_plat_pm_ops = {
.suspend = pm_sleep_ptr(xhci_plat_suspend),
.resume = pm_sleep_ptr(xhci_plat_resume),
.freeze = pm_sleep_ptr(xhci_plat_suspend),
.thaw = pm_sleep_ptr(xhci_plat_resume),
.poweroff = pm_sleep_ptr(xhci_plat_suspend),
.freeze = pm_sleep_ptr(xhci_plat_freeze),
.thaw = pm_sleep_ptr(xhci_plat_thaw),
.poweroff = pm_sleep_ptr(xhci_plat_freeze),
.restore = pm_sleep_ptr(xhci_plat_restore),
SET_RUNTIME_PM_OPS(xhci_plat_runtime_suspend,


@ -16,10 +16,12 @@ struct xhci_plat_priv {
const char *firmware_name;
unsigned long long quirks;
bool power_lost;
unsigned sideband_at_suspend:1;
void (*plat_start)(struct usb_hcd *);
int (*init_quirk)(struct usb_hcd *);
int (*suspend_quirk)(struct usb_hcd *);
int (*resume_quirk)(struct usb_hcd *);
int (*post_resume_quirk)(struct usb_hcd *);
};
#define hcd_to_xhci_priv(h) ((struct xhci_plat_priv *)hcd_to_xhci(h)->priv)
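
The new hook runs after xhci_resume() in xhci_plat_resume_common(); the Renesas RZ/G3E glue added later in this pull is the user, but as a generic sketch (hypothetical names) a platform could wire it up like this:

static int my_soc_xhci_post_resume(struct usb_hcd *hcd)
{
	/* restore vendor-specific host registers lost over suspend */
	return 0;
}

static const struct xhci_plat_priv my_soc_xhci_priv = {
	.post_resume_quirk = my_soc_xhci_post_resume,
};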


@ -0,0 +1,49 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __XHCI_RCAR_H
#define __XHCI_RCAR_H
/*** Register Offset ***/
#define RCAR_USB3_AXH_STA 0x104 /* AXI Host Control Status */
#define RCAR_USB3_INT_ENA 0x224 /* Interrupt Enable */
#define RCAR_USB3_DL_CTRL 0x250 /* FW Download Control & Status */
#define RCAR_USB3_FW_DATA0 0x258 /* FW Data0 */
#define RCAR_USB3_LCLK 0xa44 /* LCLK Select */
#define RCAR_USB3_CONF1 0xa48 /* USB3.0 Configuration1 */
#define RCAR_USB3_CONF2 0xa5c /* USB3.0 Configuration2 */
#define RCAR_USB3_CONF3 0xaa8 /* USB3.0 Configuration3 */
#define RCAR_USB3_RX_POL 0xab0 /* USB3.0 RX Polarity */
#define RCAR_USB3_TX_POL 0xab8 /* USB3.0 TX Polarity */
/*** Register Settings ***/
/* AXI Host Control Status */
#define RCAR_USB3_AXH_STA_B3_PLL_ACTIVE 0x00010000
#define RCAR_USB3_AXH_STA_B2_PLL_ACTIVE 0x00000001
#define RCAR_USB3_AXH_STA_PLL_ACTIVE_MASK (RCAR_USB3_AXH_STA_B3_PLL_ACTIVE | \
RCAR_USB3_AXH_STA_B2_PLL_ACTIVE)
/* Interrupt Enable */
#define RCAR_USB3_INT_XHC_ENA 0x00000001
#define RCAR_USB3_INT_PME_ENA 0x00000002
#define RCAR_USB3_INT_HSE_ENA 0x00000004
#define RCAR_USB3_INT_ENA_VAL (RCAR_USB3_INT_XHC_ENA | \
RCAR_USB3_INT_PME_ENA | RCAR_USB3_INT_HSE_ENA)
/* FW Download Control & Status */
#define RCAR_USB3_DL_CTRL_ENABLE 0x00000001
#define RCAR_USB3_DL_CTRL_FW_SUCCESS 0x00000010
#define RCAR_USB3_DL_CTRL_FW_SET_DATA0 0x00000100
/* LCLK Select */
#define RCAR_USB3_LCLK_ENA_VAL 0x01030001
/* USB3.0 Configuration */
#define RCAR_USB3_CONF1_VAL 0x00030204
#define RCAR_USB3_CONF2_VAL 0x00030300
#define RCAR_USB3_CONF3_VAL 0x13802007
/* USB3.0 Polarity */
#define RCAR_USB3_RX_POL_VAL BIT(21)
#define RCAR_USB3_TX_POL_VAL BIT(4)
#endif /* __XHCI_RCAR_H */


@ -11,9 +11,12 @@
#include <linux/platform_device.h>
#include <linux/of.h>
#include <linux/usb/phy.h>
#include <linux/reset.h>
#include "xhci.h"
#include "xhci-plat.h"
#include "xhci-rcar-regs.h"
#include "xhci-rzg3e-regs.h"
#include "xhci-rzv2m.h"
#define XHCI_RCAR_FIRMWARE_NAME_V1 "r8a779x_usb3_v1.dlmem"
@ -29,50 +32,6 @@
MODULE_FIRMWARE(XHCI_RCAR_FIRMWARE_NAME_V1);
MODULE_FIRMWARE(XHCI_RCAR_FIRMWARE_NAME_V3);
/*** Register Offset ***/
#define RCAR_USB3_AXH_STA 0x104 /* AXI Host Control Status */
#define RCAR_USB3_INT_ENA 0x224 /* Interrupt Enable */
#define RCAR_USB3_DL_CTRL 0x250 /* FW Download Control & Status */
#define RCAR_USB3_FW_DATA0 0x258 /* FW Data0 */
#define RCAR_USB3_LCLK 0xa44 /* LCLK Select */
#define RCAR_USB3_CONF1 0xa48 /* USB3.0 Configuration1 */
#define RCAR_USB3_CONF2 0xa5c /* USB3.0 Configuration2 */
#define RCAR_USB3_CONF3 0xaa8 /* USB3.0 Configuration3 */
#define RCAR_USB3_RX_POL 0xab0 /* USB3.0 RX Polarity */
#define RCAR_USB3_TX_POL 0xab8 /* USB3.0 TX Polarity */
/*** Register Settings ***/
/* AXI Host Control Status */
#define RCAR_USB3_AXH_STA_B3_PLL_ACTIVE 0x00010000
#define RCAR_USB3_AXH_STA_B2_PLL_ACTIVE 0x00000001
#define RCAR_USB3_AXH_STA_PLL_ACTIVE_MASK (RCAR_USB3_AXH_STA_B3_PLL_ACTIVE | \
RCAR_USB3_AXH_STA_B2_PLL_ACTIVE)
/* Interrupt Enable */
#define RCAR_USB3_INT_XHC_ENA 0x00000001
#define RCAR_USB3_INT_PME_ENA 0x00000002
#define RCAR_USB3_INT_HSE_ENA 0x00000004
#define RCAR_USB3_INT_ENA_VAL (RCAR_USB3_INT_XHC_ENA | \
RCAR_USB3_INT_PME_ENA | RCAR_USB3_INT_HSE_ENA)
/* FW Download Control & Status */
#define RCAR_USB3_DL_CTRL_ENABLE 0x00000001
#define RCAR_USB3_DL_CTRL_FW_SUCCESS 0x00000010
#define RCAR_USB3_DL_CTRL_FW_SET_DATA0 0x00000100
/* LCLK Select */
#define RCAR_USB3_LCLK_ENA_VAL 0x01030001
/* USB3.0 Configuration */
#define RCAR_USB3_CONF1_VAL 0x00030204
#define RCAR_USB3_CONF2_VAL 0x00030300
#define RCAR_USB3_CONF3_VAL 0x13802007
/* USB3.0 Polarity */
#define RCAR_USB3_RX_POL_VAL BIT(21)
#define RCAR_USB3_TX_POL_VAL BIT(4)
static void xhci_rcar_start_gen2(struct usb_hcd *hcd)
{
/* LCLK Select */
@ -110,6 +69,48 @@ static void xhci_rcar_start(struct usb_hcd *hcd)
}
}
static void xhci_rzg3e_start(struct usb_hcd *hcd)
{
u32 int_en;
if (hcd->regs) {
/* Update the controller initial setting */
writel(0x03130200, hcd->regs + RZG3E_USB3_HOST_U3P0PIPESC(0));
writel(0x00160200, hcd->regs + RZG3E_USB3_HOST_U3P0PIPESC(1));
writel(0x03150000, hcd->regs + RZG3E_USB3_HOST_U3P0PIPESC(2));
writel(0x03130200, hcd->regs + RZG3E_USB3_HOST_U3P0PIPESC(3));
writel(0x00180000, hcd->regs + RZG3E_USB3_HOST_U3P0PIPESC(4));
/* Interrupt Enable */
int_en = readl(hcd->regs + RZG3E_USB3_HOST_INTEN);
int_en |= RZG3E_USB3_HOST_INTEN_ENA;
writel(int_en, hcd->regs + RZG3E_USB3_HOST_INTEN);
}
}
static int xhci_rzg3e_resume(struct usb_hcd *hcd)
{
struct xhci_hcd *xhci = hcd_to_xhci(hcd);
return reset_control_deassert(xhci->reset);
}
static int xhci_rzg3e_post_resume(struct usb_hcd *hcd)
{
xhci_rzg3e_start(hcd);
return 0;
}
static int xhci_rzg3e_suspend(struct usb_hcd *hcd)
{
struct xhci_hcd *xhci = hcd_to_xhci(hcd);
reset_control_assert(xhci->reset);
return 0;
}
static int xhci_rcar_download_firmware(struct usb_hcd *hcd)
{
struct device *dev = hcd->self.controller;
@ -233,6 +234,14 @@ static const struct xhci_plat_priv xhci_plat_renesas_rzv2m = {
.plat_start = xhci_rzv2m_start,
};
static const struct xhci_plat_priv xhci_plat_renesas_rzg3e = {
.quirks = XHCI_NO_64BIT_SUPPORT | XHCI_RESET_ON_RESUME | XHCI_SUSPEND_RESUME_CLKS,
.plat_start = xhci_rzg3e_start,
.suspend_quirk = xhci_rzg3e_suspend,
.resume_quirk = xhci_rzg3e_resume,
.post_resume_quirk = xhci_rzg3e_post_resume,
};
static const struct of_device_id usb_xhci_of_match[] = {
{
.compatible = "renesas,xhci-r8a7790",
@ -249,6 +258,9 @@ static const struct of_device_id usb_xhci_of_match[] = {
}, {
.compatible = "renesas,xhci-r8a7796",
.data = &xhci_plat_renesas_rcar_gen3,
}, {
.compatible = "renesas,r9a09g047-xhci",
.data = &xhci_plat_renesas_rzg3e,
}, {
.compatible = "renesas,rcar-gen2-xhci",
.data = &xhci_plat_renesas_rcar_gen2,


@ -711,7 +711,7 @@ static int xhci_move_dequeue_past_td(struct xhci_hcd *xhci,
return -ENODEV;
}
hw_dequeue = xhci_get_hw_deq(xhci, dev, ep_index, stream_id);
hw_dequeue = xhci_get_hw_deq(xhci, dev, ep_index, stream_id) & TR_DEQ_PTR_MASK;
new_seg = ep_ring->deq_seg;
new_deq = ep_ring->dequeue;
new_cycle = le32_to_cpu(td->end_trb->generic.field[3]) & TRB_CYCLE;
@ -723,7 +723,7 @@ static int xhci_move_dequeue_past_td(struct xhci_hcd *xhci,
*/
do {
if (!hw_dequeue_found && xhci_trb_virt_to_dma(new_seg, new_deq)
== (dma_addr_t)(hw_dequeue & ~0xf)) {
== (dma_addr_t)hw_dequeue) {
hw_dequeue_found = true;
if (td_last_trb_found)
break;
@ -1066,7 +1066,7 @@ static int xhci_invalidate_cancelled_tds(struct xhci_virt_ep *ep)
*/
hw_deq = xhci_get_hw_deq(xhci, ep->vdev, ep->ep_index,
td->urb->stream_id);
hw_deq &= ~0xf;
hw_deq &= TR_DEQ_PTR_MASK;
if (td->cancel_status == TD_HALTED || trb_in_td(td, hw_deq)) {
switch (td->cancel_status) {
@ -1156,7 +1156,7 @@ static struct xhci_td *find_halted_td(struct xhci_virt_ep *ep)
if (!list_empty(&ep->ring->td_list)) { /* Not streams compatible */
hw_deq = xhci_get_hw_deq(ep->xhci, ep->vdev, ep->ep_index, 0);
hw_deq &= ~0xf;
hw_deq &= TR_DEQ_PTR_MASK;
td = list_first_entry(&ep->ring->td_list, struct xhci_td, td_list);
if (trb_in_td(td, hw_deq))
return td;
@ -1262,19 +1262,17 @@ static void xhci_handle_cmd_stop_ep(struct xhci_hcd *xhci, int slot_id,
* Stopped state, but it will soon change to Running.
*
* Assume this bug on unexpected Stop Endpoint failures.
* Keep retrying until the EP starts and stops again.
* Keep retrying until the EP starts and stops again or
* up to a timeout (a defective HC may never start, or a
* driver bug may cause stopping an already stopped EP).
*/
if (time_is_before_jiffies(ep->stop_time + msecs_to_jiffies(100)))
break;
fallthrough;
case EP_STATE_RUNNING:
/* Race, HW handled stop ep cmd before ep was running */
xhci_dbg(xhci, "Stop ep completion ctx error, ctx_state %d\n",
GET_EP_CTX_STATE(ep_ctx));
/*
* Don't retry forever if we guessed wrong or a defective HC never starts
* the EP or says 'Running' but fails the command. We must give back TDs.
*/
if (time_is_before_jiffies(ep->stop_time + msecs_to_jiffies(100)))
break;
command = xhci_alloc_command(xhci, false, GFP_ATOMIC);
if (!command) {
@ -1481,7 +1479,7 @@ static void xhci_handle_cmd_set_deq(struct xhci_hcd *xhci, int slot_id,
u64 deq;
/* 4.6.10 deq ptr is written to the stream ctx for streams */
if (ep->ep_state & EP_HAS_STREAMS) {
deq = le64_to_cpu(stream_ctx->stream_ring) & SCTX_DEQ_MASK;
deq = le64_to_cpu(stream_ctx->stream_ring) & TR_DEQ_PTR_MASK;
/*
* Cadence xHCI controllers store some endpoint state
@ -1497,7 +1495,7 @@ static void xhci_handle_cmd_set_deq(struct xhci_hcd *xhci, int slot_id,
stream_ctx->reserved[1] = 0;
}
} else {
deq = le64_to_cpu(ep_ctx->deq) & ~EP_CTX_CYCLE_MASK;
deq = le64_to_cpu(ep_ctx->deq) & TR_DEQ_PTR_MASK;
}
xhci_dbg_trace(xhci, trace_xhci_dbg_cancel_urb,
"Successful Set TR Deq Ptr cmd, deq = @%08llx", deq);
@ -3550,7 +3548,7 @@ static u32 xhci_td_remainder(struct xhci_hcd *xhci, int transferred,
if ((xhci->quirks & XHCI_MTK_HOST) && (xhci->hci_version < 0x100))
trb_buff_len = 0;
maxp = usb_endpoint_maxp(&urb->ep->desc);
maxp = xhci_usb_endpoint_maxp(urb->dev, urb->ep);
total_packet_count = DIV_ROUND_UP(td_total_len, maxp);
/* Queueing functions don't count the current TRB into transferred */
@ -3567,7 +3565,7 @@ static int xhci_align_td(struct xhci_hcd *xhci, struct urb *urb, u32 enqd_len,
u32 new_buff_len;
size_t len;
max_pkt = usb_endpoint_maxp(&urb->ep->desc);
max_pkt = xhci_usb_endpoint_maxp(urb->dev, urb->ep);
unalign = (enqd_len + *trb_buff_len) % max_pkt;
/* we got lucky, last normal TRB data on segment is packet aligned */
@ -4138,7 +4136,7 @@ static int xhci_queue_isoc_tx(struct xhci_hcd *xhci, gfp_t mem_flags,
addr = start_addr + urb->iso_frame_desc[i].offset;
td_len = urb->iso_frame_desc[i].length;
td_remain_len = td_len;
max_pkt = usb_endpoint_maxp(&urb->ep->desc);
max_pkt = xhci_usb_endpoint_maxp(urb->dev, urb->ep);
total_pkt_count = DIV_ROUND_UP(td_len, max_pkt);
/* A zero-length transfer still involves at least one packet. */
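A side note on the Stop Endpoint hunk above: the retry is now bounded by a 100 ms window measured from ep->stop_time using the standard jiffies helpers. A small sketch of that idiom, with a hypothetical helper name:

/* Bounded-retry check in the style used above: keep retrying the Stop
 * Endpoint command only while 100 ms have not yet elapsed since the
 * endpoint was first asked to stop. */
#include <linux/jiffies.h>

static bool stop_ep_retry_allowed(unsigned long stop_time)
{
	return !time_is_before_jiffies(stop_time + msecs_to_jiffies(100));
}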


@ -0,0 +1,12 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __XHCI_RZG3E_H
#define __XHCI_RZG3E_H
#define RZG3E_USB3_HOST_INTEN 0x1044 /* Interrupt Enable */
#define RZG3E_USB3_HOST_U3P0PIPESC(x) (0x10c0 + (x) * 4) /* PIPE Status and Control Register */
#define RZG3E_USB3_HOST_INTEN_XHC BIT(0)
#define RZG3E_USB3_HOST_INTEN_HSE BIT(2)
#define RZG3E_USB3_HOST_INTEN_ENA (RZG3E_USB3_HOST_INTEN_XHC | RZG3E_USB3_HOST_INTEN_HSE)
#endif /* __XHCI_RZG3E_H */


@ -266,6 +266,31 @@ xhci_sideband_get_event_buffer(struct xhci_sideband *sb)
}
EXPORT_SYMBOL_GPL(xhci_sideband_get_event_buffer);
/**
* xhci_sideband_check - check the existence of active sidebands
* @hcd: the host controller driver associated with the target host controller
*
* Allow other drivers, such as a USB controller driver, to check whether there
* is any sideband activity on the host controller. This information can be
* used for power management or other forms of resource management. For the
* return value to be reliable, the caller should ensure that all downstream
* usb devices are either suspended or marked as "offload_at_suspend".
*
* Returns true if any sideband is active, false otherwise.
*/
bool xhci_sideband_check(struct usb_hcd *hcd)
{
struct usb_device *udev = hcd->self.root_hub;
bool active;
usb_lock_device(udev);
active = usb_offload_check(udev);
usb_unlock_device(udev);
return active;
}
EXPORT_SYMBOL_GPL(xhci_sideband_check);
/**
* xhci_sideband_create_interrupter - creates a new interrupter for this sideband
* @sb: sideband instance for this usb device
@ -286,6 +311,7 @@ xhci_sideband_create_interrupter(struct xhci_sideband *sb, int num_seg,
bool ip_autoclear, u32 imod_interval, int intr_num)
{
int ret = 0;
struct usb_device *udev;
if (!sb || !sb->xhci)
return -ENODEV;
@ -304,6 +330,9 @@ xhci_sideband_create_interrupter(struct xhci_sideband *sb, int num_seg,
goto out;
}
udev = sb->vdev->udev;
ret = usb_offload_get(udev);
sb->ir->ip_autoclear = ip_autoclear;
out:
@ -323,6 +352,8 @@ EXPORT_SYMBOL_GPL(xhci_sideband_create_interrupter);
void
xhci_sideband_remove_interrupter(struct xhci_sideband *sb)
{
struct usb_device *udev;
if (!sb || !sb->ir)
return;
@ -330,6 +361,11 @@ xhci_sideband_remove_interrupter(struct xhci_sideband *sb)
xhci_remove_secondary_interrupter(xhci_to_hcd(sb->xhci), sb->ir);
sb->ir = NULL;
udev = sb->vdev->udev;
if (udev->state != USB_STATE_NOTATTACHED)
usb_offload_put(udev);
mutex_unlock(&sb->mutex);
}
EXPORT_SYMBOL_GPL(xhci_sideband_remove_interrupter);
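The intended caller of xhci_sideband_check() is a host-controller suspend path, and the xhci-plat change earlier in this diff is exactly that. Condensed into one illustrative function (the example_ prefix is invented; the helpers are the ones added in this series):

/* Paraphrase of xhci_plat_suspend() above: skip system suspend while an
 * offload client still holds a sideband interrupter, and remember that
 * decision so the matching resume is skipped as well. */
static int example_xhci_suspend(struct device *dev)
{
	struct usb_hcd *hcd = dev_get_drvdata(dev);
	struct xhci_plat_priv *priv = hcd_to_xhci_priv(hcd);

	if (xhci_sideband_check(hcd)) {
		priv->sideband_at_suspend = 1;
		return 0;	/* controller keeps running for the offload */
	}

	return xhci_plat_suspend_common(dev);
}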


@ -155,6 +155,8 @@
#define FW_IOCTL_TYPE_SHIFT 24
#define FW_IOCTL_CFGTBL_READ 17
#define WAKE_IRQ_START_INDEX 2
struct tegra_xusb_fw_header {
__le32 boot_loadaddr_in_imem;
__le32 boot_codedfi_offset;
@ -228,6 +230,7 @@ struct tegra_xusb_soc {
unsigned int num_supplies;
const struct tegra_xusb_phy_type *phy_types;
unsigned int num_types;
unsigned int max_num_wakes;
const struct tegra_xusb_context_soc *context;
struct {
@ -263,6 +266,7 @@ struct tegra_xusb {
int xhci_irq;
int mbox_irq;
int padctl_irq;
int *wake_irqs;
void __iomem *ipfs_base;
void __iomem *fpci_base;
@ -313,6 +317,7 @@ struct tegra_xusb {
bool suspended;
struct tegra_xusb_context context;
u8 lp0_utmi_pad_mask;
int num_wakes;
};
static struct hc_driver __read_mostly tegra_xhci_hc_driver;
@ -1482,7 +1487,7 @@ static int tegra_xhci_id_notify(struct notifier_block *nb,
tegra->otg_usb2_port = tegra_xusb_get_usb2_port(tegra, usbphy);
tegra->host_mode = (usbphy->last_event == USB_EVENT_ID) ? true : false;
tegra->host_mode = usbphy->last_event == USB_EVENT_ID;
schedule_work(&tegra->id_work);
@ -1537,6 +1542,58 @@ static void tegra_xusb_deinit_usb_phy(struct tegra_xusb *tegra)
otg_set_host(tegra->usbphy[i]->otg, NULL);
}
static int tegra_xusb_setup_wakeup(struct platform_device *pdev, struct tegra_xusb *tegra)
{
unsigned int i;
if (tegra->soc->max_num_wakes == 0)
return 0;
tegra->wake_irqs = devm_kcalloc(tegra->dev,
tegra->soc->max_num_wakes,
sizeof(*tegra->wake_irqs), GFP_KERNEL);
if (!tegra->wake_irqs)
return -ENOMEM;
/*
* USB wake events are independent of each other, so it is not necessary for a platform
* to utilize all wake-up events supported for a given device. The USB host can operate
* even if wake-up events are not defined or fail to be configured. Therefore, we only
* return critical errors, such as -ENOMEM.
*/
for (i = 0; i < tegra->soc->max_num_wakes; i++) {
struct irq_data *data;
tegra->wake_irqs[i] = platform_get_irq(pdev, i + WAKE_IRQ_START_INDEX);
if (tegra->wake_irqs[i] < 0)
break;
data = irq_get_irq_data(tegra->wake_irqs[i]);
if (!data) {
dev_warn(tegra->dev, "get wake event %d irq data fail\n", i);
irq_dispose_mapping(tegra->wake_irqs[i]);
break;
}
irq_set_irq_type(tegra->wake_irqs[i], irqd_get_trigger_type(data));
}
tegra->num_wakes = i;
dev_dbg(tegra->dev, "setup %d wake events\n", tegra->num_wakes);
return 0;
}
static void tegra_xusb_dispose_wake(struct tegra_xusb *tegra)
{
unsigned int i;
for (i = 0; i < tegra->num_wakes; i++)
irq_dispose_mapping(tegra->wake_irqs[i]);
tegra->num_wakes = 0;
}
static int tegra_xusb_probe(struct platform_device *pdev)
{
struct tegra_xusb *tegra;
@ -1587,9 +1644,15 @@ static int tegra_xusb_probe(struct platform_device *pdev)
if (tegra->mbox_irq < 0)
return tegra->mbox_irq;
err = tegra_xusb_setup_wakeup(pdev, tegra);
if (err)
return err;
tegra->padctl = tegra_xusb_padctl_get(&pdev->dev);
if (IS_ERR(tegra->padctl))
return PTR_ERR(tegra->padctl);
if (IS_ERR(tegra->padctl)) {
err = PTR_ERR(tegra->padctl);
goto dispose_wake;
}
np = of_parse_phandle(pdev->dev.of_node, "nvidia,xusb-padctl", 0);
if (!np) {
@ -1913,6 +1976,8 @@ static int tegra_xusb_probe(struct platform_device *pdev)
put_padctl:
of_node_put(np);
tegra_xusb_padctl_put(tegra->padctl);
dispose_wake:
tegra_xusb_dispose_wake(tegra);
return err;
}
@ -1945,6 +2010,8 @@ static void tegra_xusb_remove(struct platform_device *pdev)
if (tegra->padctl_irq)
pm_runtime_disable(&pdev->dev);
tegra_xusb_dispose_wake(tegra);
pm_runtime_put(&pdev->dev);
tegra_xusb_disable(tegra);
@ -2355,8 +2422,13 @@ static __maybe_unused int tegra_xusb_suspend(struct device *dev)
pm_runtime_disable(dev);
if (device_may_wakeup(dev)) {
unsigned int i;
if (enable_irq_wake(tegra->padctl_irq))
dev_err(dev, "failed to enable padctl wakes\n");
for (i = 0; i < tegra->num_wakes; i++)
enable_irq_wake(tegra->wake_irqs[i]);
}
}
@ -2384,8 +2456,13 @@ static __maybe_unused int tegra_xusb_resume(struct device *dev)
}
if (device_may_wakeup(dev)) {
unsigned int i;
if (disable_irq_wake(tegra->padctl_irq))
dev_err(dev, "failed to disable padctl wakes\n");
for (i = 0; i < tegra->num_wakes; i++)
disable_irq_wake(tegra->wake_irqs[i]);
}
tegra->suspended = false;
mutex_unlock(&tegra->lock);
@ -2636,6 +2713,7 @@ static const struct tegra_xusb_soc tegra234_soc = {
.num_supplies = ARRAY_SIZE(tegra194_supply_names),
.phy_types = tegra194_phy_types,
.num_types = ARRAY_SIZE(tegra194_phy_types),
.max_num_wakes = 7,
.context = &tegra186_xusb_context,
.ports = {
.usb3 = { .offset = 0, .count = 4, },


@ -541,23 +541,23 @@ DEFINE_EVENT(xhci_log_ring, xhci_inc_deq,
);
DECLARE_EVENT_CLASS(xhci_log_portsc,
TP_PROTO(struct xhci_port *port, u32 portsc),
TP_ARGS(port, portsc),
TP_STRUCT__entry(
__field(u32, busnum)
__field(u32, portnum)
__field(u32, portsc)
),
TP_fast_assign(
__entry->busnum = port->rhub->hcd->self.busnum;
__entry->portnum = port->hcd_portnum;
__entry->portsc = portsc;
),
TP_printk("port %d-%d: %s",
__entry->busnum,
__entry->portnum,
xhci_decode_portsc(__get_buf(XHCI_MSG_MAX), __entry->portsc)
)
TP_PROTO(struct xhci_port *port, u32 portsc),
TP_ARGS(port, portsc),
TP_STRUCT__entry(
__field(u32, busnum)
__field(u32, portnum)
__field(u32, portsc)
),
TP_fast_assign(
__entry->busnum = port->rhub->hcd->self.busnum;
__entry->portnum = port->hcd_portnum + 1;
__entry->portsc = portsc;
),
TP_printk("port %d-%d: %s",
__entry->busnum,
__entry->portnum,
xhci_decode_portsc(__get_buf(XHCI_MSG_MAX), __entry->portsc)
)
);
DEFINE_EVENT(xhci_log_portsc, xhci_handle_port_status,


@ -1336,7 +1336,7 @@ static bool xhci_urb_temp_buffer_required(struct usb_hcd *hcd,
struct scatterlist *tail_sg;
tail_sg = urb->sg;
max_pkt = usb_endpoint_maxp(&urb->ep->desc);
max_pkt = xhci_usb_endpoint_maxp(urb->dev, urb->ep);
if (!urb->num_sgs)
return ret;
@ -2924,6 +2924,20 @@ int xhci_stop_endpoint_sync(struct xhci_hcd *xhci, struct xhci_virt_ep *ep, int
}
EXPORT_SYMBOL_GPL(xhci_stop_endpoint_sync);
/*
* xhci_usb_endpoint_maxp - get endpoint max packet size
* @udev: USB device the endpoint belongs to
* @host_ep: USB host endpoint to be checked
*
* Returns the max packet size from the descriptor that actually applies:
* the eUSB2 isochronous endpoint companion descriptor when
* usb_endpoint_is_hs_isoc_double() reports double-bandwidth operation,
* otherwise the standard endpoint descriptor.
*/
int xhci_usb_endpoint_maxp(struct usb_device *udev,
struct usb_host_endpoint *host_ep)
{
if (usb_endpoint_is_hs_isoc_double(udev, host_ep))
return le16_to_cpu(host_ep->eusb2_isoc_ep_comp.wMaxPacketSize);
return usb_endpoint_maxp(&host_ep->desc);
}
/* Issue a configure endpoint command or evaluate context command
* and wait for it to finish.
*/
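The reason the maxp lookup needs a device argument is that packet-count math must use whichever wMaxPacketSize actually applies. A stand-alone sketch, with purely illustrative byte counts (not taken from the eUSB2 spec), of how the choice changes the DIV_ROUND_UP result:

#include <stdio.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
	/* Hypothetical sizes, for illustration only. */
	unsigned int td_total_len = 4096;
	unsigned int std_maxp = 1024;	/* usb_endpoint_maxp() view */
	unsigned int comp_maxp = 4096;	/* eusb2_isoc_ep_comp wMaxPacketSize */

	printf("packets with standard maxp: %u\n",
	       DIV_ROUND_UP(td_total_len, std_maxp));	/* prints 4 */
	printf("packets with companion maxp: %u\n",
	       DIV_ROUND_UP(td_total_len, comp_maxp));	/* prints 1 */
	return 0;
}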


@ -500,7 +500,8 @@ struct xhci_ep_ctx {
/* deq bitmasks */
#define EP_CTX_CYCLE_MASK (1 << 0)
#define SCTX_DEQ_MASK (~0xfL)
/* bits 63:4 - TR Dequeue Pointer */
#define TR_DEQ_PTR_MASK GENMASK_ULL(63, 4)
/**
@ -1958,6 +1959,8 @@ void xhci_update_erst_dequeue(struct xhci_hcd *xhci,
struct xhci_interrupter *ir,
bool clear_ehb);
void xhci_add_interrupter(struct xhci_hcd *xhci, unsigned int intr_num);
int xhci_usb_endpoint_maxp(struct usb_device *udev,
struct usb_host_endpoint *host_ep);
/* xHCI roothub code */
void xhci_set_link_state(struct xhci_hcd *xhci, struct xhci_port *port,

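To make the new TR Dequeue Pointer mask concrete: it keeps bits 63:4 of the hardware dequeue word and drops the low control bits (cycle state, etc.) that the ring code used to clear with open-coded ~0xf masks. A stand-alone sketch, with GENMASK_ULL() redefined locally so it builds outside the kernel:

#include <stdint.h>
#include <stdio.h>

/* Local stand-in for the kernel's GENMASK_ULL() from <linux/bits.h>. */
#define GENMASK_ULL(h, l) \
	(((~0ULL) << (l)) & (~0ULL >> (63 - (h))))
#define TR_DEQ_PTR_MASK GENMASK_ULL(63, 4)

int main(void)
{
	/* Hypothetical raw dequeue word: a 16-byte aligned TRB address with
	 * cycle-state/flag bits set in the low nibble. */
	uint64_t hw_dequeue = 0x00000001a2b3c4d0ULL | 0x9;

	printf("raw      %#018llx\n", (unsigned long long)hw_dequeue);
	printf("TRB addr %#018llx\n",
	       (unsigned long long)(hw_dequeue & TR_DEQ_PTR_MASK));
	return 0;
}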