[Original] Analysis of Linux PCI Driver Framework (3)

Time: 2021-12-28

Background

  • Read the fucking source code! – By Lu Xun
  • A picture is worth a thousand words. – By Golgi

Notes:

  1. Kernel version: 4.14
  2. ARM64 processor
  3. Tools used: Source Insight 3.5, Visio

1. General

First review the PCIe architecture diagram:

  • This article discusses the PCIe host driver, which corresponds to the Root Complex part, i.e. the PCI Host Bridge;
  • The Xilinx nwl-pcie driver is chosen for the analysis;
  • The driver itself is fairly straightforward; it simply plugs into the existing framework, so only the key points will be covered;

2. Process analysis

  • Any driver analysis is inseparable from an introduction to the driver model; the driver model is what makes writing a specific driver easier;
  • So let's first review the driver model mentioned in the previous articles: the Linux kernel establishes a unified device model, abstracted as buses, devices and drivers. Devices and drivers hang on a bus; when a new device or a new driver is registered, the bus performs a matching operation (the match function), and when a driver and a device match, the probe function is executed;

  • Linux PCI Driver Framework Analysis (2) covered the creation of the PCI device, PCI bus and PCI driver: PCI devices and PCI drivers hang on the PCI bus, which is very intuitive. The PCIe controller also follows the device/bus/driver matching model, but the bus here is the virtual platform bus rather than the PCI bus, and the corresponding device and driver are platform_device and platform_driver;

So here comes the question: when is the platform_device created? That brings us to the Device Tree.

2.1 Device Tree

  • The device tree describes the hardware, including the properties of each node; it is written in DTS files, which are eventually compiled into a DTB file and loaded into memory;
  • During startup, the kernel parses the DTB into a Device Tree described by device_node structures;
  • Based on those device_node nodes, platform_device structures are created and finally registered with the system; this is how the PCIe host device comes into being (see the sketch right after this list);
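As a rough, hedged sketch (the helper name sketch_populate_devices is made up; in a real system the kernel performs this step itself at boot via its default populate initcall), this is the kind of call that turns device_node entries into registered platform_devices:

#include <linux/init.h>
#include <linux/of.h>
#include <linux/of_platform.h>

/* Walk the already-unflattened device tree from its root and create and
 * register a platform_device for each eligible node; this is how the PCIe
 * host node shown below ends up as a platform_device. */
static int __init sketch_populate_devices(void)
{
	return of_platform_default_populate(NULL /* NULL means the root node */,
					    NULL, NULL);
}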

Let’s look at the content of the PCIe host’s device tree:

pcie: pcie@fd0e0000 {
	compatible = "xlnx,nwl-pcie-2.11";
	status = "disabled";
	#address-cells = <3>;
	#size-cells = <2>;
	#interrupt-cells = <1>;
	msi-controller;
	device_type = "pci";
    
	interrupt-parent = <&gic>;
	interrupts = <0 118 4>,
		     <0 117 4>,
		     <0 116 4>,
		     <0 115 4>,	/* MSI_1 [63...32] */
		     <0 114 4>;	/* MSI_0 [31...0] */
	interrupt-names = "misc", "dummy", "intx", "msi1", "msi0";
	msi-parent = <&pcie>;
    
	reg = <0x0 0xfd0e0000 0x0 0x1000>,
	      <0x0 0xfd480000 0x0 0x1000>,
	      <0x80 0x00000000 0x0 0x1000000>;
	reg-names = "breg", "pcireg", "cfg";
	ranges = <0x02000000 0x00000000 0xe0000000 0x00000000 0xe0000000 0x00000000 0x10000000	/* non-prefetchable memory */
		  0x43000000 0x00000006 0x00000000 0x00000006 0x00000000 0x00000002 0x00000000>;/* prefetchable memory */
	bus-range = <0x00 0xff>;
    
	interrupt-map-mask = <0x0 0x0 0x0 0x7>;
	interrupt-map =     <0x0 0x0 0x0 0x1 &pcie_intc 0x1>,
			    <0x0 0x0 0x0 0x2 &pcie_intc 0x2>,
			    <0x0 0x0 0x0 0x3 &pcie_intc 0x3>,
			    <0x0 0x0 0x0 0x4 &pcie_intc 0x4>;
    
	pcie_intc: legacy-interrupt-controller {
		interrupt-controller;
		#address-cells = <0>;
		#interrupt-cells = <1>;
	};
};

The key fields are described as follows (a short sketch of how the probe function retrieves them appears after the list):

  • compatible: used to match the PCIe host driver;
  • msi-controller: indicates that this node is an MSI (Message Signaled Interrupt) controller. Note that some SoCs use a GICv2 interrupt controller, and GICv2 does not support MSI, in which case this capability is lost;
  • device_type: must be "pci";
  • interrupts: contains the interrupt numbers of the NWL PCIe controller;
  • interrupt-names: msi1 and msi0 are the MSI interrupts, intx is the legacy interrupt; the names correspond one-to-one with the entries in interrupts;
  • reg: contains the physical addresses and sizes of the register regions used to access the PCIe controller;
  • reg-names: name the entries in reg, respectively the bridge registers ("breg"), the PCIe controller registers ("pcireg") and the configuration space region ("cfg");
  • ranges: the mapping from PCIe address space to CPU address space;
  • bus-range: the range of bus numbers on this PCIe bus;
  • interrupt-map-mask and interrupt-map: standard PCI properties, used to define the mapping from PCI interrupt pins to interrupt numbers;
  • legacy-interrupt-controller: the interrupt controller node for legacy (INTx) interrupts;
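The following is a hedged sketch (not the verbatim nwl-pcie source; sketch_parse_dt is a made-up helper name) of how a probe function typically picks up the named register regions and interrupts declared above:

#include <linux/err.h>
#include <linux/io.h>
#include <linux/ioport.h>
#include <linux/platform_device.h>

static int sketch_parse_dt(struct platform_device *pdev)
{
	struct resource *res;
	void __iomem *breg_base;
	int misc_irq;

	/* "breg" selects the first reg entry, the bridge registers */
	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "breg");
	breg_base = devm_ioremap_resource(&pdev->dev, res);
	if (IS_ERR(breg_base))
		return PTR_ERR(breg_base);

	/* "misc" selects the matching entry of interrupts */
	misc_irq = platform_get_irq_byname(pdev, "misc");
	if (misc_irq < 0)
		return misc_irq;

	/* The other regions ("pcireg", "cfg") and interrupt names follow
	 * the same pattern. */
	return 0;
}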

2.2 Probe process

  • The system creates the corresponding platform_device from the DTB file and registers it;
  • When the driver and the device are matched through the compatible field, the probe function, nwl_pcie_probe, is called (a minimal registration sketch is shown below);
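A minimal registration sketch (trimmed and slightly simplified compared to the real driver) looks like this; the compatible string is the one from the DTS node above, and a successful match makes the platform bus call nwl_pcie_probe:

#include <linux/mod_devicetable.h>
#include <linux/platform_device.h>

static int nwl_pcie_probe(struct platform_device *pdev)
{
	/* initialization and registration, detailed below */
	return 0;
}

/* Matches the compatible property of the PCIe host node in the DTS */
static const struct of_device_id nwl_pcie_of_match[] = {
	{ .compatible = "xlnx,nwl-pcie-2.11", },
	{ }
};

static struct platform_driver nwl_pcie_driver = {
	.driver = {
		.name = "nwl-pcie",
		.of_match_table = nwl_pcie_of_match,
	},
	.probe = nwl_pcie_probe,
};
builtin_platform_driver(nwl_pcie_driver);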

Let's take a look at the nwl_pcie_probe function:

  • Generally, a probe function performs some initialization and registration operations:
    1. Initialization covers both data structures and the device itself; device initialization needs hardware information (register base address, length, interrupt number, etc.), which comes from the DTS;
    2. Registration mainly means registering interrupt handlers plus the usual device-file registration;

 

  • For a PCI controller driver, the core of the flow is to allocate and initialize a pci_host_bridge structure, and finally use this bridge to enumerate all the devices on the PCI bus (a condensed sketch follows this list);
  • devm_pci_alloc_host_bridge: allocates and initializes a basic pci_host_bridge structure;
  • nwl_pcie_parse_dt: obtains the register and interrupt information from the DTS; irq_set_chained_handler_and_data sets up the interrupt handler corresponding to the intx interrupt number, which is used for interrupt cascading;
  • nwl_pcie_bridge_init: performs a large number of hardware controller settings; consult the spec to understand the hardware details. In addition, devm_request_irq registers the interrupt handler corresponding to the misc interrupt number, which handles the state of the controller itself;
  • pci_parse_request_of_pci_ranges: parses the bus range of the PCI bus and the address ranges on the bus, that is, the address regions visible to the CPU;
  • nwl_pcie_init_irq_domain and nwl_pcie_enable_msi are related to interrupt cascading and will be introduced in the next section;
  • pci_scan_root_bus_bridge: scans and enumerates the devices on the bus; this process was analyzed in Linux PCI Driver Framework Analysis (2). The pci_ops field in the bridge structure points to the set of PCI config-space read/write functions; when the configuration space is read or written during device scanning, these functions, implemented by the specific controller driver, are called;
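Pulling those pieces together, here is a hedged, condensed sketch of the core flow (struct nwl_pcie is heavily simplified, nwl_pcie_host_init is a made-up name for the tail of probe, and the ECAM offset math in nwl_pcie_map_bus is only illustrative):

#include <linux/pci.h>
#include <linux/platform_device.h>

struct nwl_pcie {			/* driver-private data (simplified) */
	struct device *dev;
	void __iomem *ecam_base;	/* mapped "cfg" region from the DTS */
};

/* Translate bus/devfn/offset into a config-space address; once a valid
 * address is returned here, the generic helpers below can do the access. */
static void __iomem *nwl_pcie_map_bus(struct pci_bus *bus,
				      unsigned int devfn, int where)
{
	struct nwl_pcie *pcie = bus->sysdata;

	return pcie->ecam_base + (bus->number << 20) + (devfn << 12) + where;
}

/* The pci_ops set that the bridge points at (see the last bullet above) */
static struct pci_ops nwl_pcie_ops = {
	.map_bus = nwl_pcie_map_bus,
	.read    = pci_generic_config_read,
	.write   = pci_generic_config_write,
};

static int nwl_pcie_host_init(struct platform_device *pdev)
{
	struct device *dev = &pdev->dev;
	struct pci_host_bridge *bridge;
	struct nwl_pcie *pcie;

	/* Allocate the bridge and the private data in one go */
	bridge = devm_pci_alloc_host_bridge(dev, sizeof(*pcie));
	if (!bridge)
		return -ENOMEM;
	pcie = pci_host_bridge_priv(bridge);
	pcie->dev = dev;

	/* ... nwl_pcie_parse_dt / nwl_pcie_bridge_init /
	 * pci_parse_request_of_pci_ranges / irq-domain setup go here ... */

	bridge->sysdata = pcie;
	bridge->ops = &nwl_pcie_ops;

	/* Enumerate everything behind the Root Complex */
	return pci_scan_root_bus_bridge(bridge);
}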

2.3 Interrupt handling

The PCIe controller connects various devices through the PCIe bus, so it also acts as an interrupt controller itself and is cascaded to the upper-level interrupt controller (such as the GIC), as shown in the figure below:

  • The PCIe bus supports two interrupt mechanisms:
    1. Legacy interrupts: the bus provides the INTA#, INTB#, INTC#, INTD# signals, with which a PCI device raises a level-triggered interrupt request;
    2. MSI (Message Signaled Interrupt): message-based interrupts, that is, writing a specific message to a specified address triggers an interrupt;

For these two mechanisms, the NWL PCIe driver creates two irq_chips, i.e. two interrupt controllers:

  • An irq_domain corresponds to an interrupt controller (irq_chip); the irq_domain maps hardware interrupt numbers to virtual interrupt numbers (a domain-creation sketch follows this list);
  • Here is an old picture; for details, please refer to the earlier articles on the interrupt subsystem;
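Below is a hedged sketch (assumed shape rather than the driver's exact code; intx_map, intx_domain_ops and sketch_init_intx_domain are illustrative names, and dummy_irq_chip stands in for the driver's own irq_chip) of how a linear irq_domain for the four INTx lines can be created:

#include <linux/irq.h>
#include <linux/irqdomain.h>
#include <linux/of.h>

#define SKETCH_INTX_NUM	4	/* INTA#..INTD# */

/* Called whenever a hardware interrupt number gets mapped to a virq */
static int intx_map(struct irq_domain *domain, unsigned int virq,
		    irq_hw_number_t hwirq)
{
	irq_set_chip_and_handler(virq, &dummy_irq_chip, handle_simple_irq);
	irq_set_chip_data(virq, domain->host_data);
	return 0;
}

static const struct irq_domain_ops intx_domain_ops = {
	.map = intx_map,
	.xlate = irq_domain_xlate_onetwocell,
};

static struct irq_domain *sketch_init_intx_domain(struct device_node *node,
						  void *host_data)
{
	/* hwirqs 1..4 (INTA#..INTD#, matching interrupt-map in the DTS)
	 * get mapped to freshly allocated virtual interrupt numbers */
	return irq_domain_add_linear(node, SKETCH_INTX_NUM + 1,
				     &intx_domain_ops, host_data);
}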

Now take a look at the nwl_pcie_enable_msi function:

  • The main work in this function is to set up the cascaded (chained) interrupt handlers; a cascaded handler eventually calls the interrupt handler of the specific device (a sketch of how the cascade is wired up is shown below);
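A small sketch of how the cascade gets wired (sketch_wire_cascade is a made-up helper; the driver does this separately for the intx, msi0 and msi1 interrupts):

#include <linux/irq.h>
#include <linux/irqdesc.h>
#include <linux/platform_device.h>

/* The controller's own IRQ line is given a chained handler plus driver
 * data instead of a normal request_irq(); that handler later fans the
 * interrupt out to the downstream PCIe devices. */
static void sketch_wire_cascade(struct platform_device *pdev, void *pcie,
				void (*handler)(struct irq_desc *desc))
{
	int irq = platform_get_irq_byname(pdev, "intx");

	if (irq >= 0)
		irq_set_chained_handler_and_data(irq, handler, pcie);
}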

 

To sum up, the two interrupt mechanisms follow the same routine: create an irq_chip interrupt controller and add an irq_domain. The interrupt response flow for a specific device is as follows (a condensed handler sketch follows the steps):

  1. The device hangs on the PCI bus; when it triggers an interrupt, the request is routed through the interrupt controller embodied by the PCIe controller to the upper-level controller and finally reaches the CPU;
  2. When handling the PCIe controller's interrupt, the CPU calls its interrupt handler, i.e. the nwl_pcie_leg_handler, nwl_pcie_msi_handler_high and nwl_pcie_msi_handler_low mentioned above;
  3. In the cascaded interrupt handler, chained_irq_enter is called to enter chained-interrupt handling;
  4. irq_find_mapping is called to find the virtual interrupt number of the specific PCIe device;
  5. generic_handle_irq is called to run the interrupt handler of that PCIe device;
  6. chained_irq_exit is called to leave chained-interrupt handling;
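As a wrap-up, here is a hedged sketch of steps 3 to 6 for the legacy-interrupt case (sketch_intx_handler, struct sketch_pcie and the status register offset are illustrative placeholders, not the actual NWL register layout):

#include <linux/bitops.h>
#include <linux/io.h>
#include <linux/irq.h>
#include <linux/irqchip/chained_irq.h>
#include <linux/irqdesc.h>
#include <linux/irqdomain.h>

struct sketch_pcie {
	void __iomem *base;		/* controller registers */
	struct irq_domain *intx_domain;	/* domain for INTA#..INTD# */
};

#define SKETCH_INTX_STATUS	0x0	/* placeholder status register */

static void sketch_intx_handler(struct irq_desc *desc)
{
	struct irq_chip *chip = irq_desc_get_chip(desc);
	struct sketch_pcie *pcie = irq_desc_get_handler_data(desc);
	unsigned long status;
	unsigned int bit;

	chained_irq_enter(chip, desc);				/* step 3 */

	/* One status bit per pending INTx line */
	status = readl(pcie->base + SKETCH_INTX_STATUS) & 0xf;
	for_each_set_bit(bit, &status, 4) {
		/* step 4: hardware number -> virq of the device */
		unsigned int virq = irq_find_mapping(pcie->intx_domain,
						     bit + 1);
		if (virq)
			generic_handle_irq(virq);		/* step 5 */
	}

	chained_irq_exit(chip, desc);				/* step 6 */
}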

2.4 Summary

  • PCIe controller drivers differ between IP implementations and can vary greatly; analyzing a single driver is only an example, so the point is to grasp the general framework behind it;
  • Most drivers boil down to hardware initialization/configuration plus resource allocation and registration; the core is the interaction with the hardware (usually interrupt handling). If user space needs to interact with the device, a device file is also registered and a set of file_operations callbacks implemented;
  • Well, personally I don't enjoy analyzing one particular driver, so this ends somewhat hastily;

Starting from the next article, we will return to virtualization. Stay tuned.

References

Documentation/devicetree/bindings/pci/xilinx-nwl-pcie.txt

