Support DPDK (corundum issue, open, 27 comments)

corundum commented on July 25, 2024
Support DPDK

Comments (27)

vaniaprkl commented on July 25, 2024

Just as an update, I was able to get testpmd running:

don@machine:/disk2/dpdk_test3/dpdk-stable# sudo ./build/app/dpdk-testpmd 
EAL: Detected CPU lcores: 56
EAL: Detected NUMA nodes: 2
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
EAL: Using IOMMU type 1 (Type 1)
EAL: Ignore mapping IO port bar(1)
EAL: Probe PCI driver: net_mqnic (1234:1001) device: 0000:81:00.0 (socket 1)
eth_mqnic_pci_probe():  Vendor: 0x1234
eth_mqnic_pci_probe():  Device: 0x1001
eth_mqnic_pci_probe():  Subsystem vendor: 0x10ee
eth_mqnic_pci_probe():  Subsystem device: 0x9032
eth_mqnic_pci_probe():  Class: 0x020000
mqnic_dev_pci_specific_init(): Control BAR size: 33554432
mqnic_common_probe():  type 0xffffffff (v 0.0.1.0)
mqnic_common_probe():  type 0x0000c007 (v 0.0.1.0)
mqnic_common_probe():  type 0x0000c000 (v 0.0.1.0)
mqnic_common_probe():  type 0x00000000 (v 0.0.0.0)
mqnic_common_probe():  type 0x0000c006 (v 0.0.1.0)
mqnic_common_probe():  type 0x0000c080 (v 0.0.1.0)
mqnic_common_probe():  type 0x0000c120 (v 0.0.2.0)
mqnic_common_probe():  type 0x0000c140 (v 0.0.1.0)
mqnic_common_probe():  type 0x0000c150 (v 0.0.1.0)
mqnic_common_probe(): Find Register Block
mqnic_common_probe(): FPGA ID: 0x04b77093
mqnic_common_probe(): FW ID: 0x00000000
mqnic_common_probe(): FW version: 0.0.1.0
mqnic_common_probe(): Board ID: 0x10ee9032
mqnic_common_probe(): Board version: 1.0.0.0
mqnic_common_probe(): Git hash: 56fe10f2
mqnic_common_probe(): Release info: 00000000
mqnic_common_probe(): IF offset: 0x00000000
mqnic_common_probe(): IF count: 4
mqnic_common_probe(): IF stride: 0x00800000
mqnic_common_probe(): IF CSR offset: 0x00040000
mqnic_create_interface(): Interface-level register blocks:
mqnic_create_interface():  type 0x0000c001 (v 0.0.4.0)
mqnic_create_interface():  type 0x0000c010 (v 0.0.1.0)
mqnic_create_interface():  type 0x0000c020 (v 0.0.1.0)
mqnic_create_interface():  type 0x0000c030 (v 0.0.1.0)
mqnic_create_interface():  type 0x0000c021 (v 0.0.1.0)
mqnic_create_interface():  type 0x0000c031 (v 0.0.1.0)
mqnic_create_interface():  type 0x0000c090 (v 0.0.1.0)
mqnic_create_interface():  type 0x0000c002 (v 0.0.2.0)
mqnic_create_interface():  type 0x0000c004 (v 0.0.3.0)
mqnic_create_interface(): IF features: 0x00000711
mqnic_create_interface(): Port count: 1
mqnic_create_interface(): Scheduler block count: 1
mqnic_create_interface(): Max TX MTU: 9214
mqnic_create_interface(): Max RX MTU: 9214
mqnic_create_interface(): TX queue offset: 0x00100000
mqnic_create_interface(): TX queue count: 8192
mqnic_create_interface(): TX queue stride: 0x00000020
mqnic_create_interface(): TX completion queue offset: 0x00200000
mqnic_create_interface(): TX completion queue count: 8192
mqnic_create_interface(): TX completion queue stride: 0x00000020
mqnic_create_interface(): RX queue offset: 0x00300000
mqnic_create_interface(): RX queue count: 256
mqnic_create_interface(): RX queue stride: 0x00000020
mqnic_create_interface(): RX completion queue offset: 0x00380000
mqnic_create_interface(): RX completion queue count: 256
mqnic_create_interface(): RX completion queue stride: 0x00000020
mqnic_create_interface(): Max desc block size: 8
mqnic_create_port(): Port-level register blocks:
mqnic_create_port():  type 0x0000c003 (v 0.0.2.0)
mqnic_create_port(): Port features: 0x00000000
mqnic_create_port(): Port TX status: 0x00000001
mqnic_create_port(): Port RX status: 0x00000001
mqnic_create_sched_block(): Scheduler block-level register blocks:
mqnic_create_sched_block():  type 0x0000c040 (v 0.0.1.0)
mqnic_create_scheduler(): Scheduler type: 0x0000c040
mqnic_create_scheduler(): Scheduler offset: 0x00400000
mqnic_create_scheduler(): Scheduler channel count: 8192
mqnic_create_scheduler(): Scheduler channel stride: 0x00000004
mqnic_create_sched_block(): Scheduler count: 1
EAL: Error disabling MSI-X interrupts for fd 243
eth_mqnic_pci_probe(): Device name: 0000:81:00.0.1
mqnic_create_interface(): Interface-level register blocks:
mqnic_create_interface():  type 0x0000c001 (v 0.0.4.0)
mqnic_create_interface():  type 0x0000c010 (v 0.0.1.0)
mqnic_create_interface():  type 0x0000c020 (v 0.0.1.0)
mqnic_create_interface():  type 0x0000c030 (v 0.0.1.0)
mqnic_create_interface():  type 0x0000c021 (v 0.0.1.0)
mqnic_create_interface():  type 0x0000c031 (v 0.0.1.0)
mqnic_create_interface():  type 0x0000c090 (v 0.0.1.0)
mqnic_create_interface():  type 0x0000c002 (v 0.0.2.0)
mqnic_create_interface():  type 0x0000c004 (v 0.0.3.0)
mqnic_create_interface(): IF features: 0x00000711
mqnic_create_interface(): Port count: 1
mqnic_create_interface(): Scheduler block count: 1
mqnic_create_interface(): Max TX MTU: 9214
mqnic_create_interface(): Max RX MTU: 9214
mqnic_create_interface(): TX queue offset: 0x00100000
mqnic_create_interface(): TX queue count: 8192
mqnic_create_interface(): TX queue stride: 0x00000020
mqnic_create_interface(): TX completion queue offset: 0x00200000
mqnic_create_interface(): TX completion queue count: 8192
mqnic_create_interface(): TX completion queue stride: 0x00000020
mqnic_create_interface(): RX queue offset: 0x00300000
mqnic_create_interface(): RX queue count: 256
mqnic_create_interface(): RX queue stride: 0x00000020
mqnic_create_interface(): RX completion queue offset: 0x00380000
mqnic_create_interface(): RX completion queue count: 256
mqnic_create_interface(): RX completion queue stride: 0x00000020
mqnic_create_interface(): Max desc block size: 8
mqnic_create_port(): Port-level register blocks:
mqnic_create_port():  type 0x0000c003 (v 0.0.2.0)
mqnic_create_port(): Port features: 0x00000000
mqnic_create_port(): Port TX status: 0x00000001
mqnic_create_port(): Port RX status: 0x00000001
mqnic_create_sched_block(): Scheduler block-level register blocks:
mqnic_create_sched_block():  type 0x0000c040 (v 0.0.1.0)
mqnic_create_scheduler(): Scheduler type: 0x0000c040
mqnic_create_scheduler(): Scheduler offset: 0x00400000
mqnic_create_scheduler(): Scheduler channel count: 8192
mqnic_create_scheduler(): Scheduler channel stride: 0x00000004
mqnic_create_sched_block(): Scheduler count: 1
eth_mqnic_pci_probe(): Device name: 0000:81:00.0.2
mqnic_create_interface(): Interface-level register blocks:
mqnic_create_interface():  type 0x0000c001 (v 0.0.4.0)
mqnic_create_interface():  type 0x0000c010 (v 0.0.1.0)
mqnic_create_interface():  type 0x0000c020 (v 0.0.1.0)
mqnic_create_interface():  type 0x0000c030 (v 0.0.1.0)
mqnic_create_interface():  type 0x0000c021 (v 0.0.1.0)
mqnic_create_interface():  type 0x0000c031 (v 0.0.1.0)
mqnic_create_interface():  type 0x0000c090 (v 0.0.1.0)
mqnic_create_interface():  type 0x0000c002 (v 0.0.2.0)
mqnic_create_interface():  type 0x0000c004 (v 0.0.3.0)
mqnic_create_interface(): IF features: 0x00000711
mqnic_create_interface(): Port count: 1
mqnic_create_interface(): Scheduler block count: 1
mqnic_create_interface(): Max TX MTU: 9214
mqnic_create_interface(): Max RX MTU: 9214
mqnic_create_interface(): TX queue offset: 0x00100000
mqnic_create_interface(): TX queue count: 8192
mqnic_create_interface(): TX queue stride: 0x00000020
mqnic_create_interface(): TX completion queue offset: 0x00200000
mqnic_create_interface(): TX completion queue count: 8192
mqnic_create_interface(): TX completion queue stride: 0x00000020
mqnic_create_interface(): RX queue offset: 0x00300000
mqnic_create_interface(): RX queue count: 256
mqnic_create_interface(): RX queue stride: 0x00000020
mqnic_create_interface(): RX completion queue offset: 0x00380000
mqnic_create_interface(): RX completion queue count: 256
mqnic_create_interface(): RX completion queue stride: 0x00000020
mqnic_create_interface(): Max desc block size: 8
mqnic_create_port(): Port-level register blocks:
mqnic_create_port():  type 0x0000c003 (v 0.0.2.0)
mqnic_create_port(): Port features: 0x00000000
mqnic_create_port(): Port TX status: 0x00000001
mqnic_create_port(): Port RX status: 0x00000001
mqnic_create_sched_block(): Scheduler block-level register blocks:
mqnic_create_sched_block():  type 0x0000c040 (v 0.0.1.0)
mqnic_create_scheduler(): Scheduler type: 0x0000c040
mqnic_create_scheduler(): Scheduler offset: 0x00400000
mqnic_create_scheduler(): Scheduler channel count: 8192
mqnic_create_scheduler(): Scheduler channel stride: 0x00000004
mqnic_create_sched_block(): Scheduler count: 1
eth_mqnic_pci_probe(): Device name: 0000:81:00.0.3
mqnic_create_interface(): Interface-level register blocks:
mqnic_create_interface():  type 0x0000c001 (v 0.0.4.0)
mqnic_create_interface():  type 0x0000c010 (v 0.0.1.0)
mqnic_create_interface():  type 0x0000c020 (v 0.0.1.0)
mqnic_create_interface():  type 0x0000c030 (v 0.0.1.0)
mqnic_create_interface():  type 0x0000c021 (v 0.0.1.0)
mqnic_create_interface():  type 0x0000c031 (v 0.0.1.0)
mqnic_create_interface():  type 0x0000c090 (v 0.0.1.0)
mqnic_create_interface():  type 0x0000c002 (v 0.0.2.0)
mqnic_create_interface():  type 0x0000c004 (v 0.0.3.0)
mqnic_create_interface(): IF features: 0x00000711
mqnic_create_interface(): Port count: 1
mqnic_create_interface(): Scheduler block count: 1
mqnic_create_interface(): Max TX MTU: 9214
mqnic_create_interface(): Max RX MTU: 9214
mqnic_create_interface(): TX queue offset: 0x00100000
mqnic_create_interface(): TX queue count: 8192
mqnic_create_interface(): TX queue stride: 0x00000020
mqnic_create_interface(): TX completion queue offset: 0x00200000
mqnic_create_interface(): TX completion queue count: 8192
mqnic_create_interface(): TX completion queue stride: 0x00000020
mqnic_create_interface(): RX queue offset: 0x00300000
mqnic_create_interface(): RX queue count: 256
mqnic_create_interface(): RX queue stride: 0x00000020
mqnic_create_interface(): RX completion queue offset: 0x00380000
mqnic_create_interface(): RX completion queue count: 256
mqnic_create_interface(): RX completion queue stride: 0x00000020
mqnic_create_interface(): Max desc block size: 8
mqnic_create_port(): Port-level register blocks:
mqnic_create_port():  type 0x0000c003 (v 0.0.2.0)
mqnic_create_port(): Port features: 0x00000000
mqnic_create_port(): Port TX status: 0x00000001
mqnic_create_port(): Port RX status: 0x00000001
mqnic_create_sched_block(): Scheduler block-level register blocks:
mqnic_create_sched_block():  type 0x0000c040 (v 0.0.1.0)
mqnic_create_scheduler(): Scheduler type: 0x0000c040
mqnic_create_scheduler(): Scheduler offset: 0x00400000
mqnic_create_scheduler(): Scheduler channel count: 8192
mqnic_create_scheduler(): Scheduler channel stride: 0x00000004
mqnic_create_sched_block(): Scheduler count: 1
TELEMETRY: No legacy callbacks, legacy socket not created
testpmd: create a new mbuf pool <mb_pool_0>: n=587456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mb_pool_1>: n=587456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 1)
mqnic_dev_tx_queue_setup(): params: queue: 0, nb_desc: 1024, socket_id: 1
mqnic_dev_tx_queue_start(): TX Queue 0 started
mqnic_dev_rx_queue_setup(): params: queue: 0, nb_desc: 1024, socket_id: 1
mqnic_dev_rx_queue_start(): RX Queue 0 started
mqnic_dev_start(): mqnic_dev_start on interface 0 netdev 0
mqnic_dev_start(): 0
Port 0: 67:C6:69:73:51:FF
Configuring Port 1 (socket 0)
mqnic_dev_tx_queue_setup(): params: queue: 0, nb_desc: 1024, socket_id: 0
mqnic_dev_tx_queue_start(): TX Queue 0 started
mqnic_dev_rx_queue_setup(): params: queue: 0, nb_desc: 1024, socket_id: 0
mqnic_dev_rx_queue_start(): RX Queue 0 started
mqnic_dev_start(): mqnic_dev_start on interface 1 netdev 1
mqnic_dev_start(): 0
Port 1: 4A:EC:29:CD:BA:AB
Configuring Port 2 (socket 0)
mqnic_dev_tx_queue_setup(): params: queue: 0, nb_desc: 1024, socket_id: 0
mqnic_dev_tx_queue_start(): TX Queue 0 started
mqnic_dev_rx_queue_setup(): params: queue: 0, nb_desc: 1024, socket_id: 0
mqnic_dev_rx_queue_start(): RX Queue 0 started
mqnic_dev_start(): mqnic_dev_start on interface 2 netdev 2
mqnic_dev_start(): 0
Port 2: F2:FB:E3:46:7C:C2
Configuring Port 3 (socket 0)
mqnic_dev_tx_queue_setup(): params: queue: 0, nb_desc: 1024, socket_id: 0
mqnic_dev_tx_queue_start(): TX Queue 0 started
mqnic_dev_rx_queue_setup(): params: queue: 0, nb_desc: 1024, socket_id: 0
mqnic_dev_rx_queue_start(): RX Queue 0 started
mqnic_dev_start(): mqnic_dev_start on interface 3 netdev 3
mqnic_dev_start(): 0
Port 3: 54:F8:1B:E8:E7:8D
Checking link statuses...
Done
No commandline core given, start packet forwarding
io packet forwarding - ports=4 - cores=1 - streams=4 - NUMA support enabled, MP allocation mode: native
Logical Core 1 (socket 0) forwards packets on 4 streams:
  RX P=0/Q=0 (socket 1) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 1) peer=02:00:00:00:00:00
  RX P=2/Q=0 (socket 0) -> TX P=3/Q=0 (socket 0) peer=02:00:00:00:00:03
  RX P=3/Q=0 (socket 0) -> TX P=2/Q=0 (socket 0) peer=02:00:00:00:00:02

  io packet forwarding packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=4
  port 0: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=0 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=0 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
  port 1: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=0 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=0 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
  port 2: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=0 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=0 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
  port 3: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=0 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=0 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
Press enter to exit

Telling cores to stop...
Waiting for lcores to finish...

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 1032896        RX-dropped: 0             RX-total: 1032896
  TX-packets: 1032897        TX-dropped: 0             TX-total: 1032897
  ----------------------------------------------------------------------------

  ---------------------- Forward statistics for port 1  ----------------------
  RX-packets: 1032897        RX-dropped: 0             RX-total: 1032897
  TX-packets: 1032896        TX-dropped: 0             TX-total: 1032896
  ----------------------------------------------------------------------------

  ---------------------- Forward statistics for port 2  ----------------------
  RX-packets: 1032908        RX-dropped: 0             RX-total: 1032908
  TX-packets: 1032920        TX-dropped: 0             TX-total: 1032920
  ----------------------------------------------------------------------------

  ---------------------- Forward statistics for port 3  ----------------------
  RX-packets: 1032920        RX-dropped: 0             RX-total: 1032920
  TX-packets: 1032908        TX-dropped: 0             TX-total: 1032908
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 4131621        RX-dropped: 0             RX-total: 4131621
  TX-packets: 4131621        TX-dropped: 0             TX-total: 4131621
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.

Stopping port 0...
Stopping ports...
Done

Stopping port 1...
Stopping ports...
Done

Stopping port 2...
Stopping ports...
Done

Stopping port 3...
Stopping ports...
Done

Shutting down port 0...
Closing ports...
mqnic_dev_close(): 
mqnic_dev_rx_queue_release(): RX Completion Queue 0 released
mqnic_dev_rx_queue_release(): RX Queue 0 released
mqnic_dev_tx_queue_release(): TX Completion Queue 0 released
mqnic_dev_tx_queue_release(): TX Queue 0 released
Port 0 is closed
Done

Shutting down port 1...
Closing ports...
mqnic_dev_close(): 
mqnic_dev_rx_queue_release(): RX Completion Queue 0 released
mqnic_dev_rx_queue_release(): RX Queue 0 released
mqnic_dev_tx_queue_release(): TX Completion Queue 0 released
mqnic_dev_tx_queue_release(): TX Queue 0 released
Port 1 is closed
Done

Shutting down port 2...
Closing ports...
mqnic_dev_close(): 
mqnic_dev_rx_queue_release(): RX Completion Queue 0 released
mqnic_dev_rx_queue_release(): RX Queue 0 released
mqnic_dev_tx_queue_release(): TX Completion Queue 0 released
mqnic_dev_tx_queue_release(): TX Queue 0 released
Port 2 is closed
Done

Shutting down port 3...
Closing ports...
mqnic_dev_close(): 
mqnic_dev_rx_queue_release(): RX Completion Queue 0 released
mqnic_dev_rx_queue_release(): RX Queue 0 released
mqnic_dev_tx_queue_release(): TX Completion Queue 0 released
mqnic_dev_tx_queue_release(): TX Queue 0 released
Port 3 is closed
Done

Bye...
don@machine:/disk2/dpdk_test3/dpdk-stable# 
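
The run above relies on testpmd's defaults (all probed ports, one forwarding core). As a hedged sketch of a more explicit invocation, assuming the same DPDK tree and the 0000:81:00.0 address from the probe output (the lcore list, channel count, and queue counts are illustrative and should be adapted to your system):

```
# Launch testpmd interactively, limiting EAL to the mqnic device and a few lcores
sudo ./build/app/dpdk-testpmd -l 0-2 -n 4 -a 0000:81:00.0 -- -i --nb-cores=2 --rxq=1 --txq=1

# At the testpmd prompt:
testpmd> set fwd io          # plain RX -> TX forwarding, as in the run above
testpmd> start
testpmd> show port stats all
testpmd> stop
testpmd> quit
```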

Basseuph commented on July 25, 2024

Here is the update that has been announced and awaited for some time: we have published the initial version of the mqnic driver for DPDK; see the respective commit [1] in our mirror repository on GitHub.

Testing, comments, and improvements are all very welcome, as we have only done some very basic experimentation with it so far.

We will pause the effort on this topic for a short while (a handful of weeks) and then resume to update the driver to the current mqnic API and to remove some of the limitations.

[1] missinglinkelectronics/dpdk-stable@d346502

AdamI75 commented on July 25, 2024

@vaniaprkl MLE have done some good work, and so has @alexforencich. The documentation for testing DPDK with Corundum has been a bit lacking, but check this out, added by MLE yesterday: https://github.com/missinglinkelectronics/dpdk-stable/tree/22.11-add-corundum-mqnic-pmd/drivers/net/mqnic, and check the issue I created: missinglinkelectronics/dpdk-stable#4. The issue is that MLE referred to an incorrect Corundum githash in their documentation, which didn't work. I am now using the corrected githash and getting success. I hope this helps! I did suggest they update their test procedure to give more detailed expected output, as the screenshots they provide are limited and do not show much, in my opinion.

YangZhou1997 commented on July 25, 2024

Hi Alex,

I just tried your Corundum on a Xilinx Alveo U250, and it works great! Thanks for open-sourcing it!

I was just wondering how the DPDK driver development is going now? I would really like to use fast user-space networking.

Best,
Yang

alexforencich commented on July 25, 2024

There is nobody working on it right now. Currently I am the only one working on Corundum, and my priorities are to implement variable-length descriptors and metadata and then to support Stratix 10 DX. Take a look at the roadmap here: https://github.com/corundum/corundum/wiki/Corundum-Roadmap

I want to at least get variable-length descriptors finished before taking a crack at DPDK, as this will change the descriptor format and the PMD would then have to be reworked to match.

Now, if there are folks who are using corundum that would like to help write a DPDK PMD sooner, then I would be happy to facilitate that.

Basseuph commented on July 25, 2024

Hi Yang,

are you still interested in DPDK support? We are working on it and I am wondering whether we should have a quick discussion about this topic.

Ulrich

kimns516 commented on July 25, 2024

I am also interested in using DPDK. What's the status? Is [1] stable to use at this point?

alexforencich commented on July 25, 2024

No, that one is based on a very old version of Corundum, and it is missing some vital sanity checks to prevent it from attaching to more recent versions. However, it seems like MLE has picked it up again and has made quite a few updates, so hopefully they will have an updated version available soon which will work with a more recent version of Corundum (and I think a more recent version of DPDK as well).

AdamI75 commented on July 25, 2024

@alexforencich is there any further news on the status of the MLE version of the DPDK driver for Corundum? Thanks.

alexforencich commented on July 25, 2024

Take a look at the branches on the linked repo. Looks like maybe https://github.com/missinglinkelectronics/dpdk-stable/tree/22.11-add-corundum-mqnic-pmd is the latest.

AdamI75 commented on July 25, 2024

@alexforencich thanks very much. I will investigate this shortly.

AdamI75 commented on July 25, 2024

@alexforencich I am able to compile and build the drivers using meson and ninja. I have configured the Alveo U50 and U280 with Corundum, and when I load the kernel driver I can see the Corundum NIC interfaces. However, when I run the DPDK command "./dpdk-devbind.py --status" from the MLE DPDK repo's "usertools" directory, I do not see the Corundum NICs, whether the kernel driver is loaded or not. How should I enable the PMD? The documentation says it was tested with a slightly older version of Corundum and only for 10GbE; does this mean it won't work for the Alveo U50 and U280, as they are 100GbE? I am not sure. I am still reading up and will continue to try on my end. I would like to set up a test where one Alveo configured with Corundum on one server sends data to another Corundum-configured Alveo on a second server using iperf3, first with the mqnic kernel driver loaded and then with the MLE DPDK mqnic driver, for comparison. I am concerned that the script is not seeing the Corundum-configured Alveos; it may need further porting. What do you think?
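
For reference, the usual way to hand a port to a DPDK PMD is to unbind it from the kernel driver and bind it to vfio-pci with dpdk-devbind.py. A minimal sketch, assuming vfio-pci and the 0000:81:00.0 address seen elsewhere in this thread (note that --status groups devices by PCI class, so a board reporting an unexpected class code may not be listed even though binding by address still works):

```
# Load vfio-pci (requires the IOMMU / VT-d to be enabled)
sudo modprobe vfio-pci

# Unbind from mqnic and bind to vfio-pci by PCI address (add --force if the port is in use)
sudo ./usertools/dpdk-devbind.py --bind=vfio-pci 0000:81:00.0

# Check which driver now owns the device
./usertools/dpdk-devbind.py --status
```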

AdamI75 commented on July 25, 2024

@alexforencich Okay, the Alveo is being detected as a memory controller and not a network device, so the class needs to be 5 and not 2. I now see it, so I will play around with the binding and then test what I see. I am not sure why this file was not updated when this was released. I have let MLE know as well.

alexforencich commented on July 25, 2024

That's odd. The class code should be set from the config.tcl script for all of the Alveo boards. Can you show me an lspci -vvv -nn output for the board?

alexforencich commented on July 25, 2024

Or are you saying that the DPDK PMD is looking for a memory controller class code?

AdamI75 commented on July 25, 2024

@alexforencich I have attached two files for you: the one you asked for and a screenshot of the DPDK drivers. I am able to bind the DPDK driver to the Alveo U50 using the vfio-pci driver. However, as soon as I do this, your mqnic kernel driver is removed, and when I run ifconfig I no longer see the Corundum NIC. However, it is listed as a DPDK-compatible driver, so maybe I am misunderstanding something?

Yes, I am talking about the DPDK PMD. Should that be run with the mqnic driver or just the vfio-pci driver? How do I test this vfio-pci driver if I can't see the NIC using ifconfig? If I rebind to your mqnic driver then it is fine: I can see the NIC, but then I lose the DPDK driver.

Status update: it actually makes sense that I can't see the NIC via ifconfig, as I am no longer using the kernel driver, so I need to use the DPDK API calls to access the PHY and other parameters. I am just not sure why testpmd and l2fwd are not working with this driver and why it is not creating the interface. Maybe you can give me your explanation or point me in the right direction, thanks? I am stumped for now :).
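
If testpmd does come up, its introspection commands (or the rte_ethdev API from an application) are the usual substitutes for ifconfig once the port is owned by the PMD; for example:

```
testpmd> show port info 0        # MAC address, link status/speed, driver name, queue counts
testpmd> show port summary all
testpmd> show port stats all
```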

lspci_command_17_August_2023
DPDK_compatible_driver_17_Aug_2023

alexforencich commented on July 25, 2024

The drivers are mutually exclusive: you can use either the kernel driver or the DPDK PMD, but not both. At least that part seems to be behaving as expected, but I'm not sure why the DPDK applications are not working. It's possible something in the DPDK PMD needs to be adjusted due to a change in the hardware, but I'm not familiar with how MLE has implemented the PMD. Presumably, if it has trouble talking to the hardware, it would print out some sort of warning message.
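
One way to surface any such warnings is to raise the PMD log level when launching the application; a sketch (the exact log type name for the mqnic PMD is an assumption here and depends on how the driver registers it, so check dump_log_types for the real name):

```
# Run testpmd with verbose PMD logging (the log type pattern below is a guess)
sudo ./build/app/dpdk-testpmd --log-level='pmd.net.mqnic:debug' -a 0000:81:00.0 -- -i

# List the registered dynamic log types from the testpmd prompt:
testpmd> dump_log_types
```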

AdamI75 commented on July 25, 2024

@alexforencich Yes, at least I can bind the PMD driver to the Corundum NIC. It seems the "Failed to create interface.." printout comes from the drivers/net/mqnic/rte_eth_mqnic.c file, so I think it might need a tweak due to different infrastructure or something like that. I have asked MLE for support as well and am still waiting to hear back. I will put this on the back burner for now and do some tests with your kernel driver.

AdamI75 commented on July 25, 2024

@alexforencich MLE got back to me with some useful information. I am recompiling Corundum using their tested repo version just to see if there is a difference. Refer to our correspondence here: missinglinkelectronics/dpdk-stable#4

alexforencich commented on July 25, 2024

I thought they had updated their PMD code for more recent versions of Corundum. Looks like they didn't, so their code only works with very old versions of Corundum, and also doesn't include some checks to prevent it from trying to talk to newer versions of Corundum.

AdamI75 commented on July 25, 2024

@alexforencich thanks for the input. This is all part of integration. I don't really need the DPDK drivers for now, but when it comes to testing the DPDK improvements over your original driver, I will need them. I am busy documenting the ease of use of Corundum over other frameworks. I don't mind contributing towards the Corundum framework at a later stage, but it really would be nice to see the difference between loading your mqnic driver with iperf3 and using the DPDK driver. I am not sure that iperf3 will work with the DPDK driver; I may need to use the pktgen utility. Anyway, I have only spent a couple of days on DPDK, so I am a newbie with lots to learn. I am going to do a cold boot, load just the MLE Corundum build, and try again. I don't think I will have luck, since I can't even build your driver on that version. I am not sure whether my issue is that the firmware is not loading correctly, as MLE are stating. If I program with your latest Corundum updates then it works fine. I will keep you posted. Thanks for the great work that you are doing :). It reminds me of CASPER, which I have been involved with since 2016 - see https://casper.berkeley.edu/. Wesley will be presenting on Corundum and the OpenNIC in November :).

Your documentation is great, but I just feel that Corundum and DPDK need a few step-by-step tutorials for those without SmartNIC or NIC knowledge. It would help and would save lots of questions coming your way. We had the same situation with CASPER until we documented it with sufficient tutorials. Of course, there were always those who tried different versions of the baselined software, which caused endless emails and Slack messages, but it was fun debugging.

vaniaprkl commented on July 25, 2024

Hi @alexforencich and @AdamI75
So excited to suddenly see movement on this front and something to try out. I had switched to testing with the kernel drivers after not being successful with the creation of interfaces. I just tried the older Corundum FW and got to the point where I tried both DPDK drivers, vfio-pci and uio_pci_generic... no dice getting interfaces up with either driver. I don't know how MLE validated this without an interface being seen. Maybe they didn't get that far?

vaniaprkl commented on July 25, 2024

Hi @AdamI75, thanks for the quick feedback. I had stumbled on that update from MLE, and that's how I got the above output, which I couldn't get before. Yeah, it's pretty cool seeing all of it work.

I have been playing around with Corundum for a few weeks now and I really like the framework Alex has built. It's a lot more complete than OpenNIC, which I briefly went over. I am not familiar with DPDK, but I am just starting to read up on it, because Alex mentions in his paper that it is the way to go to get better performance.

AdamI75 commented on July 25, 2024

@vaniaprkl thanks for the screen shot above. This is useful.

alexforencich commented on July 25, 2024

Unfortunately, I think the DPDK support is going to remain rather rough until we either figure out how to get the DPDK driver into the main Corundum repo so it can be kept up to date alongside the HDL, or the overall hardware design gets stable enough to consider upstreaming drivers. We discussed this a bit at the meeting today; it's not clear exactly what the best path forward is going to be.

AdamI75 commented on July 25, 2024

@alexforencich I agree that it should be linked to the Corundum repo to keep it up to date, maybe as a submodule pinned to an appropriate githash. It would also be useful if the kernel driver and the DPDK PMD didn't need to be mutually exclusive, as with the Mellanox drivers. If we decide to use the Corundum framework, I would like to get the MLE DPDK driver working with the latest Corundum version and then possibly explore having both the kernel driver and the DPDK PMD working together, so the device can look like a NIC to the OS while using both drivers; that is not possible at this point. I will definitely keep you posted. We should decide by the end of the week.

liyangyang-linux commented on July 25, 2024

@vaniaprkl Hi, have you ever encountered a situation where there is no data in io mode, but TX is normal in txonly mode?
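
For comparison, the two modes can be toggled at the testpmd prompt; io forwards whatever is received between ports, while txonly generates packets locally and needs no RX, so txonly working while io sees nothing usually points at the RX path or at nothing actually arriving on the wire. Roughly:

```
testpmd> stop
testpmd> set fwd io          # forward received packets between ports
testpmd> start
testpmd> show port stats all

testpmd> stop
testpmd> set fwd txonly      # generate and transmit packets, no RX involved
testpmd> start
testpmd> show port stats all
```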
