
Comments (6)

PepperJo commented on July 26, 2024

I assume you want multiple clients accessing the same NVMe devices remotely, hence the need for a cache on the target side to provide some form of atomicity?

  1. Do you need to be able to mount the device such that it appears as /dev/nvmeXX or similar? This is not possible with my code (and never will be). That means every application that wants to use jNVMf needs to be modified.
    Regarding unaligned accesses: the NVMf specification does not allow them; however, the NVMe specification does (although to my knowledge there are no devices which support it), cf. https://nvmexpress.org/wp-content/uploads/NVM_Express_Revision_1.3.pdf, page 21 (BitBucket); for an example refer to page 61.
    You could extend the jNVMf code to support the BitBucket descriptor to allow such accesses (see the sketch after this list). However, that brings us to 2)

  2. This is the harder part. You need to implement BitBucket NVMf target support for the kernel or SPDK plus buffering. I suggest using SPDK. Cached IO is not easy since you need some eviction strategy etc.
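
For reference, here is a minimal sketch of how such an SGL Bit Bucket descriptor could be encoded, assuming the 16-byte SGL descriptor layout from the NVMe 1.3 specification (the SglBitBucket class and encode method are hypothetical illustrations, not part of jNVMf):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

/*
 * Hypothetical encoder for an NVMe SGL Bit Bucket descriptor
 * (NVMe 1.3, "Scatter Gather List"): 16 bytes, where bytes 8-11
 * hold the number of bytes the controller should discard and
 * byte 15 holds the descriptor type (1h = Bit Bucket) in bits 7:4.
 * The address field (bytes 0-7) is reserved and left as zero.
 */
public final class SglBitBucket {
  public static ByteBuffer encode(int bytesToDiscard) {
    ByteBuffer desc = ByteBuffer.allocate(16).order(ByteOrder.LITTLE_ENDIAN);
    desc.putLong(0, 0L);             // bytes 0-7: address, reserved for Bit Bucket
    desc.putInt(8, bytesToDiscard);  // bytes 8-11: length = bytes to discard
    desc.put(15, (byte) 0x10);       // byte 15: type 1h (Bit Bucket), sub type 0h
    return desc;
  }
}
```

A read that should skip, say, the first 512 unaligned bytes could chain such a descriptor in front of a regular Data Block descriptor; that is the extension point mentioned in 1).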


gwnet commented on July 26, 2024

Buddy, here is the error I get when I follow your idea:

wayne@ubuntu:~/jNVMf$ java -cp target/jnvmf-1.5-jar-with-dependencies.jar:target/jnvmf-1.5-tests.jar -Djnvmf.legacy=true com.ibm.jnvmf.benchmark.NvmfClientBenchmark -a 192.168.147.130 -p 4420 -g 4096 -i 3 -m RANDOM -n 10 -nqn nqn.2014-08.org.nvmexpress.discovery -qd 1 -rw read -s 4096 -qs 64 -H -I
Exception in thread "main" java.lang.IllegalArgumentException: Invalid NQN
at com.ibm.jnvmf.NvmeQualifiedName.validate(NvmeQualifiedName.java:47)
at com.ibm.jnvmf.NvmeQualifiedName.<init>(NvmeQualifiedName.java:36)
at com.ibm.jnvmf.benchmark.NvmfClientBenchmark.<init>(NvmfClientBenchmark.java:177)
at com.ibm.jnvmf.benchmark.NvmfClientBenchmark.main(NvmfClientBenchmark.java:551)
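
For context, the NVMe 1.3 specification defines an NVMe Qualified Name as a string of at most 223 bytes that starts with "nqn.", a year-month date, and a reverse domain name, optionally followed by a colon and a user-defined suffix. A rough sketch of such a shape check, assuming a simplified pattern (this is an illustration, not jNVMf's actual logic in NvmeQualifiedName.validate):

```java
import java.nio.charset.StandardCharsets;
import java.util.regex.Pattern;

/*
 * Simplified NQN shape check based on the NVMe 1.3 spec:
 * "nqn." + yyyy-mm + "." + reverse domain, optionally ":" + suffix,
 * at most 223 bytes. This approximates, and is not, jNVMf's validator.
 */
public final class NqnCheck {
  private static final Pattern NQN =
      Pattern.compile("nqn\\.\\d{4}-\\d{2}\\.[a-zA-Z0-9.\\-]+(:.+)?");

  public static boolean looksValid(String nqn) {
    return nqn.getBytes(StandardCharsets.UTF_8).length <= 223
        && NQN.matcher(nqn).matches();
  }

  public static void main(String[] args) {
    // The well-known discovery service NQN used in the command above:
    System.out.println(looksValid("nqn.2014-08.org.nvmexpress.discovery"));
  }
}
```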


gwnet commented on July 26, 2024

Hello buddy, a new question comes up here.
I feel that RDMA inline and incapsule are the same thing: both put the data in a buffer and send it out, while an SGL needs a separate memory transaction. Let me know your insight.
What does the SPDK client support, and what does your jNVMf support?


PepperJo commented on July 26, 2024

Inline and incapsule are two completely separate things: "inline" is an RDMA concept independent of NVMf, while "incapsule" data is part of the NVMf protocol.
When you send data "inline" over RDMA, the data is placed directly in the work request on the send queue (of the RDMA connection). You need to do your own research to understand the difference between these concepts.
SPDK does not support incapsule or inline data, to my knowledge.
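
To make the distinction concrete at the NVMf level, here is a minimal sketch of a command capsule carrying incapsule data, assuming the offset-based SGL Data Block descriptor that NVMe-oF prescribes for in-capsule data and an ICDOFF of 0 (the InCapsuleWrite helper is a hypothetical illustration, not jNVMf's API):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

/*
 * Hypothetical NVMf command capsule with incapsule data: the 64-byte
 * SQE is followed directly by the payload (ICDOFF assumed 0). The SGL
 * entry in the SQE (bytes 24-39) is a Data Block descriptor with
 * sub type "offset" (identifier byte 0x01) instead of a memory address,
 * so the target needs no RDMA read to fetch the data.
 */
public final class InCapsuleWrite {
  public static ByteBuffer build(byte[] data) {
    ByteBuffer capsule = ByteBuffer.allocate(64 + data.length)
        .order(ByteOrder.LITTLE_ENDIAN);
    // ... opcode, command id, namespace id etc. go in bytes 0-23 ...
    capsule.putLong(24, 0L);          // SGL address: offset into the in-capsule data area
    capsule.putInt(32, data.length);  // SGL length: size of the incapsule payload
    capsule.put(39, (byte) 0x01);     // type 0h (Data Block), sub type 1h (offset)
    capsule.position(64);
    capsule.put(data);                // the payload itself, inside the capsule
    capsule.rewind();
    return capsule;
  }
}
```

Whether this capsule is then posted "inline" in the RDMA send work request or sent out of a registered buffer is an independent, transport-level choice.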


gwnet commented on July 26, 2024

Thank you so much, man! I am new to RDMA, but I can basically understand you: inline is an RDMA send flag, and incapsule is defined by the NVMeOF spec.
Now I have a much more urgent question that may decide whether we use your project.
Could you please let me know if this project supports the host side issuing non-aligned IO requests, whose length is also not aligned with the sector size, to the NVMeOF kernel target?
Could you please also send me the test steps, so we can verify this together?
Even if your project does not currently support this, I feel that by its natural design it could be done in your project. The kernel host does not support it at all, and for SPDK it would mean a lot of work to duplicate the kernel block layer's buffered IO handling.
For details, you can refer to my request to SUSE:

We actually need a way that achieves the following two points:

  1. We need the NVMeOF host block driver to bypass any cache and to support passing unaligned IO to the target.
  2. We need the NVMeOF target side to have a cache and to support buffered IO to the target block driver, where the buffered IO does not require all requests to be aligned.


gwnet commented on July 26, 2024

Yes, as you said, I have multiple clients that will access the same target.

  1. I do not need /dev/nvmeXX on the host side; we hate that too.
    Regarding unaligned IO: I also noticed that, because NVMeOF transfers NVMe commands and the NVMe write command uses an LBA and a sector count, no solution can support unaligned IO directly. The kernel host supports unaligned buffered IO only because it uses the kernel page cache (see the read-modify-write sketch after this list). I agree with you that your project cannot fix the alignment issue by itself. I will research BitBucket and talk with you later.
  2. Yes, we noticed that it is not easy, but changing our upper-layer logic is not easy either. Maybe this is why it cannot be widely used?
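
For context on why the page cache can absorb unaligned IO: it performs a read-modify-write over whole sectors. A minimal sketch of that emulation on top of an LBA-granular device, assuming a hypothetical BlockDevice interface (not jNVMf's actual API):

```java
/*
 * Read-modify-write emulation for an unaligned write against a device
 * that only accepts whole-sector (LBA-granular) IO, similar in spirit
 * to what the kernel page cache does for buffered IO. BlockDevice is
 * a hypothetical interface, not part of jNVMf.
 */
public final class UnalignedWriter {
  public interface BlockDevice {
    int blockSize();
    void readBlocks(long lba, byte[] dst);   // dst.length must be a multiple of blockSize()
    void writeBlocks(long lba, byte[] src);  // src.length must be a multiple of blockSize()
  }

  public static void write(BlockDevice dev, long byteOffset, byte[] data) {
    int bs = dev.blockSize();
    long firstLba = byteOffset / bs;                       // first sector touched
    long lastLba = (byteOffset + data.length - 1) / bs;    // last sector touched
    byte[] span = new byte[(int) ((lastLba - firstLba + 1) * bs)];
    dev.readBlocks(firstLba, span);                        // read the covered sectors
    int within = (int) (byteOffset - firstLba * bs);
    System.arraycopy(data, 0, span, within, data.length);  // modify: splice in new bytes
    dev.writeBlocks(firstLba, span);                       // write whole sectors back
  }
}
```

Note that this read-modify-write is not atomic, which is exactly why multiple clients sharing one target would need the target-side cache and some form of coordination discussed above.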


