ferristseng / rust-ipfs-api
IPFS HTTP client in Rust
License: Apache License 2.0
I tried to cat a resource that does not exist. The program runs for a long time with no sign of exiting.
Hiya :)
In the IPFS org we’ve created a new group for maintainers of client libraries. This will make it easier for us to loop you into issues that may cause problems or necessitate changes in your client library.
If you could let us know the github usernames of all the active maintainers we can add them to the group and note them in the clients table.
Also, we have a great IPFS weekly newsletter. A few days before each edition goes out, a new pull request is created in the repo. If you have any announcements related to your client library, feel free to log a new issue in the repo or just comment on an upcoming newsletter pull request.
This is a real problem when trying to do something with the API of this crate.
Can we work towards a Send interface? What needs to be done?
Opening this issue for tracking and to help me remember caveats.
seanmonstar/reqwest#1141
seanmonstar/reqwest#1151
seanmonstar/reqwest#1020
seanmonstar/reqwest#1013
Will have to wait for crate split #43, async multipart #58, and reqwest support #69.
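For context, a sketch of what a Send interface buys (the helper is mine, not part of the crate): multithreaded executors impose a Send bound on anything they spawn, so a non-Send client infects every future it appears in.

```rust
// Hypothetical helper: compiles only if the value is Send.
fn require_send<T: Send>(t: T) -> T {
    t
}

fn main() {
    // A Send value can move into another thread; tokio::spawn puts the
    // same bound on futures, which is why a Send client interface matters.
    let msg = require_send(String::from("hello"));
    let handle = std::thread::spawn(move || msg.len());
    assert_eq!(handle.join().unwrap(), 5);
}
```

If the client held any non-Send type internally (an Rc, for instance), wrapping a call to it in require_send would fail to compile.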
As a rust dev new to IPFS, it's not clear to me from the IpfsClient::dag_put docs how to publish linked nodes.
Improve the doc example to show putting two nodes, where the parent node links to a child node with a CID.
This might live in ipfs-api-examples/examples/dag.rs, or directly in the dag_put doc comment if it's concise enough.
(This came from #76 but I thought it would help to have a separate specific ticket.)
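A sketch of what such an example could show (the helper and node fields are mine, purely illustrative): in DAG-JSON, a link to another node is an object whose single key is "/", holding the child's CID, so a parent can be built by embedding that form and handing the result to dag_put.

```rust
// Hypothetical helper: builds a DAG-JSON parent node that links to
// `child_cid` using the {"/": "<cid>"} link form.
fn parent_linking_to(child_cid: &str) -> String {
    format!(r#"{{"name":"parent","child":{{"/":"{}"}}}}"#, child_cid)
}

fn main() {
    let node = parent_linking_to("QmExampleChildCid");
    // This serialized node is what would be passed to dag_put.
    assert!(node.contains(r#""/":"QmExampleChildCid""#));
    println!("{}", node);
}
```

The flow the doc example would demonstrate: dag_put the child, take the CID from the response, embed it in the parent as above, then dag_put the parent.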
Hi, maybe I am misunderstanding something, so I am not sure whether there's an obvious reason why not... but why doesn't this crate use the cid crate for content ids, instead of using String in its API?
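For illustration (the validator below is my own rough sketch, not the cid crate's API, which does full multibase/multihash parsing): a dedicated CID type could reject malformed input that a plain String silently accepts. CIDv0 strings, for instance, are 46-character base58btc values starting with "Qm", and base58btc excludes the characters 0, O, I, and l.

```rust
// Rough CIDv0 shape check (illustrative only).
fn looks_like_cidv0(s: &str) -> bool {
    s.len() == 46
        && s.starts_with("Qm")
        && s.chars()
            .all(|c| c.is_ascii_alphanumeric() && !"0OIl".contains(c))
}

fn main() {
    assert!(looks_like_cidv0("QmWsJn7mNcqZULbM2arPjywKX8evwNFAznKPLzg1oFZyfg"));
    assert!(!looks_like_cidv0("not-a-cid"));
}
```

With String in the API, errors like this only surface as a daemon-side response; a typed CID would catch them at the call site.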
The title actually says it all, but:
Various getters return asynchronous streams (async_ftp::FtpStream::retr in my case, but so does e.g. cat; it seems unfair to return something the library itself doesn't accept), and those cannot be easily passed to functions like add. Buffering in memory or in temporary files may be expensive or even infeasible, depending on the size.
So it would be nice if some form of async stream could be passed to add and friends.
(This requires changes to rust-multipart-rfc7578, I think. I hope not to hyper, too?)
Hi,
I want to implement File Seeks with client.cat. For example, I'd like to request a range of bytes from an endpoint rather than the whole file.
I know this is possible with js-ipfs, and probably could be implemented by incorporating a 'range' header in the client's request to an endpoint.
Could anyone either offer me a way to do this with the repository as is or lay out steps for implementing this? I am new to Rust and not sure where to start looking.
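As a starting point (the URL builder below is my own sketch, not crate API): the HTTP API's /api/v0/cat endpoint documents offset and length arguments, so a byte range can be requested even before the crate exposes it as a method.

```rust
// Builds a cat request URL with the documented `offset`/`length` query
// arguments; the request itself must still be sent as an HTTP POST.
fn cat_range_url(api_base: &str, path: &str, offset: u64, length: u64) -> String {
    format!(
        "{}/api/v0/cat?arg={}&offset={}&length={}",
        api_base, path, offset, length
    )
}

fn main() {
    let url = cat_range_url("http://127.0.0.1:5001", "/ipfs/QmExample", 1024, 4096);
    assert!(url.ends_with("offset=1024&length=4096"));
    println!("{}", url);
}
```

Implementing it properly in the crate would mean adding optional offset/length fields to the cat request type rather than a Range header, since the API takes them as query arguments.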
It would be nice if this were ported to a current version of futures...
Hi! We are considering renaming the IPFS Client libraries, please read more at ipfs/ipfs#374 and comment if you are onboard.
Any chance of implementing the standard error traits for ipfs_api?
pub async fn heartbeat(&mut self) -> Result<(), Box<dyn Error>> {
let client = IpfsClient::default();
let nodeid = client.id(None).await?;
println!("Connected to {}", nodeid.id);
Ok(())
}
error[E0277]: the trait bound `ipfs_api::response::Error: std::error::Error` is not satisfied
--> src/ipfsbackend/mod.rs:33:43
|
33 | let nodeid = client.id(None).await?;
| ^ the trait `std::error::Error` is not implemented for `ipfs_api::response::Error`
|
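What "implementing the standard error traits" would look like, shown on a toy stand-in type (ApiError is hypothetical, not the crate's actual enum):

```rust
use std::error::Error;
use std::fmt;

// Toy stand-in for ipfs_api::response::Error.
#[derive(Debug)]
enum ApiError {
    Parse(String),
}

impl fmt::Display for ApiError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ApiError::Parse(msg) => write!(f, "parse error: {}", msg),
        }
    }
}

// Debug + Display are all that's needed; the trait methods have defaults.
impl Error for ApiError {}

// With the trait implemented, `?` auto-boxes into Box<dyn Error>,
// which is exactly what the heartbeat() example above relies on.
fn demo() -> Result<(), Box<dyn Error>> {
    let r: Result<(), ApiError> = Err(ApiError::Parse("missing field `Links`".into()));
    r?;
    Ok(())
}

fn main() {
    assert!(demo().is_err());
}
```

Once response::Error implements std::error::Error like this, the E0277 above disappears.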
This is a notice I'm filing in the repo of every HTTP client I can find.
Feel free to close it if this project already works fine with go-ipfs 0.5.
go-ipfs 0.5 will block GET commands on the API port (ipfs/kubo#7097), requiring every command (RPC) to be sent as an HTTP POST request.
See API reference docs: https://docs.ipfs.io/reference/api/http/
This is potentially a breaking change, so double-check whether this project uses POST for every RPC call.
Download links for v0.5-rc* are available at ipfs/kubo#7109
You can also test using an ephemeral Docker container:
$ docker run --rm -it --net=host ipfs/go-ipfs:v0.5.0-rc1
The response::ObjectStatResponse uses isize for all its number fields. This is bad because it will behave differently (and probably incorrectly) on 32-bit platforms (32-bit ARM, for instance).
I checked with the IPFS people and they say most sizes are stored as a u64, such as defined here: https://github.com/ipfs/go-ipfs/blob/master/merkledag/pb/merkledag.proto#L28 , "but assume it can grow to a 128 or so anytime" (🤦).
Say the word and I'll make a PR to fix all these, and (try to) figure out the appropriate sizes where necessary.
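A quick demonstration of the hazard (the size value is illustrative): on 32-bit targets isize is 32 bits wide, so sizes above 4 GiB cannot be represented, while u64 or i64 fields hold them on every platform.

```rust
use std::convert::TryFrom;

fn main() {
    // A plausible repo/object size that go-ipfs reports as a u64.
    let size: u64 = 5_000_000_000;

    // On a 32-bit platform isize is i32; this conversion mirrors what
    // deserializing into isize would have to do, and it fails.
    assert!(i32::try_from(size).is_err());

    // 64-bit fields deserialize losslessly regardless of target.
    assert!(i64::try_from(size).is_ok());
}
```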
rustc issue here: rust-lang/rust#54471
Error message:
Compiling ipfs-api v0.5.0-alpha1 (/home/icefox/tmp/rust-ipfs-api/ipfs-api)
error[E0432]: unresolved imports `response::BitswapStatResponse`, `response::RepoStatResponse`
 --> ipfs-api/src/response/stats.rs:9:16
  |
9 | use response::{BitswapStatResponse, RepoStatResponse};
  |                ^^^^^^^^^^^^^^^^^^^  ^^^^^^^^^^^^^^^^ no `RepoStatResponse` in `response`. Did you mean to use `ResolveResponse`?
  |                |
  |                no `BitswapStatResponse` in `response`. Did you mean to use `BitswapUnwantResponse`?
error: cannot determine resolution for the attribute macro `serde`
--> ipfs-api/src/response/bitswap.rs:12:3
|
12 | #[serde(rename_all = "PascalCase")]
| ^^^^^
|
= note: import resolution is stuck, try simplifying macro imports
error: cannot determine resolution for the attribute macro `serde`
--> ipfs-api/src/response/bitswap.rs:24:3
|
24 | #[serde(rename_all = "PascalCase")]
| ^^^^^
|
= note: import resolution is stuck, try simplifying macro imports
...
...plus a lot more serde stuff. Full build log available here: https://crater-reports.s3.amazonaws.com/beta-1.30-1/beta-2018-09-19/reg/ipfs-api-0.5.0-alpha2/log.txt
After reading over the API docs, I can't find a way to insert objects (the equivalent of ipfs_api::response::ObjectGetResponse).
Is the equivalent of the command line ipfs object put <data> present in the API?
If not, is this simply a gap, or is there a different way to use the API to achieve this? I have not been able to use IpfsClient::object_new successfully at all; see #77.
If there's a different way, I request updating the API docs to spell that out (maybe linking to it from IpfsClient::object_get).
I've successfully used your examples, but what I'd like to do is the equivalent to
> echo awesome test content > test.txt
> ipfs add --raw-leaves test.txt
zb2rhgCbaGmTcdZVRpZi3Z8CsdtAbFv7PRdRD9s6mKtef6LK9
> ipfs add --hash blake2b-256 test.txt
zCT5htke82ziqES2sZUP6MPR1EcC3DchZjbFuGeQcTfm16m5q5e8
Is that supported? I couldn't find anything in the docs.
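Both flags correspond to documented arguments on the HTTP API's /api/v0/add endpoint (raw-leaves and hash), so as a sketch (the URL builder is mine, not crate API) the request the crate would need to issue looks like:

```rust
// Builds an add request URL with the documented `raw-leaves` and `hash`
// query arguments; the file data goes in the multipart POST body.
fn add_url(api_base: &str, raw_leaves: bool, hash: &str) -> String {
    format!(
        "{}/api/v0/add?raw-leaves={}&hash={}",
        api_base, raw_leaves, hash
    )
}

fn main() {
    let url = add_url("http://127.0.0.1:5001", true, "blake2b-256");
    assert!(url.ends_with("raw-leaves=true&hash=blake2b-256"));
    println!("{}", url);
}
```

Supporting this in the crate would mean exposing the same two options on its add request type.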
Hello,
I tried my best to use the crate as shown in the examples, but I am stuck on an error: type inside async block must be known in this context.
[dev-dependencies]
ipfs-api-backend-hyper = { version = "0.3", features = ["with-hyper-tls"] }
use ipfs_api::{IpfsApi, IpfsClient};
use ipfs_api_backend_hyper as ipfs_api;
use std::io::Cursor;
#[tokio::test]
async fn test() -> anyhow::Result<()> {
let client = IpfsClient::default();
let data = Cursor::new("Hello World!");
match client.add(data).await {
Ok(res) => println!("{}", res.hash),
Err(e) => eprintln!("error adding file: {}", e),
}
Ok(())
}
error[E0698]: type inside `async` block must be known in this context
--> app/tests/ipfs_tests.rs:7:18
|
7 | let client = IpfsClient::default();
| ^^^^^^^^^^ cannot infer type for type parameter `C` declared on the struct `HyperBackend`
|
note: the type is part of the `async` block because of this `await`
--> app/tests/ipfs_tests.rs:10:27
|
10 | match client.add(data).await {
| ^^^^^^
error[E0283]: type annotations needed
--> app/tests/ipfs_tests.rs:7:18
|
7 | let client = IpfsClient::default();
| ^^^^^^^^^^^^^^^^^^^ cannot infer type for struct `IpfsClient<_>`
|
= note: multiple `impl`s satisfying `IpfsClient<_>: Default` found in the `ipfs_api_backend_hyper` crate:
- impl Default for IpfsClient;
- impl Default for IpfsClient<hyper_tls::client::HttpsConnector<hyper::client::connect::http::HttpConnector>>;
Do you have any idea what's going on?
The IPLD data model is a superset of JSON: it should be OK to store binary data. So on the command line, ipfs dag put --input-enc cbor actually works. It's another option that this Rust implementation could perhaps support. Do you think it's doable? It looks like there's a pervasive assumption that the data is json.
The http API is asymmetric though: there's no way to read back cbor AFAICT ( ipfs/kubo#4313 ), so until that is done, perhaps there's no point.
I tried to force a byte array into a string, and ran into the problem that Rust expects every string to be valid UTF-8. If I use serde_json::json!(unsafe { String::from_utf8_unchecked(buf) }), it fails at runtime. So it seems the json API is hopeless for dealing with binary data in DAG nodes.
The reason I want to do that is to directly store a contiguous array of numbers into a byte array, to avoid cbor overhead. An array of numbers in CBOR has a one-byte prefix in front of each number to declare the type. If you already know what the type is, that's a waste of space, and prevents passing the array unconverted to other software (for example to draw a line graph). So I'd rather that the dag node uses cbor to annotate the expected data type, and then the actual array of numbers should just be a binary array. It's fine to construct CBOR that way, but getting it into and out of dag nodes is problematic so far.
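To make the space argument concrete: a CBOR byte string (major type 2) carries a single length-prefixed header for the whole buffer rather than a type prefix per element. A minimal header encoder, my own sketch of the RFC 7049 rules for short lengths:

```rust
// Encodes the header of a CBOR byte string (major type 2) for lengths up
// to 16 bits; the raw bytes then follow the header verbatim.
fn cbor_bytes_header(len: u64) -> Vec<u8> {
    match len {
        0..=23 => vec![0x40 | len as u8],
        24..=255 => vec![0x58, len as u8],
        256..=65_535 => vec![0x59, (len >> 8) as u8, (len & 0xff) as u8],
        _ => unimplemented!("longer lengths use the 4- and 8-byte forms"),
    }
}

fn main() {
    // 1000 f64 samples (8000 bytes) cost a 3-byte header in total,
    // versus a one-byte prefix on every element in a CBOR array.
    assert_eq!(cbor_bytes_header(8_000), vec![0x59, 0x1f, 0x40]);
    assert_eq!(cbor_bytes_header(5), vec![0x45]);
}
```

The blocker described above is not constructing such CBOR, which is easy, but getting it into and out of dag nodes through a JSON-only API.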
Is there a way to upload entire folders?
I had to implement a recursive function to make this work, and I was wondering if the API has a way to do this.
Do you have a license for this code? Is it okay for me to use it in a commercial project?
Hi, I wanted to know whether this library also supports IPFS Cluster.
I'm unable to successfully call IpfsClient::object_new.
With the following code:
let client = IpfsClient::default();
client.object_new(None).await?
I get the following output:
2021-06-02 14:31:50,873 TRACE [mio::poll] registering event source with poller: token=Token(0), interests=READABLE | WRITABLE
2021-06-02 14:31:50,873 TRACE [want] signal: Want
2021-06-02 14:31:50,873 TRACE [want] poll_want: taker wants!
2021-06-02 14:31:50,875 TRACE [mio::poll] deregistering event source from poller
2021-06-02 14:31:50,876 TRACE [want] signal: Closed
Error: IpfsResponse(Parse(Error("missing field `Links`", line: 1, column: 57)))
When using add, AFAIK the data I pass in has to be moved into the method because the argument requires a 'static lifetime.
This makes the following impossible:
let mut child = Command::new("cat")
.arg("test.txt")
.stdout(Stdio::piped())
.spawn()?;
let hash = ipfs.add(child.stdout.unwrap());
child.wait()?; // error: child was moved in the previous line
hash
That's what I want to do; please correct me if I'm wrong, but with a 'static lifetime I think there's no way to do it.
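One workaround that does compile (a sketch; the actual ipfs.add call is stubbed out with a read since it needs a running client): take() the stdout handle out of the Child, so only the pipe, which is itself an owned 'static value, moves into add, while the Child stays behind to be waited on.

```rust
use std::io::Read;
use std::process::{Command, Stdio};

fn capture(cmd: &str, arg: &str) -> std::io::Result<Vec<u8>> {
    let mut child = Command::new(cmd)
        .arg(arg)
        .stdout(Stdio::piped())
        .spawn()?;

    // Move only the pipe out of the Child; `stdout` is an owned handle,
    // so it satisfies a 'static bound and could be given to ipfs.add(stdout).
    let mut stdout = child.stdout.take().expect("stdout was piped");

    let mut buf = Vec::new();
    stdout.read_to_end(&mut buf)?; // stand-in for ipfs.add(stdout)

    child.wait()?; // `child` was never moved, so this compiles
    Ok(buf)
}

fn main() {
    let out = capture("echo", "test").expect("spawn failed");
    assert!(String::from_utf8_lossy(&out).contains("test"));
}
```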
This is something I can do using js-ipfs-http-client:
const dir_obj = [
{
path: "some.json",
content: JSON.stringify(object),
},
];
const add_options = {
wrapWithDirectory: true,
};
const add_result = await ipfs.add(dir_obj, add_options);
I looked into writing the equivalent with build_base_request and request_stream_json like add_path does but couldn't because they are private.
Is there another way to do this already?
If not, I suppose I could take a swing at implementing it.
Would adding an Optional form argument be a good approach?
Something like:
async fn add_with_options<R>(
&self,
data: R,
add: request::Add<'_>,
mut form_option: Option<multipart::Form<'static>>,
) -> Result<response::AddResponse, Self::Error>
where
R: 'static + Read + Send + Sync,
{
if form_option.is_none() {
let mut form_unwrapped = multipart::Form::default();
form_unwrapped.add_reader("path", data);
form_option = Some(form_unwrapped);
}
self.request(add, form_option).await
}
Please.
I ran this code, and the result has a prefix. How can I get the pure content?
#[tokio::test]
async fn ipfs_test() {
let client = IpfsClient::from_str("http://localhost:5002").unwrap();
let data = Cursor::new("Hello World!");
let hash = client.add(data).await.unwrap().hash;
println!("{:?}++++++++++++++++++++++", hash);
let file = client.get(&format!("/ipfs/{}", hash));
match file.map_ok(|chunk| chunk.to_vec()).try_concat().await {
Ok(res) => {
let out = io::stdout();
let mut out = out.lock();
out.write_all(&res).unwrap();
}
Err(e) => eprintln!("{}", e),
}
}
"Qmf1rtki74jvYmGeqaaV51hzeiaa6DyWc98fzDiuPatzyy"++++++++++++++++++++++
Qmf1rtki74jvYmGeqaaV51hzeiaa6DyWc98fzDiuPatzyy0000644000000000000000000000001414112126667017605 0ustar0000000000000000Hello World!
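For what it's worth, that "prefix" looks like a tar header: the get endpoint streams the file back wrapped in a tar archive, and a POSIX tar header is a 512-byte block with the magic "ustar" at byte offset 257 (visible as the 0ustar in the output above). A small check, my own sketch:

```rust
// Returns true if the buffer begins with a POSIX (ustar) tar header,
// which is what the `get` endpoint streams back.
fn looks_like_tar(buf: &[u8]) -> bool {
    buf.len() >= 262 && &buf[257..262] == b"ustar"
}

fn main() {
    let mut fake = vec![0u8; 512];
    fake[257..262].copy_from_slice(b"ustar");
    assert!(looks_like_tar(&fake));
    assert!(!looks_like_tar(b"Hello World!"));
}
```

Using cat instead of get returns the raw file bytes without the archive wrapping.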
When I want to get a piece of text through its hash, I can't print it out.
#[tokio::test]
async fn get_hash_info() {
let client = IpfsClient::from_str("http://127.0.0.1:5001").unwrap();
let hash = "QmWsJn7mNcqZULbM2arPjywKX8evwNFAznKPLzg1oFZyfg";
let res = client
.dag_get(hash)
.map_ok(|chunk| chunk.to_vec())
.try_concat();
println!("{:?}", res);
}
error:
no method named `map_ok` found for struct `Box<(dyn futures_core::stream::Stream<Item = Result<actix_web::web::Bytes, ipfs_api_backend_actix::Error>> + Unpin + 'static)>` in the current scope
items from traits can only be used if the trait is in scope. rustc[E0599](https://doc.rust-lang.org/error-index.html#E0599)
mod.rs(200, 8): the method is available for `Box<(dyn futures_core::stream::Stream<Item = Result<actix_web::web::Bytes, ipfs_api_backend_actix::Error>> + Unpin + 'static)>` here
ipfs_api.rs(38, 5): the following trait is implemented but not in scope; perhaps add a `use` for it: `use futures_util::stream::try_stream::TryStreamExt;`
I also tried to use the object_get method, but I can't find a relevant example.
ipfs name publish bafy... --offline
works fast on the command line, but by default we still wait for the publishing to succeed, so Rust code that calls name_publish is slow. There should be a way to give the offline option to any function, including this one. See ipfs/js-ipfs#2569, and sorry if I'm missing something; I'm still new to Rust.
Hi, I use this library in my code: https://github.com/Lawliet-Chan/ipse-miner/blob/master/src/storage/ipfs.rs#L22
But it always panics when I call the function add(). The panic is as follows:
I just run my ipfs client on my host, and the file is correct (here I print it). I also tried the 'add_file' example, and it works fine.
But when I run my project code, it panics.
Can you help me, please?
The multipart module is complete enough to be split into its own crate (hyper-multipart, or something else).
First, though, I would like to test it more and make sure it is working well before going through the trouble.
Please remove the "-alpha2" version. We know it's in alpha.
Hi,
while contributing to the shiplift crate, I ran into the problem of converting a Stream<Item = Chunk> into an AsyncRead so I could use the Framed / Decoder stuff from tokio-codec. After banging my head on it for a few days, I found your StreamReader implementation, which works great, but I'd like to avoid copying/pasting it...
Given that it's not specific to ipfs, would you consider extracting it to its own crate, or ideally submitting it to the tokio project?
I'm happy to do some of the work if that helps.
Hi there,
As we are working towards a Rust IPFS implementation, I noticed that ipfs_api::response::VersionResponse has the two required fields golang and system, which are not expected by the interface-ipfs-core tests and as such can probably be regarded as "go-ipfs specific".
Making the two fields optional or removing them would make this single endpoint compatible with js-ipfs and rust-ipfs.
I am facing an issue where a type in awc cannot be sent between threads:
`std::rc::Rc<awc::ClientConfig>` cannot be sent between threads safely
= help: within `ipfs_api::IpfsClient`, the trait `std::marker::Send` is not implemented for `std::rc::Rc<awc::ClientConfig>`
I guess this might be fixed when updating awc to 2.0.0... but I am not sure. Maybe the issue is something entirely different here and I'm just not getting the actual problem.
My current goal is to learn how to create and consume linked DAG nodes with serde. An example that spells this out would go a long way, because mapping between the general IPFS HTTP RPC documentation, the IPLD guides, and the ipfs_api crate docs is cumbersome.
Research whether or not multiaddr can be replaced by parity-multiaddr. multiaddr doesn't look like it's maintained anymore.
So we can use the library to connect to web services that have SSL active (like https://ipfs.infura.io/)
Most notably, the repository key is missing; hence it is not that easy to find this repository from crates.io!
Thanks for your awesome work here, I really hope to use this crate any time soon.
The docs for IpfsClient::dag_put aren't clear about which IPLD format multicodec is used.
For comparison, the js-ipfs API docs for ipfs.dag.put present an option for selecting the IPLD format.
Imagine a developer needs both a JS client and a Rust ipfs_api client to interoperate. To do this, at minimum ipfs_api needs to document which format is used, and then the more flexible JS API would need to be modified to match. OTOH, if IpfsClient::dag_put allowed selection of the format, then either implementation could change to interoperate with the other.
Hey there!
I'd love to use this crate in a project of mine, but given that async/await is landing in stable on Nov 6th (just a few weeks from now), I can't bring myself to pull in a dependency that can't make use of the new syntax 😞
Are there any plans to migrate rust-ipfs-api over to the new async/await ecosystem? i.e: futures 0.3, tokio 0.2, and hyper 0.13? It looks like Actix is also working on async/await support (see actix/actix-net#45)
I've taken a quick stab at using futures::compat with rust-ipfs-api, but unfortunately, hyper relies on having a tokio-0.1 runtime, and trying to run the compat'd futures on tokio 0.3 results in a runtime error.
https://github.com/ipfs/go-ipfs/releases/tag/v0.11.0
Changes to Pubsub need to be made for compat with go-ipfs v0.11.0.
I plan to do it in Jan.
Hey,
any plans to implement the IPFS Key API (basically what one can do with ipfs key ...) and IPNS name publishing/resolving?
Or is that already implemented and I am missing something?
Just a thought that I had while fixing a bug I'd made...
We have the request::ApiRequest type, which specifies some data for the request type. It should be possible to make an associated type request::ApiRequest::Response to link the response type to a particular request type. Then the type of client::IpfsClient::request<Req, Res>() -> AsyncResponse<Res> could just become client::IpfsClient::request<Req>() -> AsyncResponse<Req::Response>.
That wouldn't have prevented the bug that I'd made; I would have just put the typo in a different spot. So I'm not sure if this will actually help rather than just being bikeshedding. But maybe it could make life a little simpler?
...actually, now that I think about it, we might be able to make the ApiRequest::path() method an associated const now. Though that might be more restrictive than we want? I dunno.
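A sketch of both ideas together (all names here are illustrative, not the crate's actual items): an associated Response type ties each request to its response, and the path becomes an associated const.

```rust
// Each request type names its own response type, so the client's request
// method no longer needs a second type parameter.
trait ApiRequest {
    type Response;
    const PATH: &'static str; // the associated-const variant of path()
}

struct IdRequest;

struct IdResponse {
    id: String,
}

impl ApiRequest for IdRequest {
    type Response = IdResponse;
    const PATH: &'static str = "/api/v0/id";
}

// A request<Req: ApiRequest> method can now name Req::Response directly;
// `respond` stands in for the client plumbing.
fn respond<Req: ApiRequest, F: FnOnce() -> Req::Response>(_req: Req, make: F) -> Req::Response {
    make()
}

fn main() {
    let res = respond(IdRequest, || IdResponse { id: String::from("QmPeer") });
    assert_eq!(res.id, "QmPeer");
    assert_eq!(IdRequest::PATH, "/api/v0/id");
}
```

The typo-resistance point stands: the response type is chosen once, in the impl, instead of at every call site.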
Hello!
I tried to send and receive messages locally, but when sending "hello world" I received "aGVsbG8gd29ybGQ=".
If I use the CLI (ipfs pubsub sub), it works as expected, which makes me think there's a bug somewhere.
I'll look around and see if I can fix it.
Otherwise great API!
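What's happening here: the HTTP API base64-encodes pubsub message data, and "aGVsbG8gd29ybGQ=" is exactly "hello world" in base64. A minimal stdlib-only decoder to confirm (a demo sketch; a real client would use a base64 crate and proper error handling):

```rust
// Decodes standard base64 (panics on invalid input; demo only).
fn b64_decode(s: &str) -> Vec<u8> {
    const TBL: &[u8; 64] =
        b"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    let mut out = Vec::new();
    let (mut acc, mut bits) = (0u32, 0u32);
    for &c in s.as_bytes() {
        if c == b'=' {
            break; // padding carries no payload bits
        }
        let v = TBL.iter().position(|&t| t == c).expect("invalid base64") as u32;
        acc = (acc << 6) | v;
        bits += 6;
        if bits >= 8 {
            bits -= 8;
            out.push((acc >> bits) as u8);
        }
    }
    out
}

fn main() {
    assert_eq!(b64_decode("aGVsbG8gd29ybGQ="), b"hello world".to_vec());
}
```

So the fix on the client side is to base64-decode the data field of each received pubsub message, which is what the CLI does for you.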
error[E0432]: unresolved import `actix_http::error::ResponseError`
--> /home/user/.cargo/registry/src/github.com-1ecc6299db9ec823/actix-multipart-rfc7578-0.5.0/src/error.rs:9:5
|
9 | use actix_http::error::ResponseError;
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ no `ResponseError` in `error`
error: aborting due to previous error
For more information about this error, try `rustc --explain E0432`.
error: could not compile `actix-multipart-rfc7578`
To learn more, run the command again with --verbose.
While testing reqwest, I found a crash when the pubsub example is run with actix.
note: ipfs must be run with the --enable-pubsub-experiment flag
connecting to localhost:5001...
thread 'main' panicked at 'there is no reactor running, must be called from the context of a Tokio 1.x runtime', /home/user/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-1.2.0/src/runtime/context.rs:37:26
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
Panic in Arbiter thread.
Currently it is impossible to pass the client as web::Data in actix for reusability, and it is also not possible to use it in any async function.
The problem is that IpfsClient is using std::rc::Rc; I'm wondering why not Arc.