
async-book's Introduction

async-book

Asynchronous Programming in Rust

Requirements

The async book is built with mdbook and checked with mdbook-linkcheck; you can install both using cargo.

cargo install mdbook
cargo install mdbook-linkcheck

Building

To build the finished book, run mdbook build; the output is generated under the book/ directory.

mdbook build

Development

While writing, it can be handy to see your changes; mdbook serve will launch a local web server to serve the book.

mdbook serve
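If the default port is already taken, or you want a browser opened automatically, mdbook serve accepts flags for both (a hedged example; check mdbook serve --help for the exact options in your mdbook version):

mdbook serve --port 3001 --open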

async-book's People

Contributors

aturon, betamos, cfsamson, cramertj, eholk, ehuss, funkill, humancalico, kamuelafranco, kestrer, kraai, larsch, lbernick, manishearth, mikemorris, nellshamrell, nereuxofficial, nikomatsakis, nrc, olehmisar, petertrotman, pietroalbini, purewhitewu, rich-murphey, s373r, samueltardieu, sectore, taiki-e, tmandry, volker-weissmann


async-book's Issues

2.1 "The Future Trait" introduces a confusing (to me) example

I was somewhat familiar with async/await, enough to get into trouble, and decided to finally read the async book from the start to really understand it, or at least gain a better understanding.

I understood the Rust-specific parts about the purpose of poll(), what the Output type was, and so on. Then I got to 2.1, which introduces SimpleFuture as a boiled-down version of the real Future trait for discussion purposes. It confused me because it looks like a copy of the real Future trait; why wasn't the real Future (or at least excerpts from it) used instead?

I also write C++ daily and I'm familiar with its "named requirements" (https://en.cppreference.com/w/cpp/named_req), which are informal collections of methods that must be implemented to satisfy a specific interface; such a collection can be considered something like a trait. I got confused because I didn't know whether I could use any trait I wanted to define, as long as it has an Output type and a poll function.

Could this section be modified to add caveats that clarify the role of SimpleFuture and make clear that readers should use the real Future, or simply show readers snippets from the real Future trait instead?

Confusing explanation of Pin

Pinning makes it possible to guarantee that an object won't ever be moved.

This is not true, unless the object is of a type that does not implement Unpin.

The sentence as written makes it unclear why code like the following works.

~ ❯❯❯ cat test.rs
use std::pin::Pin;

struct S(u64);

fn main() {
    let boxed = Box::new(S(0));
    println!("boxed: {:p}", boxed);

    let pinned = Pin::from(boxed);
    println!("pinned: {:p}", pinned);

    let moved_out: S = *Pin::into_inner(pinned);
    println!("moved out: {:p}", &moved_out);
}
~ ❯❯❯ rustc test.rs && ./test
boxed: 0x561fc1dc5a40
pinned: 0x561fc1dc5a40
moved out: 0x7ffe9cd05c10
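For contrast, here is a minimal sketch (using std::marker::PhantomPinned; the type NotUnpin is made up for illustration) of a type that opts out of Unpin. For such a type, Pin::into_inner is not available, so the pinned value cannot be safely moved out:

use std::marker::PhantomPinned;
use std::pin::Pin;

struct NotUnpin {
    _pin: PhantomPinned, // opting out of `Unpin`
}

fn main() {
    let pinned: Pin<Box<NotUnpin>> = Box::pin(NotUnpin { _pin: PhantomPinned });
    // This would not compile, because `Pin::into_inner` requires `NotUnpin: Unpin`:
    // let moved_out = *Pin::into_inner(pinned);
    println!("pinned at {:p}", pinned);
}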

Pinning chapter 'move' should be disambiguated

In Rust, "move" typically means transfer of ownership, not moving to a different memory address. This chapter should disambiguate that meaning with phrases like "move to a different memory address" or "move in memory" (see the sketch below).
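For illustration (not from the book), a small sketch of the two senses of "move": the first is an ownership move, the second physically relocates values between addresses, which is the sense pinning cares about:

fn main() {
    // Ownership move: `s` can no longer be used afterwards; this is the usual
    // meaning of "move" in Rust.
    let s = String::from("hello");
    let t = s;
    // println!("{}", s); // would not compile: value moved out of `s`

    // Move in memory: `std::mem::swap` physically exchanges the two values
    // between their stack locations, which is what pinning is concerned with.
    let mut a = String::from("first");
    let mut b = String::from("second");
    std::mem::swap(&mut a, &mut b);
    println!("{t} {a} {b}");
}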

Pinning chapter would benefit from diagram

I'm trying to understand concretely how pinning works, especially in the context of async/await!, and I think that having an example with a diagram that shows exactly where everything is in memory would be extremely helpful.

I'd consider offering such a diagram, but I don't believe I grok the subject matter enough to produce one.

Incorrect Pin example

The following example is incorrect:

fn main() {
    let mut test1 = Test::new("test1");
    test1.init();
    let mut test2 = Test::new("test2");
    test2.init();

    println!("a: {}, b: {}", test1.a(), test1.b());
    std::mem::swap(&mut test1, &mut test2);
    println!("a: {}, b: {}", test2.a(), test2.b());
}

Output specified by the book:

a: test2, b: test1
a: test1, b: test2

Actual output:

a: test1, b: test1
a: test1, b: test2

Running long task inside async functions

Hello Rust Developers,

I have searched the book several times, but I couldn't find an answer to my question, which is: can I run a long task inside an async function?

For example:

async fn long_task() {
    let data = std::fs::read(...);

    // Hash data
    // ...

    // Do some other tasks with data
    // ...
}

I know about the tokio-fs crate, but my question is not about that.
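For context, one common pattern (a sketch assuming the tokio runtime with the relevant features enabled, not an answer from the book; the file path is hypothetical) is to hand blocking work to a dedicated blocking thread pool so it doesn't stall the async executor:

use tokio::task;

async fn long_task() -> std::io::Result<Vec<u8>> {
    // `spawn_blocking` runs the closure on tokio's blocking thread pool,
    // so the async worker threads stay free. "some_file.bin" is a made-up path.
    let data = task::spawn_blocking(|| std::fs::read("some_file.bin"))
        .await
        .expect("blocking task panicked")?;

    // Hash `data` / do other CPU-heavy work, ideally also via `spawn_blocking`.
    Ok(data)
}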

Thank you,

Executors and System IO: self reference should be mutable

The Executors and System IO chapter includes a (pseudo-)code snippet that outlines how an IO blocking primitive is generally structured. This is minor, but shouldn't the self reference this function takes be mutable?

    /// Express an interest in a particular IO event.
    fn add_io_event_interest(
        &self,

        /// The object on which the event will occur
        io_object: &IoObject,

        /// A set of signals that may appear on the `io_object` for
        /// which an event should be triggered, paired with
        /// an ID to give to events that result from this interest.
        event: Event,
    ) { /* ... */ }

Later on in the same snippet, we specifically make the variable holding an IoBlocker mutable, presumably because it needs to be mutable to call add_io_event_interest on it.

let mut io_blocker = IoBlocker::new();
io_blocker.add_io_event_interest(
    &socket_1,
    Event { id: 1, signals: READABLE },
);
io_blocker.add_io_event_interest(
    &socket_2,
    Event { id: 2, signals: READABLE | WRITABLE },
);
let event = io_blocker.block();

more select! documentation

Hey,

Thanks for this amazing book and the time that has been put into it!
Please provide more documentation with code samples for select!

Thanks!
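For reference, a minimal hedged sketch of select! (assuming the futures 0.3 crate), in the spirit of the book's examples:

use futures::{future, select, FutureExt};

async fn race() {
    // Futures used in `select!` must be fused (and, used this way, also `Unpin`).
    let mut a = future::ready(1).fuse();
    let mut b = future::pending::<i32>().fuse();

    select! {
        x = a => println!("a finished first with {x}"),
        x = b => println!("b finished first with {x}"),
    }
}

fn main() {
    futures::executor::block_on(race());
}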

"surprisingly hard things"

In the async foundations meeting today we discussed the idea of adding a "surprisingly hard things" chapter that lists out tricky scenarios and the workarounds to help get people unstuck. The hope would be that most of these things will eventually be fixed in some way. =)

This issue is meant to collect some possible topics:

Timeout question

Hi! First of all, thank you for a great resource!

Then, I made a timeout future, extending on the TimerFuture and Executor from the current sections of the book.

// Imports assumed for this snippet (futures 0.3); `TimerFuture` and the executor
// come from the book's "Build an Executor" chapter.
use std::future::Future;
use std::time::Duration;
use futures::future::{AbortHandle, Abortable, FutureExt};
use futures::{pin_mut, select};

pub enum Result<T> { Completed(T), TimedOut }

pub async fn timeout<T>(future: impl Future<Output=T> + Unpin, timeout: Duration) -> Result<T> {
    let (abort_handle, abort_registration) = AbortHandle::new_pair();
    let abortable_future = Abortable::new(future, abort_registration);
    let mut fused_future = abortable_future.fuse();

    let fused_timer = TimerFuture::new(timeout).fuse();
    pin_mut!(fused_timer);

    // could cause a race if timer and future end at the same time?
    let result: Result<T> = select! {
        res_1 = fused_future => Result::Completed(res_1.unwrap()),
        _ = fused_timer => Result::TimedOut,
    };

    if let Result::TimedOut = result {
        abort_handle.abort();
    }

    result
}

I apologize for any glaring redundancies, I'm new to Rust.

My question is this: before I used an async function for the timeout implementation, I tried to write a TimeoutFuture implementation from scratch, similar to TimerFuture, and ran into the issue of not being able to pass through / spawn the passed-in timed future from inside the TimeoutFuture implementation.
Is there any canonical way to get around this, i.e. for a Future implementation to spawn another Future?

Also, my AbortHandle doesn't seem to work; I assume this is because it requires some support from the Executor::run function?

Hope to see some new chapters!

PS: The HTTP server has a nice, cleaner version with hyper 0.13.0, without the need for compat.

use std::convert::Infallible;
use std::net::SocketAddr;
use hyper::{Body, Request, Response, Server, 
            service::{make_service_fn, service_fn}};

async fn req_handler(_req: Request<Body>) -> Result<Response<Body>, Infallible> {
    Ok(Response::new(Body::from("hello, world!.")))
}

async fn run_server(addr: SocketAddr) {
    println!("listening on http://{}", addr);

    let service = make_service_fn(|_conn| async {
        Ok::<_, Infallible>(service_fn(req_handler))
    });

    let server = Server::bind(&addr).serve(service);

    if let Err(e) = server.await {
        eprintln!("server error: {}", e);
    }
}

// as per the hyper 0.13.0 release notes recommendation
#[tokio::main]
async fn main() {
    let addr = SocketAddr::from(([127, 0, 0, 1], 7878));

    run_server(addr).await;
}

Maybe it's worth mentioning the compiler-generated Future that wraps the transformed async block in the state machine code?

When discussing Rust Futures, pretty much all the information I found online talks about Futures that are written by hand, e.g. TimerFuture, SocketReadFuture, etc. These are what I call "actual blocking Futures", in that the async code flow actually ends up getting blocked at these points.

However, there is another category of Futures, generated by the compiler, that wraps the async fn/block code in the so-called "state machine code"; the poll method of these compiler-generated Futures should be able to "resume" execution of the async code wrapped inside, whenever appropriate.

I had a hard time realizing that it is the compiler-generated state machine Future that actually "executes" (or resumes) the async code, which is not clear at all from reading the Async Book.

Furthermore, I think it's worth mentioning that in the chapter "Build an Executor", inside the run method, the top-level task context is passed to the top-level Future, which internally passes it down to all the nested Futures it encounters. That is why a user-defined Future (e.g. TimerFuture) can register its associated wake method, which re-queues the original top-level task Future onto the executor when it is ready. The top-level task Future's poll method is then invoked again, and internally the blocked async code at some nested level wakes up and continues. Without this information, it was hard for me to reason about how exactly the wake method wakes up the top-level task Future by re-queueing it.

It would also be (extremely) useful to show some example compiler-generated state machine pseudo-code to help reveal the hidden part of the iceberg - especially what the compiler-generated poll method looks like. I even have a guessed version of my own (very, very pseudo-code; it might be totally wrong - so show us the correct code! :D)

struct CompilerGeneratedStateMachineFuture<T> {
    started: bool,
    code: StateMachineCode,
    // The result of the current nested inner future, if any (i.e. if the current async fn
    // is blocked on some inner Future.await). If there are several .awaits inside the same
    // async block, this value is updated accordingly.
    innerFutureResult: Option<T>,
}

struct Context {
    // Should probably be a boxed enum, but for illustration purposes let's put it this way.
    chain: Stack<dyn Future>,
    // The logic of wake is supplied by a specific executor, so the top-level task can be
    // re-queued to the executor (though we don't use it in this pseudo-code).
    wake: Waker,
}

// code.resume() would internally call the inner futures' poll for the first time, in a
// nested fashion, until it hits the first blocker at some level; each future is responsible
// for adding itself to the chain.
// code.resume() returns Poll<T> - the same return type as poll() - since code.resume()
// returns Poll::Pending if an innermost future is not ready.
// Signature: code.resume(Option<T>)

impl<T> Future for CompilerGeneratedStateMachineFuture<T> {
    fn poll(mut self: Self, cx: &mut Context) -> Poll<T> {
        if !self.started { // 1.
            self.started = true;
            // The future hasn't been polled yet: run the state machine code from the start
            // of the function. Since the compiler transforms all the .await sites into
            // Future::poll() calls, poll forms the nested async future calling chain.
            // If at some nested level one future returns Poll::Pending, we return from here;
            // if it's Poll::Ready, we return from there as well - so code.resume() itself
            // returns a Poll<> result. Every level adds itself (its future) to the calling
            // chain, and when the async function or future completes it removes itself from
            // the chain. Who adds the foundational future at the end of the chain? It could
            // be the compiler-generated code, if that future is not ready.
            cx.chain.push(self);
            // Run the state machine code of the current future from the start - no initial
            // intermediate value.
            let r = self.code.resume(None);
            if r != Poll::Pending { // a Pending here would be caused by a foundational future not being ready yet
                cx.chain.pop();
            }
            return r;
        } else if self.innerFutureResult != None { // 2.
            let r = self.innerFutureResult; // moved, so innerFutureResult becomes None
            return self.code.resume(r); // continue the state machine of the current block
        } else { // 3.
            // The chain is not empty and there is no result, meaning we were blocked on some
            // foundational future at the end of the chain - the end of the nested future
            // chain must be the foundational future that is Pending.
            let mut r: Poll<T> = Poll::Ready(());
            loop {
                if cx.chain.empty() {
                    return r;
                }
                let blocked = cx.chain.last(); // don't pop it yet
                // Convert the poll result into an option; r would normally have been set in
                // the previous iteration of the loop.
                blocked.innerFutureResult = Some(r);
                // This might resolve now. If `blocked` is a compiler-generated future, its
                // innerFutureResult is set to a valid value by the time we reach here, so
                // inside this poll control goes to branch 2.
                r = blocked.poll(cx);
                if r != Poll::Pending {
                    cx.chain.pop();
                    continue;
                } else {
                    return Poll::Pending;
                }
            }
        }
    }
}
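For what it's worth, here is a hedged, hand-written sketch of the kind of state machine an async fn could desugar to. This is not the compiler's actual output; the names (Delay, ExampleStateMachine) are made up for illustration, and the executor used to drive it is futures::executor::block_on:

use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

// Stand-in leaf future; a real one would register the waker and return Pending first.
struct Delay;
impl Future for Delay {
    type Output = u32;
    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<u32> {
        Poll::Ready(42)
    }
}

// Rough desugaring of: `async fn example() -> u32 { Delay.await + 1 }`
enum ExampleStateMachine {
    Start,
    AwaitingDelay(Delay),
    Done,
}

impl Future for ExampleStateMachine {
    type Output = u32;
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<u32> {
        // `get_mut` is fine here because nothing in this enum is self-referential,
        // so the type is `Unpin`; the real generated code handles pinning carefully.
        let this = self.get_mut();
        loop {
            // Take the current state out, leaving `Done` in place temporarily.
            match std::mem::replace(this, ExampleStateMachine::Done) {
                ExampleStateMachine::Start => {
                    *this = ExampleStateMachine::AwaitingDelay(Delay);
                }
                ExampleStateMachine::AwaitingDelay(mut delay) => {
                    match Pin::new(&mut delay).poll(cx) {
                        Poll::Ready(x) => return Poll::Ready(x + 1),
                        Poll::Pending => {
                            // Put the in-flight future back and wait to be woken.
                            *this = ExampleStateMachine::AwaitingDelay(delay);
                            return Poll::Pending;
                        }
                    }
                }
                ExampleStateMachine::Done => panic!("polled after completion"),
            }
        }
    }
}

fn main() {
    let answer = futures::executor::block_on(ExampleStateMachine::Start);
    println!("{answer}"); // 43
}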

Confusing "simplified" Future API incompatible with actual Future API

Chapter 2 "The Future Trait" describes the Future API, but incorrectly:

The Future trait is at the center of asynchronous programming in Rust.
A Future is an asynchronous computation that can produce a value
(although that value may be empty, e.g. ()). A simplified version of the
future trait might look something like this:

trait SimpleFuture {
    type Output;
    fn poll(&mut self, wake: fn()) -> Poll<Self::Output>;
}

Futures can be advanced by calling the poll function, which will drive
the future as far towards completion as possible. If the future completes,
it returns Poll::Ready(result). If the future is not able to complete yet, it
returns Poll::Pending and arranges for the wake() function to be called
when the Future is ready to make more progress. When wake() is called,
the executor driving the Future will call poll again so that the Future
can make more progress.

Without wake(), the executor would have no way of knowing when a
particular future could make progress, and would have to be constantly
polling every future. With wake(), the executor knows exactly which
futures are ready to be polled.

This indicates that the wake function is an essential, vital part of the Future API and points it out so you know about it and how to use it.

Then in Chapter 3 Task Wake-ups with Waker the actual API is revealed to be different:

impl Future for TimerFuture {
    type Output = ();
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        // ...
    }
}

This is actually known to the authors, as it's noted at the end of Chapter 2 that it was incorrect:

Secondly, wake: fn() has changed to &mut Context<'_>. In SimpleFuture,
we used a call to a function pointer (fn()) to tell the future executor that
the future in question should be polled. However, since fn() is just a
function pointer, it can't store any data about which Future called wake.

I think it would be much less confusing if the actual API were used instead of a false API being exposed and generalized, with a tiny note at the end that it doesn't apply in real life.

Thank you for your time and consideration.

Error building `Applied: HTTP Server` example

Not sure it is correct to open this issue, since this book is probably a WIP, but I got stuck with this:
I'm trying to get through the Applied: HTTP Server tutorial, and I'm getting an error when trying to build the very first version of the server (the "If you cargo run now" part of the tutorial).

I've tried this on 2 different PCs, both on Windows 10. My latest laptop has these versions:

C:\Users\stasd\code\async-await-echo>rustc --version
rustc 1.33.0-nightly (ec194646f 2019-01-02)
C:\Users\stasd\code\async-await-echo>cargo --version
cargo 1.33.0-nightly (0d1f1bbea 2018-12-19)

And I'm getting an error while running cargo run:

C:\Users\stasd\code\async-await-echo>cargo run
    Updating crates.io index
  Downloaded tokio v0.1.13
  Downloaded hyper v0.12.19
  Downloaded bytes v0.4.11
  Downloaded num_cpus v1.9.0
  Downloaded tokio-reactor v0.1.7
  Downloaded tokio-fs v0.1.4
  Downloaded tokio-threadpool v0.1.9
  Downloaded futures-preview v0.3.0-alpha.11
  Downloaded cfg-if v0.1.6
  Downloaded log v0.4.6
  Downloaded tokio-io v0.1.10
  Downloaded tokio-current-thread v0.1.4
  Downloaded tokio-timer v0.2.8
  Downloaded tokio-udp v0.1.3
  Downloaded lazycell v1.2.1
  Downloaded byteorder v1.2.7
  Downloaded tokio-async-await v0.1.4
  Downloaded crossbeam-utils v0.6.3
  Downloaded rand v0.6.1
  Downloaded lock_api v0.1.5
  Downloaded http v0.1.14
  Downloaded futures-channel-preview v0.3.0-alpha.11
  Downloaded h2 v0.1.14
  Downloaded smallvec v0.6.7
  Downloaded time v0.1.41
  Downloaded futures-executor-preview v0.3.0-alpha.11
  Downloaded lazy_static v1.2.0
  Downloaded futures-io-preview v0.3.0-alpha.11
  Downloaded crossbeam-deque v0.6.3
  Downloaded futures-core-preview v0.3.0-alpha.11
  Downloaded rand_isaac v0.1.1
  Downloaded rand_hc v0.1.0
  Downloaded futures-util-preview v0.3.0-alpha.11
  Downloaded rand_xorshift v0.1.0
  Downloaded owning_ref v0.4.0
  Downloaded libc v0.2.46
  Downloaded rand_pcg v0.1.1
  Downloaded crossbeam-epoch v0.7.0
  Downloaded indexmap v1.0.2
  Downloaded string v0.1.2
  Downloaded pin-utils v0.1.0-alpha.4
  Downloaded futures-sink-preview v0.3.0-alpha.11
  Downloaded rand_chacha v0.1.0
  Downloaded proc-macro-hack v0.5.4
  Downloaded arrayvec v0.4.10
  Downloaded syn v0.15.23
  Downloaded proc-macro2 v0.4.24
  Downloaded quote v0.6.10
  Downloaded nodrop v0.1.13
  Downloaded futures-select-macro-preview v0.3.0-alpha.11
   Compiling semver-parser v0.7.0
   Compiling proc-macro2 v0.4.24
   Compiling winapi v0.3.6
   Compiling winapi-build v0.1.1
   Compiling rand_core v0.3.0
   Compiling unicode-xid v0.1.0
   Compiling arrayvec v0.4.10
   Compiling void v1.0.2
   Compiling nodrop v0.1.13
   Compiling cfg-if v0.1.6
   Compiling winapi v0.2.8
   Compiling libc v0.2.46
   Compiling stable_deref_trait v1.1.1
   Compiling lazy_static v1.2.0
   Compiling either v1.5.0
   Compiling memoffset v0.2.1
   Compiling byteorder v1.2.7
   Compiling scopeguard v0.3.3
   Compiling futures v0.1.25
   Compiling lazycell v1.2.1
   Compiling slab v0.4.1
   Compiling fnv v1.0.6
   Compiling pin-utils v0.1.0-alpha.4
   Compiling httparse v1.3.3
   Compiling itoa v0.4.3
   Compiling indexmap v1.0.2
   Compiling try-lock v0.2.2
   Compiling string v0.1.2
   Compiling unreachable v1.0.0
   Compiling ws2_32-sys v0.2.1
   Compiling kernel32-sys v0.2.2
   Compiling crossbeam-utils v0.6.3
   Compiling log v0.4.6
   Compiling rand_core v0.2.2
   Compiling rand_isaac v0.1.1
   Compiling rand_hc v0.1.0
   Compiling rand_xorshift v0.1.0
   Compiling semver v0.9.0
   Compiling owning_ref v0.4.0
   Compiling futures-core-preview v0.3.0-alpha.11
   Compiling smallvec v0.6.7
   Compiling lock_api v0.1.5
   Compiling futures-channel-preview v0.3.0-alpha.11
   Compiling rustc_version v0.2.3
   Compiling num_cpus v1.9.0
   Compiling crossbeam-epoch v0.7.0
   Compiling futures-sink-preview v0.3.0-alpha.11
   Compiling rand_chacha v0.1.0
   Compiling parking_lot_core v0.3.1
   Compiling rand_pcg v0.1.1
   Compiling rand v0.6.1
   Compiling quote v0.6.10
   Compiling tokio-executor v0.1.5
   Compiling futures-cpupool v0.1.8
   Compiling want v0.0.6
   Compiling crossbeam-deque v0.6.3
   Compiling syn v0.15.23
   Compiling tokio-timer v0.2.8
   Compiling tokio-current-thread v0.1.4
   Compiling net2 v0.2.33
   Compiling rand v0.5.5
   Compiling time v0.1.41
   Compiling tokio-threadpool v0.1.9
   Compiling parking_lot v0.6.4
   Compiling proc-macro-hack v0.5.4
   Compiling futures-select-macro-preview v0.3.0-alpha.11
   Compiling iovec v0.1.2
   Compiling miow v0.2.1
   Compiling bytes v0.4.11
   Compiling futures-io-preview v0.3.0-alpha.11
   Compiling futures-util-preview v0.3.0-alpha.11
   Compiling tokio-io v0.1.10
   Compiling http v0.1.14
   Compiling mio v0.6.16
   Compiling tokio-codec v0.1.1
   Compiling tokio-async-await v0.1.4
   Compiling tokio-fs v0.1.4
error[E0599]: no function or associated item named `pinned` found for type `std::boxed::Box<_>` in the current scope
  --> C:\Users\stasd\.cargo\registry\src\github.com-1ecc6299db9ec823\tokio-async-await-0.1.4\src\compat\backward.rs:22:21
   |
22 |         Compat(Box::pinned(future))
   |                -----^^^^^^
   |                |
   |                function or associated item not found in `std::boxed::Box<_>`

error: aborting due to previous error

For more information about this error, try `rustc --explain E0599`.
error: Could not compile `tokio-async-await`.
warning: build failed, waiting for other jobs to finish...
error: build failed

Production-ready accept loop handling

I've just published a PR to add the chapter on writing accept loops to the async-std book. Since this is a real problem for a lot of current Rust applications (including web frameworks like tide), I want to bring more attention to the topic in the Rust community. Adding it to the Rust async book should help.

More background: async-std issue and older tokio issue.

I'd like to add something similar to the async-book. But a few questions first:

  1. The work on the book looks inactive for now. Is it alive, or has it basically been split into an "async-std book" and a "tokio book"?
  2. Do we need a section on async-std?
  3. Should the section here have the same level of detail that it has in the async-std book PR?
  4. Any guidelines on how to share the problem statement between async-std and tokio sections in this book (if we are going to add them here)?

Or any other guidelines that could be helpful....
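As a rough illustration of the problem statement (a sketch assuming the tokio runtime with the net/time/macros features; the async-std version in the linked PR is analogous): a naive loop that propagates accept errors with ? dies on the first transient error, whereas a production loop keeps accepting:

use std::time::Duration;
use tokio::net::TcpListener;

async fn accept_loop(addr: &str) -> std::io::Result<()> {
    let listener = TcpListener::bind(addr).await?;
    loop {
        match listener.accept().await {
            Ok((stream, peer)) => {
                // Handle each connection on its own task.
                tokio::spawn(async move {
                    let _ = (stream, peer); // real handling goes here
                });
            }
            Err(e) => {
                // Transient errors (e.g. too many open files) should not kill
                // the server; log, back off briefly, and keep accepting.
                eprintln!("accept error: {e}");
                tokio::time::sleep(Duration::from_millis(100)).await;
            }
        }
    }
}

#[tokio::main]
async fn main() -> std::io::Result<()> {
    accept_loop("127.0.0.1:8080").await
}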

Unsound examples in the "Pinning" chapter

The Test examples (https://github.com/rust-lang/async-book/blame/641dc148b34a36d68a9d64047d4712d1a6e932ac/src/04_pinning/01_chapter.md#L136, https://github.com/rust-lang/async-book/blame/641dc148b34a36d68a9d64047d4712d1a6e932ac/src/04_pinning/01_chapter.md#L363) and all their copy-pastes in the Pinning chapter provide an unsound API: a user might easily call Test::b on a Test object that has not previously been initialized with a Test::init call.

If my understanding is correct, either Test::b should be marked as an unsafe method, or we should rethink the whole API (by introducing a safe macro-wrapper, for instance).

P.S.

Sorry for the links with "blame", but it seems there is no way to link to non-rendered markdown on GitHub: isaacs/github#297

Blocking `thread::sleep` inside an async fn

I've seen several async beginners try to use thread::sleep to wrap their brains around async; it's probably one of the most common ways to "see" async. I just want to make sure it's addressed in the async book. (I haven't read it, but at least one of the reddit threads below was lifted from the book's join example.)
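For illustration, a sketch of the mistake and its fix (using tokio here purely as an example runtime and timer; the book's TimerFuture would demonstrate the same point):

use std::time::{Duration, Instant};

async fn blocking_version() {
    // Blocks the executor thread: nothing else on this task can run meanwhile.
    std::thread::sleep(Duration::from_secs(1));
}

async fn async_version() {
    // Yields to the executor while waiting.
    tokio::time::sleep(Duration::from_secs(1)).await;
}

#[tokio::main]
async fn main() {
    let start = Instant::now();
    tokio::join!(blocking_version(), blocking_version());
    println!("thread::sleep: {:?}", start.elapsed()); // ~2s: no real concurrency

    let start = Instant::now();
    tokio::join!(async_version(), async_version());
    println!("async sleep:   {:?}", start.elapsed()); // ~1s: the waits overlap
}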

There's also the possibility of a clippy lint, but beginners may not have gotten there yet.

rust-lang/rust-clippy#4377

https://old.reddit.com/r/rust/comments/e1gxf8/not_understanding_asyncawait_properly/
https://old.reddit.com/r/rust/comments/dtp6z7/what_can_i_actually_do_with_the_new_async_fn/
https://old.reddit.com/r/rust/comments/dt0ruy/how_to_await_futures_concurrently/

Nested `use` statements are slightly difficult to read.

I find nested use statements slightly more difficult to grok than keeping them all separate. Nesting also uses more lines in most cases.

current:

use {
    std::{
        future::Future,
        pin::Pin,
        sync::{Arc, Mutex},
        task::{Context, Poll, Waker},
        thread,
        time::Duration,
    },
};

without nesting:

use std::future::Future;
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll, Waker};
use std::thread;
use std::time::Duration;

I think the latter is easier to read and understand. It also has the benefit of being easier to comment out unused imports while writing the rest of the code to silence RLS complaints.

If others feel the same way, I can send a PR.

Please clarify different roles of Future from std vs library (and FuturesExt)

When reading the "under the hood" chapter I was confused because the definition of the "real" Future didn't include methods like boxed which the chapter uses later. It was only after some time that I realised these were defined in FuturesExt. At this point I wasn't aware of the other book at books.async.rs, but I now see that it makes these relationships clear early in the introduction to avoid confusion.

Perhaps it would be an idea to include a link to the async-std docs at the start of this book? (Edit: I now see that the other book/async-std is not actually part of standard Rust but also a separate crate 🤔.) I think a similar explanation would be very useful in this book, as well as some mention of FutureExt as the source of methods like boxed.
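For example (a sketch assuming the futures 0.3 crate; the function name boxed_answer is made up), the boxed method only exists once the FutureExt extension trait is in scope:

use futures::future::{BoxFuture, FutureExt}; // `boxed` comes from `FutureExt`

fn boxed_answer() -> BoxFuture<'static, u32> {
    async { 42 }.boxed()
}

fn main() {
    let answer = futures::executor::block_on(boxed_answer());
    assert_eq!(answer, 42);
}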

Examples in the book mostly don't work

Examples provided in the book are only part of a working solution. For newbies like me it is not clear how to get a working example.
I suggest every example should be covered by a CI/CD test.

run buttons don't actually work

Each of the code examples has a "run" button that takes you to the playground -- but if you click them, you just get errors. The code is not in the 2018 edition, for example, and it also has the wrong dependencies, I believe.

Error Running Sample Code: Incorrect Edition and Version of Rust

Trying to run the async/await example on the GitHub pages site yields errors regarding the edition of Rust and instability.

Code:

async fn learn_and_sing() {
    // Wait until the song has been learned before singing it.
    // We use `.await` here rather than `block_on` to prevent blocking the
    // thread, which makes it possible to `dance` at the same time.
    let song = learn_song().await;
    sing_song(song).await;
}

async fn async_main() {
    let f1 = learn_and_sing();
    let f2 = dance();

    // `join!` is like `.await` but can wait for multiple futures concurrently.
    // If we're temporarily blocked in the `learn_and_sing` future, the `dance`
    // future will take over the current thread. If `dance` becomes blocked,
    // `learn_and_sing` can take back over. If both futures are blocked, then
    // `async_main` is blocked and will yield to the executor.
    futures::join!(f1, f2);
}

fn main() {
    block_on(async_main());
}
Compiling playground v0.0.1 (/playground)
error[E0670]: `async fn` is not permitted in the 2015 edition
 --> src/main.rs:1:1
  |
1 | async fn learn_and_sing() {
  | ^^^^^

error[E0670]: `async fn` is not permitted in the 2015 edition
 --> src/main.rs:9:1
  |
9 | async fn async_main() {
  | ^^^^^

error[E0433]: failed to resolve: could not find `join` in `futures`
  --> src/main.rs:18:14
   |
18 |     futures::join!(f1, f2);
   |              ^^^^ could not find `join` in `futures`

error[E0425]: cannot find function `learn_song` in this scope
 --> src/main.rs:5:16
  |
5 |     let song = learn_song().await;
  |                ^^^^^^^^^^ not found in this scope

error[E0425]: cannot find function `sing_song` in this scope
 --> src/main.rs:6:5
  |
6 |     sing_song(song).await;
  |     ^^^^^^^^^ not found in this scope

error[E0425]: cannot find function `dance` in this scope
  --> src/main.rs:11:14
   |
11 |     let f2 = dance();
   |              ^^^^^ not found in this scope

error[E0425]: cannot find function `block_on` in this scope
  --> src/main.rs:22:5
   |
22 |     block_on(async_main());
   |     ^^^^^^^^ not found in this scope

error[E0658]: async fn is unstable
 --> src/main.rs:1:1
  |
1 | / async fn learn_and_sing() {
2 | |     // Wait until the song has been learned before singing it.
3 | |     // We use `.await` here rather than `block_on` to prevent blocking the
4 | |     // thread, which makes it possible to `dance` at the same time.
5 | |     let song = learn_song().await;
6 | |     sing_song(song).await;
7 | | }
  | |_^
  |
  = note: for more information, see https://github.com/rust-lang/rust/issues/50547

error[E0658]: async fn is unstable
  --> src/main.rs:9:1
   |
9  | / async fn async_main() {
10 | |     let f1 = learn_and_sing();
11 | |     let f2 = dance();
12 | |
...  |
18 | |     futures::join!(f1, f2);
19 | | }
   | |_^
   |
   = note: for more information, see https://github.com/rust-lang/rust/issues/50547

error: aborting due to 9 previous errors

Some errors have detailed explanations: E0425, E0433, E0658, E0670.
For more information about an error, try `rustc --explain E0425`.
error: Could not compile `playground`.

To learn more, run the command again with --verbose.

Read data from hyper::body::Body with async/await

I've just followed the guide Applied: Simple HTTP Server to set up a simple server app, and got stuck getting data from hyper::Body.

I've found these posts: hyperium/hyper#1098, hyperium/hyper#1137, A question on the forum.

body.concat() returns an object that implements futures_01::Future, but when I await it as follows, the compiler throws long errors at me:

async fn serve_req(req: Request<Body>) -> Result<Response<Body>, hyper::Error> {
    println!("Got request at {:?}", req.uri());

    let url_str = "http://www.rust-lang.org/en-US/";
    let url = url_str.parse::<Uri>().expect("failed to parse URL");

    let res = await!(Client::new().get(url))?;

    let body = res.into_body();
    // Or
    // let body = req.into_body();

    // use futures_01::stream::Stream as Stream01;
    // use futures::compat::{Future01CompatExt, Stream01CompatExt};
    // use futures::stream::{Stream, StreamExt};

    // These are not working
    // let sth: Vec<u8> = await!(body.concat2());
    // let sth: Vec<u8> = await!(body.concat2().compat());
    // let sth: Vec<u8> = await!(body.compat().concat());

    Ok(Response::new(Body::from("Some response based on the data we just read")))
}
error[E0277]: the trait bound `std::result::Result<hyper::body::chunk::Chunk, hyper::error::Error>: std::iter::Extend<hyper::body::chunk::Chunk>` is not satisfied
  --> src/main.rs:34:34
   |
34 |     let b = await!(body.compat().concat());
   |                                  ^^^^^^ the trait `std::iter::Extend<hyper::body::chunk::Chunk>` is not implemented for `std::result::Result<hyper::body::chunk::Chunk, hyper::error::Error>`

error[E0277]: the trait bound `std::result::Result<hyper::body::chunk::Chunk, hyper::error::Error>: std::default::Default` is not satisfied
  --> src/main.rs:34:34
   |
34 |     let b = await!(body.compat().concat());
   |                                  ^^^^^^ the trait `std::default::Default` is not implemented for `std::result::Result<hyper::body::chunk::Chunk, hyper::error::Error>`

error[E0599]: no method named `into_awaitable` found for type `futures_util::stream::concat::Concat<futures_util::compat::compat01as03::Compat01As03<hyper::body::body::Body>>` in the current scope
  --> src/main.rs:34:13
   |
34 |     let b = await!(body.compat().concat());
   |             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
   |
   = note: the method `into_awaitable` exists but the following trait bounds were not satisfied:
           `&futures_util::stream::concat::Concat<futures_util::compat::compat01as03::Compat01As03<hyper::body::body::Body>> : tokio_async_await::compat::backward::IntoAwaitable`
           `&futures_util::stream::concat::Concat<futures_util::compat::compat01as03::Compat01As03<hyper::body::body::Body>> : tokio_async_await::compat::forward::IntoAwaitable`
           `&mut futures_util::stream::concat::Concat<futures_util::compat::compat01as03::Compat01As03<hyper::body::body::Body>> : tokio_async_await::compat::backward::IntoAwaitable`
           `&mut futures_util::stream::concat::Concat<futures_util::compat::compat01as03::Compat01As03<hyper::body::body::Body>> : tokio_async_await::compat::forward::IntoAwaitable`
           `futures_util::stream::concat::Concat<futures_util::compat::compat01as03::Compat01As03<hyper::body::body::Body>> : tokio_async_await::compat::backward::IntoAwaitable`
           `futures_util::stream::concat::Concat<futures_util::compat::compat01as03::Compat01As03<hyper::body::body::Body>> : tokio_async_await::compat::forward::IntoAwaitable`
   = note: this error originates in a macro outside of the current crate (in Nightly builds, run with -Z external-macro-backtrace for more info)

error: aborting due to 3 previous errors

Some errors occurred: E0277, E0599.
For more information about an error, try `rustc --explain E0277`.
error: Could not compile `tt`.

Please add more detail on Wakers and the RawWaker

I'd like to see more detail on Wakers. A Waker is a wrapper around a RawWaker, which appears to implement dynamic dispatch manually with lots of unsafe code; that seems odd when I would have expected this to be done via a trait object.

This is obviously done deliberately, and I'd like to read about the design rationale for it. I'm not sure if this book is the best document for discussing such things, but I'd find it helpful for better understanding the async mechanisms in Rust.
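For anyone reading this issue later, a minimal sketch (my own, not from the book or the std docs) of how a Waker is assembled from a data pointer plus a hand-rolled vtable. One commonly cited reason for this manual vtable is that wakers can then be built without allocation and without committing to a single trait-object representation, which also works in no_std executors:

use std::task::{RawWaker, RawWakerVTable, Waker};

// A do-nothing waker built directly on the RawWaker API.
fn raw_noop() -> RawWaker {
    RawWaker::new(std::ptr::null(), &NOOP_VTABLE)
}

unsafe fn clone_fn(_data: *const ()) -> RawWaker { raw_noop() }
unsafe fn wake_fn(_data: *const ()) {}
unsafe fn wake_by_ref_fn(_data: *const ()) {}
unsafe fn drop_fn(_data: *const ()) {}

static NOOP_VTABLE: RawWakerVTable =
    RawWakerVTable::new(clone_fn, wake_fn, wake_by_ref_fn, drop_fn);

fn noop_waker() -> Waker {
    // SAFETY: every vtable function ignores the data pointer, so null is fine.
    unsafe { Waker::from_raw(raw_noop()) }
}

fn main() {
    let waker = noop_waker();
    waker.wake_by_ref(); // does nothing, but dispatches through the vtable
}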

Revamping the book to focus on usage

Following some conversation in wg-async-foundations, we've been proposing to revamp the async book to focus better on using and understanding async Rust. Currently, the book mixes advanced topics such as building an executor, understanding wakers and implementing Future manually, with fundamental knowledge required to properly understand and use async Rust in order to Build Stuff™. I believe the book can become a more useful central resource with some structural changes.

The need for more comprehensive async documentation arose when we at Fuchsia realized (and in fact measured) that people struggle with the async programming model more than with any other area of Rust (even including complex, under-documented internal crates).

The current state of async Rust is fragmented into different runtime ecosystems. Our initial thought was to build yet another set of docs for Fuchsia devs, but I hope we can do better by bringing shared knowledge to a central place. Here's a draft of what we have so far. The reception has been positive, but we need more concrete feedback.

Is this something that sounds interesting? I'd be happy to drive this, and can also help find reviewers if need be. Also, feel free to reach out in the wg-async-foundations topic linked above.

try to make reproducible examples (missing cargo dependencies)

As a newbie to Rust, I found it very hard to follow documents where part of the code belongs to the std library and other parts depend on crates, for example in 04_async_await_primer:

async/.await is Rust's built-in tool for writing asynchronous functions that look like synchronous code. async transforms a block of code into a state machine that implements a trait called Future. Whereas calling a blocking function in a synchronous method would block the whole thread, blocked Futures will yield control of the thread, allowing other Futures to run.
To create an asynchronous function, you can use the async fn syntax:
async fn do_something() { /* ... */ }
The value returned by async fn is a Future. For anything to happen, the Future needs to be run on an executor.

Then there is this example:

// `block_on` blocks the current thread until the provided future has run to
// completion. Other executors provide more complex behavior, like scheduling
// multiple futures onto the same thread.
use futures::executor::block_on;

async fn hello_world() {
    println!("hello, world!");
}

fn main() {
    let future = hello_world(); // Nothing is printed
    block_on(future); // `future` is run and "hello, world!" is printed
}

Trying to learn by doing, I run:

cargo new example --bin

Then I edit src/main.rs, copy and paste code from the examples, and try to run it with:

cargo run

In this case I immediately stumble on an error regarding an undeclared type or module `futures`:

error[E0433]: failed to resolve: use of undeclared type or module `futures`
 --> src/main.rs:4:5
  |
4 | use futures::executor::block_on;
  |     ^^^^^^^ use of undeclared type or module `futures`

My initial thought was to search for futures, and by guessing I try to add something like this to the dependencies in Cargo.toml:

[dependencies]
futures = "0.3"

The code now compiles and I can run it, but I am more confused since I found out that there is also a future module in the std library: https://doc.rust-lang.org/std/future/index.html

It would help if the examples explained which dependencies are used and why crates are used instead of the std library.

In 03_state_of_async_rust.html there is a hint about the crate, but it could also help to have something basic in the code comments, like:

/// check https://rust-lang.github.io/async-book/01_getting_started/03_state_of_async_rust.html
/// Depends on `futures = "0.3"` (check Cargo.toml [dependencies])
...

Expand on TODOs

Several chapters are marked as TODO and link to 404.html. It would be useful to write down what the authors were thinking should be in these sections to make contribution easier.

Add favicons

I was going to make a pull request myself, but I don't understand the structure of mdbook projects; still, the site needs favicons.

Join Streams?

Reading the chapter on join! for futures, it would be good to say whether or not there is an equivalent for merging streams together into one big firehose stream, and ideally to show an example of that in the streams chapter.
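For what it's worth, a hedged sketch (assuming the futures 0.3 crate) of merging two streams into one with stream::select, which interleaves items as they become ready:

use futures::stream::{self, StreamExt};

fn main() {
    futures::executor::block_on(async {
        let a = stream::iter(vec![1, 3, 5]);
        let b = stream::iter(vec![2, 4, 6]);

        // `stream::select` yields items from whichever input is ready and only
        // ends once both inputs have ended.
        let merged: Vec<i32> = stream::select(a, b).collect().await;
        println!("{merged:?}");
    });
}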

Book cover concept

I was just chatting with one of my engineers, who also happens to be a designer, and we came up with what we think is a good idea!

Same cover as the Rust book, except showing a shadow of a crab (i.e. a future crab) rather than an existing one.

We can also play around with colours (e.g. an all-black cover with a white shadow of a crab).

What do you think 🦀

Managing shared state chapter - ideas

I just wanted to share some experience about this topic that is on the TODO list for the book. I don't feel confident enough to write this topic myself, but would like to contribute some ideas.

  • Something I'm still confused about is using references in async/await. I read blog posts from withoutboats explaining that we can use references, but in practice I end up with a lot of Rc<RefCell<_>> on self.

There are 2 topics that I see as important to mention:

  1. The need for synchronization (as in thread synchronization) in async code. The compiler currently isn't as well equipped for async as it is for multi-threading, but similar issues arise. Where the compiler won't let you access data from multiple threads without a locking mechanism, RefCell will cause runtime issues when borrowed twice. In multi-threading, blocking the thread while waiting for a lock will (deadlocks aside) more or less automatically solve the synchronization problem, but in async code you can't block the thread; you can't just await the borrow being released.
    Enter critical sections: I find myself writing code so that there are no yield points (awaits) between borrowing and releasing a RefCell. That way I can guarantee there will be no double borrow. (See the sketch after this list for one async-aware alternative.)

  2. Other synchronization issues: what if two unrelated paths through the program require that a runs before b? I could store a future somewhere, but I should not poll it from 2 different paths, and there is no way to ask a future "are you ready?" without consuming it. I could set a boolean saying that something has resolved, but I cannot await a boolean... I would have to make a custom future that returns "not ready" based on that boolean? I'm a bit confused about how to deal with this. There certainly are ways, but is there something generic without too much boilerplate and overhead?
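A sketch of the async-aware alternative mentioned in point 1 (assuming the futures 0.3 crate): the lock can be held across an .await, and other tasks simply wait asynchronously instead of hitting a RefCell double-borrow panic.

use std::sync::Arc;
use futures::lock::Mutex;

async fn add_one(shared: Arc<Mutex<u32>>) {
    let mut guard = shared.lock().await; // waits without blocking the thread
    *guard += 1;
    // other `.await`s could happen here while the guard is held
}

fn main() {
    futures::executor::block_on(async {
        let shared = Arc::new(Mutex::new(0u32));
        futures::join!(add_one(shared.clone()), add_one(shared.clone()));
        assert_eq!(*shared.lock().await, 2);
    });
}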

Hope that this helps launching some discussion and eventual writing of the chapter...

PS: futures-locks looks interesting as well.

wrong link in 01_getting_started/03_state_of_async_rust.md

In the first paragraph there is a list; its item 4 says:

Execution of async code, IO and task spawning are provided by "async runtimes", such as Tokio and async-std. Most async applications, and some async crates, depend on a specific runtime. See "The Async Ecosystem" section for more details.

I think the "The Async Ecosystem" link should refer to ../08_ecosystem/00_chapter.md

Expecting a panic on max queued tasks

In the chapter "Applied: Build an Executor" we set MAX_QUEUED_TASKS and have an expect on send(task), which should fail if there are too many tasks.
I tried to trigger this panic by setting MAX_QUEUED_TASKS to 1 and spawning the async function for TimerFuture twice.
However, the program just hangs. With the max queued tasks set to 2, it works.

I'm on Windows; if it helps, here is the stack trace for the main thread:

[0x0] ntdll!NtWaitForAlertByThreadId + 0x14
[0x1] ntdll!RtlSleepConditionVariableSRW + 0x130
[0x2] KERNELBASE!SleepConditionVariableSRW + 0x2d
[0x3] async_prog!std::sys::windows::c::SleepConditionVariableSRW + 0x3e
[0x4] async_prog!std::sys::windows::condvar::Condvar::wait + 0x3e
[0x5] async_prog!std::sys_common::condvar::Condvar::wait + 0x3e
[0x6] async_prog!std::sync::condvar::Condvar::wait + 0x58
[0x7] async_prog!std::thread::park + 0x1ab
[0x8] async_prog!std::sync::mpsc::blocking::WaitToken::wait + 0x25
[0x9] async_prog!std::sync::mpsc::sync::Packet<alloc::sync::Arc<async_prog::simple_executor::Task>>::acquire_send_slot<alloc::sync::Arc<async_prog::simple_executor::Task>> + 0x19d
[0xa] async_prog!std::sync::mpsc::sync::Packet<alloc::sync::Arc<async_prog::simple_executor::Task>>::send<alloc::sync::Arc<async_prog::simple_executor::Task>> + 0x48
[0xb] async_prog!std::sync::mpsc::SyncSender<alloc::sync::Arc<async_prog::simple_executor::Task>>::send<alloc::sync::Arc<async_prog::simple_executor::Task>> + 0x42
[0xc] async_prog!async_prog::simple_executor::Spawner::spawn<std::future::GenFuture> + 0xad
[0xd] async_prog!async_prog::use_timer_future + 0x9b
[0xe] async_prog!async_prog::main::{{closure}} + 0x36
[0xf] async_prog!std::future::{{impl}}::poll::{{closure}} + 0x1d
[0x10] async_prog!std::future::set_task_context<closure-1,core::task::poll::Poll<()>> + 0x73
[0x11] async_prog!std::future::{{impl}}::poll + 0x36
[0x12] async_prog!tokio::runtime::enter::Enter::block_on<std::future::GenFuture> + 0x15e
[0x13] async_prog!tokio::runtime::thread_pool::ThreadPool::block_on<std::future::GenFuture> + 0x44
[0x14] async_prog!tokio::runtime::{{impl}}::block_on::{{closure}}<std::future::GenFuture> + 0xb7
[0x15] async_prog!tokio::runtime::context::enter<closure-0,()> + 0x6b
[0x16] async_prog!tokio::runtime::handle::Handle::enter<closure-0,()> + 0x56
[0x17] async_prog!tokio::runtime::Runtime::block_on<std::future::GenFuture> + 0x47
[0x18] async_prog!async_prog::main + 0xbd
[0x19] async_prog!std::rt::lang_start::{{closure}}<()> + 0x10
[0x1a] async_prog!std::rt::lang_start_internal::{{closure}} + 0xc
[0x1b] async_prog!std::panicking::try::do_call<closure-0,i32> + 0x17
[0x1c] async_prog!panic_unwind::__rust_maybe_catch_panic + 0x22
[0x1d] async_prog!std::panicking::try + 0x33
[0x1e] async_prog!std::panic::catch_unwind + 0x33
[0x1f] async_prog!std::rt::lang_start_internal + 0x102
[0x20] async_prog!std::rt::lang_start<()> + 0x3b
[0x21] async_prog!main + 0x20
[0x22] async_prog!invoke_main + 0x22
[0x23] async_prog!__scrt_common_main_seh + 0x10c
[0x24] KERNEL32!BaseThreadInitThunk + 0x14
[0x25] ntdll!RtlUserThreadStart + 0x21

FuturesUnordered section

Suggestions for section on FuturesUnordered:

  • Base the section example on the outcome of the conversation in #62?
  • Discuss the use case of a client sending a batch of requests, for context?

Thoughts?

Should this be part of a wider section on Streams as it's one of the types that implement Stream & StreamExt? Or is FuturesUnordered worth a special mention because it would be the one people are most likely to construct themselves when bundling together futures?
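For discussion, a hedged sketch of the batch-of-requests use case (assuming the futures 0.3 crate; fake_request is a made-up stand-in for real IO), where responses are handled in completion order rather than submission order:

use futures::stream::{FuturesUnordered, StreamExt};

async fn fake_request(id: u32) -> u32 {
    // a real client would perform IO here
    id * 10
}

fn main() {
    futures::executor::block_on(async {
        let mut in_flight: FuturesUnordered<_> = (1..=3).map(fake_request).collect();

        // Items come out in whatever order the futures complete,
        // not the order they were added.
        while let Some(response) = in_flight.next().await {
            println!("got {response}");
        }
    });
}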

Pinning chapter should define the Pin contract

The pinning chapter refers to "the Pin contract" three times:

It's important to note that stack pinning will always rely on guarantees you give when writing unsafe. While we know that the pointee of &'a mut T is pinned for the lifetime of 'a we can't know if the data &'a mut T points to isn't moved after 'a ends. If it does it will violate the Pin contract.

A mistake that is easy to make is forgetting to shadow the original variable since you could drop the Pin and move the data after &'a mut T like shown below (which violates the Pin contract):

For pinned data where T: !Unpin you have to maintain the invariant that its memory will not get invalidated or repurposed from the moment it gets pinned until when drop is called. This is an important part of the pin contract.

but there is no definition of what the Pin contract really is.

In the first reference:

It's important to note that stack pinning will always rely on guarantees you give when writing unsafe. While we know that the pointee of &'a mut T is pinned for the lifetime of 'a we can't know if the data &'a mut T points to isn't moved after 'a ends. If it does it will violate the Pin contract.

it is not clear that pinning (in the sense of this chapter: a pointer type being wrapped in the Pin type to guarantee that the referent won't be moved to a distinct address unless the type of the referent implements the Unpin trait) is necessarily being described here. If pinning in the sense of this chapter is not being described, then there is no implication for the Pin contract.

Similarly, it is not clear that the second reference is actually an example of the Pin contract being broken.

The third reference alludes to "an important part of the pin [sic] contract" as if there are other parts of the contract.

The pin module documentation doesn't define the Pin contract either.
