Comments (8)
Awesome, thanks @Ethan3600, this is super helpful.
Confirmed: v0.4.1 is buggy (Manjaro); v0.4.0 works fine.
RUST_BACKTRACE=1 ./so how to setup async rust
thread 'main' panicked at 'byte index 18446744073709551615 is out of bounds of ` function](https://github.com/rust-lang-nursery/futures-rs/blob/0.3.1/futures-util/src/future/`', /rust/lib/rustlib/src/rust/src/libcore/str/mod.rs:2052:47
stack backtrace:
0: backtrace::backtrace::libunwind::trace
at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.46/src/backtrace/libunwind.rs:86
1: backtrace::backtrace::trace_unsynchronized
at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.46/src/backtrace/mod.rs:66
2: std::sys_common::backtrace::_print_fmt
at src/libstd/sys_common/backtrace.rs:78
3: <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt
at src/libstd/sys_common/backtrace.rs:59
4: core::fmt::write
at src/libcore/fmt/mod.rs:1069
5: std::io::Write::write_fmt
at src/libstd/io/mod.rs:1504
6: std::sys_common::backtrace::_print
at src/libstd/sys_common/backtrace.rs:62
7: std::sys_common::backtrace::print
at src/libstd/sys_common/backtrace.rs:49
8: std::panicking::default_hook::{{closure}}
at src/libstd/panicking.rs:198
9: std::panicking::default_hook
at src/libstd/panicking.rs:218
10: std::panicking::rust_panic_with_hook
at src/libstd/panicking.rs:511
11: rust_begin_unwind
at src/libstd/panicking.rs:419
12: core::panicking::panic_fmt
at src/libcore/panicking.rs:111
13: core::str::slice_error_fail
at src/libcore/str/mod.rs:0
14: core::str::traits::<impl core::slice::SliceIndex<str> for core::ops::range::RangeFrom<usize>>::index::{{closure}}
15: termimad::wrap::hard_wrap_composite
16: termimad::text::FmtText::from_text
17: so::term::Term::print
18: so::run::{{closure}}
19: std::thread::local::LocalKey<T>::with
20: so::main
21: std::rt::lang_start_internal::{{closure}}::{{closure}}
at src/libstd/rt.rs:52
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
Other queries work fine, though.
@Ethan3600 Thanks for the issue. Unfortunately I'm having trouble reproducing it, but it seems to be coming from `termimad`, especially since you've determined that `print_inline` works fine. The difference (or at least one important difference) is that `print_text` takes your terminal width into account and wraps things nicely, so it looks much better than the inline version, and I'd rather keep it if I can. If you have the time, it would be helpful if you could check out this branch, run your query, and paste the output here. Then I can take that information to the author of `termimad`.
And yes I desperately need to add logging and debug flags 🙈
@D1mon Are you positive? I'm not sure how that could be; while I did do some refactoring and reorganization around printing, both v0.4.0 and v0.4.1 call `termimad::MadSkin::print_text` on the answer markdown, and the termimad version did not change between those two tags (assuming you are building with `--locked`). Also, the error message you're getting differs from the original issue description, so it could be a separate issue.
@samtay thanks for explaining that! Here's what I got from running the query again:
(it actually prints out a lot more stuff, but it looks like there's a bunch of markdown from stackoverflow)
Terminal size: Ok((190, 51))
Calling skin.print_text with: BEGIN
(removed the stuff that came back from stackoverflow)
END
thread 'main' panicked at 'attempt to subtract with overflow', /home/eyehuda/.cargo/registry/src/github.com-1ecc6299db9ec823/minimad-0.6.4/src/compound.rs:117:19
stack backtrace:
0: backtrace::backtrace::libunwind::trace
at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.46/src/backtrace/libunwind.rs:86
1: backtrace::backtrace::trace_unsynchronized
at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.46/src/backtrace/mod.rs:66
2: std::sys_common::backtrace::_print_fmt
at src/libstd/sys_common/backtrace.rs:78
3: <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt
at src/libstd/sys_common/backtrace.rs:59
4: core::fmt::write
at src/libcore/fmt/mod.rs:1076
5: std::io::Write::write_fmt
at src/libstd/io/mod.rs:1537
6: std::sys_common::backtrace::_print
at src/libstd/sys_common/backtrace.rs:62
7: std::sys_common::backtrace::print
at src/libstd/sys_common/backtrace.rs:49
8: std::panicking::default_hook::{{closure}}
at src/libstd/panicking.rs:198
9: std::panicking::default_hook
at src/libstd/panicking.rs:217
10: std::panicking::rust_panic_with_hook
at src/libstd/panicking.rs:526
11: rust_begin_unwind
at src/libstd/panicking.rs:437
12: core::panicking::panic_fmt
at src/libcore/panicking.rs:85
13: core::panicking::panic
at src/libcore/panicking.rs:50
14: minimad::compound::Compound::cut_tail
at /home/eyehuda/.cargo/registry/src/github.com-1ecc6299db9ec823/minimad-0.6.4/src/compound.rs:117
15: termimad::wrap::hard_wrap_composite
at /home/eyehuda/.cargo/registry/src/github.com-1ecc6299db9ec823/termimad-0.8.24/src/wrap.rs:137
16: termimad::wrap::hard_wrap_lines
at /home/eyehuda/.cargo/registry/src/github.com-1ecc6299db9ec823/termimad-0.8.24/src/wrap.rs:172
17: termimad::text::FmtText::from_text
at /home/eyehuda/.cargo/registry/src/github.com-1ecc6299db9ec823/termimad-0.8.24/src/text.rs:44
18: termimad::text::FmtText::from
at /home/eyehuda/.cargo/registry/src/github.com-1ecc6299db9ec823/termimad-0.8.24/src/text.rs:32
19: termimad::skin::MadSkin::term_text
at /home/eyehuda/.cargo/registry/src/github.com-1ecc6299db9ec823/termimad-0.8.24/src/skin.rs:208
20: termimad::skin::MadSkin::print_text
at /home/eyehuda/.cargo/registry/src/github.com-1ecc6299db9ec823/termimad-0.8.24/src/skin.rs:243
21: so::term::Term::print
at src/term.rs:52
22: so::run::{{closure}}
at src/main.rs:86
23: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
at /home/eyehuda/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/src/libcore/future/mod.rs:74
24: tokio::runtime::enter::Enter::block_on::{{closure}}
at /home/eyehuda/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.21/src/runtime/enter.rs:163
25: tokio::coop::with_budget::{{closure}}
at /home/eyehuda/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.21/src/coop.rs:127
26: std::thread::local::LocalKey<T>::try_with
at /home/eyehuda/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/src/libstd/thread/local.rs:263
27: std::thread::local::LocalKey<T>::with
at /home/eyehuda/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/src/libstd/thread/local.rs:239
28: tokio::coop::with_budget
at /home/eyehuda/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.21/src/coop.rs:120
29: tokio::coop::budget
at /home/eyehuda/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.21/src/coop.rs:96
30: tokio::runtime::enter::Enter::block_on
at /home/eyehuda/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.21/src/runtime/enter.rs:163
31: tokio::runtime::thread_pool::ThreadPool::block_on
at /home/eyehuda/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.21/src/runtime/thread_pool/mod.rs:82
32: tokio::runtime::Runtime::block_on::{{closure}}
at /home/eyehuda/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.21/src/runtime/mod.rs:446
33: tokio::runtime::context::enter
at /home/eyehuda/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.21/src/runtime/context.rs:72
34: tokio::runtime::handle::Handle::enter
at /home/eyehuda/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.21/src/runtime/handle.rs:76
35: tokio::runtime::Runtime::block_on
at /home/eyehuda/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.21/src/runtime/mod.rs:441
36: so::main
at src/main.rs:21
37: std::rt::lang_start::{{closure}}
at /home/eyehuda/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/src/libstd/rt.rs:67
38: std::rt::lang_start_internal::{{closure}}
at src/libstd/rt.rs:52
39: std::panicking::try::do_call
at src/libstd/panicking.rs:348
40: std::panicking::try
at src/libstd/panicking.rs:325
41: std::panic::catch_unwind
at src/libstd/panic.rs:394
42: std::rt::lang_start_internal
at src/libstd/rt.rs:51
43: std::rt::lang_start
at /home/eyehuda/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/src/libstd/rt.rs:67
44: main
45: __libc_start_main
46: _start
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
Oh, I actually wanted to see what's between the BEGIN and END tags; that's what `termimad` is actually trying to print, so it should give us a reproducible example.
Whoops! Sorry! Here it is (apologies if GitHub interprets the markdown; I couldn't find a way to make it ignore it):
RUST_BACKTRACE=1 target/debug/so how to setup async rust  # on branch debug-termimad-panic
Terminal size: Ok((190, 51))
Calling skin.print_text with: BEGIN
You are conflating a few concepts.
[Concurrency is not parallelism](https://stackoverflow.com/q/1050222/155423), and `async` and `await` are tools for *concurrency*, which may sometimes mean they are also tools for parallelism.
Additionally, whether a future is immediately polled or not is orthogonal to the syntax chosen.
# `async` / `await`
The keywords `async` and `await` exist to make creating and interacting with asynchronous code easier to read and look more like "normal" synchronous code. This is true in all of the languages that have such keywords, as far as I am aware.
## Simpler code
This is code that creates a future that adds two numbers when polled
**before**
```
fn long_running_operation(a: u8, b: u8) -> impl Future<Output = u8> {
    struct Value(u8, u8);

    impl Future for Value {
        type Output = u8;

        fn poll(self: Pin<&mut Self>, _ctx: &mut Context) -> Poll<Self::Output> {
            Poll::Ready(self.0 + self.1)
        }
    }

    Value(a, b)
}
```
**after**
```
async fn long_running_operation(a: u8, b: u8) -> u8 {
    a + b
}
```
Note that the "before" code is basically the [implementation of today's `poll_fn` function](https://github.com/rust-lang-nursery/futures-rs/blob/0.3.1/futures-util/src/future/poll_fn.rs#L48-L56)
See also [Peter Hall's answer](https://stackoverflow.com/a/52839157/155423) about how keeping track of many variables can be made nicer.
## References
One of the potentially surprising things about `async`/`await` is that it enables a specific pattern that wasn't possible before: using references in futures. Here's some code that fills up a buffer with a value in an asynchronous manner:
**before**
```
use std::io;

fn fill_up<'a>(buf: &'a mut [u8]) -> impl Future<Output = io::Result<usize>> + 'a {
    futures::future::lazy(move |_| {
        for b in buf.iter_mut() { *b = 42 }
        Ok(buf.len())
    })
}

fn foo() -> impl Future<Output = Vec<u8>> {
    let mut data = vec![0; 8];
    fill_up(&mut data).map(|_| data)
}
```
This fails to compile:
```none
error[E0597]: `data` does not live long enough
  --> src/main.rs:33:17
   |
33 |     fill_up_old(&mut data).map(|_| data)
   |                 ^^^^^^^^^ borrowed value does not live long enough
34 | }
   | - `data` dropped here while still borrowed
   |
   = note: borrowed value must be valid for the static lifetime...

error[E0505]: cannot move out of `data` because it is borrowed
  --> src/main.rs:33:32
   |
33 |     fill_up_old(&mut data).map(|_| data)
   |                 --------- ^^^ ---- move occurs due to use in closure
   |                 |              |
   |                 |              move out of `data` occurs here
   |                 borrow of `data` occurs here
   |
   = note: borrowed value must be valid for the static lifetime...
```
**after**
```
use std::io;

async fn fill_up(buf: &mut [u8]) -> io::Result<usize> {
    for b in buf.iter_mut() { *b = 42 }
    Ok(buf.len())
}

async fn foo() -> Vec<u8> {
    let mut data = vec![0; 8];
    fill_up(&mut data).await.expect("IO failed");
    data
}
```
This works!
# Calling an `async` function does not run anything
The implementation and design of a `Future` and the entire system around futures, on the other hand, is unrelated to the keywords `async` and `await`. Indeed, Rust has a thriving asynchronous ecosystem (such as with Tokio) before the `async` / `await` keywords ever existed. The same was true for JavaScript.
## Why aren't `Future`s polled immediately on creation?
For the most authoritative answer, check out [this comment from withoutboats](https://github.com/rust-lang/rfcs/pull/2394#issuecomment-382399020) on the RFC pull request:
> A fundamental difference between Rust's futures and those from other
> languages is that Rust's futures do not do anything unless polled. The
> whole system is built around this: for example, cancellation is
> dropping the future for precisely this reason. In contrast, in other
> languages, calling an async fn spins up a future that starts executing
> immediately.
>
> A point about this is that async & await in Rust are not inherently
> concurrent constructions. If you have a program that only uses async &
> await and no concurrency primitives, the code in your program will
> execute in a defined, statically known, linear order. Obviously, most
> programs will use some kind of concurrency to schedule multiple,
> concurrent tasks on the event loop, but they don't have to. What this
> means is that you can - trivially - locally guarantee the ordering of
> certain events, even if there is nonblocking IO performed in between
> them that you want to be asynchronous with some larger set of nonlocal
> events (e.g. you can strictly control ordering of events inside of a
> request handler, while being concurrent with many other request
> handlers, even on two sides of an await point).
>
> This property gives Rust's async/await syntax the kind of local
> reasoning & low-level control that makes Rust what it is. Running up
> to the first await point would not inherently violate that - you'd
> still know when the code executed, it would just execute in two
> different places depending on whether it came before or after an
> await. However, I think the decision made by other languages to start
> executing immediately largely stems from their systems which
> immediately schedule a task concurrently when you call an async fn
> (for example, that's the impression of the underlying problem I got
> from the Dart 2.0 document).
Some of the Dart 2.0 background is covered by [this discussion from munificent](https://www.reddit.com/r/rust/comments/8aaywk/async_await_in_rust_a_full_proposal/dwxjjo2/):
> Hi, I'm on the Dart team. Dart's async/await was designed mainly by
> Erik Meijer, who also worked on async/await for C#. In C#, async/await
> is synchronous to the first await. For Dart, Erik and others felt that
> C#'s model was too confusing and instead specified that an async
> function always yields once before executing any code.
>
> At the time, I and another on my team were tasked with being the
> guinea pigs to try out the new in-progress syntax and semantics in our
> package manager. Based on that experience, we felt async functions
> should run synchronously to the first await. Our arguments were
> mostly:
>
> 1. Always yielding once incurs a performance penalty for no good reason. In most cases, this doesn't matter, but in some it really
> does. Even in cases where you can live with it, it's a drag to bleed a
> little perf everywhere.
>
> 1. Always yielding means certain patterns cannot be implemented using async/await. In particular, it's really common to have code like
> (pseudo-code here):
>
>     getThingFromNetwork():
>       if (downloadAlreadyInProgress):
>         return cachedFuture
>
>       cachedFuture = startDownload()
>       return cachedFuture
>
> In other words, you have an async operation that you can call multiple times before it completes. Later calls use the same
> previously-created pending future. You want to ensure you don't start
> the operation multiple times. That means you need to synchronously
> check the cache before starting the operation.
>
> If async functions are async from the start, the above function can't use async/await.
>
> We pleaded our case, but ultimately the language designers stuck with
> async-from-the-top. This was several years ago.
>
> That turned out to be the wrong call. The performance cost is real
> enough that many users developed a mindset that "async functions are
> slow" and started avoiding using it even in cases where the perf hit
> was affordable. Worse, we see nasty concurrency bugs where people
> think they can do some synchronous work at the top of a function and
> are dismayed to discover they've created race conditions. Overall, it
> seems users do not naturally assume an async function yields before
> executing any code.
>
> So, for Dart 2, we are now taking the very painful breaking change to
> change async functions to be synchronous to the first await and
> migrating all of our existing code through that transition. I'm glad
> we're making the change, but I really wish we'd done the right thing
> on day one.
>
> I don't know if Rust's ownership and performance model place different
> constraints on you where being async from the top really is better,
> but from our experience, sync-to-the-first-await is clearly the better
> trade-off for Dart.
[cramert replies](https://www.reddit.com/r/rust/comments/8aaywk/async_await_in_rust_a_full_proposal/dwxqgpy) (note that some of this syntax is outdated now):
> If you need code to execute immediately when a function is called
> rather than later on when the future is polled, you can write your
> function like this:
>
>     fn foo() -> impl Future<Item=Thing> {
>         println!("prints immediately");
>         async_block! {
>             println!("prints when the future is first polled");
>             await!(bar());
>             await!(baz())
>         }
>     }
## Code examples
These examples use the async support in Rust 1.39 and the futures crate 0.3.1.
### Literal transcription of the C# code
```
use futures; // 0.3.1

async fn long_running_operation(a: u8, b: u8) -> u8 {
    println!("long_running_operation");
    a + b
}

fn another_operation(c: u8, d: u8) -> u8 {
    println!("another_operation");
    c * d
}

async fn foo() -> u8 {
    println!("foo");

    let sum = long_running_operation(1, 2);

    another_operation(3, 4);

    sum.await
}

fn main() {
    let task = foo();

    futures::executor::block_on(async {
        let v = task.await;
        println!("Result: {}", v);
    });
}
```
If you called `foo`, the sequence of events in Rust would be:
1. Something implementing `Future<Output = u8>` is returned.
That's it. No "actual" work is done yet. If you take the result of `foo` and drive it towards completion (by polling it, in this case via `futures::executor::block_on`), then the next steps are:
2. Something implementing `Future<Output = u8>` is returned from calling `long_running_operation` (it does not start work yet).
3. `another_operation` does work, as it is synchronous.
4. The `.await` syntax causes the code in `long_running_operation` to start. The `foo` future will continue to return "not ready" until the computation is done.
The output would be:
```none
foo
another_operation
long_running_operation
Result: 3
```
Note that there are no thread pools here: this is all done on a single thread.
### `async` blocks
You can also use `async` blocks:
```
use futures::{future, FutureExt}; // 0.3.1

fn long_running_operation(a: u8, b: u8) -> u8 {
    println!("long_running_operation");
    a + b
}

fn another_operation(c: u8, d: u8) -> u8 {
    println!("another_operation");
    c * d
}

async fn foo() -> u8 {
    println!("foo");

    let sum = async { long_running_operation(1, 2) };
    let oth = async { another_operation(3, 4) };

    let both = future::join(sum, oth).map(|(sum, _)| sum);

    both.await
}
```
Here we wrap synchronous code in an `async` block and then wait for both actions to complete before this function will be complete.
Note that wrapping synchronous code like this is *not* a good idea for anything that will actually take a long time; see https://stackoverflow.com/q/41932137/155423 for more info.
### With a threadpool
```
// Requires the `thread-pool` feature to be enabled
use futures::{executor::ThreadPool, future, task::SpawnExt, FutureExt};

async fn foo(pool: &mut ThreadPool) -> u8 {
    println!("foo");

    let sum = pool
        .spawn_with_handle(async { long_running_operation(1, 2) })
        .unwrap();
    let oth = pool
        .spawn_with_handle(async { another_operation(3, 4) })
        .unwrap();

    let both = future::join(sum, oth).map(|(sum, _)| sum);

    both.await
}
```
END
thread 'main' panicked at 'attempt to subtract with overflow', /home/eyehuda/.cargo/registry/src/github.com-1ecc6299db9ec823/minimad-0.6.4/src/compound.rs:117:19
stack backtrace:
0: backtrace::backtrace::libunwind::trace
at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.46/src/backtrace/libunwind.rs:86
1: backtrace::backtrace::trace_unsynchronized
at /cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.46/src/backtrace/mod.rs:66
2: std::sys_common::backtrace::_print_fmt
at src/libstd/sys_common/backtrace.rs:78
3: <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt
at src/libstd/sys_common/backtrace.rs:59
4: core::fmt::write
at src/libcore/fmt/mod.rs:1076
5: std::io::Write::write_fmt
at src/libstd/io/mod.rs:1537
6: std::sys_common::backtrace::_print
at src/libstd/sys_common/backtrace.rs:62
7: std::sys_common::backtrace::print
at src/libstd/sys_common/backtrace.rs:49
8: std::panicking::default_hook::{{closure}}
at src/libstd/panicking.rs:198
9: std::panicking::default_hook
at src/libstd/panicking.rs:217
10: std::panicking::rust_panic_with_hook
at src/libstd/panicking.rs:526
11: rust_begin_unwind
at src/libstd/panicking.rs:437
12: core::panicking::panic_fmt
at src/libcore/panicking.rs:85
13: core::panicking::panic
at src/libcore/panicking.rs:50
14: minimad::compound::Compound::cut_tail
at /home/eyehuda/.cargo/registry/src/github.com-1ecc6299db9ec823/minimad-0.6.4/src/compound.rs:117
15: termimad::wrap::hard_wrap_composite
at /home/eyehuda/.cargo/registry/src/github.com-1ecc6299db9ec823/termimad-0.8.24/src/wrap.rs:137
16: termimad::wrap::hard_wrap_lines
at /home/eyehuda/.cargo/registry/src/github.com-1ecc6299db9ec823/termimad-0.8.24/src/wrap.rs:172
17: termimad::text::FmtText::from_text
at /home/eyehuda/.cargo/registry/src/github.com-1ecc6299db9ec823/termimad-0.8.24/src/text.rs:44
18: termimad::text::FmtText::from
at /home/eyehuda/.cargo/registry/src/github.com-1ecc6299db9ec823/termimad-0.8.24/src/text.rs:32
19: termimad::skin::MadSkin::term_text
at /home/eyehuda/.cargo/registry/src/github.com-1ecc6299db9ec823/termimad-0.8.24/src/skin.rs:208
20: termimad::skin::MadSkin::print_text
at /home/eyehuda/.cargo/registry/src/github.com-1ecc6299db9ec823/termimad-0.8.24/src/skin.rs:243
21: so::term::Term::print
at src/term.rs:52
22: so::run::{{closure}}
at src/main.rs:86
23: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
at /home/eyehuda/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/src/libcore/future/mod.rs:74
24: tokio::runtime::enter::Enter::block_on::{{closure}}
at /home/eyehuda/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.21/src/runtime/enter.rs:163
25: tokio::coop::with_budget::{{closure}}
at /home/eyehuda/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.21/src/coop.rs:127
26: std::thread::local::LocalKey<T>::try_with
at /home/eyehuda/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/src/libstd/thread/local.rs:263
27: std::thread::local::LocalKey<T>::with
at /home/eyehuda/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/src/libstd/thread/local.rs:239
28: tokio::coop::with_budget
at /home/eyehuda/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.21/src/coop.rs:120
29: tokio::coop::budget
at /home/eyehuda/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.21/src/coop.rs:96
30: tokio::runtime::enter::Enter::block_on
at /home/eyehuda/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.21/src/runtime/enter.rs:163
31: tokio::runtime::thread_pool::ThreadPool::block_on
at /home/eyehuda/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.21/src/runtime/thread_pool/mod.rs:82
32: tokio::runtime::Runtime::block_on::{{closure}}
at /home/eyehuda/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.21/src/runtime/mod.rs:446
33: tokio::runtime::context::enter
at /home/eyehuda/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.21/src/runtime/context.rs:72
34: tokio::runtime::handle::Handle::enter
at /home/eyehuda/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.21/src/runtime/handle.rs:76
35: tokio::runtime::Runtime::block_on
at /home/eyehuda/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.21/src/runtime/mod.rs:441
36: so::main
at src/main.rs:21
37: std::rt::lang_start::{{closure}}
at /home/eyehuda/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/src/libstd/rt.rs:67
38: std::rt::lang_start_internal::{{closure}}
at src/libstd/rt.rs:52
39: std::panicking::try::do_call
at src/libstd/panicking.rs:348
40: std::panicking::try
at src/libstd/panicking.rs:325
41: std::panic::catch_unwind
at src/libstd/panic.rs:394
42: std::rt::lang_start_internal
at src/libstd/rt.rs:51
43: std::rt::lang_start
at /home/eyehuda/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/src/libstd/rt.rs:67
44: main
45: __libc_start_main
46: _start
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
Sorry, correction: v0.4.0 gives the same error on the query `so how to setup async rust`.
Thanks for the quick turnaround. Seems to work on v0.4.2! This is such an awesome tool! Now I can ask my newbie Rust questions in style 😎