
dpc / mioco.pre-0.9


Scalable, coroutine-based, asynchronous IO handling library for the Rust programming language (aka MIO COroutines).

License: Mozilla Public License 2.0

Makefile 1.02% Rust 97.46% PowerShell 1.53%

mioco.pre-0.9's Introduction

mioco


Project status

This is the repository for mioco before version 0.9. It's here for historical reasons, while I've started reworking mioco in the original location: https://github.com/dpc/mioco . Sorry for all the confusion.

The Rust community decided that futures should be the main Rust async IO story, so you might want to look at the tokio-fiber (coroutines as Futures) project, which should have a mioco-like API and allow easy porting of code that uses mioco.

Code snippet

    extern crate mioco;

    use std::io::{self, Read, Write};
    use std::net::SocketAddr;
    use std::str::FromStr;

    use mioco::tcp::TcpListener;

    // Helper assumed by the original snippet; the listen address here is
    // just an example value.
    fn listend_addr() -> SocketAddr {
        FromStr::from_str("127.0.0.1:5555").unwrap()
    }

    fn main() {
        mioco::start(|| -> io::Result<()> {
            let addr = listend_addr();

            let listener = try!(TcpListener::bind(&addr));

            println!("Starting tcp echo server on {:?}",
                     try!(listener.local_addr()));

            loop {
                let mut conn = try!(listener.accept());

                mioco::spawn(move || -> io::Result<()> {
                    let mut buf = [0u8; 1024 * 16];
                    loop {
                        let size = try!(conn.read(&mut buf));
                        if size == 0 { /* eof */ break; }
                        // write_all takes a shared slice; `&mut` here was a bug
                        try!(conn.write_all(&buf[0..size]));
                    }

                    Ok(())
                });
            }
        }).unwrap().unwrap();
    }

This trivial code scales very well. See benchmarks.

Contributors welcome!

Mioco is looking for contributors. See Contributing page for details.

Introduction

Scalable, coroutine-based, asynchronous IO handling library for Rust programming language.

Mioco uses an asynchronous event loop to cooperatively switch between coroutines (aka green threads) depending on data availability. You can think of mioco as Node.js for Rust, or as Rust green threads on top of mio.

Read Documentation for details and features.

If you want to say hi or need help, use the #mioco channel on gitter.im.

To report a bug or ask for a feature, use GitHub issues.

Building & running

Standalone

To start test echo server:

cargo run --release --example echo

For daily work:

make all

In your project

In Cargo.toml:

[dependencies]
mioco = "*"

In your main.rs:

#[macro_use]
extern crate mioco;

Projects using mioco:

Send a PR or drop a link on gitter.

mioco.pre-0.9's People

Contributors

3hren, blabaere, canndrew, d-unsed, dpc, drakulix, eddyb, euclio, gregwebs, hjr3, jeremyjh, kev-the-dev, petrochenkov, ryman, sapikachu, serprex, sp3d


mioco.pre-0.9's Issues

Unable to run without spawning at least 1 native thread

It would be useful to be able to pass 0 to start_threads() so that no extra threads are spawned. Currently, doing so results in a panic:

thread '<main>' panicked at 'called `Option::unwrap()` on a `None` value', ../src/libcore/option.rs:366
stack backtrace:
   1:     0x5641ef0a4130 - sys::backtrace::tracing::imp::write::h5839347184a363c1Tnt
   2:     0x5641ef0a6685 - panicking::log_panic::_<closure>::closure.39955
   3:     0x5641ef0a60d1 - panicking::log_panic::hcde6d42710304abbWnx
   4:     0x5641ef093f63 - sys_common::unwind::begin_unwind_inner::h4039843fef6bffefYgs
   5:     0x5641ef094618 - sys_common::unwind::begin_unwind_fmt::hbbea9d3fc97574084fs
   6:     0x5641ef0a3801 - rust_begin_unwind
   7:     0x5641ef0d37df - panicking::panic_fmt::h6c78ce0128588a957HK
   8:     0x5641ef0cf8b8 - panicking::panic::h98c7cbfc94ff8f45EGK
   9:     0x5641eee1513f - option::_<impl>::unwrap::unwrap::h16152199457913105795
                        at ../src/libcore/macros.rs:20
  10:     0x5641eee047c8 - _<impl>::start::start::h12834185189550730189
                        at /home/rust/.multirust/toolchains/nightly/cargo/git/checkouts/mioco-c9a859598988616b/master/src/lib.rs:2100
  11:     0x5641eee04149 - start_threads::start_threads::h7114804408540994820
                        at /home/rust/.multirust/toolchains/nightly/cargo/git/checkouts/mioco-c9a859598988616b/master/src/lib.rs:2368
  12:     0x5641eee03ebb - app::net2::main::hed6819897c005210Byc
                        at src/app/net2.rs:12
  13:     0x5641eee17e19 - main::hf894a5ce1ff19dc5aBc
                        at src/main.rs:60
  14:     0x5641ef0a5e14 - sys_common::unwind::try::try_fn::h2957776305083477838
  15:     0x5641ef0a3668 - __rust_try
  16:     0x5641ef0a5aaf - rt::lang_start::h0e35413b64fe04744kx
  17:     0x5641eee19619 - main
  18:     0x7fe17d7f060f - __libc_start_main
  19:     0x5641eece5ae8 - _start
  20:                0x0 - <unknown>

I can't think of any reason why the main thread (or the thread on which start_threads() is called) is not used to execute coroutines; am I missing something?

Echo example does not compile (new-design branch)

I just found this project and tried to compile it to see if it works. So I created a Cargo.toml like this:

[package]
name = "echo"
version = "0.1.0"
authors = ["Carlo Pires <[email protected]>"]

[dependencies.mio]
git = "https://github.com/carllerche/mio"

[dependencies.mioco]
git = "https://github.com/dpc/mioco"
branch = "new-design"

When building, I get the following error:

$ cargo build
    Updating registry `https://github.com/rust-lang/crates.io-index`
 Downloading winapi v0.1.23
 Downloading tempdir v0.3.4
 Downloading spin v0.3.1
 Downloading bytes v0.2.10
 Downloading regex v0.1.41
 Downloading log v0.3.1
 Downloading bitflags v0.1.1
 Downloading clock_ticks v0.0.5
 Downloading mmap v0.1.1
 Downloading gcc v0.3.11
 Downloading env_logger v0.3.1
 Downloading aho-corasick v0.3.0
 Downloading slab v0.1.2
 Downloading regex-syntax v0.2.1
 Downloading memchr v0.1.3
 Downloading rand v0.3.8
 Downloading coroutine v0.3.2
 Downloading nix v0.3.9
 Downloading libc v0.1.8
   Compiling spin v0.3.1
   Compiling tempdir v0.3.4
   Compiling aho-corasick v0.3.0
   Compiling coroutine v0.3.2
/home/carlopires/.cargo/registry/src/github.com-1ecc6299db9ec823/spin-0.3.1/src/mutex.rs:112:15: 112:17 error: expected identifier, found keyword `fn`
/home/carlopires/.cargo/registry/src/github.com-1ecc6299db9ec823/spin-0.3.1/src/mutex.rs:112     pub const fn new(user_data: T) -> Mutex<T>
                                                                                                       ^~
/home/carlopires/.cargo/registry/src/github.com-1ecc6299db9ec823/spin-0.3.1/src/mutex.rs:112:18: 112:21 error: expected `:`, found `new`
/home/carlopires/.cargo/registry/src/github.com-1ecc6299db9ec823/spin-0.3.1/src/mutex.rs:112     pub const fn new(user_data: T) -> Mutex<T>
                                                                                                          ^~~
   Compiling mio v0.4.1 (https://github.com/carllerche/mio#478cef3f)
Build failed, waiting for other jobs to finish...
Could not compile `spin`.


$ rustc --version
rustc 1.1.0 (35ceea399 2015-06-19)

So, is there a problem with my Rust? Am I missing something?

Attach custom data to each Coroutine for Scheduling purposes.

Requested in #36. Right now, SchedulerThread can only get the following information about each coroutine:

  • id - through CoroutineControl::id()
  • if it's blocked on yield_now() - through CoroutineControl::is_yielding()

Being able to attach some custom data (e.g. priorities) might greatly simplify a custom scheduler's job.
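
A hypothetical shape for such an API (every name below is invented for illustration; nothing like this exists in mioco today):

// Inside a coroutine: attach opaque data for the scheduler to read
// (hypothetical setter).
mioco::set_scheduler_data(Priority::High);

// Inside SchedulerThread::ready(), next to id() and is_yielding()
// (hypothetical getter).
let prio: Option<&Priority> = co.scheduler_data::<Priority>();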

Weird behavior with the scheduler

I'm trying to get multiple consumers for a single producer:

extern crate mioco;

use std::sync::Arc;
use mioco::mail;
use mioco::sync::Mutex;

fn main() {
    mioco::start_threads(5, move || {
        let (boxout, boxin) = mail::mailbox::<u64>();
        mioco::spawn(move || {
            let mut x = 0u64;
            loop {
                boxout.send(x);
                x = x + 1;
                mioco::sleep(500);
            }
        });

        let xboxin = Arc::new(Mutex::new(boxin));
        let xboxin2 = xboxin.clone();
        mioco::spawn(move || {
            loop {
                let x = { xboxin.lock().unwrap().read() };
                println!("{}\tA", x);
                mioco::sleep(2500);
            }
        });

        mioco::spawn(move || {
            loop {
                let x = { xboxin2.lock().unwrap().read() };
                println!("{}\tB", x);
                mioco::sleep(500);
            }
        });
        Ok(())
    });
}

This is what happens:

0   A
1   B
2   B
3   B
4   B
5   A
6   A
7   A
8   A
9   A
10  A
11  A
12  A
13  A
14  A
15  A
16  A
17  A
18  A
19  A
20  A
...

And it goes on and on; only A is printed from then on.

Everything quickly gets consumed by the first consumer coroutine, and only by it; nothing gets consumed by the second one any more.
A is the slow coroutine, sleeping for 2500 ms, while B yields faster (500 ms), so shouldn't the opposite happen?

Get rid of wrap() calls.

There's got to be a way to plug a wrap() call in there implicitly. I guess raw mio types would have to be wrapped everywhere, returning a custom type that does implicit wraps.
It might be possible.

MailboxInnerEnd recv blocks endlessly under certain conditions

First of all, nice project, but I think I found a bug.

What I am trying to accomplish is a blocking send operation on a mailbox. I want some kind of rendezvous channel, so I know when the other coroutine is done processing what I sent.

Also, it is not possible to just create two mailboxes, one for each direction, because I am spawning a variable number of coroutines.

To handle this I tried to create the following type:

let (first_tx, first_rx) = mioco::mailbox::<(String, mioco::MailboxOuterEnd<()>)>();

Allowing me to do this when I send a String to my first coroutine:

let (tx, rx) = mioco::mailbox::<()>();
let mut rx = mioco.wrap(rx);
first_tx.send((command, tx)).unwrap();
rx.recv().unwrap();

And to handle it like this on the other side:

let mut arg_recv = mioco.wrap(arg_recv);
loop {
    match arg_recv.recv() {
        Ok((command, result_send)) => {
            //do something with command
            result_send.send(()).unwrap();
         },
         Err(_) => {
             break
         },
     }
}

However, in my case rx.recv() never returns.

Here is a full runnable example, mostly a shortened version of the program I am trying to write.
If you try it out, you will see that "[Connection Task] Command returned" never appears in the console, and if you give it enough time (or comment out that sleep), mio will run out of tokens.

extern crate mioco;

use std::collections::HashMap;
use std::sync::Arc;

fn do_command(handler: u32, command: String) {
    println!("[COMMAND] Dummy Command: {} for Handler: {} executed", command, handler);
}

fn main() {
    mioco::start(move |mioco| {
        let mut mailboxes = HashMap::new();

        for handler in 0..4 {
            let (arg_send, arg_recv) = mioco::mailbox::<(String, mioco::MailboxOuterEnd<()>)>();
            mailboxes.insert(handler, arg_send);

            mioco.spawn(move |mioco| {
                let mut arg_recv = mioco.wrap(arg_recv);
                loop {
                    match arg_recv.recv() {
                        Ok((command, result_send)) => {
                            println!("[Handler Coroutine] Got command");
                            do_command(handler, command);
                            println!("[Handler Coroutine] Command exited");
                            result_send.send(()).unwrap();
                            println!("[Handler Coroutine] Send back result");
                        },
                        Err(_) => {
                            break
                        },
                    }
                }
                Ok(())
            });
        }

        let mailboxes = Arc::new(mailboxes);

        loop {
            //normally accept tcp connections
            //simulate using sleep
            mioco.sleep(2000);

            //spawn a new coroutine for a new "connection"
            let mailbox_ref = mailboxes.clone();
            mioco.spawn(move |mioco| {

                //normally read command and provider from tcp here
                let provider = 0;
                let command = "test".to_string();

                let (result_send, result_recv) = mioco::mailbox::<()>();
                let mut result_recv = mioco.wrap(result_recv);

                match mailbox_ref.get(&provider) {
                    Some(arg_send) => {

                        println!("[Connection Task] Sending command");
                        let mut result = arg_send.send((command, result_send));
                        if result.is_ok() {
                            println!("[Connection Task] Waiting for response");
                            result = result_recv.recv();
                        }
                        println!("[Connection Task] Command returned");

                        match result {
                            Ok(()) => println!("[Connection Task] Command successfully passed to provider {}", provider),
                            Err(_) => println!("[Connection Task] Passing command failed to provider {}", provider),
                        }

                    },
                    None => println!("[Connection Task] Provider {} not found for issued command", provider),
                };

                Ok(())
            });
        }
        Ok(())
    })
}

Implement "connect_wait()".

I found a workaround for the connect problem: I can do s.with_raw(|s| s.take_socket_error()) and inspect that after the stream is writeable.
Not ideal, but it works.
Mioco could expose that; it could be named connect_wait() or something.

mioco::Evented cannot be implemented externally

mioco::Evented is public but is constrained to types that are prv::EventedPrv, which is private. How can we have event sources defined in external crates? I began work on an event source for nanomsg, which should be a separate crate from mioco, but I can't see how to do that without making EventedPrv public.

0.1.0 TODO

  • Fix select operations
  • Update https://github.com/dpc/colerr
  • Squash and merge new-design into mainline
  • Update benchmarks
    • Write automatic benchmarks script, too lazy for that
    • against raw-mio echo
    • against libev
    • against Node.js echo

Mailbox optimizations and features.

Current mailbox seems sufficient for a start, but could potentially be optimized.

  • lockless implementation (currently using spinlocks)
  • ability to send messages in the other direction (currently it's only possible to send them into coroutines); a non-blocking channel in the other direction should be enough for a start

Should mioco catch panics by default?

Right now mioco catches all panics from coroutines and converts them into an ExitStatus::Panicked notification.

In #85 a config option was added to allow not catching, and we're wondering whether catching panics is justified at all.

The reasoning is:

  • coroutines should work just like native threads, and a native panic causes the whole process to exit
  • the user can easily add the panic catching manually (see the sketch below)

Looking for opinions about it.

Myself, ATM, I think removing panic catching is the way to go.
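
For reference, a minimal sketch of what adding the panic catching manually could look like, using std::panic::catch_unwind from plain std (risky_work() is a hypothetical function that may panic; none of this is mioco API):

use std::io;
use std::panic;

mioco::spawn(move || -> io::Result<()> {
    // Catch the panic in user code instead of relying on mioco's
    // ExitStatus::Panicked conversion.
    if panic::catch_unwind(|| risky_work()).is_err() {
        // handle or log the panic ourselves
    }
    Ok(())
});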

Improve logging messages.

Right now the log messages are rather inconsistent and incomplete. Go over the code and put log messages in carefully selected places. Use info! for important runtime operations, debug! for major events like switching from/to coroutines, and trace! for the details of event handling.
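
For instance, the intended split might look like this (the messages are hypothetical; the macros are the standard log crate ones):

#[macro_use]
extern crate log;

fn illustrate_levels(num_threads: usize, co_id: usize) {
    // info!: important runtime operations
    info!("mioco: starting {} scheduler threads", num_threads);
    // debug!: major events, like switching from/to coroutines
    debug!("coroutine {}: blocked on io, switching back to scheduler", co_id);
    // trace!: details of event handling
    trace!("event loop: readiness event for coroutine {}", co_id);
}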

Manual coroutine scheduling.

TODO:

Original description:

https://www.reddit.com/r/rust/comments/3j1z78/mio_coroutines_010_released/culos3u: " For example, because mioco is single-threaded right now, I'd love to ask mioco which tasks are pending, so I could implement a scheduler and move work to a different thread if I needed. Right now work has to be scheduled more coarsely up-front (such as moving "work units" into a shared queue and then using co-routines to drain the queue, instead of scheduling co-routines themselves). "

`sender_retry` can deadlock.

sender_retry will spin on the mio notify queue, while the send itself might be to self. In that case (and in many others) this can lead to a deadlock.

event loop runtimes benchmarks and mioco timeouts

You might want to check this out.

https://github.com/danoctavian/c10k-bench

I've gathered together some event-driven runtimes and implemented an echo server for each.

For Rust, I've put in mioco and coio.

The benchmark is crude and probably flawed. Also, I'm not measuring the average latency and standard deviation per request.

Any feedback is greatly welcome; I would like to make it a more solid benchmark, but I don't have much experience with benchmarks.

The trouble I am encountering with mioco is that requests time out when running 3000 simultaneous connections for 30s. If I run 128 threads, each doing connections sequentially for 30s, it completes.

The benching code is here:
https://github.com/danoctavian/c10k-bench/blob/master/bencher/src/Lib.hs

Thank you!

Blocking calls / longer computations wrapper.

Write a utility wrapper that:

  • spawns a new thread with a mailbox, and sends the result back,
  • blocks the coroutine on a mailbox receiver,
  • is all packaged like a normal function call,
  • integrates with #39.
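
A minimal sketch of the idea, assuming the mailbox API shown elsewhere on this page (mioco::mailbox()); the exact receive call varies across mioco versions (recv()/read(), possibly after wrap()), so treat the details as approximate:

use std::thread;

// Run a blocking computation on a native thread while the calling
// coroutine blocks cooperatively on a mailbox.
fn offload<F, T>(f: F) -> T
    where F: FnOnce() -> T + Send + 'static,
          T: Send + 'static,
{
    let (tx, rx) = mioco::mailbox::<T>();
    thread::spawn(move || {
        // Send the result back once the blocking work is done.
        let _ = tx.send(f());
    });
    // To the caller this looks like a normal function call.
    rx.recv().expect("worker thread died")
}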

lots_of_event_sources fails on my system.

I am not sure what is causing this, but the current master branch tests do not succeed on my system.
I have already tried updating Rust to the latest nightly, and running cargo clean and cargo update.

Log:

...
test tests::exit_notifier_simple ... ok
test tests::lots_of_event_sources ... FAILED
test tests::million_coroutines ... ignored
test tests::long_chain ... ok
...

---- tests::lots_of_event_sources stdout ----
    thread 'tests::lots_of_event_sources' panicked at 'explicit panic', src/tests.rs:200
thread 'tests::lots_of_event_sources' panicked at 'assertion failed: *finished_ok.lock().unwrap()', src/tests.rs:210


failures:
    tests::lots_of_event_sources

test result: FAILED. 26 passed; 1 failed; 1 ignored; 0 measured

Notify mailboxes

Hi, I'm contemplating mailbox functionality for mioco based around the notify feature of mio, but I wanted to get some feedback and see whether this kind of functionality and a PR are wanted before I go any further on my own.

In the reddit thread yesterday there was some discussion about the capability to communicate between coroutines as well as with the outside world, and I think notify-based mailboxes would accomplish this. I need functionality like this if I am to use mioco as the basis for my own (conceived) project, which is a small actors library loosely based on Haskell's distributed-process library (Cloud Haskell).

So this mailbox functionality may seem a bit too high-level for something like mioco, but let me share my sketchy design concept for the user API; if it can be simplified, that's great.

Thanks!

Jeremy

// A unique identifier for a co-routine that can be shared with the outside
// world. This may simply be an index.
struct CoID {
    id: usize,
}

// Message should be Send so this will work from the outside world.
type Message = Box<Any + 'static + Send>;

struct Coroutine {
    // collection of messages which have been delivered to this co-routine
    mailbox: Vec<Message>,
    // ...
}

// We'd need to expose this through some kind of proxy type returned
// by mioco::start, I think - that is the handle to the outside world.
// send uses mio::Sender::send to deliver the message; the Handler
// needs to implement mio::Handler::notify to put this in the correct
// mailbox and resume the coroutine if it is blocked on receive.
fn send(coid: CoID, message: Message) -> io::Result<()>

// This may belong to MiocoHandle...
// A coroutine can call this to attempt to receive a message of the
// specified type. Iterate through this Co's mailbox attempting
// downcast<T> and return the first success result. If no matching
// message is found, we set a timeout and will attempt the downcast
// again on any new message delivered before the timeout expires.
// Return None at timeout.
fn receive_timeout<T: Any + Send>(delay: u64) -> Option<Box<T>>

Track coroutines finishing status.

The mioco server should track which coroutines finished:

  • returning Ok(())
  • returning Err(_)
  • panicking

I'm not sure if we should count these, or what exactly, but it would be useful, e.g., in tests.

Idle coroutines.

It would be nice to have coroutines that work only when there is nothing better to do. Related to #36.

MiocoHandle::unwrap()

We need a new function that would release a previously wrapped EventSource from the coroutine handler's control, so it can be, e.g., moved to a different coroutine.

impl MiocoHandle {
    // ...
    pub fn unwrap<T: 'static>(&mut self, io: EventSource<T>) -> T
        where T: Evented
    {
        // ...
    }
    // ...
}

Rewrite mailbox (channel).

  • The receiving end should be an event source, so it can be used in select.
  • Model after std-lib channels (name and behavior); see the sketch below.
  • Make them work both inside and outside of mioco, by detecting where the receiving end is, and using mio notification / the std-lib mechanism to wake the receiver.
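
A sketch of the target shape, modeled on std::sync::mpsc (the module path and names below are assumptions, not a settled API):

// Illustrative post-rewrite API (names assumed).
let (tx, rx) = mioco::sync::mpsc::channel::<u32>();

mioco::spawn(move || -> std::io::Result<()> {
    tx.send(42).unwrap(); // usable from inside or outside mioco
    Ok(())
});

// The receiving end is an event source, so it can take part in select.
let v = rx.recv().unwrap();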

Spawned coroutines execute on two threads simultaneously

So I think I found the issue with my timers firing back-to-back. I haven't looked into the mioco code yet, but it appears that a spawned coroutine will be scheduled onto a new thread, but also continues executing on the current thread?

My code opens a socket, accepts a new connection and sends it to a coroutine, then starts a select loop with a timer:

loop {
    let stream = try!(listener.accept());
    let state_clone = state.clone();
    debug!("Inbound connection, spawning coroutine  {:?}", thread::current().name());
    mioco.spawn(move |mioco| {
        let peer_addr = stream.peer_addr().unwrap();

        debug!("New coroutine: Accepting connection from [{}]  ({:?})", peer_addr, thread::current().name());

        let mut stream = mioco.wrap(stream);
        state_machine::start(mioco, &state_clone, &mut stream);
        Ok(())
    });
}

state_machine::start()

pub fn start(mioco: &mut MiocoHandle, state: &Arc<RwLock<State>>,
                    mut stream: &mut EventSource<TcpStream>) {
    debug!("state_machine()  ({:?})", thread::current().name());

    let timer_id = mioco.timer().id();

    let mut delta = 0;
    let mut last_time = UTC::now();
    loop {
        let event = mioco.select_read();

        if event.id() == timer_id {
            let now = UTC::now();
            delta = now.timestamp() - last_time.timestamp();
            last_time = now;
            println!("Timer fired: {}, elapsed time: {}  (Thread: {:?})",
                now.format("%H:%M:%S%.9f"),
                delta, thread::current().name());

            mioco.timer().set_timeout(10000);
        } else {
            println!("Read available: {}", UTC::now().format("%H:%M:%S%.9f").to_string());
        }
    }
}

If you look at the logs, you can see that mioco_thread_1 accepts the connection and spawns a new coroutine, which is scheduled onto mioco_thread_2; the new coroutine starts up the state machine, but you can see that the state machine is actually running on both threads. Then they both start a timer and voila, back-to-back timers firing.

INFO:cormorant::network_handler: Binding 127.0.0.1:19919...
DEBUG:cormorant::network_handler: bound! on [127.0.0.1:19919]
DEBUG:cormorant::network_handler: Listening...  (Some("mioco_thread_1"))
INFO:cormorant::network_handler: Server bound on V4(127.0.0.1:19919)  (Some("mioco_thread_1"))
DEBUG:cormorant::network_handler: Connecting to external node [127.0.0.1:19920]...
DEBUG:cormorant::network_handler: Inbound connection, spawning coroutine  Some("mioco_thread_1")
DEBUG:cormorant::network_handler: New coroutine: Accepting connection from [127.0.0.1:55492]  (Some("mioco_thread_2"))
DEBUG:cormorant::network_handler::state_machine: state_machine()  (Some("mioco_thread_1"))
DEBUG:cormorant::network_handler::state_machine: state_machine()  (Some("mioco_thread_2"))
Read available: 02:40:45.914096000
Read available: 02:40:45.914216000
Timer fired: 02:40:46.091812000, elapsed time: 1  (Thread: Some("mioco_thread_1"))
Timer fired: 02:40:46.106088000, elapsed time: 1  (Thread: Some("mioco_thread_2"))
Timer fired: 02:40:56.096791000, elapsed time: 10  (Thread: Some("mioco_thread_1"))
Timer fired: 02:40:56.303424000, elapsed time: 10  (Thread: Some("mioco_thread_2"))
Timer fired: 02:41:06.292521000, elapsed time: 10  (Thread: Some("mioco_thread_1"))
Timer fired: 02:41:06.305685000, elapsed time: 10  (Thread: Some("mioco_thread_2"))

Select must tolerate spurious events.

The way mio works, select-like operations must tolerate spurious events to guarantee correctness, without causing the returned event source to block.

There seems to be just no other way to guarantee code correctness.

Timer can fire prematurely due to resolution differences.

SteadyTime keeps time in nanoseconds, while mio expresses delays in ms. If the duration between now and the timeout is rounded down when converting to ms, the timer can fire while the precise now is not yet past the timeout. This is not fatal, as the timer will just generate the next mio timeout_ms(0), but it's not optimal.

I guess rounding the time up when converting to ms is the proper fix; see the sketch below.
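
A sketch of that fix, rounding nanoseconds up when converting to milliseconds so the computed delay never undershoots the precise deadline:

// Round up so the mio timeout never fires before the nanosecond deadline.
fn ns_to_ms_round_up(delay_ns: u64) -> u64 {
    (delay_ns + 999_999) / 1_000_000
}

fn main() {
    assert_eq!(ns_to_ms_round_up(1), 1);         // 1 ns rounds up to 1 ms, not down to 0
    assert_eq!(ns_to_ms_round_up(2_000_000), 2); // exact multiples are unchanged
}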

Protect against spurious events.

mio can generate spurious events due to tokio-rs/mio#219.

Spurious events can mess with select() semantics. After a coroutine is resumed from select(), the event source that was returned must not block on the next corresponding operation. If the waking event was a spurious one, the whole coroutine could block on something that might not even happen again.

Save slab by keeping only blocked io in it.

Right now mioco keeps all the event sources in the slab. But as one coroutine is usually blocked on only one event source, it makes sense to add event sources to the slab only when they are blocked on (EventSourceRefShared::reregister) and remove them when they are not (EventSourceRefShared::unreregister).

The downside is that event sources will have different tokens over their lifetime (harder debugging?), and registering and deregistering might add some overhead (though it seems tiny enough to be worth the smaller slab: better cache utilization, etc.).

Initial TODO list

  • Is this practical and does it make sense?
  • Evaluate using coroutine::Context instead of coroutine::Coroutine. Maybe there are some gains by not using global scheduling etc.
  • Should more than just TcpStream be supported?
  • API to trim needless Stacks and Contexts allocations?

Can't try mioco's echo server example

$ rustc --version
rustc 1.8.0-nightly (18b851bc5 2016-01-22)
$ grep mioco Cargo.lock 
name = "rust_mioco_test"
 "mioco 0.2.0 (registry+https://github.com/rust-lang/crates.io-index)",
name = "mioco"
$ target/debug/rust_mioco_test 
Segmentation fault
$ rust-gdb -args target/debug/rust_mioco_test
...
(gdb) r
Starting program: /mnt/src/_/rust_mioco_test/target/debug/rust_mioco_test 
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/i386-linux-gnu/libthread_db.so.1".
[New Thread 0xf6fffb40 (LWP 23443)]
[New Thread 0xf6dffb40 (LWP 23444)]
[New Thread 0xf65ffb40 (LWP 23445)]

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0xf6fffb40 (LWP 23443)]
0x565e1d6a in sys_common::net::TcpListener::socket_addr::hf48f84ba8af5633bGAs ()
(gdb) bt full
#0  0x565e1d6a in sys_common::net::TcpListener::socket_addr::hf48f84ba8af5633bGAs ()
No symbol table info available.
#1  0x565e1d3e in net::tcp::TcpListener::local_addr::h3dad58d2c54df282slj ()
No symbol table info available.
#2  0x565cdff1 in mio::sys::unix::tcp::TcpListener::local_addr (self=0xf6826018)
    at /home/vi/.cargo/registry/src/github.com-48ad6e4054423464/mio-0.5.0/src/sys/unix/tcp.rs:156
No locals.
#3  0x565cdfa1 in mio::net::tcp::TcpListener::local_addr (self=0xf6826018) at /home/vi/.cargo/registry/src/github.com-48ad6e4054423464/mio-0.5.0/src/net/tcp.rs:214
No locals.
#4  0x565bb4a6 in mioco::tcp::TcpListener::local_addr (self=0xf59ff864) at /home/vi/.cargo/registry/src/github.com-48ad6e4054423464/mioco-0.2.0/src/tcp.rs:25
No locals.
#5  0x5657e5ba in fnfn () at src/main.rs:20
        addr = V4 = {SocketAddrV4 = {inner = sockaddr_in = {sin_family = 2, sin_port = 45845, sin_addr = in_addr = {s_addr = 16777343}, 
              sin_zero = "\000\000\000\000\000\000\000"}}}
        listener = TcpListener = {RcEvented<mio::net::tcp::TcpListener> = {Rc<core::cell::RefCell<mioco::EventedShared<mio::net::tcp::TcpListener>>> = {
              _ptr = Shared<alloc::rc::RcBox<core::cell::RefCell<mioco::EventedShared<mio::net::tcp::TcpListener>>>> = {
                pointer = NonZero<*const alloc::rc::RcBox<core::cell::RefCell<mioco::EventedShared<mio::net::tcp::TcpListener>>>> = {0xf6826000}, 
                _marker = PhantomData<alloc::rc::RcBox<core::cell::RefCell<mioco::EventedShared<mio::net::tcp::TcpListener>>>>}}}}
#6  0x5657e470 in rust_mioco_test::boxed::F.FnBox<A>::call_box (self=0x1, args=0) at ../src/liballoc/boxed.rs:541
No locals.
#7  0x565ada17 in fnfn () at /home/vi/.cargo/registry/src/github.com-48ad6e4054423464/mioco-0.2.0/src/lib.rs:1007
        coroutine = 0xf778d00c
        f = Box<FnBox<()>>
        arg = 4151889932
#8  0x565ad2b8 in fnfn () at ../src/libstd/panic.rs:260
        f = closure = {4151889932}
        result = 0xf59ffbac
#9  0x565ad26f in mioco::sys_common::unwind::try::try_fn<closure> (opt_closure=0xf59ffafc "") at ../src/libstd/sys/common/unwind/mod.rs:153
        opt_closure = 0xf59ffafc
#10 0x565e6145 in __rust_try ()
No symbol table info available.
#11 0x565e35d8 in sys_common::unwind::try::inner_try::h58edf5aa3f4d3bba5bt ()
No symbol table info available.
#12 0x565ad1f6 in mioco::sys_common::unwind::try<closure> (f=closure = {...}) at ../src/libstd/sys/common/unwind/mod.rs:123
Python Exception <class 'gdb.error'> Cannot convert value to int.: 
        f = {RUST$ENCODED$ENUM$0$1$None = Some = {closure = {closure = {0}, 0x0}}}
#13 0x565acf6c in mioco::panic::recover<closure,core::result::Result<(), std::io::error::Error>> (f=closure = {...}) at ../src/libstd/panic.rs:260
        result = 0xf59ffbac
        result = None
#14 0x565abe3f in mioco::Coroutine::spawn::init_fn (arg=4151889932) at /home/vi/.cargo/registry/src/github.com-48ad6e4054423464/mioco-0.2.0/src/lib.rs:998
        res = Ok = {Ok = {0}}
        coroutine = 0x0
        id = CoroutineId = {0}
        ctx = 0x0
#15 0x565abd83 in Coroutine::spawn::init_fn::h782ba2051dd1f929wjc ()
No symbol table info available.
---Type <return> to continue, or q <return> to quit---
#16 0x00000000 in ?? ()
No symbol table info available.

System is i686 Debian Wheezy running x86_64 kernel.

synchronous mailboxes?

Hi,

And thank you for this great crate.

Mailboxes are currently unlimited, with send() never blocking. Is there any plan to also implement synchronous mailboxes with a maximum size, where send() blocks when the queue is full?

Thanks.
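
(For comparison, std's bounded channels behave exactly as requested; a minimal std-only example:)

use std::sync::mpsc::sync_channel;

fn main() {
    // Bounded channel with capacity 1: send() blocks once the buffer is full.
    let (tx, rx) = sync_channel::<u64>(1);
    tx.send(1).unwrap(); // fits in the buffer
    // A second tx.send(2) here would block until rx.recv() frees the slot.
    assert_eq!(rx.recv().unwrap(), 1);
}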

Timer.

Mioco needs a timer. Combined with select(), that would give us free timeout functionality on every possible IO.

Change u32 bitmask into vec<u32> bitmask.

  • Change the u32 bitmask for EventSource indexes into a Vec<u32> bitmask, so even a ridiculously big number of EventSources can work.
  • Optimize the for loops with "count leading zeros"-like instructions (I'm experienced with the ARM ISA, not the x86 one) for fast lookup; see the sketch below.
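
A sketch of such a scan over a Vec<u32> bitmask, using trailing_zeros(), which compiles down to the CPU's count-zeros instruction:

// Visit every set bit in the bitmask, lowest index first.
fn for_each_set_bit<F: FnMut(usize)>(mask: &[u32], mut f: F) {
    for (word_idx, &word) in mask.iter().enumerate() {
        let mut w = word;
        while w != 0 {
            let bit = w.trailing_zeros() as usize;
            f(word_idx * 32 + bit);
            w &= w - 1; // clear the lowest set bit
        }
    }
}

fn main() {
    let mask = vec![0b1010, 0b1];
    let mut set = Vec::new();
    for_each_set_bit(&mask, |i| set.push(i));
    assert_eq!(set, vec![1, 3, 32]);
}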

Evaluate using coroutine::Context instead of coroutine::Coroutine. Maybe there are some gains by not using global scheduling etc.

Mioco requires only a context switch to and from each handler, and does not require the fully flexible coroutine model that keeps track of runnable coroutines, etc.

Right now mioco uses coroutine primitives that are potentially shared with other users of the coroutine crate. If any handler were to call Coroutine::yield() or other control mechanisms on its own, that could lead to rather unexpected results within mio.

That's another reason to plug into the lower-level coroutine::Context interface.

Mioco tests can hang from time to time.

[Switching to thread 22 (Thread 0x7f486bfff700 (LWP 18921))]
#0  0x00007f489240b043 in epoll_wait () from /lib64/libc.so.6
=> 0x00007f489240b043 <epoll_wait+51>:  48 8b 3c 24     mov    (%rsp),%rdi
(gdb) bt
#0  0x00007f489240b043 in epoll_wait () from /lib64/libc.so.6
#1  0x000055f34f206899 in poll::Poll::poll::h0e2fda78f9477e43Xhc ()
#2  0x000055f34f0723e4 in event_loop::EventLoop$LT$H$GT$::run_once::h17108033434824725424 ()
#3  0x000055f34f159e5d in Mioco::thread_loop::h2685941314194202616 ()
#4  0x000055f34f156447 in tests::simple_mutex::h225dca9689e2840cIle ()
#5  0x000055f34f1e7fd7 in boxed::F.FnBox$LT$A$GT$::call_box::h3360770150654781710 ()
#6  0x000055f34f1ea6cc in sys_common::unwind::try::try_fn::h11980588905488300950 ()
#7  0x000055f34f2134ec in __rust_try ()
#8  0x000055f34f210cee in sys_common::unwind::inner_try::h876b2793e1ec4011kft ()
#9  0x000055f34f1eaa4b in boxed::F.FnBox$LT$A$GT$::call_box::h10282852815371371690 ()
#10 0x000055f34f2151c1 in sys::thread::Thread::new::thread_start::h36aef2efeb591414fUx ()
#11 0x00007f4891bec60a in start_thread () from /lib64/libpthread.so.0
#12 0x00007f489240aa4d in clone () from /lib64/libc.so.6

@Drakulix, I think this is the issue that you've found.

Growable slab.

When running out of slab space, it would be better to just resize the slab, instead of panicking or whatever mioco is doing now in such a case. Also, maybe by using some cheap-lookup HashMap, we could eliminate the problem altogether. And by using ever-increasing tokens, we could eliminate the potential spurious events caused by tokio-rs/mio#219, by just never reusing the same token (at least not until the whole Token space wraps); see the sketch below.
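
A sketch of the ever-increasing token idea (illustrative only, not mioco's implementation; mio::Token is a plain usize newtype):

extern crate mio;
use mio::Token;

// Tokens only repeat after the whole usize space wraps, so a stale
// event carrying an old token can be detected and ignored.
struct TokenAlloc {
    next: usize,
}

impl TokenAlloc {
    fn alloc(&mut self) -> Token {
        let t = Token(self.next);
        self.next = self.next.wrapping_add(1);
        t
    }
}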

Move to CopyFair License.

I'd like this, and probably all Open Source projects of mine, to move to the CopyFair license when that license is actually ready. To get more of an idea of what CopyFair is about:

I strongly believe that self-sustaining Open & Free Software is important, and I think it's a tragedy that so many important Open Source projects struggle to fund their development and maintenance, while big companies make a lot of profit using them, often without giving back.

Until then this project will stay licensed as MPL2, but I'd like to give everyone a heads-up, and get an "I agree" from everyone before contributing.

Multithreading support?

Implement multithreading support.

Some TODO-notes:

  • Should new Coroutines be started by default on the thread of the parent Coroutine, or should there be a Scheduler (or SchedulerThread) function that can override this?
  • The default FiFoScheduler should randomly distribute new Coroutines between its threads. Otherwise everything will be executed on the first thread, which kind of misses the whole point.
  • What should the exit conditions be for the whole Mioco server and its threads? Should SchedulerThread take an argument providing an API to shut down a thread/everything? Or does it matter at all in real-life software?
  • How to accommodate Idle Coroutines from #39? Coroutines should be able to call MiocoHandle::yield() to put themselves into an idle state, which is basically like Ready but not blocked on anything. After CoroutineControl::resume() they would appear in SchedulerThread::ready() in the next tick batch; or should SchedulerThread deal with them distinctly?
