
wee_alloc's Issues

Allow heap to start not at a page boundary

Motivation

We're trying to build an application where WASM is used to run "microfunctions", small stateless functions that can be written once in Rust and then ported via WASM to run in a variety of runtimes. A WASM Memory may be built once and then used again and again for multiple microfunction invocations. The buffers backing the WASM Memories would be owned and destroyed by the host environment.

One problem we're running into is that the WASM Memories, at 64 KiB page sizes, are unsuitable for scaling to dozens of microfunctions. It seems unlikely that the WASM spec would ever allow smaller page sizes, so the best alternative would be to guarantee that each microfunction fits within one page.

We've managed to get the Rust call stack and static memory to fit in a small chunk of linear memory, such that most of the first page is empty. However, per #61, wee_alloc seems to always add a new page for its heap, with no option to re-use empty space in the existing linear memory space.

Proposed Solution

Allow wee_alloc to set the head of its heap to an arbitrary location. The location could be provided by a Global, for example. If wee_alloc runs out of space in that initial block (between the start position and the current memory size), then it can allocate more pages as usual.

Alternatives

  1. Decreasing the page size, but this would require a fundamental change to the WASM MVP spec, which seems unlikely.
  2. Accepting that code written with wee_alloc always requires at least 128 KiB of linear memory.

Additional Context

The project is called OmnICU. We're hoping to share more details soon. For now, you can track some of our work at https://github.com/i18n-concept/rust-discuss

CC @hagbard @nciric @echeran

Unbounded Memory Leak

Describe the Bug

Making two large allocations, then dropping them in the order they were allocated, leaks memory with wee_alloc but not with the default allocator.

Steps to Reproduce

Native code

  1. Clone this branch of my app: https://github.com/CraigMacomber/wee_alloc_leak_png
  2. cargo run
  3. GB of memory is consumed per second
  4. Comment out use of wee_alloc global_allocator.
  5. cargo run
  6. Memory use does not increase.

Wasm

  1. Clone this branch of my app: https://github.com/noencke/uuid-cluster/tree/wee_alloc_leak
  2. npm run serve (runs wasm-pack, npm install and webpack dev server)
  3. go to localhost:8080 in browser
  4. click the button (it's the only thing on the page); each click increases the heap by 131 MB

Expected Behavior

The second time the allocations are made, they should be served from the free list.

Actual Behavior

Repeated allocations grow the heap without bound.

Additional Context

This seems similar to an issue mentioned in #105.
The heap size does not increase when using the default allocator.

The Rust source for the example is just:

extern crate wee_alloc;

// Use `wee_alloc` as the global allocator.
#[global_allocator]
static ALLOC: wee_alloc::WeeAlloc = wee_alloc::WeeAlloc::INIT;

pub fn leak_test() {
    // This leaks when using wee_alloc
    let a = Box::new([0; 85196]);
    let b = Box::new([0; 80000]);
    drop(a);
    drop(b);
}

fn main() {
    loop {
        leak_test();
    }
}

free_cell_layout test fails on macOS

Summary

Running cargo test on library code with either stable or nightly Rust fails on macOS Mojave.

Steps to Reproduce

  • First clone this repository that uses wee_alloc: ..............
  • cd $REPO/wee_alloc
  • cargo test

Actual Results

---- free_cell_layout stdout ----
thread 'free_cell_layout' panicked at 'assertion failed: `(left == right)`
  left: `Bytes(17)`,
 right: `Bytes(24)`: Safety and correctness depends on FreeCell being only one word larger than CellHeader', wee_alloc/src/lib.rs:345:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
stack backtrace:
   0: std::sys::unix::backtrace::tracing::imp::unwind_backtrace
             at src/libstd/sys/unix/backtrace/tracing/gcc_s.rs:49
   1: std::sys_common::backtrace::_print
             at src/libstd/sys_common/backtrace.rs:71
   2: std::panicking::default_hook::{{closure}}
             at src/libstd/sys_common/backtrace.rs:59
             at src/libstd/panicking.rs:211
   3: std::panicking::default_hook
             at src/libstd/panicking.rs:221
   4: <std::panicking::begin_panic::PanicPayload<A> as core::panic::BoxMeUp>::get
             at src/libstd/panicking.rs:476
   5: std::panicking::continue_panic_fmt
             at src/libstd/panicking.rs:390
   6: std::panicking::try::do_call
             at src/libstd/panicking.rs:325
   7: core::char::methods::<impl char>::escape_debug
             at src/libcore/panicking.rs:77
   8: wee_alloc::free_cell_layout
             at wee_alloc/src/lib.rs:345
   9: wee_alloc::free_cell_layout::{{closure}}
             at wee_alloc/src/lib.rs:344
  10: core::ops::function::FnOnce::call_once
             at /rustc/b68fc18c45350e1cdcd83cecf0f12e294e55af56/src/libcore/ops/function.rs:238
  11: <F as alloc::boxed::FnBox<A>>::call_box
             at src/libtest/lib.rs:1471
             at /rustc/b68fc18c45350e1cdcd83cecf0f12e294e55af56/src/libcore/ops/function.rs:238
             at /rustc/b68fc18c45350e1cdcd83cecf0f12e294e55af56/src/liballoc/boxed.rs:673
  12: panic_unwind::dwarf::eh::read_encoded_pointer
             at src/libpanic_unwind/lib.rs:102

Expected Results

All tests passing.

Shell scripts of repo not working due to wee_alloc compilation errors

Describe the Bug

Execution of check.sh, build.sh and test.sh fails due to a compilation problem.
I commented out the i686-pc-windows-gnu target-related checks, since I can't run them in my environment, and the compilation of wee_alloc fails with this output:

++ dirname ./check.sh
+ cd .
+ cd ./wee_alloc
+ cargo check
   Compiling wee_alloc v0.4.5 (path_to/wee_alloc/wee_alloc)
    Finished dev [unoptimized + debuginfo] target(s) in 0.20s
+ cargo check --target wasm32-unknown-unknown
   Compiling wee_alloc v0.4.5 (path_to/wee_alloc/wee_alloc)
    Finished dev [unoptimized + debuginfo] target(s) in 0.18s
+ cargo check --features size_classes
    Finished dev [unoptimized + debuginfo] target(s) in 0.03s
+ cargo check --features size_classes --target wasm32-unknown-unknown
    Finished dev [unoptimized + debuginfo] target(s) in 0.02s
+ cd -
path_to/wee_alloc
+ cd ./test
+ cargo check
   Compiling wee_alloc v0.4.5 (path_to/wee_alloc/wee_alloc)
    Checking env_logger v0.5.13
error[E0432]: unresolved imports `core::alloc::Alloc`, `core::alloc::AllocErr`
   --> wee_alloc/src/lib.rs:221:27
    |
221 |         use core::alloc::{Alloc, AllocErr};
    |                           ^^^^^  ^^^^^^^^
    |                           |      |
    |                           |      no `AllocErr` in `alloc`
    |                           |      help: a similar name exists in the module: `AllocError`
    |                           no `Alloc` in `alloc`

error: aborting due to previous error

For more information about this error, try `rustc --explain E0432`.
error: could not compile `wee_alloc`

To learn more, run the command again with --verbose.
warning: build failed, waiting for other jobs to finish...
error: build failed

The same happens with the other scripts.

Steps to Reproduce

Just run any of the check, build or test scripts.

Expected Behavior

I would expect the imports to be correct. Alloc no longer exists in libcore; it was renamed to Allocator, AFAIK. The same happened to AllocErr, which was renamed to AllocError.

The Allocator trait's functions have also been updated.

If I'm not missing or omitting anything, and the repo is still maintained, I would like to update the imports and solve this issue. Otherwise, could you please tell me what I'm missing?

Investigate making the free list doubly linked

Summary

Investigate rounding up allocations to at least two words, and making the free list doubly linked.

Motivation

This could simplify code, maybe shrink code size, and lessen fragmentation.

Details

  • We could remove deferred consolidation of cells, simplifying the code and hopefully also shrinking code size.

  • We could always consolidate a free cell with both of its neighbors, if they are also free. Right now we can only do one or the other because the free list is singly linked, and doesn't afford these kinds of manipulations in O(1) time.

  • Downside is that heap allocations of size < 2 words get rounded up. I think this is probably an OK choice.

  • We would give FreeCell another link, prev_free_raw, that points to the previous cell in the free list.

  • Anywhere we insert into or remove from the free list, we would need to make sure that the new link is kept valid.

Happy to mentor anyone who wants to try their hand at this!
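To make the O(1) claim concrete, here is a small sketch of a doubly linked free list. Indices into an arena stand in for the raw pointers wee_alloc would use, so the sketch stays in safe Rust; the field names (`prev_free`, `next_free`) are illustrative, not wee_alloc's actual types.

```rust
// Index-based arena stands in for raw pointers so the sketch stays safe.
struct FreeCell {
    prev_free: Option<usize>,
    next_free: Option<usize>,
}

struct FreeList {
    cells: Vec<FreeCell>,
    head: Option<usize>,
}

impl FreeList {
    fn push_front(&mut self, i: usize) {
        self.cells[i].prev_free = None;
        self.cells[i].next_free = self.head;
        if let Some(h) = self.head {
            self.cells[h].prev_free = Some(i);
        }
        self.head = Some(i);
    }

    // With a back-link, unlinking an arbitrary cell is O(1): fix up both
    // neighbors directly. A singly linked list would have to walk from the head.
    fn unlink(&mut self, i: usize) {
        let (p, n) = (self.cells[i].prev_free, self.cells[i].next_free);
        match p {
            Some(p) => self.cells[p].next_free = n,
            None => self.head = n,
        }
        if let Some(n) = n {
            self.cells[n].prev_free = p;
        }
        self.cells[i].prev_free = None;
        self.cells[i].next_free = None;
    }
}
```

This is exactly the manipulation that enables consolidating a free cell with both neighbors at once.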

Possible allocator memory corruption?

Describe the Bug

Unfortunately, I'm not sure. Using wee_alloc causes bugs in Bevy, suggesting a bug somewhere in the allocator.

Steps to Reproduce

See bevyengine/bevy#3763

Expected Behavior

No memory corruption bugs.

Actual Behavior

Memory corruption.

windows with_exclusive_access implementation is not threadsafe

Summary

The Windows implementation of with_exclusive_access is not safe with respect to threads.

Thought experiment to reproduce

Consider the code:

// If we haven't been through here yet, initialize the mutex.
if *self.lock.get() == NULL {
    *self.lock.get() =
        CreateMutexW(NULL as *mut SECURITY_ATTRIBUTES, FALSE, NULL as *mut u16);
    extra_assert!(*self.lock.get() != NULL);
}

If we have two threads requesting exclusive access, then something akin to the following can happen:

T1: Checks for the lock at line 50, finds it to be NULL.
T1: Executes any code up to the point of actually storing the CreateMutexW return value.
T1: Gets suspended.
T2: Checks for the lock at line 50, finds it to be NULL.

We're now in trouble, because we have two threads that are trying to create the mutex for a single object, and will therefore be confused about who may access the data.

let code = WaitForSingleObject(*self.lock.get(), INFINITE);
extra_assert_eq!(
    code,
    WAIT_OBJECT_0,
    "WaitForSingleObject should return WAIT_OBJECT_0"
);
let result = f(&mut *self.inner.get());
let code = ReleaseMutex(*self.lock.get());
extra_assert!(code != 0, "ReleaseMutex should return nonzero");

T2: Creates the lock, locks the mutex at line 56.
T2: Executes any code up to line 65, where the mutex is released.
T2: Gets suspended.
T1: Resumes, stores (and overwrites!) the lock with its mutex.
T1: Continues executing, possibly running f concurrently with T2, depending on where T2 was stopped.

This is a pretty unlikely scenario, as it can only happen on the very first exclusive access, and then only with some particular scheduling decisions on the part of the OS.

I don't believe the pthreads implementation is vulnerable to this, as the Exclusive<T> value's lock is initialized during creation, rather than at runtime during with_exclusive_access. It's possible that the semantics of UnsafeCell make the scenario above not work; I'm not 100% sure either way since I'm not that familiar with UnsafeCell. But looking at UnsafeCell's implementation didn't turn up anything that would make this safe.

A straightforward way to fix this on the Windows side would be to use slim reader/writer locks, which can be initialized to 0 at Exclusive<T> creation time, rather than requiring a runtime call like critical sections or mutexes.
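For illustration, the racy lazy-initialization pattern can also be made safe with a compare-and-swap: the losing thread discards its freshly created handle instead of overwriting the winner's. This is a portable host-side sketch with handles modeled as nonzero usize values, not wee_alloc's actual code (and not the SRW-lock fix itself).

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// 0 means "not yet initialized", mirroring the NULL check in the issue.
static LOCK_HANDLE: AtomicUsize = AtomicUsize::new(0);

fn get_or_init(create: fn() -> usize, destroy: fn(usize)) -> usize {
    let cur = LOCK_HANDLE.load(Ordering::Acquire);
    if cur != 0 {
        return cur;
    }
    let fresh = create();
    // Publish atomically: only one thread's handle can win.
    match LOCK_HANDLE.compare_exchange(0, fresh, Ordering::AcqRel, Ordering::Acquire) {
        Ok(_) => fresh,
        Err(winner) => {
            // Another thread won the race; discard our handle and use theirs.
            destroy(fresh);
            winner
        }
    }
}
```

With this shape, the "T1 overwrites T2's mutex" interleaving from the thought experiment becomes impossible, because the store only succeeds if the slot is still 0.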

Cloudflare Worker response bodies being mangled when using wee_alloc

Originally reported as cloudflare/workers-rs#64

Describe the Bug

I’m using the worker crate to experiment with using async-graphql in a Cloudflare Worker. When using wee_alloc as the global allocator, the first 2-4 bytes of the worker’s response body are being scrambled. The 'scrambling' is consistent between requests to the worker. There are several different ways of creating response bodies in the worker crate, and for each of them the bytes are scrambled slightly differently.

This bug does not occur when wee_alloc is not used. The bug only seems to occur when using the methods of the Response type to create a response body; the methods for creating headers, as well as the async-graphql and serde_json crates, are working fine.

Steps to Reproduce

  1. Clone this repo: https://github.com/thomasfoster96/wealloc-issue
  2. Install wrangler by running cargo install wrangler.
  3. Run wrangler dev to build and run the worker locally.
  4. Navigate to http://127.0.0.1:8787 (or whatever URL wrangler gives) to see the output.
  5. Wrangler will log the output of calling serde_json::to_string inside the worker, which will be the expected output.

Expected Behaviour

The worker should respond with the following JSON response body:

{"data":null,"errors":[{"message":"Bad Request"}]}

Actual Behavior

In practice, the first 2 to 4 bytes of the response are scrambled, so the worker is responding with the following:

����ta":null,"errors":[{"message":"Bad Request"}]}

Visiting https://wealloc-issue.thomasfoster.workers.dev/ will also give you this response.

Additional Context

I’m not particularly experienced using Rust, wee_alloc or Cloudflare Workers, but my guess is that for some reason the first four bytes of the response body are being interpreted as a usize at some point. The fix suggested in the original issue was just that using wee_alloc should be a last resort as it’s probably an unnecessary optimisation (see cloudflare/workers-rs#64 (comment), Cloudflare Workers allow workers to be up to 1MB in size after compression).

Team

wee_alloc needs a team of people who can review each other's pull requests, and make consensus-based decisions together about its future.

Anyone who has made two meaningful contributions to wee_alloc should seriously consider joining the team!

Drop a comment here if you are interested in joining.

Binary size larger than advertized

Summary

Compiling a test app with different allocators (dlmalloc and wee_alloc), the resulting binaries of the wee_alloc builds are not as small as expected. The benefit is just 3 KB: 25 KB for the dlmalloc build versus 22 KB for the wee_alloc build, for code using simple String allocation.

With an application without any dynamic memory allocation, wee_alloc adds roughly 2,800 bytes:

   829  wasm-game-of-life-dlmalloc/pkg/wasm_game_of_life_bg.wasm
3666  wasm-game-of-life-wee_alloc/pkg/wasm_game_of_life_bg.wasm

With an application using simple String allocation, wee_alloc adds circa 21,000 bytes, ending up just 3 KB smaller than the dlmalloc build.

25179  wasm-game-of-life-dlmalloc-dyn/pkg/wasm_game_of_life_bg.wasm
22141  wasm-game-of-life-wee_alloc-dyn/pkg/wasm_game_of_life_bg.wasm

Steps to Reproduce

git clone https://github.com/frehberg/wasm-dyn-mem.git
cd wasm-dyn-mem/rust-bindgen
make build print-sizes

Expected Results

Linking against wee_alloc instead of dlmalloc, I expected the binaries to be much smaller, with a larger benefit over dlmalloc. Instead, in some cases the binary is larger, and otherwise the difference is just 22 KB vs. 25 KB.
Maybe wee_alloc is using some code patterns that cannot be optimized as well as expected.

Create a C API posix-compatible malloc/free wrapper crate

Summary

Add a C API crate that wraps wee_alloc and provides POSIX malloc and free.

Motivation

C and C++ projects targeting wasm could benefit from wee_alloc too, and more users = more bug reports and reliability work and all that good stuff.

Details

  • New crate at the top level of the repo, depending on wee_alloc

  • Wraps a wee_alloc global allocator and exposes malloc and free (and I think realloc is part of the POSIX spec too?)
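A minimal host-side sketch of the wrapper shape, using the classic size-header trick so free can reconstruct the Layout. The names wee_malloc/wee_free are placeholders (to avoid clashing with libc symbols in a native build), and it forwards to std::alloc rather than wee_alloc so the sketch runs anywhere.

```rust
use std::alloc::{alloc, dealloc, Layout};

// Stash the requested size in a header word so `wee_free` can rebuild the Layout.
const HEADER: usize = std::mem::size_of::<usize>();

#[no_mangle]
pub unsafe extern "C" fn wee_malloc(size: usize) -> *mut u8 {
    let total = match size.checked_add(HEADER) {
        Some(t) => t,
        None => return std::ptr::null_mut(),
    };
    // Note: real malloc must honor max_align_t; word alignment keeps this short.
    let layout = match Layout::from_size_align(total, HEADER) {
        Ok(l) => l,
        Err(_) => return std::ptr::null_mut(),
    };
    unsafe {
        let base = alloc(layout);
        if base.is_null() {
            return base;
        }
        (base as *mut usize).write(size); // record size for `wee_free`
        base.add(HEADER)
    }
}

#[no_mangle]
pub unsafe extern "C" fn wee_free(ptr: *mut u8) {
    if ptr.is_null() {
        return; // free(NULL) is a no-op per POSIX
    }
    unsafe {
        let base = ptr.sub(HEADER);
        let size = (base as *const usize).read();
        let layout = Layout::from_size_align(size + HEADER, HEADER).unwrap();
        dealloc(base, layout);
    }
}
```

On wasm32-unknown-unknown the symbol clash concern goes away, so the real crate could export malloc/free directly.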

Storage at arbitrary address

Motivation

I want to place the memory pool at some arbitrary address because my embedded device happens to have some accessible memory there. The embedded static array is too inflexible for that.

Proposed Solution

Adding another constructor to the WeeAlloc struct where I can just pass in pointer + length or a slice and this memory is used no questions asked.
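To illustrate the requested contract (pointer + length, used with no questions asked), here is a toy bump allocator over a caller-provided region. The RawRegion type is hypothetical and not part of wee_alloc.

```rust
// Toy bump allocator over a caller-provided memory region.
struct RawRegion {
    base: *mut u8,
    len: usize,
    used: usize,
}

impl RawRegion {
    /// Safety: caller guarantees `base..base+len` is valid, exclusively owned
    /// memory (e.g. a device-specific RAM window at a fixed address).
    unsafe fn new(base: *mut u8, len: usize) -> RawRegion {
        RawRegion { base, len, used: 0 }
    }

    fn alloc(&mut self, size: usize) -> Option<*mut u8> {
        if self.len - self.used < size {
            return None; // out of space in the region
        }
        let p = unsafe { self.base.add(self.used) };
        self.used += size;
        Some(p)
    }
}
```

A WeeAlloc constructor with this signature would subsume the static array backend, since the array is just one particular region.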

Move units.rs into separate crate?

Maybe a weird idea, but what do you think about the following?

Summary

Move units.rs to a separate crate and publish it on crates.io?

Motivation

I came up with this idea while working on wasmi. I noticed that Pages→Bytes and Bytes→Pages conversions are quite a common thing people want to do when working with parity-wasm/wasmi.
Example here and there.

Details

I think just taking units.rs and moving it into another crate will be enough.
I'm ready to take this on if we agree the idea is worth it.

Add ability to log allocations/frees for easily reporting bugs

Summary

We would write all allocations and frees into a buffer that we periodically flush to a file. When people report a bug, they could enable this feature, and provide their log.

Motivation

Easier to file bugs. Easier to reproduce bugs. Therefore, easier to fix bugs.

Details

  • Behind a cargo feature.

  • I guess we could use std::io::BufWriter and say that this feature requires std.

  • Ideally this would use the same operation definitions as in ./test, so we could trivially turn logs into regression tests, and also shrink them to get reduced test cases.
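A minimal sketch of the wrapper shape: an allocator that records every call before forwarding to an inner allocator. Atomic counters stand in for the proposed log buffer (a real version, behind the cargo feature, would append (op, ptr, size) records and flush them via BufWriter); it forwards to System here instead of wee_alloc so it runs anywhere.

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

struct CountingAlloc;

static ALLOCS: AtomicUsize = AtomicUsize::new(0);
static FREES: AtomicUsize = AtomicUsize::new(0);

unsafe impl GlobalAlloc for CountingAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        ALLOCS.fetch_add(1, Ordering::Relaxed); // record the allocation
        unsafe { System.alloc(layout) }
    }
    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        FREES.fetch_add(1, Ordering::Relaxed); // record the free
        unsafe { System.dealloc(ptr, layout) }
    }
}
```

Registered via #[global_allocator], this would capture every allocation and free in a program, which is exactly the trace a bug reporter would attach.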

wee_alloc leaks memory

Describe the Bug

wee_alloc is supposedly useful for embedded devices, but I tried it over the weekend and, out of a total of 256 KiB of memory, wee_alloc was only able to allocate 2.2 KiB in its default configuration. Something seems to be off here.

Steps to Reproduce

  1. Use the static array backend.
  2. Configure 256 KiB of memory.
  3. Do a few allocations in the range 1 - 700 bytes like in the image provided.
  4. wee_alloc should run out of memory very early.

Expected Behavior

It should be able to use at least 200 KiB or so.

Actual Behavior

It can only allocate 2.2 KiB.

https://i.imgur.com/XILjeLr.png

Panicky wasm from Vec allocation

Summary

Some panic related code is showing in a wasm binary produced with wee_alloc.
I'm unsure if this is just panic-related boilerplate or a low-level error.

Steps to Reproduce

Using wee_alloc = "0.4.2" and rustc 1.33.0-nightly (c2d381d39 2019-01-10)
Compile to wasm: cargo build --release --target=wasm32-unknown-unknown
Inspect wasm binary: wasm-objdump -x </path/to/binary.wasm>

// main.rs
#![feature(alloc_error_handler)]
#![feature(core_intrinsics)]
#![feature(alloc)]
#![no_std]

extern crate wee_alloc;

extern crate alloc;
use alloc::vec::Vec;

use core::intrinsics;

#[global_allocator]
static ALLOC: wee_alloc::WeeAlloc = wee_alloc::WeeAlloc::INIT;

#[panic_handler]
#[no_mangle]
pub fn panic(_info: &::core::panic::PanicInfo) -> ! {
    unsafe {
        intrinsics::abort();
    }
}

#[alloc_error_handler]
pub fn oom(_: ::core::alloc::Layout) -> ! {
    unsafe {
        intrinsics::abort();
    }
}

#[no_mangle]
pub extern "C" fn call() {
    let mut v: Vec<u8> = Vec::new();
    v.push(1u8);
    v.resize(10 as usize, 0u8);
}

#[no_mangle]
pub extern "C" fn deploy() {}

Actual Results

Full wasm-objdump here: https://pastebin.com/XvNy29es

Some output of interest:

<_ZN130_$LT$wee_alloc..size_classes..SizeClassAllocPolicy$LT$$u27$a$C$$u20$$u27$b$GT$$u20$as$u20$wee_alloc..AllocPolicy$LT$$u27$a$GT$$GT$32should_merge_adjacent_free_cells17ha5f8c5ccc562923dE>
 - func[27] <_ZN4core3ptr18real_drop_in_place17hc97ddcf593c971bfE>
 - func[28] <_ZN4core9panicking5panic17h654d1775d6e26cbdE>
 - func[29] <_ZN4core9panicking9panic_fmt17h77387da9b048cc4bE>
 - func[30] <_ZN36_$LT$T$u20$as$u20$core..any..Any$GT$11get_type_id17ha3646d193330ce0cE>
# ...
 - segment[1] size=40 - init i32=1049604
  - 0100404: 7372 632f 6c69 6261 6c6c 6f63 2f72 6177  src/liballoc/raw
  - 0100414: 5f76 6563 2e72 7363 6170 6163 6974 7920  _vec.rscapacity
  - 0100424: 6f76 6572 666c 6f77                      overflow

Expected Results

No panic-related code blocks.

LinkError: WebAssembly Instantiation: Import #0 module="env" function="memcpy" error: function import requires a callable

Summary

When using wee_alloc and modifying a Vec, it looks like an external call to memcpy is being made. If memcpy is not defined, you get the error shown below. I tried manually defining memcpy, but then I get RuntimeError: unreachable.

Steps to Reproduce

  • git clone https://github.com/masonforest/wee_alloc_memcpy_bug
  • cd wee_alloc_memcpy_bug
  • cargo rustc --target wasm32-unknown-unknown --lib -- -O && node run.js

Actual Results

   Compiling wee_alloc_memcpy_bug v0.1.0 (file:///Users/masonf/src/wee_alloc_memcpy_bug)
    Finished dev [unoptimized + debuginfo] target(s) in 0.52 secs
(node:98141) UnhandledPromiseRejectionWarning: LinkError: WebAssembly Instantiation: Import #0 module="env" function="memcpy" error: function import requires a callable
    at <anonymous>
    at process._tickCallback (internal/process/next_tick.js:160:7)
    at Function.Module.runMain (module.js:703:11)
    at startup (bootstrap_node.js:193:16)
    at bootstrap_node.js:617:3
(node:98141) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 2)
(node:98141) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.

Expected Results

   Compiling wee_alloc_memcpy_bug v0.1.0 (file:///Users/masonf/src/wee_alloc_memcpy_bug)
    Finished dev [unoptimized + debuginfo] target(s) in 0.37 secs

Work in stable rust

Summary

Now that GlobalAlloc is in stable Rust, we should cfg-off the Alloc implementation so that wee_alloc can be used as a global allocator in stable Rust.

https://blog.rust-lang.org/2018/08/02/Rust-1.28.html

Motivation

Stabilize all the things!

Details

Will need to introduce a nightly cargo feature to turn on the Alloc implementation. Since the GlobalAlloc implementation proxies to the Alloc implementation, we will need to do some light refactoring to move those proxied methods into normal, non-trait methods on WeeAlloc.

This could be a good issue for someone who wants to dive into the code base for the first time!

How does wee_alloc know where to start the heap?

Hey folks! I'm looking to experiment with wee_alloc in a new language--ignoring the perf complications for now--so I need to fully grok how it knows what portion of linear memory is safe to use so it doesn't clobber the stack/static allocations.

I asked on Stack Overflow, but I hope you don't mind that, even though it's only been a couple of days, I'm cross-posting here since it isn't likely to get a response otherwise. If someone has a moment to answer on Stack Overflow, it would be much appreciated! I've dug into the code and tried to walk it backwards as best I can, but I'm not confident in my findings.

https://stackoverflow.com/questions/52022998/how-does-wee-alloc-a-malloc-alternative-know-where-to-start-the-heap-in-webass


I'm trying to utilize wee_alloc in a custom language, running in WebAssembly. However, I need to fully grok how it knows where to start the heap at so that my stack and static allocations do not clobber it and vice versa.

It's my understanding that how malloc, et al. know where to start the heap is platform dependent and often just a convention, or in some cases not applicable. However in WebAssembly we can only have a single contiguous piece of linear memory, so we have to share it and a convention needs to be used.

Reading through the code, it appears that wee_alloc assumes that whatever memory we start with is off-limits completely, and instead uses the grow_memory instruction to create the first piece of memory needed for the heap. That effectively means the start of the heap is the highest index of whatever the initial size is, plus one.

e.g. if we start off with an initial memory size of 1 page:

current_memory = 1 page = 64 KiB = 65,536 bytes
then the heap starts at index 65,536 (the highest valid index, 65,535, plus one).

Is my understanding correct?
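For reference, the page arithmetic in the question can be written out as follows; with 0-based addressing, the first heap address equals the byte size of the initial memory (the highest valid index plus one).

```rust
// Wasm linear memory is measured in 64 KiB pages.
const WASM_PAGE: usize = 64 * 1024;

// First byte address past an initial memory of `initial_pages` pages.
fn heap_start(initial_pages: usize) -> usize {
    initial_pages * WASM_PAGE
}
```

So for a 1-page initial memory, the heap would start at address 65,536 (the bytes themselves occupy indices 0 through 65,535).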

Support un-splitting cells

  • Same 2 word overhead we currently have

  • Same O(1) deallocation

  • Will be much better at preventing fragmentation

  • Maintain a sorted, doubly-linked list of all cells, regardless of whether they are free or allocated, in the cell header.

  • When a cell is free, use its payload to store next link in the free list

  • The free list doesn't need to be sorted; we can still push to the front of the free list on deallocation, and just check adjacent neighbors in the all-cells list for merging (using the low bit to determine whether a cell is allocated).

Allocated cell:

+-----------------------+
| prev_cell: *mut Cell  | // has low bit set
| ----------------------|
| next_cell: *mut Cell  |
| ----------------------|
| payload: [u8]         | // always >= 1 word
| ...                   |
+-----------------------+

Free cell:

+-----------------------+
| prev_cell: *mut Cell  | // low bit is not set
| ----------------------|
| next_cell: *mut Cell  |
| ----------------------|
| next_free: *mut Cell  | // stuff free list into payload's first word
| ...                   |
+-----------------------+
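The diagrams above translate to roughly the following Rust; the two-word header overhead can be checked with size_of. Field names follow the diagrams, but these are a sketch, not wee_alloc's actual types.

```rust
use std::mem::size_of;

#[allow(dead_code)]
#[repr(C)]
struct CellHeader {
    prev_cell: *mut CellHeader, // low bit set when the cell is allocated
    next_cell: *mut CellHeader,
}

#[allow(dead_code)]
#[repr(C)]
struct FreeCell {
    header: CellHeader,
    next_free: *mut FreeCell, // free list link stored in the payload's first word
}
```

Because next_free reuses the payload's first word, the free list costs nothing beyond the two-word header, matching the "same 2 word overhead" bullet above.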

Update outdated usage instruction regarding "extern crate"

The README currently tells users to add extern crate wee_alloc;, but this is only needed for Rust 2015, not for Rust 2018. It is distracting for new Rust users who wonder why it's needed.

Either remove this usage of extern crate or clearly mark it as only needed for Rust 2015.

CI doesn't check configuration w/o size_classes

As mentioned in #68, but for tracking separately:

size_classes is enabled by default, so one of the lines had to remain cargo check --release but another is supposed to be cargo check --release --no-default-features to test without size_classes enabled.

Hmm, looks like there are a bunch of other places in build.sh, check.sh and appveyor.yml that use --features size_classes.

@fitzgen Does this mean they are all similarly broken (testing same configuration as default) or am I missing some switch?

Emit statistics / visualizations of heap fragmentation.

Summary

Emit statistics / visualizations of heap fragmentation. There are edge cases where we can't currently consolidate adjacent free cells into a larger, contiguous free cell. This would let us empirically answer whether that is a problem in practice.

Motivation

Remove "unknown unknowns". Be a better allocator.

Details

  • Maybe use graphviz.

  • At least get some statistics / aggregated numbers on free vs allocated and distribution of free cell sizes in the free list.

WASM Module

Is there a downloadable wasm module built with no_std, for use in non-Rust environments?

Is this repo still maintained?

Summary

There's a pretty big bug, #106, that has seen a lot of projects switch away from this crate. It hasn't received any comment or apparent attempt at a fix from anyone in the rust wasm org/team in 2 months.

Also, there has only been 1 commit in the last 3 years. Is this crate still maintained? It looks like @fitzgen is the original author, and the other team members are @ZackPierce and @pepyakin.

Maybe the bug could be mentioned in the readme to raise awareness, so that people don't spend a lot of time debugging something that is not solvable.

What are alternatives for this crate?

This crate is quite popular at 2k daily downloads.

Impossibly large allocations fail but still allocate new memory pages

Describe the Bug

Allocating usize::MAX - 8 bytes fails, but allocates new memory pages every time.

Steps to Reproduce

#[test]
fn cannot_alloc_max_usize_m8() {
    let a = &wee_alloc::WeeAlloc::INIT;
    let layout = Layout::from_size_align(std::usize::MAX - 8, 1)
        .expect("should be able to create a `Layout` with size = std::usize::MAX - 8");
    for _ in 0..10000000 {
        let result = unsafe { a.alloc(layout) };
        assert!(result.is_err());
    }
}

Expected Behavior

The test should complete without causing OOM.

Actual Behavior

With debug assertions: thread 'cannot_alloc_max_usize_m8' panicked at 'attempt to add with overflow', .../.cargo/registry/src/github.com-1ecc6299db9ec823/memory_units-0.4.0/src/lib.rs:166:1

Without debug assertions: The test allocates tens of gigabytes of memory and eventually gets killed by the kernel.
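The "attempt to add with overflow" panic points at an unchecked size computation; a checked version would reject impossible requests before any pages are grown. The two-word overhead constant below is illustrative, not wee_alloc's exact figure.

```rust
// Per-allocation overhead assumed for this sketch (e.g. a two-word header).
const HEADER_SIZE: usize = 2 * std::mem::size_of::<usize>();

// Total size needed for a request, or None if the addition would overflow.
fn total_with_header(size: usize) -> Option<usize> {
    size.checked_add(HEADER_SIZE)
}
```

With a 16-byte overhead on 64-bit, a request of usize::MAX - 8 overflows the same way and would be rejected up front instead of triggering page growth.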

wee_alloc example doesn't compile with the latest Rust nightly.

Summary

Trying to compile wee_alloc/example with the current Rust nightly (2018-07-13) does not work.

Steps to Reproduce

  • git clone https://github.com/rustwasm/wee_alloc.git
  • cd wee_alloc/example
  • rustup default nightly
  • rustup update
  • cargo build
  • OR/AND cargo build --target wasm32-unknown-unknown

I get the same result with my own simple "hello world" wasm example that uses wee_alloc.

Actual Results

⋊> ~/l/w/w/example on master ◦ cargo build --target wasm32-unknown-unknown --verbose                                                                                                                                                  10:22:03
       Fresh void v1.0.2                                                                                                                                                                                                                      
       Fresh memory_units v0.4.0                                                                                                                                                                                                              
       Fresh cfg-if v0.1.4                                                                                                                                                                                                                    
       Fresh unreachable v1.0.0                                                                                                                                                                                                               
       Fresh wee_alloc v0.4.1 (file:///Users/maeln/learning/waza/wee_alloc/wee_alloc)                                                                                                                                                         
   Compiling wee_alloc_example v0.1.0 (file:///Users/maeln/learning/waza/wee_alloc/example)                                                                                                                                                   
     Running `rustc --crate-name wee_alloc_example example/src/lib.rs --crate-type cdylib --emit=dep-info,link -C debuginfo=2 -C metadata=4294e51d06d4743e --out-dir /Users/maeln/learning/waza/wee_alloc/target/wasm32-unknown-unknown/debug/deps --target wasm32-unknown-unknown -C incremental=/Users/maeln/learning/waza/wee_alloc/target/wasm32-unknown-unknown/debug/incremental -L dependency=/Users/maeln/learning/waza/wee_alloc/target/wasm32-unknown-unknown/debug/deps -L dependency=/Users/maeln/learning/waza/wee_alloc/target/debug/deps --extern wee_alloc=/Users/maeln/learning/waza/wee_alloc/target/wasm32-unknown-unknown/debug/deps/libwee_alloc-91893f3bae72c6ce.rlib`
error: function should have one argument
  --> example/src/lib.rs:33:1
   |
33 | / pub extern "C" fn oom() -> ! {
34 | |     unsafe {
35 | |         ::core::intrinsics::abort();
36 | |     }
37 | | }
   | |_^

error: aborting due to previous error

error: Could not compile `wee_alloc_example`.                                                                                                                                                                                                 

Caused by:
  process didn't exit successfully: `rustc --crate-name wee_alloc_example example/src/lib.rs --crate-type cdylib --emit=dep-info,link -C debuginfo=2 -C metadata=4294e51d06d4743e --out-dir /Users/maeln/learning/waza/wee_alloc/target/wasm32-unknown-unknown/debug/deps --target wasm32-unknown-unknown -C incremental=/Users/maeln/learning/waza/wee_alloc/target/wasm32-unknown-unknown/debug/incremental -L dependency=/Users/maeln/learning/waza/wee_alloc/target/wasm32-unknown-unknown/debug/deps -L dependency=/Users/maeln/learning/waza/wee_alloc/target/debug/deps --extern wee_alloc=/Users/maeln/learning/waza/wee_alloc/target/wasm32-unknown-unknown/debug/deps/libwee_alloc-91893f3bae72c6ce.rlib` (exit code: 101)

Expected Results

The example should have compiled successfully.

Support allocations with alignment greater than a word

Summary

When I create a Vec of enums that have a u64 field on them, I get the following error: RuntimeError: unreachable

Steps to Reproduce

  • git clone https://github.com/masonforest/wee_alloc_vec_of_enums_bug
  • cd wee_alloc_vec_of_enums_bug
  • cargo rustc --target wasm32-unknown-unknown --lib -- -O && node run.js

Actual Results

    Finished dev [unoptimized + debuginfo] target(s) in 0.0 secs
(node:6108) UnhandledPromiseRejectionWarning: RuntimeError: unreachable
    at run (wasm-function[7]:1)
    at run (/Users/masonf/src/wee_alloc_vec_of_enums_bug/run.js:20:20)
    at <anonymous>
    at process._tickCallback (internal/process/next_tick.js:160:7)
    at Function.Module.runMain (module.js:703:11)
    at startup (bootstrap_node.js:193:16)
    at bootstrap_node.js:617:3
(node:6108) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
(node:6108) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.

Expected Results

Finished dev [unoptimized + debuginfo] target(s) in 0.0 secs

Runtime storage initialization

Summary

I need advice on how to properly implement runtime-initialized storage.

Motivation

Currently I use an ugly hack to make wee_alloc work with my unikernel wrapper around solo5. At compile time I don't know the available heap address and size; I can only get them at runtime from a specific structure.

So I added imp_ptr_array with a specific alloc_pages implementation and call something like fn init(start: usize, size: usize), but that's a very bad approach, usable only for initial testing.

As far as I can see, I need some trait or function which can change the internal state of the alloc_pages implementation.

How does this further wee_alloc's goal of being the best allocator for
wasm32-unknown-unknown, with a very small .wasm code size footprint?

It doesn't make wee_alloc any bigger in wasm code size, but it makes wee_alloc more usable as a general-purpose no_std allocator.

Details

What bits of code would need to change? Which modules?

As I see it, this will be an additional implementation similar to static_array, without significant impact on other parts.

What are the trade offs?

An additional trait to be implemented on the WeeAlloc structure.

Are you willing to implement this yourself? If you had a mentor? Are you
willing to mentor someone else?

I'm ready to implement it.

Build (nightly) fails: error[E0152]: duplicate lang item found: `panic_fmt`.

Summary

Does not build with Rust nightly.

Steps to Reproduce

$ cargo build

error[E0152]: duplicate lang item found: `panic_fmt`.
  --> example/src/lib.rs:22:1
   |
22 | / extern "C" fn panic_fmt(_args: ::core::fmt::Arguments, _file: &'static str, _line: u32) -> ! {
23 | |     unsafe {
24 | |         ::core::intrinsics::abort();
25 | |     }
26 | | }
   | |_^
   |
   = note: first defined in crate `std`.

error[E0152]: duplicate lang item found: `oom`.
  --> example/src/lib.rs:31:1
   |
31 | / extern "C" fn oom() -> ! {
32 | |     unsafe {
33 | |         ::core::intrinsics::abort();
34 | |     }
35 | | }
   | |_^
   |
   = note: first defined in crate `std`.

error: aborting due to 2 previous errors

For more information about this error, try `rustc --explain E0152`.
error: Could not compile `wee_alloc_example`.


$ /usr/bin/ld --version
GNU ld (GNU Binutils for Ubuntu) 2.26.1
Copyright (C) 2015 Free Software Foundation, Inc.
...

Same with rustc nightly 2018-04-27 and 2018-05-09

CI should be running tests

Tests are currently failing on master on Unix and Windows, and it's easy to miss because CI runs just cargo check.

Implement more efficient grow/shrink/realloc methods

Motivation

The Allocator trait provides grow(), shrink(), and grow_zeroed(), but they are implemented naively: shrink, for example, allocates new memory, copies over the bytes, and deallocates the old block. realloc can be implemented for the GlobalAlloc trait with grow() and shrink() under the hood.

Proposed Solution

Implement these methods more efficiently, using the knowledge that wee_alloc has. grow, for example, may be able to extend the existing block in place if the next cell is free. Maybe hide this behind a feature if it adds an appreciable amount of code bloat.

It would be helpful to have a benchmark here to see what the improvements are like. I imagine that things like pushing onto a vector in a loop end up calling realloc quite a bit as the vector grows.

Can't compile on last rustc nightly

Hey!

Summary

I'm using rustc nightly (rustc 1.27.0-nightly (7360d6dd6 2018-04-15) on macOS) and wee_alloc no longer compiles. Here is the error I get:

error[E0053]: method `alloc` has an incompatible type for trait
    --> /Users/hywan/.cargo/registry/src/github.com-1ecc6299db9ec823/wee_alloc-0.2.0/src/lib.rs:1028:5
     |
1028 |     unsafe fn alloc(&mut self, layout: Layout) -> Result<*mut u8, AllocErr> {
     |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected struct `core::ptr::NonNull`, found *-ptr
     |
     = note: expected type `unsafe fn(&mut &'a WeeAlloc, core::alloc::Layout) -> core::result::Result<core::ptr::NonNull<core::alloc::Opaque>, core::alloc::AllocErr>`
                found type `unsafe fn(&mut &'a WeeAlloc, core::alloc::Layout) -> core::result::Result<*mut u8, core::alloc::AllocErr>`

error[E0053]: method `dealloc` has an incompatible type for trait
    --> /Users/hywan/.cargo/registry/src/github.com-1ecc6299db9ec823/wee_alloc-0.2.0/src/lib.rs:1051:5
     |
1051 |     unsafe fn dealloc(&mut self, ptr: *mut u8, layout: Layout) {
     |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected struct `core::ptr::NonNull`, found *-ptr
     |
     = note: expected type `unsafe fn(&mut &'a WeeAlloc, core::ptr::NonNull<core::alloc::Opaque>, core::alloc::Layout)`
                found type `unsafe fn(&mut &'a WeeAlloc, *mut u8, core::alloc::Layout)`

Steps to Reproduce

It fails quite early in the compilation, so I don't think my code is the issue.

Release new version to crates.io

Summary

The current version on crates.io produces a link error when used with a recent Rust nightly.

#56 fixes this problem.

Steps to Reproduce

lib.rs:

use wasm_bindgen::prelude::*;
use web_sys::console;
#[wasm_bindgen]
pub fn init() {
    console::log_1(&format!("it run").into());
}
#[global_allocator]
static ALLOC: wee_alloc::WeeAlloc = wee_alloc::WeeAlloc::INIT;

Cargo.toml

[dependencies]
wasm-bindgen = "0.2"
wee_alloc = "0.4.2"

[dependencies.web-sys]
version = "0.3"
features = ["console"]

Actual Results

ERROR in ./pkg/wasm_test_bg.wasm
Module not found: Error: Can't resolve 'env' in '/mnt/g/project/wasm_test/pkg'
 @ ./pkg/wasm_test_bg.wasm
 @ ./pkg/wasm_test.js
 @ ./index.js
 @ multi (webpack)-dev-server/client?http://localhost:8080 ./index.js

In the generated wasm file:

  (import "env" "llvm.wasm.grow.memory.i32" (func $llvm.wasm.grow.memory.i32 (type $t2)))

Expected Results

Release a new version to crates.io
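Until a release including #56 lands on crates.io, one possible workaround is to depend on the fix directly from git, assuming it is merged to the upstream master branch (the repository path below is an assumption; adjust it to wherever the crate actually lives):

```toml
# Assumed upstream repository path; adjust if the crate lives elsewhere.
[dependencies]
wee_alloc = { git = "https://github.com/rustwasm/wee_alloc" }
```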

Rust upgrade to LLVM6 breaks wee_alloc

wee_alloc fails to build with the latest nightly (and possibly yesterday's as well) and will most likely continue to fail with upcoming versions (target: wasm32-unknown-unknown):

Intrinsic has incorrect return type!
void (i32)* @llvm.wasm.grow.memory.i32
LLVM ERROR: Broken function found, compilation aborted!

This is most likely caused by the upgrade to LLVM 6 (rust-lang/rust#47828), which changed the return type of this intrinsic.
