hashmap.c

Hash map implementation in C.

Features

  • Open addressing using Robin Hood hashing.
  • Generic interface with support for variable-sized items.
  • Built-in SipHash, MurmurHash3, and xxHash, with support for alternative algorithms.
  • Supports C99 and up.
  • Supports custom allocators.
  • Pretty darn good performance. 🚀

Example

#include <stdio.h>
#include <string.h>
#include "hashmap.h"

struct user {
    char *name;
    int age;
};

int user_compare(const void *a, const void *b, void *udata) {
    const struct user *ua = a;
    const struct user *ub = b;
    return strcmp(ua->name, ub->name);
}

bool user_iter(const void *item, void *udata) {
    const struct user *user = item;
    printf("%s (age=%d)\n", user->name, user->age);
    return true;
}

uint64_t user_hash(const void *item, uint64_t seed0, uint64_t seed1) {
    const struct user *user = item;
    return hashmap_sip(user->name, strlen(user->name), seed0, seed1);
}

int main() {
    // create a new hash map where each item is a `struct user`. The second
    // argument is the initial capacity. The third and fourth arguments are
    // optional seeds that are passed to the hash function below. The last two
    // arguments are an optional element-free callback and a udata pointer,
    // both unused (NULL) here.
    struct hashmap *map = hashmap_new(sizeof(struct user), 0, 0, 0, 
                                     user_hash, user_compare, NULL, NULL);

    // Here we'll load some users into the hash map. Each set operation
    // performs a copy of the data that is pointed to in the second argument.
    hashmap_set(map, &(struct user){ .name="Dale", .age=44 });
    hashmap_set(map, &(struct user){ .name="Roger", .age=68 });
    hashmap_set(map, &(struct user){ .name="Jane", .age=47 });

    struct user *user; 
    
    printf("\n-- get some users --\n");
    user = hashmap_get(map, &(struct user){ .name="Jane" });
    printf("%s age=%d\n", user->name, user->age);

    user = hashmap_get(map, &(struct user){ .name="Roger" });
    printf("%s age=%d\n", user->name, user->age);

    user = hashmap_get(map, &(struct user){ .name="Dale" });
    printf("%s age=%d\n", user->name, user->age);

    user = hashmap_get(map, &(struct user){ .name="Tom" });
    printf("%s\n", user?"exists":"not exists");

    printf("\n-- iterate over all users (hashmap_scan) --\n");
    hashmap_scan(map, user_iter, NULL);

    printf("\n-- iterate over all users (hashmap_iter) --\n");
    size_t iter = 0;
    void *item;
    while (hashmap_iter(map, &iter, &item)) {
        const struct user *user = item;
        printf("%s (age=%d)\n", user->name, user->age);
    }

    hashmap_free(map);
}

// output:
// -- get some users --
// Jane age=47
// Roger age=68
// Dale age=44
// not exists
// 
// -- iterate over all users (hashmap_scan) --
// Dale (age=44)
// Roger (age=68)
// Jane (age=47)
//
// -- iterate over all users (hashmap_iter) --
// Dale (age=44)
// Roger (age=68)
// Jane (age=47)

Functions

Basic

hashmap_new      # allocate a new hash map
hashmap_free     # free the hash map
hashmap_count    # returns the number of items in the hash map
hashmap_set      # insert or replace an existing item and return the previous
hashmap_get      # get an existing item
hashmap_delete   # delete and return an item
hashmap_clear    # clear the hash map

Iteration

hashmap_iter     # loop based iteration over all items in hash map 
hashmap_scan     # callback based iteration over all items in hash map

Hash helpers

hashmap_sip      # returns hash value for data using SipHash-2-4
hashmap_murmur   # returns hash value for data using MurmurHash3

API Notes

An "item" is a structure of your design that contains a key and a value. You load your structure with key and value and you set it in the table, which copies the contents of your structure into a bucket in the table. When you get an item out of the table, you load your structure with the key data and call "hashmap_get()". This looks up the key and returns a pointer to the item stored in the bucket. The passed-in item is not modified.

Since the hashmap code doesn't know anything about your item structure, you must provide "compare" and "hash" functions which access the structure's key properly. If you want to use the "hashmap_scan()" function, you must also provide an "iter" function. For your hash function, you are welcome to call one of the supplied hash functions, passing the key in your structure.

Note that if your element structure contains pointers, those pointer values will be copied into the buckets. I.e. it is a "shallow" copy of the item, not a "deep" copy. Therefore, anything your entry points to must be maintained for the lifetime of the item in the table.

The functions "hashmap_get()", "hashmap_set()", and "hashmap_delete()" all return a pointer to an item if found. In all cases, the pointer is not guaranteed to continue to point to that same item after subsequent calls to the hashmap. I.e. the hashmap can be rearranged by a subsequent call, which can render previously-returned pointers invalid, possibly even pointing into freed heap space. DO NOT RETAIN POINTERS RETURNED BY HASHMAP CALLS! It is common to copy the contents of the item into your storage immediately following a call that returns an item pointer.

NOT THREAD SAFE. If you are using hashmap with multiple threads, you must provide locking to prevent concurrent calls. Note that it is NOT sufficient to add the locks to the hashmap code itself. Remember that hashmap calls return pointers to internal structures, which can become invalid after subsequent calls to hashmap. If you just add a lock inside the hashmap functions, by the time a pointer is returned to the caller, that pointer may have already been rendered invalid. You should lock before the call, make the call, copy out the result, and unlock.
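For example, a minimal sketch of that lock/call/copy/unlock pattern with a pthread mutex, reusing the `struct user` item from the example above (the mutex and the copy-out are the caller's responsibility; nothing here is part of hashmap.c):

#include <pthread.h>

pthread_mutex_t map_lock = PTHREAD_MUTEX_INITIALIZER;

// Look up a user by name and copy it out while holding the lock.
// Returns true and fills *out if the user was found.
bool get_user_locked(struct hashmap *map, const char *name, struct user *out) {
    bool found = false;
    pthread_mutex_lock(&map_lock);
    const struct user *p = hashmap_get(map, &(struct user){ .name=(char*)name });
    if (p) {
        *out = *p;   // copy out before releasing the lock
        found = true;
    }
    pthread_mutex_unlock(&map_lock);
    return found;
}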

Testing and benchmarks

$ cc -DHASHMAP_TEST hashmap.c && ./a.out              # run tests
$ cc -DHASHMAP_TEST -O3 hashmap.c && BENCH=1 ./a.out  # run benchmarks

The following benchmarks were run on my 2019 MacBook Pro (2.4 GHz 8-Core Intel Core i9) using gcc-9. The items are simple 4-byte ints, the hash function is MurmurHash3, and the test uses 5,000,000 items. The (cap) results are for hashmaps created with an initial capacity of 5,000,000.

set            5000000 ops in 0.708 secs, 142 ns/op, 7057960 op/sec, 26.84 bytes/op
get            5000000 ops in 0.303 secs, 61 ns/op, 16492723 op/sec
delete         5000000 ops in 0.486 secs, 97 ns/op, 10280873 op/sec
set (cap)      5000000 ops in 0.429 secs, 86 ns/op, 11641660 op/sec
get (cap)      5000000 ops in 0.303 secs, 61 ns/op, 16490493 op/sec
delete (cap)   5000000 ops in 0.410 secs, 82 ns/op, 12200091 op/sec

License

hashmap.c source code is available under the MIT License.


hashmap.c's Issues

Passing userdata pointer back to custom allocator functions

Yooo! Thank you so much for this neat little project. I've been writing a bunch of C code for a common runtime environment for multi-language interop, and I finally reached a point where I could no longer avoid dealing with hash map data types.... ANYWAY, I love how elegant, compact, and well-written this is. It actually made it really easy for me to "subtype" and encapsulate it within my own framework...

My ONLY complaint is about the custom allocator support, which I'm otherwise extremely grateful for: I have multilevel stateful allocators everywhere, so I can't really hook into them through stateless allocation functions.

Is there any way you could pass the void* ud state back to the allocators? Here's my use-case:

typedef struct GblHashMap {
    struct hashmap* pImpl_;
    GblContext      hCtx;
} GblHashMap;

I'm basically C "inheriting" from your opaque type, and am just passing the pointer to the outer GblHashMap object as the ud. The "GblContext" object is what contains the custom allocators, error loggers, etc.

GBL_INLINE GBL_RESULT gblHashMapSet(GblHashMap* pMap, void* pItem, void** ppData) {
    GBL_API_ASSERT(pMap);
    GBL_API_BEGIN(pMap->hCtx);
    GBL_API_VERIFY_POINTER(pItem);
    GBL_API_VERIFY_POINTER(ppData);
    *ppData = hashmap_set(pMap->pImpl_, pItem);
    GBL_API_END();
}

GBL_INLINE GBL_RESULT gblHashMapDelete(GblHashMap* pMap, void* pItem, void** ppData) {
    GBL_API_ASSERT(pMap);
    GBL_API_BEGIN(pMap->hCtx);
    GBL_API_VERIFY_POINTER(pItem);
    GBL_API_VERIFY_POINTER(ppData);
    *ppData = hashmap_delete(pMap->pImpl_, pItem);
    GBL_API_END();
}

Here I am with the custom allocator callbacks, but as you can see, I don't have access to the ud pointer or any external state... which means my macros just fall back to the default C standard functions.

GBL_INLINE void* gblHashAlloc_(size_t bytes) {
    GBL_API_BEGIN(NULL);
    return GBL_API_MALLOC(bytes);
}

GBL_INLINE void* gblHashRealloc_(void* pData, size_t bytes) {
    GBL_API_BEGIN(NULL);
    return GBL_API_REALLOC(pData, bytes);
}

GBL_INLINE void gblHashFree_(void* pData) {
    GBL_API_BEGIN(NULL);
    GBL_API_FREE(pData);
}
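One possible workaround until a udata parameter reaches the allocator hooks is to route the stateless callbacks through a file-scope context that is set before the map is created. This is only a sketch and is not reentrant; gbl_ctx_malloc, gbl_ctx_realloc, and gbl_ctx_free are hypothetical stand-ins for whatever the GblContext allocation API actually looks like:

static GblContext g_hash_ctx;   // hypothetical: set before hashmap_new is called

GBL_INLINE void* gblHashAlloc_(size_t bytes) {
    return gbl_ctx_malloc(g_hash_ctx, bytes);         // hypothetical API
}

GBL_INLINE void* gblHashRealloc_(void* pData, size_t bytes) {
    return gbl_ctx_realloc(g_hash_ctx, pData, bytes); // hypothetical API
}

GBL_INLINE void gblHashFree_(void* pData) {
    gbl_ctx_free(g_hash_ctx, pData);                  // hypothetical API
}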

Potential bug in map->spare usage (used both as struct bucket and stored item)

In the initialization function, map->spare is set up inside an allocation of sizeof(hashmap) + 2 * bucketsz.

But it is used both as a struct bucket and as a buffer for the items stored in the buckets, in both the setter and the getter.

hashmap.c, lines 280 to 290 at commit 84d0d3b:

{
    memcpy(map->spare, bitem, map->elsize);
    memcpy(bitem, eitem, map->elsize);
    return map->spare;
}
if (bucket->dib < entry->dib) {
    memcpy(map->spare, bucket, map->bucketsz);
    memcpy(bucket, entry, map->bucketsz);
    memcpy(entry, map->spare, map->bucketsz);
    eitem = bucket_item(entry);
}

memcpy(map->spare, bitem, map->elsize);

As long as elsize is smaller than bucketsz this shouldn't be a problem.

Removed items from the hash still appear in the iterator

I have a hash where the key is a const char*. When I use hashmap_delete to remove items, some of the items still appear in the hashmap iterator.

I worked around the issue by looking up the item in the iterator to make sure it's valid. IMO deleted items shouldn't be returned by the iterator.

bool MapStringId_Iter(const void *item, void *udata) {
    const struct MapStringId *map = item;

    if (map != NULL) {
        // workaround issue with deleted items showing up
        struct MapStringId search;
        search._mKey = map->_mKey;
        search._mId = -1;
        struct MapStringId *kvp = hashmap_get(_gAnimationMapID, &search);
        if (kvp == NULL) {
            //fprintf(stdout, "Hash item already deleted.\r\n");
            return true;
        }

        printf("Animation Id=%d Name=%s Frames=%d\r\n", map->_mId, map->_mKey, ChromaAnimationAPI_GetFrameCount(map->_mId));
    }
    return true;
}

Until I make a specific repro project, I use the hashmap here. https://github.com/tgraupmann/C_ChromaSDK/blob/master/libchromac/Razer/ChromaAnimationMaps.c

I do have a unit test that repros the issue.
make all && make test

If you disable the workaround, you'll see deleted items printing in the iterator.

thread safe concern

I noticed that hashmap_iter is not thread safe, so I'm worried about the other APIs.
Is it thread safe for multiple threads to do get/set/delete? For example, the items returned by set and delete both come from map->spare, so wouldn't a later set overwrite the item that an earlier call returned in map->spare?
And what happens if thread A calls get while thread B deletes the same item at the same time?

Thank you for your time and patience~

Freeing memory failed in example code

Hi @tidwall

Thank you for your work. I would like to use your implementation in my project, but I encountered a problem that eludes me:

I ran the example code from README.md with a debug configuration in Eclipse, and it crashes at the last step, "hmfree(map);", inside hashmap_free. More information in the screenshot below.
(screenshot: 2021-03-22 171956)

I also ran it from the console and got a warning about a heap block; please see the screenshot below:
(screenshot: 2021-03-22 173211)

Can you reproduce this issue?
I am using GCC 4.9.1 from MinGW. Looking forward to your reply. Thank you!

Best regards
Yuan

Total size allocated

It'd be useful to have a function that returns how many bytes have been allocated. I can track that for my items, but I'd rather not keep track of it for the bucket and map structs. Tracking the memory footprint of this data structure is important for embedded applications using this library.
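In the meantime, one way to approximate this is to install counting wrappers through the library's custom allocator hook (hashmap_set_allocator in recent versions, if your copy provides it). Since the free hook is not told the size, the sketch below stashes each allocation's size in a small header; note the header shifts alignment by sizeof(size_t), which is fine for typical bucket data but not for over-aligned types:

#include <stdlib.h>

static size_t total_bytes;   // bytes currently allocated on behalf of the map

static void *counting_malloc(size_t n) {
    size_t *p = malloc(sizeof(size_t) + n);  // room for a size header
    if (!p) return NULL;
    *p = n;
    total_bytes += n;
    return p + 1;
}

static void counting_free(void *ptr) {
    if (!ptr) return;
    size_t *p = (size_t *)ptr - 1;
    total_bytes -= *p;
    free(p);
}

// e.g. call hashmap_set_allocator(counting_malloc, counting_free) before hashmap_new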

RFE: Add meson build

Feature Request

I'd like to request adding Meson as a build system for hashmap.c so it can easily be integrated into other (Meson) projects.

If this RFE is considered to be an improvement for the project, I'd be happy to provide a PR for it.

Background

In the hirte project we are currently using a copy of hashmap.c. However, we want to be able to easily upgrade the library in our project when a new hashmap.c version is released. Since we are using Meson as our build system, we are considering integrating it as a Meson subproject via a wrap-git. This, however, only works for projects that use Meson (ignoring some workarounds).

Saving hash map to file

Hi!

Would it be possible to save the hash map, and load its contents later? Is this something that is implemented / would be easy to add?

Kind regards,
Axel
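There is no built-in persistence, but for items that are flat (no pointers inside, per the shallow-copy note in the API Notes; the `struct user` example would not qualify because it stores a name pointer), a simple serializer can be sketched on top of hashmap_scan and hashmap_set. A minimal sketch, assuming a hypothetical pointer-free item type and ignoring endianness and error handling:

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>
#include "hashmap.h"

struct record {          // hypothetical flat item: no pointers inside
    int64_t key;
    double  value;
};

static bool write_record(const void *item, void *udata) {
    FILE *f = udata;
    return fwrite(item, sizeof(struct record), 1, f) == 1;
}

static bool save_map(struct hashmap *map, const char *path) {
    FILE *f = fopen(path, "wb");
    if (!f) return false;
    bool ok = hashmap_scan(map, write_record, f);  // visits every item
    return fclose(f) == 0 && ok;
}

static bool load_map(struct hashmap *map, const char *path) {
    FILE *f = fopen(path, "rb");
    if (!f) return false;
    struct record r;
    while (fread(&r, sizeof r, 1, f) == 1) {
        hashmap_set(map, &r);                      // copies the item into the map
    }
    return fclose(f) == 0;
}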

Add function to clear/empty hashmap without deallocating.

Very minor, optional issue. I generally like when data structures have some way to clear the contents. This way I don't need to allocate a whole new hashmap if I am done with the map and want a new, similarly sized hashmap. It is easy to implement externally by deleting all the entries, but I feel it might be faster to have an internal implementation.
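(The Functions list above now includes hashmap_clear.) For reference, a sketch of the external approach described here, using the `struct user` item from the README example; it copies each item out before deleting it, per the pointer-retention note, and restarts iteration after every delete because deletes may rearrange buckets, which is exactly why an internal clear would be faster:

static void clear_map_externally(struct hashmap *map) {
    size_t iter = 0;
    void *item;
    while (hashmap_iter(map, &iter, &item)) {
        struct user key = *(struct user *)item;  // copy out before mutating the map
        hashmap_delete(map, &key);
        iter = 0;                                // restart: delete may rearrange buckets
    }
}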

When bucket is full there can be an infinite loop

When the bucket is full, an infinite loop can occur on line 271 due to the fact that a return statement is missing after line 289.

New space is created in the bucket and the entry is added to it, but the loop does not end there as it should (NULL should be returned, since the entry did not exist yet), so the bucket list keeps growing forever without returning.

I will submit a PR to fix this, which works for me.

Add extern "C" and more doc

Hello Josh, et al.

I'm going to submit a pull request which adds a conditional extern "C" to hashmap.h, which makes the module more C++ friendly, and adds a section to the README.md "API Notes".

The C++ change may seem odd since most C++ programmers would just use an STL hashmap, but I'm an embedded programmer who has learned the hard way that STL is frequently a bad choice. Better to have a simple C implementation, especially one that can be modified if need be.

The extra doc covers topics that may not be obvious to new users.

Thanks for making this code available.

Pull request coming soon.

Steve

Is it possible to have an int as key?

Let's take your example from README.md. I would like to use the age as my key, so I can get the name from the hashmap based on a provided age rather than the name. What do I need to change in your example code to do so?

This just does not work:

uint64_t user_hash(const void *item, uint64_t seed0, uint64_t seed1) {
    const struct user *user = item;
    return hashmap_sip(user->age, strlen(user->age), seed0, seed1);
}
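The snippet above fails because user->age is an int, not a string: hashmap_sip expects a pointer to the key bytes plus a length, and strlen on an int is meaningless. A sketch of one way to key on the age instead (the compare function must also be switched to compare ages):

uint64_t user_hash(const void *item, uint64_t seed0, uint64_t seed1) {
    const struct user *user = item;
    // hash the raw bytes of the integer key
    return hashmap_sip(&user->age, sizeof(user->age), seed0, seed1);
}

int user_compare(const void *a, const void *b, void *udata) {
    const struct user *ua = a;
    const struct user *ub = b;
    return (ua->age > ub->age) - (ua->age < ub->age);
}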

Key and value not separate in the hashmap_set function

Really nice implementation of a hashmap in C!

The get_hash function should always be called with the key as the second argument, but in the setter you are using the item itself. Just add another input parameter to the setter to provide the key along with the value; this key parameter should then be passed on to the get_hash function.

The biggest issue with the current implementation is that you already have to know the item in order to look it up in the map, which makes the call to the map redundant.

Segfault when calling hashmap_set

I'm using this library to make a 2D hashmap for a game I'm developing. For the "keys", I have two 64-bit ints that represent the x and z coordinates. For the "values", I just have a huge array that represents the data in the x, z chunk. When I run hashmap_set, I get a segfault. I ran it with GDB and it says it's coming from this line:

211         char edata[map->bucketsz]; // VLA;

If I'm doing something wrong, does anyone have any ideas why? If it's something wrong with the library (which it is most likely not), is there some feature that needs to be implemented? Thanks.

how to free mem?

I found that hashmap_free does not seem to work: the memory is not freed, and as time goes on memory usage climbs very high. What can I do?

Not Working With MSVC

This has been working with GCC for a while but with MSVC there is an error with this line of code (line 121):

    map->edata = map->spare+bucketsz;

because map->spare is a void pointer, and standard C does not define pointer arithmetic on void*. GCC allows it as an extension (treating the pointed-to size as 1 byte), but MSVC does not. Changing that line of code to this:

    map->edata = (char*)map->spare+bucketsz;

seems to work.

Is this hashmap thread-safe?

Very user-friendly and powerful hash map library! Is this hash map thread-safe? I want to insert and delete items from different threads; is there any risk that the program will crash?
Since I haven't found 'mutex/lock/atomic' in the source, I guess it probably won't work well under those conditions. If so, does anyone have an idea about how to achieve thread-safety for this hash map? I'd be glad to create a pull request after I implement this part. :)

[Improvement request] possibility to add destructor for item.

Even though items are copied into the hashmap, they might contain pointers to owned memory. It would be nice to have the possibility of defining a destructor for such items, for example by passing it as a parameter to hashmap_new.
Then:

  1. I would change the behavior of hashmap_delete to call the destructor on the deleted item and return true/false depending on whether the item was found and deleted.
  2. Since hashmap_delete would then remove and destroy the object, a new function hashmap_detach should provide the functionality that hashmap_delete currently offers.

Reasoning.

  • Every time I want to delete an item, I need to detach it via hashmap_delete and manually call a destructor function on the detached item; this is error prone and could be automated.
  • Every time I want to destroy a whole hashmap, I need to perform a hashmap_scan and call the destructor on each item in the map; again, this creates boilerplate code (a sketch of that boilerplate follows below).
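For reference, a sketch of the boilerplate described in the second bullet above, using the `struct user` item from the README example and assuming its name field is heap-allocated. (Newer revisions of hashmap_new also appear to take an optional element-free callback, the seventh argument that the README example passes as NULL, which automates exactly this.)

#include <stdlib.h>

// Free whatever the item owns; the bucket memory itself belongs to the map.
static bool free_user_fields(const void *item, void *udata) {
    const struct user *user = item;
    free(user->name);
    return true;          // keep scanning
}

static void destroy_user_map(struct hashmap *map) {
    hashmap_scan(map, free_user_fields, NULL);
    hashmap_free(map);
}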

[QUESTION] Bit-packing and subsequent shifting

Looking over the code a bit, I have a few questions. The main struct bucket is declared as such:

struct bucket {
    uint64_t hash:48;
    uint64_t dib:16;
};

where we are bit-packing such that, for a given "bucket", the hash ends up being 48 bits and the "distance in bucket" (effectively a measure of wealth with Robin Hood probing) gets 16 bits. Since the fields are declared uint64_t, we have 48 + 16 bits, so we are effectively "taking up" the space of one uint64_t while storing two distinct values there. Cool.

Later when calculating the hash, we do:

static uint64_t get_hash(struct hashmap *map, const void *key) {
    return map->hash(key, map->seed0, map->seed1) << 16 >> 16;
}

focusing specifically on the << 16 >> 16 part. Is this to "trim off" the top 16 bits that we don't need from the uint64_t, so that it fits the 48-bit bit-field? Is this reliable across different compilers (bit-field layout is implementation-defined, even though it should yield a similar packing here)?
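For what it's worth, the shift pair itself is portable: on an unsigned 64-bit value, << 16 >> 16 is well defined and simply clears the top 16 bits, the same as masking with 0xFFFFFFFFFFFF, so the clipped hash always fits the 48-bit field regardless of how the compiler lays out the bit-field. A tiny check:

#include <assert.h>
#include <stdint.h>

int main(void) {
    uint64_t h = 0xABCDEF0123456789ULL;
    // left-then-right shifting an unsigned 64-bit value by 16 clears the top 16 bits
    assert((h << 16 >> 16) == (h & 0xFFFFFFFFFFFFULL));
    return 0;
}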


Separately, what is the point of the first part of this conditional?

            if (map->nbuckets > map->cap && map->count <= map->shrinkat) {
                // Ignore the return value. It's ok for the resize operation to
                // fail to allocate enough memory because a shrink operation
                // does not change the integrity of the data.
                resize(map, map->nbuckets/2);
            }

When can map->nbuckets be greater than map->cap?

Usage error

Hi, I would like to use this hashmap, but when I compile it I get this error:

dep/hashmap_c/hashmap.c:210:15: error: variable length array used [-Werror,-Wvla]
    char edata[map->bucketsz]; // VLA

Complete misses with keys > 256 bytes?

I think I have noticed misses with keys greater than 256 bytes in length.

  • When I say "misses" I mean being unable to find existing entries with hashmap_get() or hashmap_delete().
  • The keys are filenames using WCHARs under Windows 10.
  • I have tried both the SIP and the Murmur3 hash functions and get roughly the same results.
  • I get fewer misses when using SIP with both seeds set to 0, rather than a random number.

Any help with this issue would be much appreciated.

Not finding element which was just retrieve while iterating

I want to make sure that this hashmap never holds more than x elements (1000 in this case). When I have a new element to insert and the map is full, I want to remove one element from the hashmap and replace it with the new one. The element to remove is hardcoded for now: I iterate through the map and select the first one.

When I call hashmap_get and hashmap_delete with the retrieved element, they return NULL and the element never gets deleted from the hashmap, which is weird, because I retrieved the element from the hashmap itself using hashmap_iter.

The code that I have is the following:

// ====== HashMap variables and methods ======
struct hashmap *map;

struct FrequencyEntry {
    unsigned char *hash;
    uint32_t counter;
};

int entry_compare(const void *a, const void *b, void *data) {
    const struct FrequencyEntry *ea = a;
    const struct FrequencyEntry *eb = b;
    // return strcmp(ea->hash, eb->hash);
    return strcmp_unsigned(ea->hash, eb->hash);
}

int strcmp_unsigned(const char *s1, const char *s2) {
    unsigned char *p1 = (unsigned char *)s1;
    unsigned char *p2 = (unsigned char *)s2;

    while ((*p1) && (*p1 == *p2)) {
        ++p1;
        ++p2;
    }
    return (*p1 - *p2);
}

bool entry_iter(const void *item, void *data) {
    const struct FrequencyEntry *entry = item;
    printf("[Debug-Enclave]] %s (counter=%d)\n", entry->hash, entry->counter);
    return true;
}

uint64_t entry_hash(const void *item, uint64_t seed0, uint64_t seed1) {
    const struct FrequencyEntry *entry = item;
    return hashmap_sip(entry->hash, strlen((char *)entry->hash), seed0, seed1);
}

// ====== End of HashMap variables and methods ======


... 

uint32_t FULL_CACHE_SIZE = 1000;  

uint32_t insert_and_increment_NodeInLINK(unsigned char *hash) {
    struct FrequencyEntry *current = hashmap_get(map, &(struct FrequencyEntry){.hash = hash});
    if (current == NULL) {
        uint32_t count = hashmap_count(map);

        if (count < FULL_CACHE_SIZE) {
            hashmap_set(map, &(struct FrequencyEntry){.hash = hash, .counter = 1});
            return 1;
        }

        printf("[Debug-Enclave] Hashmap has %zu entries, FULL_CACHE_SIZE=%d\n", count, FULL_CACHE_SIZE);

        // retrieve values from server and/or send values to server
        printf("[Debug-Enclave] Cache is full, consulting server cache...\n");

        uint32_t result;

        hashmap_get_ocall(hash, &result);

        struct FrequencyEntry *randomEntries[1];

        getNRandomCacheEntries(randomEntries, 1);


        struct FrequencyEntry *entry = hashmap_get(map, randomEntries[0]);
        if (entry == NULL) {
            printf("[Debug-Enclave] hashmap_get: entry is NULL\n");
        }

        hashmap_put_ocall(randomEntries[0]->hash, randomEntries[0]->counter);

        entry = hashmap_delete(map, randomEntries[0]);
        if (entry == NULL) {
            printf("[Debug-Enclave] hashmap_delete: entry is NULL\n");
        }

        hashmap_set(map, &(struct FrequencyEntry){.hash = hash, .counter = result + 1});
        return result + 1;
    }

    current->counter++;
    return current->counter;
}

void getNRandomCacheEntries(struct FrequencyEntry *randomEntries[], int n) {
    size_t iter = 0;
    void *item;

    while (hashmap_iter(map, &iter, &item)) {
        struct FrequencyEntry *entry = item;
        randomEntries[0] = entry;
        break;
    }
}

Producing this output:

[Debug-Enclave] Hashmap has 1000 entries, FULL_CACHE_SIZE=1000
[Debug-Enclave] Cache is full, consulting server cache...
[Debug-Enclave] hashmap_get: entry is NULL
[Debug-Enclave] hashmap_delete: entry is NULL
[Debug-Enclave] Hashmap has 1001 entries, FULL_CACHE_SIZE=1000
[Debug-Enclave] Cache is full, consulting server cache...
[Debug-Enclave] hashmap_get: entry is NULL
[Debug-Enclave] hashmap_delete: entry is NULL
...

The hashmap keeps getting new elements and never deletes the extra ones.
I don't know if it's relevant but the code is running inside an enclave.

I appreciate any help :)
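One thing worth checking, per the shallow-copy and pointer-retention notes in the API Notes above: each FrequencyEntry only stores an unsigned char *hash pointer, so if the buffer that hash points to is reused or overwritten after insertion, the map can no longer reproduce the hash it computed at insert time and lookups will miss; likewise, randomEntries[0] retains a pointer into the map's own bucket storage. This is only a guess at the cause, but the usual fix is to deep-copy the key bytes when inserting (and free that copy when the entry is removed), roughly like this:

#include <string.h>
#include <stdlib.h>

// Sketch: give the map its own copy of the key bytes so they can never change
// underneath it. The strdup'd key must be freed when the entry is deleted.
static void insert_with_copied_key(struct hashmap *m, const unsigned char *hash,
                                   uint32_t counter) {
    struct FrequencyEntry e = {
        .hash = (unsigned char *)strdup((const char *)hash),  // deep-copy the key
        .counter = counter,
    };
    hashmap_set(m, &e);
}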

ttlHashMap based on this project to support expiration

This user-friendly and powerful hash map library helps me a lot, and I have an idea about it.

One day, I used hashmap.c to replace hiredis as a lightweight cache. In that setting, hashmap.c is much faster than the hiredis sync API and much simpler than the hiredis async API. However, it does not support expiration. So I wrote a wrapper based on hashmap.c, ttlHashmap, which supports expiration and is thread-safe.

I'd be glad to create a pull request or contribute in some other way to this project.
