birdnet's People

Contributors

kahst

birdnet's Issues

BirdNET Analyzer GUI - Split clips longer than 3 seconds prior to using for training data

Currently, the BirdNET Analyzer GUI only uses 3-second segments as input for training the model. If a clip is longer than 3 seconds, it takes the middle 3 seconds of the clip; if it is shorter than 3 seconds, it pads the clip.

It would be very helpful if BirdNET could split clips > 3 seconds into consecutive 3-second clips. Many species produce vocalizations that are > 3 seconds long, and people annotating these clips produce longer selections. Having BirdNET process these longer clips automatically would be tremendously helpful, and lead to better training data and therefore better models.

When making these successive 3-second segments, it would also be useful to be able to specify an overlap percentage between clips. Thanks!
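For reference, the requested splitting behavior can be sketched in a few lines of numpy. This is a hypothetical helper, not part of BirdNET; the padding and overlap semantics are assumptions based on the description above:

```python
import numpy as np

def split_clip(sig, rate, seconds=3.0, overlap=0.0):
    """Split a signal into consecutive fixed-length segments.

    Sketch only: the final short segment is zero-padded, and
    `overlap` (in seconds) shrinks the hop between segments.
    """
    assert 0 <= overlap < seconds
    seg_len = int(rate * seconds)
    hop = seg_len - int(rate * overlap)
    segments = []
    for start in range(0, len(sig), hop):
        seg = sig[start:start + seg_len]
        if len(seg) < seg_len:              # pad the trailing segment
            seg = np.pad(seg, (0, seg_len - len(seg)))
        segments.append(seg)
        if start + seg_len >= len(sig):
            break
    return segments

# A 10-second clip at 48 kHz with 1 s overlap gives a 2 s hop: 5 segments
clips = split_clip(np.zeros(480000), 48000, seconds=3.0, overlap=1.0)
```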

iOS App - Typo on Splash-Screen

Dear Stefan,

I've found a small typo on the splash screen of the iOS app.

It shows 'Technische Universität Chemintz' instead of Chemnitz. Just thought I'd let you know. 😃

Cheers,

Julian

BirdNET Analyzer GUI - High Frequency Bound

When the BirdNET Analyzer GUI outputs selection tables, the high frequency of the selections should be capped at the Nyquist frequency whenever the sampling rate is less than 30 kHz. Currently, if the sampling rate is < 30 kHz, BirdNET still reports the maximum frequency as 15 kHz, and these selection tables cannot be opened in Raven because the high-frequency value (15000) is above the Nyquist frequency.
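The requested fix amounts to clamping the reported high frequency to the Nyquist limit (half the sampling rate). A sketch of the intended behavior, as a hypothetical helper rather than BirdNET's actual code:

```python
def high_freq_bound(sample_rate, default_high=15000):
    # Clamp the reported high frequency to the Nyquist frequency
    # so Raven can open the selection table.
    return min(default_high, sample_rate // 2)

# 48 kHz audio: Nyquist is 24 kHz, so the 15 kHz default is fine.
# 22.05 kHz audio: Nyquist is 11025 Hz, so the bound must drop to 11025.
```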

Publish the app on F-Droid (Open Source Repository)

This app sounds quite promising; however, it can't be used by people with a Google-free Android smartphone. Since it's under the MIT license, it would be great to see it published as an open-source app on F-Droid (f-droid.org). That way it could be used on Android without Google's Play Store.

Unable to display observations

Hi! I don't know if this is the right place for such an issue.
I have been using this application for about a year (iPhone; BirdNET v1.0.7). Until now everything worked fine, but for a few weeks it has been totally impossible to display the observations; the application crashes systematically...
It's a pity, because it's undoubtedly the most reliable app for song recognition. I hesitate to uninstall and reinstall it for fear of losing my observations.
Any suggestion would be welcome. Thanks!

Expected output for Desktop App?

I followed the setup instructions and ran: python analyze.py --i example/Soundscape_1.wav.

I get the following result:

FILES IN DATASET: 1 
LOADING SNAPSHOT BirdNET_Soundscape_Model.pkl ... DONE! 
BUILDING BirdNET MODEL... DONE! 
IMPORTING MODEL PARAMS... DONE! 
COMPILING THEANO TEST FUNCTION FUNCTION... DONE! 
LOADING eBIRD GRID DATA... DONE! 13800 GRID CELLS 
SID: 1 PROCESSING: Soundscape_1.wav SPECIES: 987 DETECTIONS: 38 TIME: 8 

How do I know what species 987 is? It would be nice to see an example of expected output in the README.md as well.

Run Docker image as non root user gives cache error

When running the Docker image as a non-root user

 docker run --user $(id -u):$(id -g) -v /home/ivo/Sites/audiomoth-zx/files/destination:/audio breallis/birdnet:cpu --i audio/audios --o /audio/birdnet

this error occurs:

RuntimeError: cannot cache function '__shear_dense': no locator available for file '/usr/local/lib/python3.7/site-packages/librosa/util/utils.py'

I guess the default NUMBA_CACHE_DIR can only be written to as root? I have no clue about Python, though.
Any ideas how to fix this?

The files created by the Docker image are owned by root by default, which causes permission problems when further processing the files.
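If the guess about NUMBA_CACHE_DIR is right, one untested workaround would be to point numba's cache at a directory the non-root user can write. This assumes numba honours the NUMBA_CACHE_DIR variable and that /tmp is writable in the container:

```shell
# Assumption: numba honours NUMBA_CACHE_DIR; /tmp is writable for any user.
export NUMBA_CACHE_DIR=/tmp/numba_cache
mkdir -p "$NUMBA_CACHE_DIR"

# The same variable could be passed into the container with -e, e.g.:
#   docker run --user "$(id -u):$(id -g)" -e NUMBA_CACHE_DIR=/tmp/numba_cache ...
```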

ValueError: could not convert string to float: '5.826450433232822e'

Hello @kahst, I am encountering an issue when running BirdNET on 2-hour WAV files with a high overlap (2.9 seconds).

Here's my use case:

python analyze.py --i /scratch/vl1019/BirdVox-300h/BirdVox-300h_wav/2015-10-02_02-00-00_unit05.wav --o /scratch/vl1019/BirdVox-300h/BirdVox-300h_BirdNET --lat 42.38 --lon -76.49 --week 37 --overlap 2.9 --min_conf 0.01

and the traceback:

LOADING eBIRD GRID DATA... DONE! 13800 GRID CELLS 
SID: 1 PROCESSING: 2015-09-25_04-00-00_unit10.wav SPECIES: 101 Traceback (most recent call last):
  File "analyze.py", line 292, in <module>
    main()
  File "analyze.py", line 288, in main
    process(s, dataset.index(s) + 1, result_path, args.results, test_function)
  File "analyze.py", line 222, in process
    stable, dcnt = getRavenSelectionTable(p, soundscape.split(os.sep)[-1])
  File "analyze.py", line 114, in getRavenSelectionTable
    start, end = decodeTimestamp(timestamp)
  File "analyze.py", line 85, in decodeTimestamp
    end_seconds = float(end[0]) * 3600 + float(end[1]) * 60 + float(end[2])
ValueError: could not convert string to float: '5.826450433232822e'

It looks like timestamps get encoded in scientific notation. Perhaps they are infinitesimally small due to the high overlap?
If so, it would be good to raise an exception ahead of analysis.
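The failure mode is easy to reproduce: Python's default float formatting switches to scientific notation for very small values, which a string-based timestamp parser then chokes on. A fixed-point format avoids it (a sketch, not the actual analyze.py fix):

```python
t = 5.826450433232822e-05  # a near-zero start time, as in the report

# Default formatting yields scientific notation: the parser sees 'e'
assert 'e' in str(t)

# Formatting with a fixed number of decimals stays parseable
stamp = f"{t:.3f}"
assert stamp == "0.000"
assert float(stamp) == 0.0
```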

Transfer learning

There are some species that BirdNET doesn't detect in my area (Altadena, CA), e.g. Amazona viridigenalis and Catherpes mexicanus. Is it possible to train the BirdNET model on my own data to increase accuracy for these species?

Feature Request - Allow Auto-Rotate

Whilst out using the app on either an Android phone or tablet, it would be useful for the display to rotate with the device's orientation.
Most microphones are situated at the base of the device, so when pointing the microphone at the source of the sound, the app screen is upside down.
Could auto-rotate be enabled for the app, please?

Thanks again for a valued, effective app with a high degree of accurate bird recognition!!

Thank you.
Dave.

Devices -
HTC 10 - android 8
Samsung Galaxy Tab S4 - android 9

Trying docker version on Debian 10 (buster) 64bit gives Errors

I've built BirdNET as proposed in Installation (Docker), and the build works fine so far.

Trying it out gives the following error messages:

sudo docker run -v $PWD/example:/audio birdnet --i Soundscape_1.wav
FILES IN DATASET: 0
/usr/local/lib/python3.7/site-packages/librosa/util/decorators.py:9: NumbaDeprecationWarning: An import was requested from a module that has moved location.
Import requested from: 'numba.decorators', please update to use 'numba.core.decorators' or pin to Numba version 0.48.0. This alias will not be present in Numba version 0.50.0.
from numba.decorators import jit as optional_jit
/usr/local/lib/python3.7/site-packages/librosa/util/decorators.py:9: NumbaDeprecationWarning: An import was requested from a module that has moved location.
Import of 'jit' requested from: 'numba.decorators', please update to use 'numba.core.decorators' or pin to Numba version 0.48.0. This alias will not be present in Numba version 0.50.0.
from numba.decorators import jit as optional_jit

My System is: Debian Linux 10 (buster) 64bit

Any suggestions as to what's going wrong here, or how to solve it?

Cheers, Joachim

Additional documentation and Raspberry Pi 4 installation

Hello and thank you so much for BirdNET. I have forked BirdNET as a systemd service for aarch64 machines and put together an interactive installer with a mediawiki. The Wiki covers using a Raspberry Pi 4B. It is still a work in progress, but I wanted to share it with you and the others and say thank you!

UPDATE:
@kahst I wanted to also share that I have put together a companion system for the BirdNET-Lite repo that is also meant for Raspberry Pi 4B installation. It has really wonderful results so far (especially for those of you not in North America 😸).

Docker GPU Build fails on step 6/11 in Ubuntu 20.04 (WSL)

I cloned BirdNET to my WSL installation and ran the standard line: sudo docker build -f Dockerfile-GPU -t birdnet-gpu . All is fine until step 6.

[ 6/11] RUN cd libgpuarray && mkdir Build && cd Build && cmake .. -DCMAKE_BUILD_TYPE=Release && make && make install && cd .. && python3 setup.py build && python3 setup.py install && ldconfig:
#9 0.508 -- The C compiler identification is GNU 7.5.0
#9 0.513 -- Check for working C compiler: /usr/bin/cc
#9 0.568 -- Check for working C compiler: /usr/bin/cc -- works
#9 0.569 -- Detecting C compiler ABI info
#9 0.627 -- Detecting C compiler ABI info - done
#9 0.637 -- Detecting C compile features
#9 0.811 -- Detecting C compile features - done
#9 0.820 -- Looking for strlcat
#9 0.879 -- Looking for strlcat - not found
#9 0.879 -- Looking for mkstemp
#9 0.935 -- Looking for mkstemp - found
#9 0.940 -- Could NOT find PkgConfig (missing: PKG_CONFIG_EXECUTABLE)
#9 0.941 -- Checking for one of the modules 'check'
#9 0.943 Tests disabled because Check was not found
#9 0.945 -- Configuring done
#9 0.969 -- Generating done
#9 0.970 -- Build files have been written to: /libgpuarray/Build
#9 0.998 [ 1%] Generating ../../src/cluda_opencl.h.c
#9 1.000 /bin/sh: 1: python: not found
#9 1.000 src/CMakeFiles/gpuarray.dir/build.make:74: recipe for target '../src/cluda_opencl.h.c' failed
#9 1.000 make[2]: *** [../src/cluda_opencl.h.c] Error 127
#9 1.000 make[1]: *** [src/CMakeFiles/gpuarray.dir/all] Error 2
#9 1.000 CMakeFiles/Makefile2:124: recipe for target 'src/CMakeFiles/gpuarray.dir/all' failed
#9 1.000 make: *** [all] Error 2
#9 1.000 Makefile:140: recipe for target 'all' failed


executor failed running [/bin/sh -c cd libgpuarray && mkdir Build && cd Build && cmake .. -DCMAKE_BUILD_TYPE=Release && make && make install && cd .. && python3 setup.py build && python3 setup.py install && ldconfig]: exit code: 2
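The build dies on /bin/sh: 1: python: not found: the libgpuarray build scripts call plain python, which the image does not provide. A speculative workaround (untested against this Dockerfile) is to put a python-to-python3 symlink on the PATH before the build step:

```shell
# Assumption: python3 exists in the image; expose it under the name
# `python` via a symlink in a directory prepended to PATH.
mkdir -p /tmp/pybin
ln -sf "$(command -v python3)" /tmp/pybin/python
export PATH="/tmp/pybin:$PATH"
python --version
```

In a Dockerfile this would be a RUN line before the libgpuarray step; some distributions ship an equivalent compatibility package instead.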

Installation step 7 for GPU support

When I run this command: python setup.py install (inside libgpuarray), I get the following Cython error:

[1/2] Cythonizing pygpu/collectives.pyx
/usr/local/lib/python3.6/site-packages/Cython-3.0a5-py3.6-linux-x86_64.egg/Cython/Compiler/Main.py:344: FutureWarning: Cython directive 'language_level' not set, using '3str' for now (Py3). This has changed from earlier releases! File: /home/datadrivenbirding/BirdNET/libgpuarray/pygpu/collectives.pxd
  tree = Parsing.p_module(s, pxd, full_module_name)

Error compiling Cython file:
    try:
        if is_c_cont:
            # Smallest in index dimension has the largest stride
            if src.ga.dimensions[0] % gpucount == 0:
                chosen_dim_size = src.ga.dimensions[0] / gpucount
                                                      ^
pygpu/collectives.pyx:394:55: Cannot assign type 'double' to 'size_t'

Error compiling Cython file:
        else:
            raise TypeError, "Source GpuArray cannot be split in %d c-contiguous arrays" % (gpucount)
    else:
        # Largest in index dimension has the largest stride
        if src.ga.dimensions[nd - 1] % gpucount == 0:
            chosen_dim_size = src.ga.dimensions[nd - 1] / gpucount
                                                       ^
pygpu/collectives.pyx:408:60: Cannot assign type 'double' to 'size_t'

Traceback (most recent call last):
  File "setup.py", line 158, in <module>
    ext_modules=cythonize(exts),
  File "/usr/local/lib/python3.6/site-packages/Cython-3.0a5-py3.6-linux-x86_64.egg/Cython/Build/Dependencies.py", line 1110, in cythonize
    cythonize_one(*args)
  File "/usr/local/lib/python3.6/site-packages/Cython-3.0a5-py3.6-linux-x86_64.egg/Cython/Build/Dependencies.py", line 1277, in cythonize_one
    raise CompileError(None, pyx_file)
Cython.Compiler.Errors.CompileError: pygpu/collectives.pyx
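The error comes from Python 3 division semantics: under the Cython 3.0 alpha, / is true division and yields a double, which cannot be assigned to a size_t. The distinction in plain Python:

```python
# `/` is true division and always produces a float in Python 3 semantics,
# which is why `dimensions[0] / gpucount` cannot land in a size_t.
assert 8 / 2 == 4.0 and isinstance(8 / 2, float)

# Floor division keeps the result an integer
assert 8 // 2 == 4 and isinstance(8 // 2, int)
```

So the likely remedies are pinning an older Cython release or patching the two offending lines to use //; both are guesses, not verified against libgpuarray.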

Dealing with Grayscale

Creating a new axis for RGB is fine, but you could handle this automatically with a 2D convolution. This is the example in Keras:
input_tensor = Input(shape=(128, 128, 1))
x = Conv2D(3, (3, 3), padding='same', activation='relu')(input_tensor)
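If a learned mapping isn't needed, simply repeating the grayscale channel three times also gives an RGB-shaped tensor. A plain numpy sketch, independent of Keras:

```python
import numpy as np

gray = np.zeros((128, 128, 1))        # a one-channel "image"
rgb = np.repeat(gray, 3, axis=-1)     # duplicate the channel three times
assert rgb.shape == (128, 128, 3)
```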

Docker gpu version dependencies error

Hi,
I tested the CPU version with docker and it works fine. Then I tested the GPU version but ran into some error.

The Dockerfile-GPU installs Python 3.6 by default. The build works, but when I test it with the command

sudo docker run --gpus all -v $PWD/example:/audio birdnet-gpu --i audio

I get the following error:

Traceback (most recent call last):
  File "./analyze.py", line 8, in <module>
    from utils import audio
  File "/utils/audio.py", line 6, in <module>
    import librosa
  File "/usr/local/lib/python3.6/dist-packages/librosa/__init__.py", line 11, in <module>
    from . import cache
  File "/usr/local/lib/python3.6/dist-packages/librosa/cache.py", line 7, in <module>
    from joblib import Memory
  File "/usr/local/lib/python3.6/dist-packages/joblib/__init__.py", line 120, in <module>
    from .parallel import Parallel
  File "/usr/local/lib/python3.6/dist-packages/joblib/parallel.py", line 26, in <module>
    from ._parallel_backends import (FallbackToBackend, MultiprocessingBackend,
  File "/usr/local/lib/python3.6/dist-packages/joblib/_parallel_backends.py", line 17, in <module>
    from .pool import MemmappingPool
  File "/usr/local/lib/python3.6/dist-packages/joblib/pool.py", line 31, in <module>
    from ._memmapping_reducer import get_memmapping_reducers
  File "/usr/local/lib/python3.6/dist-packages/joblib/_memmapping_reducer.py", line 37, in <module>
    from .externals.loky.backend import resource_tracker
  File "/usr/local/lib/python3.6/dist-packages/joblib/externals/loky/__init__.py", line 12, in <module>
    from .backend.reduction import set_loky_pickler
  File "/usr/local/lib/python3.6/dist-packages/joblib/externals/loky/backend/reduction.py", line 125, in <module>
    from joblib.externals import cloudpickle  # noqa: F401
  File "/usr/local/lib/python3.6/dist-packages/joblib/externals/cloudpickle/__init__.py", line 4, in <module>
    from .cloudpickle import *  # noqa
  File "/usr/local/lib/python3.6/dist-packages/joblib/externals/cloudpickle/cloudpickle.py", line 64, in <module>
    import typing_extensions as _typing_extensions
  File "/usr/local/lib/python3.6/dist-packages/typing_extensions-4.2.0-py3.6.egg/typing_extensions.py", line 159, in <module>
    class _FinalForm(typing._SpecialForm, _root=True):
AttributeError: module 'typing' has no attribute '_SpecialForm'

If I change Dockerfile-GPU to install a newer version of Python (3.7 or 3.8), the previous error is fixed, but I get another error about pygpu. The command executes, but I get the following message: ERROR (theano.gpuarray): pygpu was configured but could not be imported or is too old (version 0.7 or higher required). Here is the complete output:

$ sudo docker run --gpus all -v $PWD/example:/audio birdnet-gpu --i audio
ERROR (theano.gpuarray): pygpu was configured but could not be imported or is too old (version 0.7 or higher required)
NoneType: None
FILES IN DATASET: 2
LOADING SNAPSHOT BirdNET_Soundscape_Model.pkl ... DONE!
BUILDING BirdNET MODEL... DONE!
IMPORTING MODEL PARAMS... DONE!
COMPILING THEANO TEST FUNCTION FUNCTION... DONE!
LOADING eBIRD GRID DATA... DONE! 13800 GRID CELLS
SID: 1 PROCESSING: Soundscape_1.wav SPECIES: 987 DETECTIONS: 38 TIME: 32
SID: 2 PROCESSING: Soundscape_2.wav SPECIES: 987 DETECTIONS: 16 TIME: 33

BirdNet on Google COLAB

I have BirdNet up and running on Google Colab with CPU support

BirdNet on Colab

I failed to get it to work with GPU support.
If anyone knows how to get that working, please let me know.
It should speed things up considerably.

WARNING (theano.tensor.blas): 
We did not find a dynamic library in the library_dir of the library we use for blas. 
If you use ATLAS, make sure to compile it with dynamics library.

No Output Files from Docker on Windows

Hi,

I'm new to using BirdNET via Docker Desktop. I followed the instructions on how to build and run BirdNET using Docker on my Windows machine. Everything seems to work correctly, except that no output files are produced.

As a test, I ran BirdNET on the example audio files provided as part of the BirdNET GitHub repo. I ran the following,

>docker run -v ./BirdNET/example:/audio birdnet --i audio 

FILES IN DATASET: 2
LOADING SNAPSHOT BirdNET_Soundscape_Model.pkl ... DONE!
BUILDING BirdNET MODEL... DONE!
IMPORTING MODEL PARAMS... DONE!
COMPILING THEANO TEST FUNCTION FUNCTION... DONE!
LOADING eBIRD GRID DATA...

When I look in the directory ./BirdNET/example I see the audio files, but no output files. Any suggestions? I was expecting two output files named Soundscape_1.BirdNET.selections.txt and Soundscape_2.BirdNET.selections.txt. No errors are thrown, so I'm not sure whether it's getting hung up somewhere, or whether the files are not being written to my local machine and are getting lost in the VM.

Thanks!

Feature request: Live sound input

Thanks for the great work! It seems to work pretty well while testing the online versions.

Would it be possible to use this Docker install also with live sound stream as you do on your website? Maybe it's possible to publish the code for the "Live stream demo" as well?

FileNotFoundError: [Errno 2] 'model/BirdNET_Soundscape_Model.pkl'

I think BirdNET is so cool! Sadly I can't get it to run on OSX yet.

After installation on OSX 10.14.6 I get this error message:
FileNotFoundError: [Errno 2] No such file or directory: 'model/BirdNET_Soundscape_Model.pkl'

This might well be something minor, but I am not sure how to fix it. Of course this is so specialised that I could not find any solution on Google.

Here is what I entered:
python3 analyze.py --i example/

FILES IN DATASET: 2
LOADING SNAPSHOT BirdNET_Soundscape_Model.pkl ... Traceback (most recent call last):
  File "analyze.py", line 292, in <module>
    main()
  File "analyze.py", line 266, in main
    test_function = loadModel()
  File "analyze.py", line 37, in loadModel
    snapshot = model.loadSnapshot('model/BirdNET_Soundscape_Model.pkl')
  File "/Users/vk0604/BirdNET/model/model.py", line 28, in loadSnapshot
    with open(path, 'rb') as f:
FileNotFoundError: [Errno 2] No such file or directory: 'model/BirdNET_Soundscape_Model.pkl'
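Since the failing path is relative ('model/BirdNET_Soundscape_Model.pkl'), one likely cause is running analyze.py from a directory other than the repo root. A hedged sketch of resolving the model file relative to the script's own location instead (a hypothetical helper, not BirdNET's actual code; the example path is made up):

```python
import os

def resolve_model_path(script_path):
    """Resolve the model file relative to the script's own directory
    (hypothetical helper; BirdNET uses a plain relative path)."""
    script_dir = os.path.dirname(os.path.abspath(script_path))
    return os.path.join(script_dir, "model", "BirdNET_Soundscape_Model.pkl")

# Inside analyze.py one would call resolve_model_path(__file__)
p = resolve_model_path("/Users/example/BirdNET/analyze.py")
```

Alternatively, simply running `cd BirdNET` before `python3 analyze.py --i example/` should make the relative path valid.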

French contributor

Hi, thanks for this wonderful app.
Is there a French contributor on your team?
If so, I could send my issues to them.

How can I extract a regional species list (less than 984) if there is no eBird information in my area?

Hi birdNET fans!

I have BirdNET running on Windows and I am really impressed!! It is 'mostly' detecting the correct species in the Yukon, Canada. HOWEVER, BirdNET is not reducing the candidate species list (from 984) and is reporting many European and southern species that do not occur here.

I think the issue is that there is little/no eBird information this far north, so BirdNET cannot generate a candidate species list and defaults to 984 species. When I specify coordinates from populated areas in Canada (i.e., Vancouver, BC) it filters to 160 or so species, but the detections are still not quite right (e.g., BirdNET filters out northern species like ptarmigans). I tried specifying coordinates for northern cities like Whitehorse, YT or Anchorage, AK where there are far more eBird checklists, but that did not extract a species list (not enough checklists?).

I have tried manually changing GRID_STEP_SIZE in config.py to see if I could extract information from a larger spatial area (I think it uses the 3x3 km eBird grid cells?). Again, this worked for the southern areas (I increased the species pool in Sapsucker Woods from 130 to 180), BUT ALAS it still does not work in northern regions.

What should I do?
Can I manually feed BirdNET my own species list? Or can I alter the Python code to draw checklists from a much larger area? I.e., Yukon/Alaska/NWT together should have enough information?
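On the "manually feed BirdNET my own species list" idea: even without touching the eBird grid code, predictions could in principle be post-filtered against a hand-made whitelist. A sketch only; the detection format and species labels here are assumptions, not BirdNET's actual output:

```python
# Hypothetical detections as (species_label, confidence) pairs
detections = [
    ("Common Raven", 0.91),
    ("Eurasian Blackbird", 0.40),   # unlikely in the Yukon
    ("Willow Ptarmigan", 0.77),
]

# A hand-made regional whitelist
whitelist = {"Common Raven", "Willow Ptarmigan"}
filtered = [d for d in detections if d[0] in whitelist]
assert [d[0] for d in filtered] == ["Common Raven", "Willow Ptarmigan"]
```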

Thank you for helping!

Update: @kahst please help!

unable to share to Facebook or Twitter on Android

Facebook is not presented as a share option.
I see WhatsApp, Gmail, Messenger, etc., but not Facebook or Twitter (ironically).

pixel 4a
android v11

Would love to get you a heap more users by sharing on Facebook, but can't!

(Screenshots attached: Screenshot_20210522-145241, Screenshot_20210522-145235)

Multithreading not working

Hi All,
I'm trying to run BirdNET on a server (without GPU computation). Installation works fine and the detection software runs, but only on one thread/CPU. I've tried Ubuntu 20.04, 18.04, and even CentOS; none solves the problem. I also tried several versions of numpy, just in case.
Does anyone know of a possible solution to this problem?
The weird thing is that I installed it on my personal machine (with Ubuntu 20.04) and everything works fine, the algo runs on all 8 cores. I followed the exact same procedure in all cases (installation readme until step 7).
The goal is to be able to batch-process a high number of files collected on ARUs (otherwise I guess I could go for @mcguirepr89 's arm64 fork).
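One thing worth ruling out (a guess, not a confirmed fix): BLAS-backed numpy and Theano usually take their thread count from environment variables, and some server images pin these to 1:

```shell
# Assumption: the BLAS in use honours these variables (OpenBLAS/OpenMP do).
export OMP_NUM_THREADS="$(nproc)"
export OPENBLAS_NUM_THREADS="$(nproc)"
echo "threads: $OMP_NUM_THREADS"
```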

Generally speaking, I'm a little worried that BirdNET doesn't seem to be maintained anymore. I'm not criticising; the work provided by @kahst is incredibly good and useful! But I wonder if it is a good idea to invest time in a tool that may break at any time as dependencies break and libraries become outdated. Sadly, I can't find a fork of BirdNET that is currently active and not for arm64.
Thanks in advance for your help!
Cheers all and thanks again @kahst @mcguirepr89 for all the good work.
Thomas

ValueError: scale < 0

Hi @kahst,
I have been using the BirdNET Docker image for a few weeks now (on my local machine and also on an HPC cluster) and have actually never had a problem. Now I wanted to analyse a larger set of WAV files (several 12-hour recordings, saved as 15-minute files).

Unfortunately I get an error message for a lot of the files (but not all).

My command for analysing the files is:
docker run -v F:/jreeg/MakroOeko/BirdNET/example:/audio birdnet --i audio/SMU02869_20210921_213002.wav --lat 52.73379 --lon 12.21099

BirdNET output:

FILES IN DATASET: 1
LOADING SNAPSHOT BirdNET_Soundscape_Model.pkl ... DONE!
BUILDING BirdNET MODEL... DONE!
IMPORTING MODEL PARAMS... DONE!
COMPILING THEANO TEST FUNCTION FUNCTION... DONE!
LOADING eBIRD GRID DATA... DONE! 13800 GRID CELLS
SID: 1 PROCESSING: SMU02869_20210921_213002.wav SPECIES: 201 Traceback (most recent call last):
  File "./analyze.py", line 292, in <module>
    main()
  File "./analyze.py", line 288, in main
    process(s, dataset.index(s) + 1, result_path, args.results, test_function)
  File "./analyze.py", line 219, in process
    p = analyzeFile(soundscape, test_function)
  File "./analyze.py", line 182, in analyzeFile
    duration=None):
  File "/utils/audio.py", line 233, in specsFromFile
    for spec in specsFromSignal(sig, rate, **kwargs):
  File "/utils/audio.py", line 217, in specsFromSignal
    sig_splits = splitSignal(sig, rate, seconds, overlap, minlen)
  File "/utils/audio.py", line 208, in splitSignal
    split = np.hstack((split, noise(split, (int(rate * seconds) - len(split)), 0.5)))
  File "/utils/audio.py", line 27, in noise
    noise = RANDOM.normal(min(sig) * amount, max(sig) * amount, shape)
  File "mtrand.pyx", line 1501, in numpy.random.mtrand.RandomState.normal
  File "_common.pyx", line 557, in numpy.random._common.cont
  File "_common.pyx", line 366, in numpy.random._common.check_constraint
ValueError: scale < 0

I only get this message for some of the audio files (which have the same recording settings and location; the only difference is the timestamp of the day).

Do you have any idea what causes this error message?
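The traceback points at the cause: numpy's normal(loc, scale, size) requires a non-negative scale, and the noise helper uses max(sig) * amount as the scale, so any trailing chunk whose samples are all negative produces scale < 0. Reproducible in isolation (the abs() fix at the end is an assumption about intent, not a verified patch):

```python
import numpy as np

rng = np.random.RandomState(0)
sig = np.array([-0.3, -0.2, -0.1])  # a chunk whose samples are all negative

# max(sig) * 0.5 is negative, and numpy rejects a negative scale
try:
    rng.normal(min(sig) * 0.5, max(sig) * 0.5, 10)
    raised = False
except ValueError:
    raised = True
assert raised

# A hedged fix: take the absolute value so the scale is non-negative
out = rng.normal(min(sig) * 0.5, abs(max(sig)) * 0.5, 10)
assert out.shape == (10,)
```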

Localization

Dear Team,

thanks for the Docker file. Is it also possible to use a localised version with a custom_list, as in TFLite?

Thanks
