
M2DGR: a Multi-modal and Multi-scenario SLAM Dataset for Ground Robots (RA-L & ICRA2022)

💎 First Author: Jie Yin 殷杰   📝 [Paper]   ➡️ [Dataset Extension]   ⭐️[Presentation Video]

Figure 1. Sample Images

🎯 Notice

We strongly recommend that newly proposed SLAM algorithms be tested on our M2DGR / M2DGR-plus / Ground-Challenge / SJTU-GVI benchmarks, because our data have the following features:

  1. Rich sensory information, including vision, LiDAR, IMU, GNSS, event, and thermal-infrared measurements.
  2. Various scenarios in real-world environments, including lifts, streets, rooms, halls, and so on.
  3. Our dataset poses great challenges to existing cutting-edge SLAM algorithms, including LIO-SAM and ORB-SLAM3. If your proposed algorithm outperforms these SOTA systems on our benchmark, your paper will be much more convincing and valuable.
  4. 🔥 Many excellent open-source projects have been built or evaluated on M2DGR/M2DGR-plus so far, for example Ground-Fusion, LVI-SAM-Easyused, MM-LINS, Log-LIO, LIGO, Swarm-SLAM, VoxelMap++, GRIL-Calib, LinK3D, i-Octree, LIO-EKF, Fast-LIO ROS2, HC-LIO, LIO-RF, PIN-SLAM, LOG-LIO2, Section-LIO, I2EKF-LO, and more!

Table of Contents

  1. 🚩 News & Updates
  2. Introduction
  3. License
  4. Sensor Setup
  5. ⭐️ Dataset Sequences
  6. 📝 Configuration Files
  7. Development Toolkits
  8. Star History
  9. Acknowledgement

Tip

Check the table of contents above for a quick overview, and check the news below for the latest updates, especially the list of projects based on M2DGR.

News & Updates

LVI-SAM on M2DGR

  • ⭐️ 2022/02/18: We have uploaded a brand-new SLAM dataset with GNSS, vision, and IMU information. Here is the link: SJTU-GVI. Different from M2DGR, the new data were captured on a real car, and GNSS raw measurements were recorded with a Ublox ZED-F9P device to facilitate GNSS-SLAM. Give us a star and fork the project if you like it.

  • 📄 2022/02/01: The paper has been accepted by both RA-L and ICRA 2022. It is available in an arXiv version and the IEEE RA-L version.

Note

If you build an open-source project based on M2DGR or test a cutting-edge SLAM system on it, please open an issue to remind me to add your contribution.

INTRODUCTION

ABSTRACT:

We introduce M2DGR: a novel large-scale dataset collected by a ground robot with a full sensor suite, including six fish-eye and one sky-pointing RGB cameras, an infrared camera, an event camera, a Visual-Inertial Sensor (VI-sensor), an inertial measurement unit (IMU), a LiDAR, a consumer-grade Global Navigation Satellite System (GNSS) receiver, and a GNSS-IMU navigation system with real-time kinematic (RTK) signals. All these sensors were well calibrated and synchronized, and their data were recorded simultaneously. The ground-truth trajectories were obtained by a motion-capture device, a laser 3D tracker, and an RTK receiver. The dataset comprises 36 sequences (about 1 TB) captured in diverse scenarios, including both indoor and outdoor environments. We evaluate state-of-the-art SLAM algorithms on M2DGR. Results show that existing solutions perform poorly in some scenarios. For the benefit of the research community, we make the dataset and tools public.

Keywords: Dataset, Multi-modal, Multi-scenario, Ground Robot

MAIN CONTRIBUTIONS:

  • We collected long-term challenging sequences for ground robots both indoors and outdoors with a complete sensor suite, which includes six surround-view fish-eye cameras, a sky-pointing fish-eye camera, a perspective color camera, an event camera, an infrared camera, a 32-beam LIDAR, two GNSS receivers, and two IMUs. To our knowledge, this is the first SLAM dataset focusing on ground robot navigation with such rich sensory information.
  • We recorded trajectories in several challenging scenarios, such as lifts and complete darkness, which can easily fail existing localization solutions. These situations are commonly encountered in ground-robot applications, yet seldom discussed in previous datasets.
  • We launched a comprehensive benchmark for ground robot navigation. On this benchmark, we evaluated existing state-of-the-art SLAM algorithms of various designs and analyzed their characteristics and defects individually.

VIDEO

ICRA2022 Presentation

For Chinese users, try bilibili

LICENSE

This work is licensed under the MIT License and is provided for academic purposes. If you are interested in our project for commercial purposes, please contact us at [email protected] for further communication.

If you face any problem when using this dataset, feel free to open an issue. And if you find our dataset helpful in your research, simply give this project a star. If you use M2DGR in an academic work, please cite:

@article{yin2021m2dgr,
  title={M2dgr: A multi-sensor and multi-scenario slam dataset for ground robots},
  author={Yin, Jie and Li, Ang and Li, Tao and Yu, Wenxian and Zou, Danping},
  journal={IEEE Robotics and Automation Letters},
  volume={7},
  number={2},
  pages={2266--2273},
  year={2021},
  publisher={IEEE}
}
@article{yin2024ground,
  title={Ground-Fusion: A Low-cost Ground SLAM System Robust to Corner Cases},
  author={Yin, Jie and Li, Ang and Xi, Wei and Yu, Wenxian and Zou, Danping},
  journal={arXiv preprint arXiv:2402.14308},
  year={2024}
}

SENSOR SETUP

Acquisition Platform

Physical drawings and schematics of the ground robot are given below. The figures are dimensioned in centimeters.

Figure 2. The GAEA ground robot equipped with a full sensor suite. The directions of the sensors are marked in different colors: red for X, green for Y, and blue for Z.

Sensor parameters

All the sensors and tracking devices, with their most important parameters, are listed below:

  • LiDAR: Velodyne VLP-32C, 360° horizontal FOV, -30° to +10° vertical FOV, 10 Hz, max range 200 m, range resolution 3 cm, horizontal angular resolution 0.2°.

  • RGB Camera: FLIR Pointgrey CM3-U3-13Y3C-CS, fish-eye lens, 1280×1024, 190° H-FOV, 190° V-FOV, 15 Hz.

  • GNSS: Ublox M8T, GPS/BeiDou, 1 Hz.

  • Infrared Camera: PLUG 617, 640×512, 90.2° H-FOV, 70.6° V-FOV, 25 Hz.

  • V-I Sensor: Realsense D435i, RGB/depth 640×480, 69° H-FOV, 42.5° V-FOV, 15 Hz; 6-axis IMU, 200 Hz.

  • Event Camera: Inivation DVXplorer, 640×480, 15 Hz.

  • IMU: Handsfree A9, 9-axis, 150 Hz.

  • GNSS-IMU: Xsens MTi 680G; GNSS-RTK localization precision 2 cm, 100 Hz; 9-axis IMU, 100 Hz.

  • Laser Scanner: Leica MS60, localization accuracy 1 mm + 1.5 ppm.

  • Motion-capture System: Vicon Vero 2.2, localization accuracy 1 mm, 50 Hz.

The rostopics of our rosbag sequences are listed as follows:

  • LIDAR: /velodyne_points

  • RGB Camera: /camera/left/image_raw/compressed ,
    /camera/right/image_raw/compressed ,
    /camera/third/image_raw/compressed ,
    /camera/fourth/image_raw/compressed ,
    /camera/fifth/image_raw/compressed ,
    /camera/sixth/image_raw/compressed ,
    /camera/head/image_raw/compressed

  • GNSS Ublox M8T:
    /ublox/aidalm ,
    /ublox/aideph ,
    /ublox/fix ,
    /ublox/fix_velocity ,
    /ublox/monhw ,
    /ublox/navclock ,
    /ublox/navpvt ,
    /ublox/navsat ,
    /ublox/navstatus ,
    /ublox/rxmraw

  • Infrared Camera: /thermal_image_raw

  • V-I Sensor:
    /camera/color/image_raw/compressed ,
    /camera/imu

  • Event Camera:
    /dvs/events,
    /dvs_rendering/compressed

  • IMU: /handsfree/imu
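
To sanity-check a downloaded sequence, the topics above can be inspected with the standard ROS1 rosbag Python API. Below is a minimal sketch, assuming a sourced ROS1 environment; the bag filename is just an example:

```python
# Minimal sketch: list the topics in an M2DGR bag and print a few IMU samples.
# Assumes a ROS1 environment with the rosbag Python package installed;
# 'street_07.bag' is a placeholder for whichever sequence you downloaded.
import rosbag

with rosbag.Bag('street_07.bag') as bag:
    # Print each topic with its message type and message count.
    info = bag.get_type_and_topic_info()
    for topic, meta in info.topics.items():
        print(topic, meta.msg_type, meta.message_count)

    # Read the first five measurements from the Handsfree A9 IMU topic.
    for i, (topic, msg, t) in enumerate(bag.read_messages(topics=['/handsfree/imu'])):
        print(t.to_sec(), msg.linear_acceleration.x,
              msg.linear_acceleration.y, msg.linear_acceleration.z)
        if i >= 4:
            break
```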

DATASET SEQUENCES

All sequences are now publicly available, together with their ground truth (GT).

Figure 3. A sample video with fish-eye images (both forward-looking and sky-pointing), a perspective image, a thermal-infrared image, an event image, and LiDAR odometry.

An overview of M2DGR is given in the table below:

| Scenario | Street | Circle | Gate | Walk | Hall | Door | Lift | Room | Roomdark | TOTAL |
|---|---|---|---|---|---|---|---|---|---|---|
| Number | 10 | 2 | 3 | 1 | 5 | 2 | 4 | 3 | 6 | 36 |
| Size/GB | 590.7 | 50.6 | 65.9 | 21.5 | 117.4 | 46.0 | 112.1 | 45.3 | 171.1 | 1220.6 |
| Duration/s | 7958 | 478 | 782 | 291 | 1226 | 588 | 1224 | 275 | 866 | 13688 |
| Dist/m | 7727.72 | 618.03 | 248.40 | 263.17 | 845.15 | 200.14 | 266.27 | 144.13 | 395.66 | 10708.67 |
| Ground Truth | RTK/INS | RTK/INS | RTK/INS | RTK/INS | Leica | Leica | Leica | Mocap | Mocap | --- |

Outdoors

Figure 4. Outdoor sequences: all trajectories are mapped in different colors.

| Sequence Name | Collection Date | Total Size | Duration | Features | Rosbag | GT |
|---|---|---|---|---|---|---|
| gate_01 | 2021-07-31 | 16.4 GB | 172 s | dark, around gate | Rosbag | GT |
| gate_02 | 2021-07-31 | 27.3 GB | 327 s | dark, loop back | Rosbag | GT |
| gate_03 | 2021-08-04 | 21.9 GB | 283 s | day | Rosbag | GT |

| Sequence Name | Collection Date | Total Size | Duration | Features | Rosbag | GT |
|---|---|---|---|---|---|---|
| Circle_01 | 2021-08-03 | 23.3 GB | 234 s | circle | Rosbag | GT |
| Circle_02 | 2021-08-07 | 27.3 GB | 244 s | circle | Rosbag | GT |

| Sequence Name | Collection Date | Total Size | Duration | Features | Rosbag | GT |
|---|---|---|---|---|---|---|
| street_01 | 2021-08-06 | 75.8 GB | 1028 s | street and buildings, night, zigzag, long-term | Rosbag | GT |
| street_02 | 2021-08-03 | 83.2 GB | 1227 s | day, long-term | Rosbag | GT |
| street_03 | 2021-08-06 | 21.3 GB | 354 s | night, back and forth, full speed | Rosbag | GT |
| street_04 | 2021-08-03 | 48.7 GB | 858 s | night, around lawn, loop back | Rosbag | GT |
| street_05 | 2021-08-04 | 27.4 GB | 469 s | night, straight line | Rosbag | GT |
| street_06 | 2021-08-04 | 35.0 GB | 494 s | night, one turn | Rosbag | GT |
| street_07 | 2021-08-06 | 77.2 GB | 929 s | dawn, zigzag, sharp turns | Rosbag | GT |
| street_08 | 2021-08-06 | 31.2 GB | 491 s | night, loop back, zigzag | Rosbag | GT |
| street_09 | 2021-08-07 | 83.2 GB | 907 s | day, zigzag | Rosbag | GT |
| street_010 | 2021-08-07 | 86.2 GB | 910 s | day, zigzag | Rosbag | GT |
| walk_01 | 2021-08-04 | 21.5 GB | 291 s | day, back and forth | Rosbag | GT |

Indoors

Figure 5. Lift sequences: the robot wandered around a hall on the first floor and then went up to the second floor by lift. A laser scanner tracked the trajectory outside the lift.

| Sequence Name | Collection Date | Total Size | Duration | Features | Rosbag | GT |
|---|---|---|---|---|---|---|
| lift_01 | 2021-08-04 | 18.4 GB | 225 s | lift | Rosbag | GT |
| lift_02 | 2021-08-04 | 43.6 GB | 488 s | lift | Rosbag | GT |
| lift_03 | 2021-08-15 | 22.3 GB | 252 s | lift | Rosbag | GT |
| lift_04 | 2021-08-15 | 27.8 GB | 299 s | lift | Rosbag | GT |

| Sequence Name | Collection Date | Total Size | Duration | Features | Rosbag | GT |
|---|---|---|---|---|---|---|
| hall_01 | 2021-08-01 | 29.1 GB | 351 s | random walk | Rosbag | GT |
| hall_02 | 2021-08-08 | 15.0 GB | 128 s | random walk | Rosbag | GT |
| hall_03 | 2021-08-08 | 20.5 GB | 164 s | random walk | Rosbag | GT |
| hall_04 | 2021-08-15 | 17.7 GB | 181 s | random walk | Rosbag | GT |
| hall_05 | 2021-08-15 | 35.1 GB | 402 s | circle | Rosbag | GT |

Figure 6. Room sequences: recorded under a motion-capture system with twelve cameras.

| Sequence Name | Collection Date | Total Size | Duration | Features | Rosbag | GT |
|---|---|---|---|---|---|---|
| room_01 | 2021-07-30 | 14.0 GB | 72 s | room, bright | Rosbag | GT |
| room_02 | 2021-07-30 | 15.2 GB | 75 s | room, bright | Rosbag | GT |
| room_03 | 2021-07-30 | 26.1 GB | 128 s | room, bright | Rosbag | GT |
| room_dark_01 | 2021-07-30 | 20.2 GB | 111 s | room, dark | Rosbag | GT |
| room_dark_02 | 2021-07-30 | 30.3 GB | 165 s | room, dark | Rosbag | GT |
| room_dark_03 | 2021-07-30 | 22.7 GB | 116 s | room, dark | Rosbag | GT |
| room_dark_04 | 2021-08-15 | 29.3 GB | 143 s | room, dark | Rosbag | GT |
| room_dark_05 | 2021-08-15 | 33.0 GB | 159 s | room, dark | Rosbag | GT |
| room_dark_06 | 2021-08-15 | 35.6 GB | 172 s | room, dark | Rosbag | GT |

Alternative indoors and outdoors

Figure 7. Door sequences: a laser scanner tracked the robot through a door from indoors to outdoors.

| Sequence Name | Collection Date | Total Size | Duration | Features | Rosbag | GT |
|---|---|---|---|---|---|---|
| door_01 | 2021-08-04 | 35.5 GB | 461 s | outdoor to indoor to outdoor, long-term | Rosbag | GT |
| door_02 | 2021-08-04 | 10.5 GB | 127 s | outdoor to indoor, short-term | Rosbag | GT |

CONFIGURATION FILES

For convenience of evaluation, we provide configuration files for some well-known SLAM systems below:

A-LOAM, LeGO-LOAM, LINS, LIO-SAM, VINS-Mono, ORB-Pinhole, ORB-Fisheye, ORB-Thermal, and CubemapSLAM.

Furthermore, a number of cutting-edge SLAM systems have been tested on M2DGR by our lovely users. Here are the configuration files for ORB-SLAM2, ORB-SLAM3, VINS-Mono, DM-VIO, A-LOAM, LeGO-LOAM, LIO-SAM, LVI-SAM, LINS, FAST-LIO2, FAST-LIVO, Faster-LIO, and hdl_graph_slam. Welcome to test! If you have more configuration files, please contact me and I will post them on this website ~

DEVELOPMENT TOOLKITS

Extracting Images

  • For rosbag users, first build image_view:

```bash
roscd image_view
rosmake image_view
sudo apt-get install mjpegtools
```

Open one terminal and run roscore; then, in another terminal, run:

```bash
rosrun image_transport republish compressed in:=/camera/color/image_raw raw out:=/camera/color/image_raw respawn="true"
```
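
Alternatively, if you would rather decode the compressed topics offline than republish them, the encoded payload of each sensor_msgs/CompressedImage can be decoded directly with OpenCV. Here is a minimal sketch, in which the bag name, topic, and output file naming are only examples:

```python
# Minimal sketch: dump compressed camera frames from a bag to PNG files.
# Assumes ROS1 rosbag, NumPy, and OpenCV; 'door_01.bag' is a placeholder.
import cv2
import numpy as np
import rosbag

with rosbag.Bag('door_01.bag') as bag:
    for i, (topic, msg, t) in enumerate(
            bag.read_messages(topics=['/camera/color/image_raw/compressed'])):
        # sensor_msgs/CompressedImage carries an encoded (e.g. JPEG) byte buffer.
        img = cv2.imdecode(np.frombuffer(msg.data, np.uint8), cv2.IMREAD_COLOR)
        cv2.imwrite('frame_%06d.png' % i, img)
```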

Evaluation

We use the open-source tool evo for evaluation. To install evo, type

```bash
pip install evo --upgrade --no-binary evo
```

To evaluate monocular visual SLAM, type

```bash
evo_ape tum street_07.txt your_result.txt -vaps
```

To evaluate LiDAR SLAM, type

```bash
evo_ape tum street_07.txt your_result.txt -vap
```

To test GNSS-based methods, type

```bash
evo_ape tum street_07.txt your_result.txt -vp
```
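
Note that evo's tum mode expects one pose per line in the form timestamp tx ty tz qx qy qz qw. If your SLAM system writes poses in a different layout, a small converter along the lines of the sketch below brings them into shape; the write_tum helper and the example pose tuple are hypothetical and must be adapted to your system's actual output:

```python
# Minimal sketch: write poses in the TUM format expected by evo_ape/evo_traj.
# 'poses' is a hypothetical list of (timestamp, position, quaternion) tuples;
# adapt the unpacking to whatever your SLAM system actually outputs.
def write_tum(path, poses):
    with open(path, 'w') as f:
        for stamp, (tx, ty, tz), (qx, qy, qz, qw) in poses:
            f.write('%.9f %f %f %f %f %f %f %f\n'
                    % (stamp, tx, ty, tz, qx, qy, qz, qw))

# Example: a single identity-orientation pose at an arbitrary timestamp.
write_tum('your_result.txt',
          [(1628066000.0, (0.0, 0.0, 0.0), (0.0, 0.0, 0.0, 1.0))])
```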

Calibration

For camera intrinsics, visit Ocamcalib for the omnidirectional model, Vins-Fusion for the pinhole and MEI models, and Opencv for the Kannala-Brandt model.

For IMU intrinsics, visit Imu_utils.

For extrinsics between cameras and the IMU, visit Kalibr. For extrinsics between the LiDAR and IMU, visit Lidar_IMU_Calib. For extrinsics between cameras and the LiDAR, visit Autoware.

Getting RINEX files

For GNSS-based methods like RTKLIB, we usually need data in the RINEX format. To make use of the GNSS raw measurements, we use the Link toolkit.

ROS drivers for UVC cameras

We wrote a ROS driver for UVC cameras to record our thermal-infrared images: UVC ROS driver

Star History

Star History Chart

ACKNOWLEDGEMENT

This work is supported by NSFC (62073214). The authors from SJTU hereby express their appreciation.


m2dgr's Issues

Handsfree IMU initial (gravity) measurement issue

Hello, when using the dataset lift04.bag I found that, while the device is stationary, the handsfree IMU measures a gravitational acceleration of about [0.0, 0.15, 10.05], and the total acceleration exceeds the usual 9.81. What could be the reason? Thanks.

GT consistency across sequences

First of all, thank you for releasing this dataset.

I have an issue regarding the ground-truth poses. I want to test an approach for inter-sequence LiDAR localization, i.e., localizing a single scan from one sequence in a LiDAR map built from another sequence. However, when I visualize two point clouds of the same place taken from two different sequences using the provided GTs, the scans do not align precisely (see attached images). As it was not clear in which frame the GTs are expressed (GNSS? LiDAR? IMU?), I also checked the provided extrinsic calibration (calibration_results.txt). However, all the extrinsic matrices have identity rotations, so the result is the same.

The sequence gate_03's GT seems wrong

I used gate_03.bag to obtain a trajectory and found that my results were very different from the ground truth.
Finally, I found that the data in gate_03.txt seem wrong: its x, y, and z values are several million meters.

Solid-state LiDAR

Hello, first of all thank you very much for providing such a complete SLAM dataset. Research on solid-state LiDAR is growing rapidly; we hope you can add sequences that include a solid-state LiDAR in the future 🙏

IMU calibration

May I ask how you calibrated your IMU? I also used imu_utils to calibrate this model of IMU, but the results came out as NaN. So I would like to ask: did you calibrate directly on the IMU bag recorded with ROS, or did you do some special processing? Many thanks.

Intrinsic parameters of Event camera

Thank you for providing such a comprehensive dataset. Are intrinsic parameters provided for the event camera? I can't find them in calibration_results.txt (only extrinsic parameters are provided).

The file LVISAM-modified.md says in its third paragraph to comment out some code

```cpp
// Comment out the following code:
// pcl::PointCloud<PointType>::Ptr laser_cloud_offset(new pcl::PointCloud<PointType>());
// Eigen::Affine3f transOffset = pcl::getTransformation(L_C_TX, L_C_TY, L_C_TZ, L_C_RX, L_C_RY, L_C_RZ);
// pcl::transformPointCloud(*laser_cloud_in, *laser_cloud_offset, transOffset);
// *laser_cloud_in = *laser_cloud_offset;
```

As far as I can tell, this is not correct. That code converts the point cloud from the LiDAR frame to the IMU frame.

Camera topics in the rosbags are in compressed image format, which does not match the config files

In the config files of both LVI-SAM and VINS-Mono, the image topic in the yaml file is a raw image in sensor_msgs/Image format.

However, in the rosbag files the camera images are in sensor_msgs/CompressedImage format (I used rosbag info to check door_01, door_02, gate_01, room_01, and room_dark01).

So the feature_tracker will not work. Have the rosbags been updated recently (i.e., compressed)?

If you could give me a modified feature_tracker cpp file or the original rosbags, it would be a great help for my tests.

You have done great work; maybe my question is silly.


Questions about LiDAR-IMU calibration

Hi, thanks for your nice work. For the extrinsics between the LiDAR and IMU, you refer to [Lidar_IMU_Calib]. Does it work well for ground vehicles without sufficient excitation? Also, in the livestream tonight you said you had done some measurements manually, so what information did you adopt from Lidar_IMU_Calib?

Fail to Download Rosbags

Hi, thanks for your great work first.
I failed to download the "alternative indoors and outdoors" rosbags from the provided link, but I could download the ".txt" ground-truth files successfully.
Do you have any idea how to fix this?
Thank you!

About long term

Thank you for your excellent work!

I notice that the sequences "street 1" and "street 2" are marked as long-term. However, it seems the two trajectories do not overlap?

By the way, my research topic is long-term localization. The common datasets I use are Oxford RobotCar and NCLT. Both datasets also cover different seasons and weather conditions. More importantly, most of their trajectories overlap; that way, for example, we can localize a robot in summer on a map built in winter to evaluate long-term performance.

Rosbags

Hello, the images in the rosbags are all compressed. Do you happen to have rosbags in which the images are not compressed?

It would be better with raw wheel-encoder data

Due to insufficient excitation of the IMU under planar motion and severe drift of accelerometer integration when visual tracking is lost, many indoor ground-vehicle applications require wheel encoders to be practical. I would appreciate it even more if you would be so kind as to add raw wheel-encoder data to your already fantastic dataset.

Lack of extrinsic parameters from Mocap to LiDAR

Hi, thanks for your great job!
I am going to use the room data to test my algorithm. As mentioned in the paper, the ground truth is provided by a Mocap system, but I can't find the related extrinsic parameters for this sensor. Would you mind updating calibration_results.txt to provide these necessary extrinsics? Or can I derive them from the other extrinsic parameters?
Thank you in advance!

Speed up the downloading

Downloading directly from OneDrive is somewhat slow. Do you have any suggestions for speeding up the download?

Labeling issue

Are labels 5 and 8 in the figure mislabeled? Something seems off there.

GNSS data of M2DGR

1) The M2DGR paper points out that there are data from two GNSS receivers, but the schematic diagram shows only one antenna. Do the two receivers share the same antenna, with one recording the raw observations and the other recording the reference trajectory?
2) Can you provide documents describing the format of each topic, or point to where such documents can be found? The topic descriptions on the ROS official website do not seem to cover the GNSS raw observations.
3) Converting the rosbag (street_02.bag) into GNSS raw observation data with the ublox2rinex program failed; only the RINEX header file was generated.

A request about the dataset download options

Hello,

First of all, thank you very much for your lab's contributions to the SLAM field. I plan to use this dataset in some experiments to try to solve a few problems.

However, when downloading the dataset via OneDrive, the speed is somewhat slow and the download often fails midway.

Could you provide a download option that is friendlier to users in mainland China, such as Baidu Netdisk?

Looking forward to your reply ~

LINS

Thanks for this outstanding work. Could you provide a working config for LINS? I have modified it many times, but the results are not good.

Is the ground truth of street_07 wrong?

Hello. While running LIO-SAM on your street_07 sequence, I found that the ground-truth trajectory clearly has an extra segment (I have already removed the redundant time at the beginning and end of the ground truth).

A question about the GNSS topics

Hello, first of all thank you very much for your work!
There are quite a few GNSS-related topics, but little documentation describing them. I would like to know the meaning or purpose of the GNSS topics, e.g., which message I should subscribe to for latitude and longitude, and what the other topics are roughly about.
I hope you can provide some references. Thanks again for your work; the dataset is great!


The ground truth of gate_02 seems wrong

At around 80 seconds, the ground truth drops into a pit about 1 m deep. Neither the images nor algorithm runs suggest such a large pit exists.

The quaternions of the ground truth are all 0, and some GT links are wrong

Hi, thanks for your great job!
I found that the quaternions of the ground truth provided by the Leica are all 0. Is this because the Leica only provides 3-DOF position estimation?
Another problem is that some GT links are wrong. For example, the GT link of hall_03 will guide you to hall_02. I suggest checking those links again.
Thanks, and I await your kind response!

VI-Sensor d435i depth data?

Thanks for your great work!
I have a question: from your paper and README I noticed that the Intel Realsense D435i is used to collect RGB and IMU data, but there is no mention of depth images. I wonder whether you recorded depth image data. I'm looking for a SLAM dataset with depth and fisheye images.

The meaning of the ground-truth values

How exactly are the x, y, and z values in the ground truth obtained? In principle the z values should be roughly constant, but the trajectory I plot is tilted:

evo_traj tum gate_01.txt -p

The intrinsic parameters for the RGB camera

I want to run VINS-Mono with the RGB camera and IMU, but the result is not good. It seems to be caused by wrong camera parameter settings.

The config I use is shown in the attached screenshot.

By the way, because the RGB camera has a fish-eye lens with a large FOV, my camera model_type is MEI. The detailed intrinsic and extrinsic parameters are taken from calibration_results.txt.

I hope someone can tell me whether my camera config settings are correct.

Thank you
