
Comments (19)

unicornx commented on August 18, 2024


from tinylab.org.

lzufalcon commented on August 18, 2024

[falcon] Testing and the stable tree: how to make the stable tree really stable.

There have been some concerns expressed that the stable kernel is growing too much, by adding too many patches, which makes it less stable.

Levin has been working to make the Linux kernel's stable trees genuinely stable. His two areas of focus are "that fewer regressions are released" and "that all of the fixes get out there for users".

On the second point, Levin presented "Machine learning and stable kernels" at the 2018 Open Source Summit North America; the goal is to automatically find patches that nobody explicitly tagged for a stable release, so that more fixes make it into the stable trees.

The first point is about making sure those fixes get adequate testing after human review, and that is what this topic focused on. The discussion covered xfstests for filesystems, blktests for storage, and MMTests for memory. There was also discussion of which automated frameworks should run these tests: Levin would like it to be KernelCI; an audience member asked why not the 0-Day automated testing service, and Levin replied that 0-Day is only semi-open, so the fully open source KernelCI is preferable. The discussion then turned to include/exclude lists. The main reason is that some test cases take a very long time to run, or cannot run on arbitrary kernel versions and configuration options, so a subset has to be maintained, and specific test cases that tend to produce false positives have to be excluded. LTP is a typical example: it has thousands of test cases, but it cannot simply be dropped wholesale into product testing, because it produces a large number of false positives (bugs that do not affect the final product but can take a lot of time to chase down), so an include/exclude list is needed.

Levin closed by urging everyone to contribute good test cases to KernelCI, so that the stable trees can run more and better tests and become really stable.

from tinylab.org.

lzufalcon commented on August 18, 2024

[falcon] Storage testing: how to use blktests

He has also recently been running blktests to track down a problem that manifested itself as an ext4 regression in xfstests. It turned out to be a problem in the SCSI multiqueue (mq) code, but he thought it would be nice to be able to pinpoint whether future problems were block layer problems or in ext4. So he has been integrating blktests into his test suite.

Ted Ts'o of Google has recently been working on NFS testing and blktests; the NFS testing is mostly done with xfstests. He used blktests while tracking down an ext4 regression: the problem showed up in ext4, but the actual cause turned out to be in the SCSI multiqueue (mq) code rather than in ext4 itself. He has since integrated blktests into his own automated test platform. He found that blktests still needs polish; for example, running all of the tests depends on enabling some 38 kernel modules, which blktests does not document, and he is preparing to contribute the related setup work back to blktests. He encouraged more kernel developers to run blktests, because "it makes his life easier".

from tinylab.org.

lzufalcon commented on August 18, 2024

[falcon] A way to do atomic writes: how to implement atomic writes at the filesystem layer

Application developers hate the fact that when they update files in place, a crash can leave them with old or new data—or sometimes a combination of both. He discussed some implementation ideas that he has for atomic writes for XFS and wanted to see what the other filesystem developers thought about it.

There are filesystems that can write out-of-place, such as XFS, Btrfs, and others, so it would be nice to allow for atomic writes at the filesystem layer.

In that system, users can write as much data as they want to a file, but nothing will be visible until they do an explicit commit operation. Once that commit is done, all of the changes become active. One simple way to implement this would be to handle the commit operation as part of fsync(), which means that no new system call is required.

Application developers regularly complain that, after updating a file in place, a crash can leave the data on disk in something other than the freshly written state they expected.

The system currently provides sync(), fsync(), fdatasync(), and similar interfaces for flushing data, but only fsync() guarantees that one specific file has reached the disk, and passing the wrong file descriptor gives confusing results. sync() only guarantees that the data has been queued for writing, fdatasync() does not guarantee that the metadata is written, and fflush() only guarantees that the user-space stream buffer has been flushed. Another option is to pass O_SYNC to open(), which forces data to be written to disk immediately, but that immediate write-through lets slow I/O drag down the CPU.
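As a concrete reference point, here is a minimal sketch (error handling trimmed) of the one pattern that does give a per-file durability guarantee today: write the data, then fsync() the same descriptor.

```c
#include <fcntl.h>
#include <unistd.h>

/* Write buf to path and make sure both data and metadata reach stable
 * storage before returning; only fsync() gives this per-file guarantee. */
int write_durably(const char *path, const void *buf, size_t len)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;
    if (write(fd, buf, len) != (ssize_t)len || fsync(fd) != 0) {
        close(fd);
        return -1;
    }
    return close(fd);
}
```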

The underlying devices do not implement this kind of atomic write yet, or the existing interfaces are awkward, so implementing such an interface at the filesystem level makes sense. Five years ago, a paper from HP Research described adding a flag to open() to provide this capability: in such a system, users can write as much data as they want, and nothing becomes visible until they do an explicit commit operation, at which point all of the changes take effect; the commit could be implemented by reusing fsync(). Hellwig has written some patches based on this idea, mainly adding an O_ATOMIC flag to open(), though some open questions remain:

It adds a new O_ATOMIC flag for open, which requests writes to be
failure-atomic, that is either the whole write makes it to persistent
storage, or none of it, even in case of power or other failures.
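Based on that description, usage under the proposal would presumably look something like the sketch below. Note that O_ATOMIC is the flag from Hellwig's patches, not something in mainline kernels, so the definition here is a placeholder.

```c
#include <fcntl.h>
#include <unistd.h>

#ifndef O_ATOMIC
#define O_ATOMIC 0x40000000   /* placeholder; the real value comes from the proposed patches */
#endif

/* Hypothetical failure-atomic update: nothing written between open() and
 * fsync() becomes visible or durable until the fsync(), which acts as the
 * commit. Either both writes persist, or neither does. */
void atomic_update(const char *path, const void *a, size_t alen,
                   const void *b, size_t blen)
{
    int fd = open(path, O_WRONLY | O_ATOMIC);
    write(fd, a, alen);   /* staged, not yet visible */
    write(fd, b, blen);   /* staged, not yet visible */
    fsync(fd);            /* commit point */
    close(fd);
}
```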

The discussion was lively, and several filesystem projects expressed interest. Something to look forward to!

from tinylab.org.

lzufalcon commented on August 18, 2024

[falcon] The Linux "copy problem": is it possible to make copy faster?

Much of the development work that has gone on in the Linux filesystem world over the last few years has been related to the performance of copying files, at least indirectly, he said. There are still pain points around copy operations, however, so he would like to see those get addressed.

French raised the problem because, even though cp has gotten much faster along with the hardware (NVMe/UFS), copy performance on Linux is still a bit embarrassing. What can software, and the kernel in particular, do to improve it? He ran some measurements:

On the fast drive, for a 2GB copy on ext4, cp took 1.2s (1.7s on the slow), scp 1.7s (8.4s), and rsync took 4.3s (not run on the slow drive, apparently). These represent "a dramatic difference in performance" for a "really stupid" copy operation

His analysis suggests this may be because cp uses a 128KB I/O size while the other tools use 16KB; other tools such as parcp, parallel, fpart and fpsync, and mutil take a parallelized approach instead. So what can the filesystems do?

  1. copy_file_range(): around five filesystems in the current kernel support it, but Btrfs does not. This call performs the copy between two files entirely inside the kernel, avoiding shuttling the data back and forth between user space and kernel space (a sketch of how a copy tool could use it follows this list).

  2. ACLs/xattrs: there is currently no API that copies an entire file together with all of its metadata, so should there be a user-space library for it, or is there something the kernel can do, especially for the security-related pieces?

  3. I/O size: this parameter depends on the hardware, but what user space currently gets from stat() is st_blksize, which is only the filesystem block size, not the optimal I/O size the device supports. Is it worth adding a field to statx() for this? For a RAID made up of different devices, the RAID controller would also have to query the devices to come up with the value.

  4. Page cache: could there be a way for user space to bypass the page cache? The page cache is valuable for data that is about to be accessed, say copying a kernel tree and then immediately building it, but if the copy will not be used again, going through the page cache is an unnecessary detour and skipping it would improve performance noticeably.

  5. fiemap: FIEMAP replaces block-by-block mapping by describing runs of contiguous blocks as extents, which cuts down the metadata (bitmap) overhead; filesystems that support it can speed up copies as well.
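As referenced in item 1, here is a minimal sketch of how a copy tool could use copy_file_range() (glibc has carried a wrapper since 2.27; error handling and the read()/write() fallback are trimmed):

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

/* Copy src to dst entirely inside the kernel, avoiding the user-space
 * bounce buffer that a plain read()/write() loop needs. */
int copy_in_kernel(const char *src, const char *dst)
{
    int in = open(src, O_RDONLY);
    int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    struct stat st;

    if (in < 0 || out < 0 || fstat(in, &st) < 0)
        return -1;

    off_t left = st.st_size;
    while (left > 0) {
        ssize_t n = copy_file_range(in, NULL, out, NULL, left, 0);
        if (n <= 0)
            return -1;   /* real code would fall back to read()/write() here */
        left -= n;
    }
    close(in);
    close(out);
    return 0;
}
```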

French said that Linux copy performance was a bit embarrassing; OS/2 was probably better for copy performance in some ways. But he did note that the way sparse files were handled using FIEMAP in cp was great. Ts'o pointed out that FIEMAP is a great example of how the process should work. Someone identified a problem, so kernel developers added a new feature to help fix it, and now that code is in cp; that is what should be happening with any other kernel features needed for copy operations.

The topic French raised is thought-provoking. The kernel and kernel developers never stand alone; the complete Linux system delivered to users includes plenty of other tools, and if there is a real problem in the end-user experience, kernel developers cannot simply toss it back to the user-space developers with "just parallelize it". French's attitude toward the work deserves credit.

French would like to see the filesystem developers participate in developing the tools or at least advising those developers.

Collaboration between communities, setting up working groups, or even proactively inviting the relevant user-space developers to summits like this one may be the better way forward. Thanks to those in the community who keep thinking about and digging into the "really stupid" "cp problem".

from tinylab.org.

lzufalcon commented on August 18, 2024

[falcon] Shrinking filesystem caches for dying control groups: reparent the accounted slabs to the parent cgroup on cgroup removal.

Control groups are managed using a virtual filesystem; a particular group can be deleted just by removing the directory that represents it. But the truth of the matter, Gushchin said, is that while removing a control group's directory hides the group from user space, that group continues to exist in the kernel until all references to it go away. While it persists, it continues to consume resources.

Specifically, for control groups that share the same underlying filesystem, the shrinkers are not able to reclaim memory from the VFS caches after a control group dies, at least under slight to moderate memory pressure.

Control groups were designed to manage resources better, but they face a resource-reclamation problem of their own. When a control group is no longer needed, its directory can be removed from user space, but the kernel keeps the corresponding resources around until every reference to them has dropped to zero. Reclaiming those resources requires the memory shrinkers to run, and the shrinkers only run under certain conditions, for example when an allocation pushes past some threshold; as long as the system still has plenty of memory, they never fire. The result is that large numbers of these dying control groups pile up. Each one may only hold on to about 200KB, but Gushchin said "I've even seen a host with more than 100Gb of memory wasted for dying cgroups".

Gushchin's earlier approach was to manufacture extra shrinker pressure so the system would reclaim the memory, but it caused performance regressions and was reverted. His current proposal is to reparent the accounted slab pages of a deleted control group to its parent control group, so the dying group can go away cleanly. The numbers in the latest patch set show that this does solve the problem; hopefully it will land in mainline soon, which should be quite helpful for cloud hosting.

from tinylab.org.

lzufalcon commented on August 18, 2024

Remaining LSFMM coverage from the May 23 LWN edition:

[falcon] Supporting the UFS turbo-write mode: faster for some scenarios?

A new version of the UFS specification is being written and turbo-write is expected to be part of it. The idea behind turbo-write is to use an SLC buffer to provide faster writes, with the contents being shifted to the slower TLC as needed.

Smartphone users have probably noticed how much faster phones have become over the past few years. That comes from faster processors (better design and manufacturing processes), but also from faster storage, and UFS deserves much of the credit. Early on, eMMC kept pushing read/write speeds up through parallelism, but UFS got past eMMC's bottleneck by switching to a much faster full-duplex serial link, improving speeds severalfold.

People have probably also noticed that phones get slower over time. The background here is that behind both the eMMC and UFS interface standards sit the flash chips themselves, and flash has gone through three major generations: SLC, MLC, and TLC. Each step brings more capacity and lower cost, but also lower endurance (cells start leaking charge after enough program/erase cycles) and slower reads and writes. The progression works by using more voltage levels to store more bits in a single cell: SLC is single-level-cell, MLC stores 2 bits per cell, and TLC 3 bits per cell, so it is easy to see why complexity rises and lifetime falls. MLC and TLC endurance is roughly 1/10 and 1/20 that of SLC: a TLC cell survives only about 500 program/erase cycles (for a 64GB TLC chip that still means roughly 32000GB of new data can be written, which is not as little as one might think), MLC lasts around 10,000 cycles, and SLC up to about 100,000. But because the cost has fallen so much, the market today is dominated by TLC and MLC, or by products that mix them with SLC.

The "turbo mode" here is about managing products that mix SLC with TLC/MLC. Such products use the SLC as a buffer, and turbo-write writes directly into the SLC. But SLC capacity is limited: leave the mode on all the time and the buffer fills up and things slow down quickly. So the driver has to decide, on the one hand, when to engage turbo-write, and on the other hand when to move data from SLC to TLC in the background, so that enough SLC capacity remains for turbo-write when it is needed. It also should not evacuate data to TLC the moment it arrives; the SLC's endurance should be used fully, with proper wear-leveling.

So the UFS driver has to balance the requirements above: "both the turbo-write governance and the evacuation policy should be handled by the UFS driver".

Finally, a question worth thinking about ahead of time: if such a turbo mode is exposed to the kernel, what could phone vendors build on top of it? Beyond benchmarking, what else? Which scenarios actually need this kind of short burst of read/write performance?

from tinylab.org.

lzufalcon commented on August 18, 2024

Remaining LSFMM coverage from the May 23 LWN edition:

[falcon] Filesystems for zoned block devices: new filesystem support for SMR storage

zoned block devices have multiple zones with different characteristics; usually there are zones that can only be written in sequential order as well as conventional zones that can be written in random order. The genesis of zoned block devices is shingled magnetic recording (SMR) devices, which were created to increase the capacity of hard disks, but at the cost of some flexibility.

SMR storage increases density by shrinking the spacing between tracks and even letting tracks overlap, at the cost of flexibility in write operations. The discussion here was about what kind of filesystem is needed when part of a disk is SMR (sequential writes only) and part is conventional. The filesystems currently gaining support are F2FS and Btrfs, plus a newly written one, ZoneFS.

ZoneFS is a new filesystem that exposes zoned block devices to users in the simplest possible way, Le Moal said. It exports each zone as a file under the mountpoint in two directories: /conventional for random-access zones or /sequential for sequential-only zones. Under those directories, the zones will be files that use the zone number as the file name.

All three are still incomplete and under active development.

from tinylab.org.

lzufalcon commented on August 18, 2024

Remaining LSFMM coverage from the May 23 LWN edition:

[falcon] Filesystems and crash resistance: another topic about "atomic write"

Currently, there are applications that create and populate a temporary file, set the attributes desired, then rename it, Goldstein said. The developers think that the file is written persistently to the filesystem, which is not true, but mostly works. The official answer is that you must use fsync(), but it is a "terrible answer" because it has impacts all over the system.

Once again the discussion was about application developers wanting a guarantee that their data has really been written back to disk, but this time it ended without any conclusion. The other discussion, on The Linux "copy problem", actually produced more tangible results.

It was mentioned that overlayfs implements a feature of this kind: set xattrs and then rename the file, and once the rename completes the metadata is guaranteed to have been written persistently. Another option, on XFS and ext4, is the FIEMAP ioctl with FIEMAP_FLAG_SYNC, which can achieve something similar for data.

There are two types of barriers that he is talking about. The first would be used by overlayfs; it sets extended attributes (xattrs) on a file, then renames it. Overlayfs expects that, if the rename is observed, then the metadata has been written persistently. The other barrier is for data to be persistently written, which can be done today using the FIEMAP ioctl() command (with the FIEMAP_FLAG_SYNC flag), at least for XFS and ext4, he asserted.

In fact, others added that even on XFS and ext4, FIEMAP cannot guarantee this for sparse files.
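For reference, a minimal sketch of the data barrier described above: issuing the FIEMAP ioctl with FIEMAP_FLAG_SYNC asks the filesystem to write the file's data out before mapping it (subject to the sparse-file caveat just mentioned).

```c
#include <sys/ioctl.h>
#include <string.h>
#include <linux/fs.h>       /* FS_IOC_FIEMAP */
#include <linux/fiemap.h>   /* struct fiemap, FIEMAP_FLAG_SYNC */

/* Force the file's data out to storage as a side effect of asking for its
 * extent map; we request zero extents because only the sync matters here. */
int fiemap_sync_barrier(int fd)
{
    struct fiemap fm;

    memset(&fm, 0, sizeof(fm));
    fm.fm_flags = FIEMAP_FLAG_SYNC;   /* sync the file before mapping */
    fm.fm_length = ~0ULL;             /* cover the whole file */
    fm.fm_extent_count = 0;           /* no extent records needed back */

    return ioctl(fd, FS_IOC_FIEMAP, &fm);
}
```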

There is a document from the CrashMonkey project, "Documenting the crash-recovery guarantees of Linux file systems", that describes in detail what POSIX, xfs, btrfs, ext4, and F2FS require and implement for fsync(file) and fsync(dir); application developers would do well to read it carefully first.

from tinylab.org.

lzufalcon commented on August 18, 2024

Remaining LSFMM coverage from the May 23 LWN edition:

[falcon] Asynchronous fsync(): fsync2() or io_uring based version of fsync()?

The idea of an asynchronous version of fsync() is kind of counter-intuitive, Wheeler said. But there are use cases in large-scale data migration. If you are copying a batch of thousands of files from one server to another, you need a way to know that those files are now durable, but don't need to know that they were done in any particular order. You could find out that the data had arrived before destroying the source copies.

Wheeler raised the topic because he wants an API that can handle a batch of files without depending on any particular ordering. Someone suggested io_uring:

The io_uring interface allows arbitrary operations to be done in a kernel worker thread and, when they complete, notifies user space.

But that did not seem to be what Wheeler was hoping for, so Ts'o proposed fsync2():

fsync2() that takes an array of file descriptors and returns when they have all been synced. If the filesystem has support for fsync2(), it can do batching on the operations. It would be easier for application developers to call a function with an array of file descriptors rather than jumping through the hoops needed to set up an io_uring, he said.

Rather than doing something that complicated in the kernel, wouldn't a user-space library built on top of io_uring do the job just as well?
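Such a user-space library is easy to sketch with liburing (an illustration of the idea, not anything proposed at the session): queue one fsync per descriptor, submit them all at once, then collect the completions in whatever order they arrive.

```c
#include <liburing.h>

/* Batch-fsync a set of file descriptors through io_uring and report
 * failure if any single fsync failed. */
int fsync_batch(int *fds, unsigned nr)
{
    struct io_uring ring;
    int ret = 0;

    if (io_uring_queue_init(nr, &ring, 0) < 0)
        return -1;

    for (unsigned i = 0; i < nr; i++) {
        struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
        io_uring_prep_fsync(sqe, fds[i], 0);   /* 0 = full fsync semantics */
    }
    io_uring_submit(&ring);

    for (unsigned i = 0; i < nr; i++) {
        struct io_uring_cqe *cqe;
        io_uring_wait_cqe(&ring, &cqe);
        if (cqe->res < 0)
            ret = -1;
        io_uring_cqe_seen(&ring, cqe);
    }
    io_uring_queue_exit(&ring);
    return ret;
}
```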

from tinylab.org.

lzufalcon commented on August 18, 2024

Remaining LSFMM coverage from the May 23 LWN edition:

[falcon] Lazy file reflink: VFS-level snapshots

Amir Goldstein has a use case for a feature that could be called a "lazy file reflink", he said, though it might also be described as "VFS-level snapshots".

Goldstein demonstrated overlayfs snapshots two years ago. The idea is to pick a subdirectory and create a snapshot of it, so that any change to a file under that directory tree is handled copy-on-write. It is implemented at the VFS level, so it does not care which filesystem is actually underneath.

The implementation is based mainly on a filesystem change journal plus inotify/fanotify; the source code is all public, so interested readers can take a look.

from tinylab.org.

unicornx commented on August 18, 2024

New system calls: pidfd_open() and close_range(): proposed system calls for opening pidfds and wholesale closing of ranges of file descriptors.

Two system calls that may be supported in upcoming kernels.

pidfd_open()

The 5.2 merge window added a CLONE_PIDFD flag to the clone() system call, which returns a pidfd for the child process at creation time.

The traditional way is to go through /proc, but the drawback is that on some systems /proc may not exist, or may be inaccessible because of permissions and the like.

To meet the needs of users like Android's low-memory killer (LMK) and service managers such as systemd, which want a pidfd for processes created the traditional way, Christian Brauner has proposed the pidfd_open() system call.
An earlier pidfd_open() had been proposed before, but using it required an ioctl() for conversion to stay compatible with the /proc approach; the new version removes that wart.
The new system call may be merged into the mainline kernel in 5.3.

See also "Rethinking race-free process signaling".
The 5.1 kernel introduced a new system call, pidfd_send_signal(). The pidfd concept was introduced because sending a signal by raw process ID can reach the wrong process, since PIDs are recycled over time.
In late March, Christian Brauner posted a patch set adding another new system call, pidfd_open(); it also introduced a new ioctl(). Linus was fine with adding pidfd_open() but objected to the ioctl().

That pidfd_open() approach can still race; the genuinely race-free way to obtain a pidfd is through clone().

An earlier reference: "Toward race-free process signaling".
The design of signals has many flaws, and there have been plenty of improvements over the years, such as signalfd().
But the problem the kernel community has focused on recently is the race around the process IDs used when sending signals.
Signal-related APIs identify processes by PID. The disadvantage of this method is that, over the lifetime of a system, the same PID is reused as processes are created and terminated.
This article covers the background of the patch set led by Brauner. It proposes to solve the signal-delivery issue by using file descriptors to identify processes; these descriptors would be obtained by opening a process's /proc directory.
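Putting the pieces together, race-free signalling would look roughly like the sketch below. pidfd_send_signal() is in 5.1; pidfd_open() was still only proposed at the time, so the syscall number defined here is the one it was later assigned and should be treated as an assumption.

```c
#define _GNU_SOURCE
#include <sys/syscall.h>
#include <signal.h>
#include <unistd.h>

#ifndef __NR_pidfd_open
#define __NR_pidfd_open 434            /* number assigned when merged; assumption here */
#endif
#ifndef __NR_pidfd_send_signal
#define __NR_pidfd_send_signal 424     /* merged in Linux 5.1 */
#endif

/* Obtain a pidfd for an existing process, then signal through the pidfd
 * instead of the reusable PID, so a recycled PID cannot be hit by mistake. */
int kill_via_pidfd(pid_t pid)
{
    int pidfd = syscall(__NR_pidfd_open, pid, 0);
    if (pidfd < 0)
        return -1;

    int ret = syscall(__NR_pidfd_send_signal, pidfd, SIGTERM, NULL, 0);
    close(pidfd);
    return ret;
}
```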

close_range()

Closes every open file descriptor that falls within a given range. This is much faster than closing them one by one in a user-space loop.
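A sketch of the intended use, assuming the call keeps the proposed close_range(first, last, flags) shape (the syscall number below is the one it later received, so treat it as a placeholder):

```c
#define _GNU_SOURCE
#include <sys/syscall.h>
#include <unistd.h>

#ifndef __NR_close_range
#define __NR_close_range 436   /* number assigned when merged; placeholder for the proposal discussed here */
#endif

/* Close every descriptor from `first` upward in a single call, e.g.
 * close_from(3) before exec'ing a child keeps only stdin/stdout/stderr. */
static inline int close_from(unsigned int first)
{
    return syscall(__NR_close_range, first, ~0U, 0);
}
```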

from tinylab.org.

unicornx commented on August 18, 2024

A kernel debugger in Python: drgn

At LSFMM 2019, Omar Sandoval of Facebook introduced drgn, a kernel debugger he has been developing. With it, developers can use Python scripts to access kernel data structures at runtime, for example inspecting the superblock of the root filesystem and walking all of the inodes cached in that superblock.

from tinylab.org.

unicornx commented on August 18, 2024

Improving .deb

Debian and its derivatives (such as Ubuntu) share the use of .deb as their distribution packaging format. The last major revision of the format happened in 1995 and it has not changed since; the format itself is described in the deb(5) manual page. Recently, however, a discussion started by Adam Borowski on the debian-devel mailing list explored the possibility of improving the format.
The improvements Borowski proposed are not complicated and would be easy to implement. They include:

  • Compressing the contents with zstd instead of xz, mainly to speed up decompression
  • Going back to the old (pre-1995) packaging format, to avoid the current format's limit on the length of the packaged data
  • Using tar instead of ar to bundle the data

With opinions going in every direction, the community has not reached consensus on changing the .deb packaging format.

from tinylab.org.

unicornx commented on August 18, 2024

New system calls for memory management

Several new system calls have been proposed recently in the hope that they can be added to the kernel in the near term. Some of them relate to memory management:

  • process_vm_mmap(), proposed by Kirill Tkhai, clones a VMA from one process into another (which involves cloning the corresponding page tables). It provides functionality similar to what can be built from the two existing system calls process_vm_writev() and process_vm_readv(), but more efficiently.
  • Oleksandr Natalenko proposed adding a per-process madvise file under /proc; writing certain keywords to it has the same effect as the process calling madvise() itself. Writing "merge", for example, is equivalent to the process calling madvise(MADV_MERGEABLE) (see the sketch after this list).
  • process_madvise(int pidfd, ...), proposed by Minchan Kim, designates a process via a pidfd and calls madvise() on its behalf.

Of these three proposals, the third seems to be the most popular.
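For context on the second item, here is what the process itself has to do today, over an explicit address range; the proposed /proc interface and process_madvise() would let another process trigger the same advice:

```c
#include <sys/mman.h>

/* Opt an address range into KSM page merging; this is the call that
 * writing "merge" to the proposed per-process /proc madvise file would
 * issue on the target process's behalf. */
int mark_mergeable(void *addr, size_t len)
{
    return madvise(addr, len, MADV_MERGEABLE);
}
```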

from tinylab.org.

unicornx commented on August 18, 2024

Memory: the flat, the discontiguous, and the sparse

This is a general overview-style introduction, not really suitable as a news item.

from tinylab.org.

unicornx commented on August 18, 2024

Brief items

Kernel release status

The current development kernel is 5.2-rc2, released on May 26. Linus said: "rc2 looks fairly normal, nothing particularly special stands out; I think the main difference from the last release is the SPDX updates. Joking aside, the obvious highlight of the week was Finland winning the ice hockey world championship."

The release codename was changed to "Golden Lions".

The "SPDX updates" refer to replacing the license boilerplate in kernel source files with SPDX tags. That is a fairly large job; nearly 300 related changes have been merged since 5.1 was released.

Stable updates: 5.1.5, 5.0.19, 4.19.46, 4.14.122, and 4.9.179, released on May 25.

A farewell to tmem

Ten years ago, the kernel introduced the concept of transcendent memory (tmem) to make more effective use of memory. The idea was never fully implemented, however, because the relevant developers left the kernel community years ago. Given that the code lacks the necessary maintenance, most of the transcendent-memory code will likely be removed starting with 5.3. The frontswap-related code will be kept, though, since it still has users.

from tinylab.org.

lzufalcon commented on August 18, 2024

Major software release news

  1. Qemu released v4.0.0, adding micro:bit support
  2. U-Boot released v2019.07-rc4
  3. Buildroot released 2019.05; Linux defaults to the 5.1.x series
  4. Busybox released 1.31.0
  5. LLVM released 8.0.1-rc2
  6. GCC released 9.1
  7. Linux RT released v5.0.19-rt11
  8. Rust released 1.35.0

from tinylab.org.

lzufalcon commented on August 18, 2024

Editing is complete; this will be published soon.

from tinylab.org.
