
efficientloftr's Introduction

Efficient LoFTR: Semi-Dense Local Feature Matching with Sparse-Like Speed


Yifan Wang*, Xingyi He*, Sida Peng, Dongli Tan, Xiaowei Zhou
CVPR 2024

realtime_demo.mp4

TODO List

  • Inference code and pretrained models
  • Code for reproducing the test-set results
  • Add options for flash-attention and torch.compile for better performance
  • jupyter notebook demo for matching a pair of images
  • Training code

Installation

conda env create -f environment.yaml
conda activate eloftr
pip install torch==2.0.0+cu118 --index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt 
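
A quick, optional sanity check that the pinned PyTorch build can see your GPU (run from a Python shell inside the eloftr environment):

import torch
print(torch.__version__)           # expect 2.0.0+cu118
print(torch.cuda.is_available())   # expect True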

The test and training datasets can be downloaded via the download links provided by LoFTR.

We provide our pretrained model at the download link.
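
Until the notebook demo lands, here is a minimal matching sketch assembled from the notebook and issue examples further down this page (model class, config keys, and output keys as they appear there); the checkpoint and image paths are placeholders.

from copy import deepcopy
import cv2
import torch
from src.loftr import LoFTR, default_cfg

cfg = deepcopy(default_cfg)
cfg['coarse']['npe'] = [832, 832, 832, 832]   # as set in the notebook example below
matcher = LoFTR(config=cfg)
matcher.load_state_dict(torch.load("weights/eloftr_outdoor.ckpt")['state_dict'])
matcher = matcher.eval().cuda()

# grayscale images, resized so the side lengths are divisible by 8
# (the coarse resolution used in the examples below)
img0 = cv2.resize(cv2.imread("assets/img0.png", cv2.IMREAD_GRAYSCALE), (640, 480))
img1 = cv2.resize(cv2.imread("assets/img1.png", cv2.IMREAD_GRAYSCALE), (640, 480))

batch = {
    'image0': torch.from_numpy(img0)[None][None].cuda() / 255.,
    'image1': torch.from_numpy(img1)[None][None].cuda() / 255.,
}
with torch.no_grad():
    matcher(batch)                 # results are written back into `batch`
mkpts0 = batch['mkpts0_f'].cpu().numpy()
mkpts1 = batch['mkpts1_f'].cpu().numpy()
mconf = batch['mconf'].cpu().numpy()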

Reproduce the testing results with pytorch-lightning

You need to set up the testing subsets of ScanNet and MegaDepth first. Create symlinks from the previously downloaded datasets to data/{{dataset}}/test.

# set up symlinks
ln -s /path/to/scannet-1500-testset/* /path/to/EfficientLoFTR/data/scannet/test
ln -s /path/to/megadepth-1500-testset/* /path/to/EfficientLoFTR/data/megadepth/test

Inference time

conda activate eloftr
bash scripts/reproduce_test/indoor_full_time.sh
bash scripts/reproduce_test/indoor_opt_time.sh

Accuracy

conda activate eloftr
bash scripts/reproduce_test/outdoor_full_auc.sh
bash scripts/reproduce_test/outdoor_opt_auc.sh
bash scripts/reproduce_test/indoor_full_auc.sh
bash scripts/reproduce_test/indoor_opt_auc.sh

Training

The training code is coming soon; please stay tuned!

Citation

If you find this code useful for your research, please use the following BibTeX entry.

@inproceedings{wang2024eloftr,
  title={{Efficient LoFTR}: Semi-Dense Local Feature Matching with Sparse-Like Speed},
  author={Wang, Yifan and He, Xingyi and Peng, Sida and Tan, Dongli and Zhou, Xiaowei},
  booktitle={CVPR},
  year={2024}
}

efficientloftr's People

Contributors

wyf2020


efficientloftr's Issues

Question about the mask usage in attentions

Hi, thanks for your code release.

I observed that features are cropped with their masks before being fed into the attention modules, which differs from the approach used in LoFTR. I also noticed the comment "Not support generalized attention mask yet." Could you please explain the reasoning behind this?
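
For context, below is an illustrative sketch (not the repository's code) of the two strategies the question contrasts: cropping padded features to their valid rectangle before attention versus passing a generalized boolean attention mask. Shapes are hypothetical; one plausible reason for cropping is that fused flash-attention kernels generally do not accept arbitrary masks.

import torch
import torch.nn.functional as F

B, H, W, C, heads = 1, 60, 80, 256, 8
feat = torch.randn(B, H, W, C)
mask = torch.zeros(B, H, W, dtype=torch.bool)
mask[:, :48, :64] = True                       # valid (non-padded) region, hypothetical

# (a) crop to the valid rectangle, then attend over fewer tokens
tokens = feat[:, :48, :64].reshape(B, -1, heads, C // heads).transpose(1, 2)
out_crop = F.scaled_dot_product_attention(tokens, tokens, tokens)

# (b) keep all tokens and pass a generalized boolean attention mask instead
tokens_all = feat.reshape(B, -1, heads, C // heads).transpose(1, 2)
key_mask = mask.reshape(B, 1, 1, -1)           # broadcast over heads and query positions
out_masked = F.scaled_dot_product_attention(tokens_all, tokens_all, tokens_all,
                                            attn_mask=key_mask)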

ERROR: cannot load libmkl_vml_avx512.so.1 or libmkl_vml_def.so.1.

Hello! I followed the README to create the conda env and tried to reproduce the results with bash scripts/reproduce_test/outdoor_full_auc.sh, but I ran into these errors:

| INFO     | __main__:<module>:128 - Args and config initialized!
INTEL MKL ERROR: /usr/local/lib/libmkl_vml_avx512.so.1: undefined symbol: mkl_lapack_dspevd.
Intel MKL FATAL ERROR: cannot load libmkl_vml_avx512.so.1 or libmkl_vml_def.so.1.

Any ideas? Thanks!

Fine Preprocess feature size.

Hello!

I have a brief question regarding the dimension of image_1's fine features, in particular the addition of +2 when unfolding the local windows here. I fail to understand the reasoning behind the +2, since along the pipeline conf_matrix_ff has a size of [M, W**2, (W+2)**2] here. Although softmax_matrix_f does become [M, WW, WW], conf_matrix_ff is stored as [M, W**2, (W+2)**2].

I would really appreciate an explanation of the +2.

Thank you!
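
A shape-only sketch of where the +2 comes from, assuming standard F.unfold semantics (this is not the repository's exact code): image_0 is unfolded into plain W x W windows, while image_1 is unfolded into (W+2) x (W+2) windows with one pixel of padding, so each window pair correlates W**2 positions against (W+2)**2 positions; the one-pixel border presumably gives the fine match in image_1 room to move slightly outside its original window during refinement.

import torch
import torch.nn.functional as F

W = 8                          # fine window size ("must be even" per the config below)
C, Hf, Wf = 64, 64, 64         # hypothetical fine feature dimensions
feat0 = torch.randn(1, C, Hf, Wf)
feat1 = torch.randn(1, C, Hf, Wf)

unfold0 = F.unfold(feat0, kernel_size=W, stride=W)                  # [1, C*W*W, M]
unfold1 = F.unfold(feat1, kernel_size=W + 2, stride=W, padding=1)   # [1, C*(W+2)**2, M]

M = unfold0.shape[-1]
f0 = unfold0.view(1, C, W * W, M).permute(0, 3, 2, 1)               # [1, M, W**2, C]
f1 = unfold1.view(1, C, (W + 2) ** 2, M).permute(0, 3, 2, 1)        # [1, M, (W+2)**2, C]

# every position of the W x W window in image_0 against every position of the
# enlarged (W+2) x (W+2) window in image_1 (the repo stores this without the
# leading batch dimension, i.e. [M, W**2, (W+2)**2])
conf_matrix_ff = torch.einsum("bmlc,bmsc->bmls", f0, f1)
print(conf_matrix_ff.shape)    # torch.Size([1, 64, 64, 100])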

spv_bids variables

Hello,

Thank you for the great work!

Edit: spv_b/i/j_bids are the output of the coarse supervision, which has not been released yet; apologies.

Thank you!

LoFTR vs EfficientLoFTR predictions

Hi,
I have noticed that LoFTR's predictions mostly lie on the grid defined for coarse matching, whereas in EfficientLoFTR this is not necessarily the case and the predictions are more refined. Why is this, and are the data and supervision handled differently during training?
(note: I changed the image size to 256x256)
Original LoFTR: [match visualization]

Efficient LoFTR: [match visualization]
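
A quick way to quantify this observation (a diagnostic sketch, assuming a populated `batch` dict as in the notebook examples on this page and the 8-pixel coarse resolution from the config):

import numpy as np

coarse_stride = 8
mkpts1 = batch['mkpts1_f'].cpu().numpy()
on_grid = np.all(np.mod(mkpts1, coarse_stride) == 0, axis=1)
print(f"{on_grid.mean() * 100:.1f}% of mkpts1_f lie exactly on the {coarse_stride}px coarse grid")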

inference time not as fast as expected

Hi, thanks for open-sourcing the code and model weights.
As I said in a previous post, I would like to use EfficientLoFTR for a comparative benchmark in our study.

I found strange results in my benchmark: at size 1x1x256x256, Efficient LoFTR's inference time is close to 26 ms.
This is better than LoFTR, which runs at ~40 ms at this resolution, but very close to topicFMfast when measured on my GeForce RTX 2070 Mobile GPU.

Since topicFMfast is not in your benchmark, I would like to know whether I am making a mistake when using your code.

Here is my inference code:

import time
import cv2
import numpy as np
import pytorch_lightning as pl
import argparse
import pprint
import torch
import kornia as K
import kornia.feature as KF
import matplotlib.pyplot as plt
from kornia_moons.viz import draw_LAF_matches

from loguru import logger as loguru_logger

from src.config.default import get_cfg_defaults
from src.utils.profiler import build_profiler

from src.lightning.data import MultiSceneDataModule
from src.lightning.lightning_loftr import PL_LoFTR



def parse_args():
    # init a custom parser which will be added into pl.Trainer parser
    # check documentation: https://pytorch-lightning.readthedocs.io/en/latest/common/trainer.html#trainer-flags
    parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
    parser.add_argument(
        '--data_cfg_path', type=str, default="configs/data/megadepth_test_1500.py", help='data config path')
    parser.add_argument(
        '--main_cfg_path', type=str, default="configs/loftr/eloftr_optimized.py", help='main config path')
    parser.add_argument(
        '--ckpt_path', type=str, default="weights/eloftr_outdoor.ckpt", help='path to the checkpoint')
    parser.add_argument(
        '--dump_dir', type=str, default=None, help="if set, the matching results will be dump to dump_dir")
    parser.add_argument(
        '--profiler_name', type=str, default=None, help='options: [inference, pytorch], or leave it unset')
    parser.add_argument(
        '--batch_size', type=int, default=1, help='batch_size per gpu')
    parser.add_argument(
        '--num_workers', type=int, default=2)
    parser.add_argument(
        '--thr', type=float, default=None, help='modify the coarse-level matching threshold.')
    parser.add_argument(
        '--pixel_thr', type=float, default=None, help='modify the RANSAC threshold.')
    parser.add_argument(
        '--ransac', type=str, default=None, help='modify the RANSAC method')
    parser.add_argument(
        '--scannetX', type=int, default=832, help='ScanNet resize X')
    parser.add_argument(
        '--scannetY', type=int, default=832, help='ScanNet resize Y')
    parser.add_argument(
        '--megasize', type=int, default=1152, help='MegaDepth resize')
    parser.add_argument(
        '--npe', action='store_true', default=False, help='')
    parser.add_argument(
        '--fp32', action='store_true', default=False, help='')
    parser.add_argument(
        '--ransac_times', type=int, default=None, help='repeat ransac multiple times for more robust evaluation')
    parser.add_argument(
        '--rmbd', type=int, default=None, help='remove border matches')
    parser.add_argument(
        '--deter', action='store_true', default=False, help='use deterministic mode for testing')

    parser = pl.Trainer.add_argparse_args(parser)
    return parser.parse_args()

def inplace_relu(m):
    classname = m.__class__.__name__
    if classname.find('ReLU') != -1:
        m.inplace=True

if __name__ == '__main__':
    # parse arguments
    args = parse_args()
    pprint.pprint(vars(args))

    # init default-cfg and merge it with the main- and data-cfg        
    config = get_cfg_defaults()
    config.merge_from_file(args.main_cfg_path)
    config.merge_from_file(args.data_cfg_path)
    if args.deter:
        torch.backends.cudnn.deterministic = True
    pl.seed_everything(config.TRAINER.SEED)  # reproducibility

    # tune when testing
    if args.thr is not None:
        config.LOFTR.MATCH_COARSE.THR = args.thr

    if args.scannetX is not None and args.scannetY is not None:
        config.DATASET.SCAN_IMG_RESIZEX = args.scannetX
        config.DATASET.SCAN_IMG_RESIZEY = args.scannetY
    if args.megasize is not None:
        config.DATASET.MGDPT_IMG_RESIZE = args.megasize

    if args.npe:
        if config.LOFTR.COARSE.ROPE:
            assert config.DATASET.NPE_NAME is not None
        if config.DATASET.NPE_NAME is not None:
            if config.DATASET.NPE_NAME == 'megadepth':
                config.LOFTR.COARSE.NPE = [832, 832, config.DATASET.MGDPT_IMG_RESIZE, config.DATASET.MGDPT_IMG_RESIZE] # [832, 832, 1152, 1152]
            elif config.DATASET.NPE_NAME == 'scannet':
                config.LOFTR.COARSE.NPE = [832, 832, config.DATASET.SCAN_IMG_RESIZEX, config.DATASET.SCAN_IMG_RESIZEX] # [832, 832, 640, 640]
    else:
        config.LOFTR.COARSE.NPE = [832, 832, 832, 832]

    if args.ransac_times is not None:
        config.LOFTR.EVAL_TIMES = args.ransac_times

    if args.rmbd is not None:
        config.LOFTR.MATCH_COARSE.BORDER_RM = args.rmbd

    if args.pixel_thr is not None:
        config.TRAINER.RANSAC_PIXEL_THR = args.pixel_thr

    if args.ransac is not None:
        config.TRAINER.POSE_ESTIMATION_METHOD = args.ransac
        if args.ransac == 'LO-RANSAC' and config.TRAINER.RANSAC_PIXEL_THR == 0.5:
            config.TRAINER.RANSAC_PIXEL_THR = 2.0

    if args.fp32:
        config.LOFTR.FP16 = False

    loguru_logger.info(f"Args and config initialized!")

    # lightning module
    profiler = build_profiler(args.profiler_name)
    model = PL_LoFTR(config, pretrained_ckpt=args.ckpt_path, profiler=profiler, dump_dir=args.dump_dir)
    loguru_logger.info(f"LoFTR-lightning initialized!")
    model.matcher = model.matcher.eval().cuda()
    # model.matcher = torch.compile(model.matcher)
    
    print('start inference')
    # Load example images
    img0_pth = "assets/01.BMP"
    img1_pth = "assets/02.BMP"
    img0_raw = cv2.imread(img0_pth, cv2.IMREAD_GRAYSCALE)
    img1_raw = cv2.imread(img1_pth, cv2.IMREAD_GRAYSCALE)
    size = 256
    img0_raw = cv2.resize(img0_raw, (size, size))  # input size should be divisible by 8
    img1_raw = cv2.resize(img1_raw, (size, size))
    img0 = torch.from_numpy(img0_raw)[None][None].cuda() / 255.
    img1 = torch.from_numpy(img1_raw)[None][None].cuda() / 255.
    data_dict = {'image0': img0, 'image1': img1, 'pair_names': ('01', '02'), 'dataset_name' : 'scan4all'}
    print('image 0 size', img0.shape)
    print('image 1 size', img1.shape)
    # inference (with warmup)
    num_inferences = 105
    times = np.zeros(num_inferences)
    with torch.no_grad():
        with torch.autocast(enabled=config.LOFTR.FP16, device_type='cuda', dtype=torch.float16):
            for i in range(num_inferences):
                torch.cuda.current_stream().synchronize()
                t0 = time.time()
                model.matcher(data_dict)
                torch.cuda.current_stream().synchronize()
                t1 = time.time()
                current_time = (t1 - t0) *1000
                print(f"inference pytorch {current_time :.1f} [ms]")
                times[i] = current_time
    print('times ', times)
    print(f"average inference time = {times[5:].mean() :.1f} [ms] std {times[5:].std() :.1f} for {num_inferences - 5} samples")
    print('data_dict.keys()', data_dict.keys())     
    print('mconf', data_dict['mconf'].shape)
    print('data_dict', data_dict['mkpts0_f'].shape)
    print('data_dict', data_dict['mkpts1_f'].shape)
    # print('mconf', data_dict['mconf'])
    mkpts0 = data_dict['mkpts0_f']
    mkpts1 = data_dict['mkpts1_f']
    mconf = data_dict['mconf']
    mkpts0 = mkpts0.cpu().numpy()
    mkpts1 = mkpts1.cpu().numpy()
    # inliers filtering
    mconf = mconf.unsqueeze(1)
    mconf = mconf.cpu().numpy()
    mconf = mconf > 0.2
    print("mconf", mconf.shape)
    # plot matches
    
    fig = plt.figure()
    ax = fig.add_subplot(1, 1, 1)
    draw_LAF_matches(
        KF.laf_from_center_scale_ori(
            torch.from_numpy(mkpts0).view(1, -1, 2),
            torch.ones(mkpts0.shape[0]).view(1, -1, 1, 1),
            torch.ones(mkpts0.shape[0]).view(1, -1, 1),
        ),
        KF.laf_from_center_scale_ori(
            torch.from_numpy(mkpts1).view(1, -1, 2),
            torch.ones(mkpts1.shape[0]).view(1, -1, 1, 1),
            torch.ones(mkpts1.shape[0]).view(1, -1, 1),
        ),
        torch.arange(mkpts0.shape[0]).view(-1, 1).repeat(1, 2),
        K.tensor_to_image(img0),
        K.tensor_to_image(img1),
        mconf,
        draw_dict={"inlier_color": (0.2, 1, 0.2), "tentative_color": None, "feature_color": (0.2, 0.5, 1), "vertical": False},
        ax=ax
    )
    plt.savefig(f"assets/output_filtered_by_confidence_size{size}_num-match{len(mconf)}_{(t1 - t0) *1000 :0.1f}_ms.png")

Here is my environment setup:

conda env create -f environment.yaml
conda activate eloftr
pip install torch==2.0.0+cu118 --index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt 
pip install kornia_moons
python inference.py
  • average inference time = 26.0 [ms], std 2.5, for 100 samples at 1x1x256x256
  • average inference time for topicFMfast is close to 30 ms on the same image pair, same PC, and same GPU

Did I miss something needed to make your code run more efficiently?

Best,

Training time

Roughly how long does training on MegaDepth v1 take, and on how many GPUs? ^_^

Issues with the evaluation

Hi,

Before everyone gets too excited, I need to point out some obvious issues in the evaluation described in the paper.

In Figure 1, the inference time of the semi-dense approaches is largely under-estimated because it is computed at a much lower resolution than the pose accuracy (on MegaDepth). This is evidenced by Table 8: 56.4 AUC@5° and 40.1 ms (Table 1) actually correspond to resolutions 1184×1184 and 640×640, respectively. In reality, the proposed approach is much slower: the inference time at this resolution is 139ms (compare this to LightGlue's 30ms). For the reported inference time, the proposed approach is actually not more accurate than LightGlue (and most likely less).

The same story goes for other semi-dense matchers - for LoFTR it should be much higher than 66ms, closer to 180ms (LightGlue paper, Table 2). Even at this resolution, the accuracy gap might completely vanish when using a modern implementation of RANSAC, as found in PoseLib. Evidence of this can also be found in LightGlue, Table 2 (LO-RANSAC). This can be easily evaluated in glue-factory so this omission is surprising.

We'd appreciate having the authors comment on this - @wyf2020 @hxy-123 @Cuistiano - thank you!
cc @Phil26AT

Exporting to ONNX

By the way, have the authors considered exporting to ONNX and then calling it from C++?

Script for 2 camera demo

Hello!

Could the script used to demonstrate the feature matching from 2 camera feeds be shared?

Thank you

image size for training and Ablation Studies of Image Resolution

What image size is used for training? It does not seem to be mentioned in the paper.
As for Table 8 (impact of test image resolution on the MegaDepth dataset), do you use the corresponding resolution for both training and evaluation, or the same model for different evaluation resolutions?

Training code release

hello!

Thank you for the amazing work!

I just wanted to ask if you are planning to release the training code?

Thank you!

Training code

Hello, I just ported over your code only to realize at the end that there is no loss function. I ended up implementing it myself in the meantime based on the paper, and I am wondering when this will be available so I can use the official (and most certainly more accurate) version from you and the team?

Thanks and best,

Ethan

My modified JupyterLab notebook

import os
os.chdir("..")
from copy import deepcopy

import torch
import cv2
import numpy as np
import matplotlib.cm as cm
from src.utils.plotting import make_matching_figure
from src.utils.plotting import make_matching_figure_2
print(torch.cuda.is_available())
from src.loftr import LoFTR,default_cfg
_default_cfg = deepcopy(default_cfg)
_default_cfg['coarse']['temp_bug_fix'] = True # set to False when using the old ckpt
_default_cfg['coarse']['npe'] = [832, 832, 832, 832]
print(_default_cfg)

matcher = LoFTR(config=_default_cfg)
matcher.load_state_dict(torch.load(r'D:\jx\ai\featue_match\ELOFTR\EfficientLoFTR-main\eloftr_outdoor.ckpt')['state_dict'])  # raw string so '\a' and '\f' are not treated as escapes
matcher.eval().cuda()

Load example images

img0_pth = "D:\jx\ai\imageRdataset\models-master\models-master\research\delf\delf\python\examples\data\oxford5k_images\Snipaste_2024-03-14_17-34-54_r.jpg"
img1_pth = "D:\jx\ai\imageRdataset\models-master\models-master\research\delf\delf\python\examples\data\oxford5k_images\Snipaste_2024-03-14_17-31-37.jpg"
#img0_pth = "D:\jx\data\040-1-014231-514-004490.jpg"
#img1_pth = "D:\jx\data\040-2-014231-514-004490.jpg"
img0_raw = cv2.imread(img0_pth, cv2.IMREAD_GRAYSCALE)
img1_raw = cv2.imread(img1_pth, cv2.IMREAD_GRAYSCALE)
img0_raw = cv2.resize(img0_raw, (640, 480))
img1_raw = cv2.resize(img1_raw, (640, 480))

img0 = torch.from_numpy(img0_raw)[None][None].cuda() / 255.
img1 = torch.from_numpy(img1_raw)[None][None].cuda() / 255.
batch = {'image0': img0, 'image1': img1}

Inference with LoFTR and get prediction

with torch.no_grad():
    matcher(batch)
mkpts0 = batch['mkpts0_f'].cpu().numpy()
mkpts1 = batch['mkpts1_f'].cpu().numpy()
mconf = batch['mconf'].cpu().numpy()

Draw

color = cm.jet(mconf)
text = [
'LoFTR',
'Matches: {}'.format(len(mkpts0)),
]
fig = make_matching_figure_2(img0_raw, img1_raw, mkpts0, mkpts1, color, text=text, path=r"D:\jx\ai\imageRdataset\models-master\models-master\research\delf\delf\python\examples\data\oxford5k_images\result_1.jpg")

Here is how I wrote the contents of the config file:
from yacs.config import CfgNode as CN

def lower_config(yacs_cfg):
    if not isinstance(yacs_cfg, CN):
        return yacs_cfg
    return {k.lower(): lower_config(v) for k, v in yacs_cfg.items()}

############## ↓ LoFTR Pipeline ↓ ##############
_CN = CN()
_CN.BACKBONE_TYPE = 'RepVGG'
_CN.ALIGN_CORNER = False
_CN.RESOLUTION = (8, 1)
_CN.FINE_WINDOW_SIZE = 8 # window_size in fine_level, must be even
_CN.FP16 = False
_CN.REPLACE_NAN = False
_CN.EVAL_TIMES = 1

# 1. LoFTR-backbone (local feature CNN) config

_CN.BACKBONE = CN()
_CN.BACKBONE.BLOCK_DIMS = [64, 128, 256] # s1, s2, s3

# 2. LoFTR-coarse module config

_CN.COARSE = CN()
_CN.COARSE.D_MODEL = 256
_CN.COARSE.D_FFN = 256
_CN.COARSE.NHEAD = 8
_CN.COARSE.LAYER_NAMES = ['self', 'cross'] * 4
_CN.COARSE.AGG_SIZE0 = 4
_CN.COARSE.AGG_SIZE1 = 4
_CN.COARSE.NO_FLASH = False
_CN.COARSE.ROPE = True
_CN.COARSE.NPE = None

# 3. Coarse-Matching config

_CN.MATCH_COARSE = CN()
_CN.MATCH_COARSE.THR = 0.1
_CN.MATCH_COARSE.BORDER_RM = 2
_CN.MATCH_COARSE.DSMAX_TEMPERATURE = 0.1
_CN.MATCH_COARSE.TRAIN_COARSE_PERCENT = 0.2 # training tricks: save GPU memory
_CN.MATCH_COARSE.TRAIN_PAD_NUM_GT_MIN = 200 # training tricks: avoid DDP deadlock
_CN.MATCH_COARSE.SPARSE_SPVS = True
_CN.MATCH_COARSE.SKIP_SOFTMAX = False
_CN.MATCH_COARSE.FP16MATMUL = False

# 4. Fine-Matching config

_CN.MATCH_FINE = CN()
_CN.MATCH_FINE.SPARSE_SPVS = True
_CN.MATCH_FINE.LOCAL_REGRESS_TEMPERATURE = 1.0
_CN.MATCH_FINE.LOCAL_REGRESS_SLICEDIM = 8

# 5. LoFTR Losses

# -- # coarse-level

_CN.LOSS = CN()
_CN.LOSS.COARSE_TYPE = 'focal' # ['focal', 'cross_entropy']
_CN.LOSS.COARSE_WEIGHT = 1.0
_CN.LOSS.COARSE_SIGMOID_WEIGHT = 1.0
_CN.LOSS.LOCAL_WEIGHT = 0.5
_CN.LOSS.COARSE_OVERLAP_WEIGHT = False
_CN.LOSS.FINE_OVERLAP_WEIGHT = False
_CN.LOSS.FINE_OVERLAP_WEIGHT2 = False

# -- - -- # focal loss (coarse)

_CN.LOSS.FOCAL_ALPHA = 0.25
_CN.LOSS.FOCAL_GAMMA = 2.0
_CN.LOSS.POS_WEIGHT = 1.0
_CN.LOSS.NEG_WEIGHT = 1.0

# -- # fine-level

_CN.LOSS.FINE_TYPE = 'l2_with_std' # ['l2_with_std', 'l2']
_CN.LOSS.FINE_WEIGHT = 1.0
_CN.LOSS.FINE_CORRECT_THR = 1.0 # for filtering valid fine-level gts (some gt matches might fall out of the fine-level window)

default_cfg = lower_config(_CN)
I am not sure whether there are any problems. The results from my Jupyter run look fine, but sometimes there are too many points, and the accuracy does not feel that high.

Training Code

Hello, I was wondering when you will release the training code?

Output descriptors

Hi there,

Thank you for your great work! Would it be possible to provide access to the descriptor of each matched keypoint? I would appreciate it a lot!

generate high-quality point clouds

Do you plan to use it to generate high-quality point clouds in the future? I think using it to generate high-quality point clouds would be meaningful for many tasks such as SLAM and 3D reconstruction.

add model weights

When do you plan to provide the model weights?
I would like to use them for a comparative benchmark in our study.
Best regards

Speed comparison

Hi, is this really 2.5x faster than LoFTR? With the opt fp16 settings I measure about 70 ms, while LoFTR takes about 180 ms, on a GTX 2080 Super with CUDA 11.7.

Is there anything else I could tune to optimize it?

Training code

When will the training code be released?

how to produce poses and intrinsics

The Jupyter notebook provides a demo for matching two images. I would like to know how to match several images in order to produce camera poses and intrinsics. Thank you!
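
Not an official answer, but a common route: given known intrinsics, the relative pose of one pair can be recovered from the matched keypoints with OpenCV, as sketched below (K and the array names are placeholders). Intrinsics themselves cannot be estimated from a single pair of matches; for several images, feeding the matches into an SfM pipeline such as COLMAP is the usual way to obtain both poses and intrinsics.

import cv2
import numpy as np

# matched keypoints from a populated `batch` dict, as in the notebook demo
mkpts0 = batch['mkpts0_f'].cpu().numpy()
mkpts1 = batch['mkpts1_f'].cpu().numpy()

# 3x3 camera intrinsics, assumed known and shared by both images (placeholder values)
K = np.array([[1000.0, 0.0, 320.0],
              [0.0, 1000.0, 240.0],
              [0.0, 0.0, 1.0]])

E, inlier_mask = cv2.findEssentialMat(mkpts0, mkpts1, K,
                                      method=cv2.RANSAC, prob=0.999, threshold=1.0)
n_inliers, R, t, _ = cv2.recoverPose(E, mkpts0, mkpts1, K, mask=inlier_mask)
print("rotation:\n", R, "\ntranslation (up to scale):\n", t.ravel())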
