
Censor identifiable information in videos, in particular dashcam recordings in Germany.

License: GNU Affero General Public License v3.0

Python 99.38% Dockerfile 0.62%
dashcam blur opencv anonymizer plates faces video frame pytorch deep-learning

dashcamcleaner's Introduction


DashcamCleaner

This tool allows you to automatically censor faces and number plates on dashcam footage.
Report Bug · Request Feature

Table of Contents

  1. About The Project
  2. Getting Started
  3. Usage
  4. Weights
  5. Roadmap
  6. Contributing
  7. License
  8. Contact
  9. Acknowledgements

About The Project

This project is a result of data protection laws that require identifiable information to be censored in media that is posted to the internet. Dashcam videos in particular tend to be fairly cumbersome to manually edit, so this tool aims to automate the task.

The goal is to release a simple-to-use application with straightforward settings and acceptable performance that does not require any knowledge of image processing, neural networks, or programming on the end user's part.

Development started with an MVP using understand.ai's Anonymizer for its backend. Since then, the project has moved on to a custom-trained YOLOv8 network. I wrote about my experiences training the network and generating training data on Towards Data Science.

Getting Started

To get a local copy up and running, follow these simple steps.

Prerequisites

You need a working Python environment, version 3.8 or higher, with the packages listed in requirements.txt installed. Depending on your machine, you can leverage GPU acceleration for PyTorch; see the PyTorch installation instructions.

Since OpenCV ignores audio streams, ffmpeg is used to combine the edited video with the audio track of the input video. The environment variable FFMPEG_BINARY must point to the ffmpeg executable for this to work.
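
For illustration, here is a minimal sketch of such a muxing step in Python, assuming FFMPEG_BINARY is set; the file names and -map layout are placeholders, not the project's exact command:

    import os
    import subprocess

    ffmpeg = os.environ["FFMPEG_BINARY"]

    # Copy the blurred video stream and the original audio stream into the
    # final file without re-encoding either of them.
    subprocess.run(
        [ffmpeg, "-y",
         "-i", "blurred_temp.mp4",   # video: blurred output from OpenCV
         "-i", "input.mp4",          # audio: original recording
         "-map", "0:v:0", "-map", "1:a:0",
         "-c", "copy",
         "output.mp4"],
        check=True,
    )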

Installation example on Windows using Conda

  1. Clone the repo
    git clone https://github.com/tfaehse/DashcamCleaner.git
  2. Set up Python environment and install requisites
    conda create -n py38 python=3.8
    conda activate py38
    pip install -r requirements.txt
  3. Install ffmpeg binaries (release essentials is enough) and create an environment variable "FFMPEG_BINARY" that points to the ffmpeg.exe binary.

Usage

On first launch, the YOLOv8 model is automatically downloaded and fused with the custom weights for face and plate detection from this repo.

UI screenshot

The UI is fairly self-explanatory. To use the tool, you need to:

  • choose an input video file
  • choose an output location
  • hit start!

The options adjust parameters of the detection algorithm and the post-processing steps laid out in the roadmap. The detection threshold and inference size are direct parameters of the YOLOv8 detector; they provide the main controls for trading off detection quality against speed. In short:

  • Each recognized object, i.e. a face or a license plate, carries a confidence value describing how likely it is to actually be a face or license plate. Increasing the threshold results in fewer false positives, at the cost of potential false negatives.
  • The performance of the detector depends on the input image size, i.e. the resolution of the video. The inference size option downscales the input for detection only, which gives faster detection at reduced precision; the sketch after this list illustrates the idea. NOTE: The output video keeps the full resolution of the input video, so there is no loss in quality! Only detection runs at a lower resolution.
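
To make that concrete, here is a minimal sketch (not the project's code) of detecting on a downscaled frame and mapping the boxes back to full resolution; `detector` is a hypothetical callable:

    import cv2

    def detect_fullres(frame, detector, inference_size):
        # Run the detector on a downscaled copy of the frame, then map the
        # boxes back to full resolution. Only detection sees the small image;
        # the frame itself is blurred and written at full quality.
        h, w = frame.shape[:2]
        scale = inference_size / h
        small = cv2.resize(frame, (round(w * scale), round(h * scale)))
        boxes = detector(small)  # assumed to return [x1, y1, x2, y2] boxes
        return [[int(c / scale) for c in box] for box in boxes]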

The blur size determines how strongly detected faces and license plates are blurred. Boxes around faces and license plates can be enlarged by a factor between 0.8 and 10 using the ROI enlargement dial.
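
A minimal sketch of how blur size and ROI enlargement could interact, assuming OpenCV; this is an illustration, not the project's implementation:

    import cv2

    def blur_box(frame, box, blur_size=9, roi_multi=1.0):
        # Enlarge the detection box around its center by roi_multi, clamp it
        # to the frame, then blur the region in place.
        x1, y1, x2, y2 = box
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
        hw, hh = (x2 - x1) / 2 * roi_multi, (y2 - y1) / 2 * roi_multi
        x1, y1 = max(int(cx - hw), 0), max(int(cy - hh), 0)
        x2, y2 = min(int(cx + hw), frame.shape[1]), min(int(cy + hh), frame.shape[0])
        k = 2 * blur_size + 1  # GaussianBlur needs an odd kernel size
        frame[y1:y2, x1:x2] = cv2.GaussianBlur(frame[y1:y2, x1:x2], (k, k), 0)
        return frame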

Sometimes a license plate is missed for just a single frame. That frame, usually 1/30th of a second long, is still enough for the license plate or face to be identified. A computationally cheap way (compared to increasing the inference size) to fix such false negatives is the frame memory option: it blurs not only the boxes detected in the current frame but also the regions detected in the previous n frames. Especially in combination with ROI enlargement, and on videos without very quick movement, this can hide missed detections.
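
A minimal sketch of the frame memory idea, reusing blur_box from the sketch above; frames, detect and write are placeholders:

    from collections import deque

    frame_memory = 2                          # also blur boxes from the 2 previous frames
    history = deque(maxlen=frame_memory + 1)  # current frame plus n previous ones

    for frame in frames:
        history.append(detect(frame))         # hypothetical per-frame detector
        for past_detections in history:
            for box in past_detections:
                frame = blur_box(frame, box)
        write(frame)                          # hypothetical frame writer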

For reference: even at 1080p inference, i.e. an inference scale of 1, a 1080p/30 fps video from my 70mai 1S processes at around 10 frames per second; a 1-minute clip takes ~3 minutes to blur on a 5820K/GTX 1060 machine.

CLI

There's now also a fairly simple CLI to blur a video:

usage: cli.py -i INPUT_PATH -o OUTPUT_PATH [-w WEIGHTS] [-bw BLUR_WORKERS] [-s [1, 1024]] [-b [1, 99]] [-t [0.0, 1.0]] [-r [0.0, 2.0]] [-q [1.0, 10.0]] [-fe [0, 99]] [-nf] [-bm [0, 10]] [-m] [-mc] [-j] [-h]

This tool allows you to automatically censor faces and number plates on dashcam footage.

required arguments:
    -i INPUT_PATH
    --input_path INPUT_PATH
        Input video file path. Pass a folder name for batch processing all files in the folder.
        
    -o OUTPUT_PATH
    --output_path OUTPUT_PATH
        Output video file path. Pass a folder name for batch processing.
        

optional arguments:
    -w WEIGHTS  (Default: 720p_medium_mosaic)
    --weights WEIGHTS
        Weights file to use. See readme for the differences. (default = 720p_medium_mosaic).
        
    -bw BLUR_WORKERS  (Default: 2)
    --blur_workers BLUR_WORKERS
        Amount of processes to use for blurring frames. (default = 2)
        
    -b [1, 99]  (Default: 9)
    --blur_size [1, 99]
        Kernel radius of the blurring filter. A higher value means stronger blurring; 0 would mean no blurring at all.
        
    -t [0.0, 1.0]  (Default: 0.4)
    --threshold [0.0, 1.0]
        Detection threshold. A higher value means more certainty, a lower value means more blurring. This setting affects runtime; a lower threshold means slower execution times.
        
    -r [0.0, 2.0]  (Default: 1.0)
    --roi_multi [0.0, 2.0]
        Increase or decrease the area that will be blurred - 1.0 means no change.
        
    -q [1.0, 10.0]  (Default: 10)
    --quality [1.0, 10.0]
        Quality of the resulting video, higher = better. Conversion to CRF: ⌊(1-q/10)*51⌋ (see the sketch after the advanced options).
        
    -fe [0, 99]  (Default: 5)
    --feather_edges [0, 99]
        Feather edges of blurred areas; removes sharp edges on the blur mask.
        Expands the mask by the given amount and blurs the mask, so the effective size is twice the argument.
        
    -nf   (Default: False)
    --no_faces 
        Do not censor faces.
        
    -bm [0, 10]  (Default: 0)
    --blur_memory [0, 10]
        Also blur plates detected in the previous n frames, in order to (maybe) cover up missed identifiable information.
        
    -h 
    --help 
        Show this help message and exit.
        

optional arguments (advanced):
    -s [1, 1024]  (Default: 2)
    --batch_size [1, 1024]
        Inference batch size - large values require a lot of memory and may cause crashes!
        This will read multiple frames at the same time and perform detection on all of those at once.
        Not recommended for CPU usage.
        
    -m   (Default: False)
    --export_mask 
        Export a black and white only video of the blur-mask without applying it to the input clip.
        
    -mc   (Default: False)
    --export_colored_mask 
        Export a colored mask video of the blur-mask without applying it to the input clip.
        The value represents the confidence of the detector.
        Darker colors mean less confidence, brighter colors mean more confidence.
        If the --threshold setting is larger than 0, detections with a lower confidence are discarded.
        Channels: red = faces, green = number plates.
        Hint: turn off --feather_edges by setting -fe=0 and set --quality to 10.
        
    -j   (Default: False)
    --export_json 
        Export detections (based on index) to a JSON file.
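
As referenced under --quality, a quick sketch of the quality-to-CRF conversion (in x264, a lower CRF means better quality):

    import math

    def quality_to_crf(quality: float) -> int:
        # quality 10 -> CRF 0 (near lossless), quality 5 -> CRF 25, quality 1 -> CRF 45
        return math.floor((1 - quality / 10) * 51)

    assert quality_to_crf(10) == 0
    assert quality_to_crf(5) == 25
    assert quality_to_crf(1) == 45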

Container

Batch processing videos inside a docker (or podman) container

mkdir -p {input,output} # then place your video files in the input folder
docker run -it --rm -v "$PWD/input:/input" -v "$PWD/output:/output" ghcr.io/tfaehse/dashcamcleaner:edge
# to customize options, just use the regular CLI parameters:
docker run -it --rm -v "$PWD/input:/input" -v "$PWD/output:/output" ghcr.io/tfaehse/dashcamcleaner:edge --weights 1080p_medium_mosaic --blur_size 25 --inference_size 1080 --quality 7 --batch_size 10

GPU Support:

# test driver works
docker run -it --rm --gpus all nvidia/cuda:12.0.1-runtime-ubuntu22.04  nvidia-smi
docker run -it --rm --gpus all -v "$PWD/input:/input" -v "$PWD/output:/output" ghcr.io/tfaehse/dashcamcleaner:edge
# Output should contain "Using NVIDIA GeForce RTX xxxx.", not: "Using CPU."

Manual Docker Image Build:

git clone https://github.com/tfaehse/DashcamCleaner.git
docker build --pull -t dashcamcleaner DashcamCleaner
mkdir -p {input,output}
# place your files in the input folder
docker run -it --rm -v "$PWD/input:/input" -v "$PWD/output:/output" dashcamcleaner
# to customize options, just use the regular CLI parameters:
docker run -it --rm -v "$PWD/input:/input" -v "$PWD/output:/output" dashcamcleaner --weights 1080p_medium_v8

Weights

DashcamCleaner now supports loading weights dynamically; differently trained networks can be selected in the user interface. As part of this change, I will distribute networks trained for German roads with different training parameters over the next weeks:

  • different training image resolutions
  • different network depths, i.e. YOLOv8's nano, small and medium definitions
  • training with (_mosaic) and without (_rect) YOLOv8's mosaic dataloader. Only mosaic weights are available for now, as they perform significantly better

Once this is completed, I intend to publish an analysis on how training and inference image size and network depth affect performance and quality of the program.

As a rule of thumb:

  • bigger image sizes lead to better detection of small objects, for both training and inference
  • deeper networks have a higher ceiling for object detection but slow down training and inference
  • lowering inference image size can dramatically speed up the program
  • the mosaic dataloader has a large, positive impact. This was slightly unexpected for me; I assumed the (higher) resolution of fixed-size training would improve results, given that lots of objects are very small

In summary, you should select the highest inference image size you can afford. Inference size (if your input allows it) has a larger impact on detections than the network size, i.e. in my case 1080p_small beats 720p_medium.
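
For orientation, a minimal sketch of loading one of these weights files with the ultralytics API and running detection at a chosen inference size; the file name is illustrative:

    from ultralytics import YOLO

    model = YOLO("weights/1080p_small_mosaic.pt")  # illustrative weights file
    # Detect at a reduced inference size; boxes are returned in the source
    # image's coordinate system.
    results = model.predict("frame.jpg", imgsz=1080, conf=0.4)
    print(results[0].boxes.xyxy)  # tensor of [x1, y1, x2, y2] detections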

Roadmap

With the transition to a custom YOLOv8 detector, the original targets for the tool have been met. Performance is satisfactory and detection quality is very promising. However, work remains:

  • further trained networks and an analysis of different network sizes and image sizes used for training
  • release standalone executable

Implemented post processing steps:

  • a "frame memory": plate and face positions from the last n frames are also blurred → useful for static plates/faces removed because frames are processed in parallel now
  • enlarging of blurred regions

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

License

Distributed under the GNU Affero General Public License v3.0. See LICENSE.txt for more information.

Contact

Project Link: https://github.com/tfaehse/DashcamCleaner

Acknowledgements

  • YOLOv8 was chosen for its combination of performance, speed and ease of use
  • The original prototype was essentially a wrapper for Anonymizer, and the current implementation wouldn't have been possible without its high quality labels.

dashcamcleaner's People

Contributors

alexx-km · dark-vex · joshinils · knrdl · syperium · tfaehse


dashcamcleaner's Issues

Performance Compare GUI Settings

Hello

I spent a bit of time today checking the different settings in the GUI version with YOLOv8 (August 2023).

I used a 1-minute 4K clip driving on a highway:
3840x2160 / H.264-MPEG-4 AVC / 29.97 fps / 439 MB

I also ran one test comparing 1080p vs 4K.

My Computer:
OS: Win11 (Latest)
CPU: i7 13700k
GPU: Nvidia RTX4090 (24GB)
RAM: 32GB DDR5 6000MHz
Storage: 2TB NVMe

[screenshots of the settings comparison]

The only issue I have is that trucks' licence plates from Switzerland are not detected well.
See video: https://youtu.be/dOnSkP5UEac

Any other advice I should try?

I'm happy with the result: the 1-minute 4K video takes 05:12 min and the 1080p version 01:42 min.

Switch over to pyav

pyav provides lower-level Python bindings for ffmpeg. This should in theory allow for better performance when reading and writing frames, and it could massively clean up the blurring pipeline. Potential upsides include:

  • Not having to copy over the audio stream after the fact anymore, pyav supports this in one, clean write operation
  • Improved performance
  • More direct control over ffmpeg options

Error blurrer

Hello,

I had 4 errors, but unfortunately I could not fix 2 of them yet.

Traceback (most recent call last):
  File "d:\Dashcam\D1\src\qt_wrapper.py", line 72, in run
    frame_blurred = self.apply_blur(frame, detections)
  File "d:\Dashcam\D1\src\blurrer.py", line 50, in apply_blur
    export_colored_mask = self.parameters["export_colored_mask"]
KeyError: 'export_colored_mask'

Cache was cleared and I also reinstalled the PC.

Rectangles are visible when using frame_memory larger than 0

(using the fixed frame memory code from after #33)

Examples, each with the same frame:
frame_memory == 0 on the left, frame_memory == 1 in the middle, frame_memory == 2 on the right
[example images]

there is some kind of hard or obvious border that aligns with the pixel grid

same happens with plates:
[example images]

`--frame_memory` option is off by one

Meaning a value of 0 is actually interpreted as 1, so the default behaviour is to use two frames, not only the current one.

And for a larger value like 5, it uses 6 frames.

range and default values for "required named arguments"

I would like to know the range of valid values and a good default for some arguments, and to change the code so they are no longer required named arguments; I only made them required since I did not know either, for:

  • threshold
  • frame_memory
  • blur_size
  • batch_size

weights not found

weights_path = os.path.join("weights", f"{weights_name}.pt".replace(".pt.pt", ".pt"))

The problem is that the path gets treated as relative to the current working directory, not as an absolute path.
So the only place one can run the code from is the folder of the cloned repo, nowhere else.

This doesn't depend on main.py or cli.py.
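
A minimal sketch of one possible fix, resolving the weights directory relative to the module file instead of the working directory (weights_name as in the quoted line):

    import os

    # Anchor the weights directory at this source file's location so the tool
    # can be started from any working directory.
    weights_path = os.path.join(
        os.path.dirname(os.path.abspath(__file__)),
        "weights",
        f"{weights_name}.pt".replace(".pt.pt", ".pt"),
    )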

[feature] store detections and use detections file

While developing, I run the same file over and over again with different options.
I think it would be good to have an option to export and import the detections.
That way I could try different settings quickly without waiting on detections each time.

If possible this could also be done on the fly via some temporary file name, without an option:
if the hash of the file, its size, last-modified date or absolute path is the same, you could append to the detections and reuse them with different settings in case the process fails or is aborted.

I do not know how the detections are stored before they are turned into bounds objects, and I do not know how to serialize them into a file to be read later on.
This probably needs some metadata like frame number and filename for each temp file.

Also, there should be some process to get rid of these temp files; otherwise I worry they might fill up someone's hard drive if they automate blurring clips for video production.
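
A minimal sketch of what such an export/import round-trip could look like; the per-frame box layout is hypothetical:

    import json

    # Store per-frame boxes keyed by frame index so a second run can skip
    # detection entirely and go straight to blurring.
    def save_detections(path, detections_per_frame):
        with open(path, "w") as f:
            json.dump({str(i): boxes for i, boxes in enumerate(detections_per_frame)}, f)

    def load_detections(path):
        with open(path) as f:
            data = json.load(f)
        return [data[k] for k in sorted(data, key=int)]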

CLI did not use GPU

Hi,

I use DashcamCleaner with the CLI command and it works pretty great. But it didn't use my GPU (GeForce RTX 2080 Ti), so it censored the videos relatively slowly.
I use the Anaconda Prompt and I have installed PyTorch as recommended.
Is there an argument I can use to force it to use the GPU?


Many errors a noob can't solve ^^

Hey, first off, I have no clue about programming. I just found a video with a tutorial on how to install DCC.
The GUI is loading now, and my first try ended in the "TypeError: unsupported operand type(s) for /: 'tuple' and 'int'" error.
As you mentioned in the other issue, I deleted the ".cache/torch" folder. But now I receive the following error:

Blurrer started!
Traceback (most recent call last):
File "D:\Dashcam Shit\DashcamCleaner\dashcamcleaner\src\qt_wrapper.py", line 74, in run
new_detections = self.detect_identifiable_information(frame_buffer)
File "D:\Dashcam Shit\DashcamCleaner\dashcamcleaner\src\blurrer.py", line 39, in detect_identifiable_information
results_list = self.detector(images, size=(scale,)).xyxy
File "D:\Dashcam Shit\Anaconda\envs\py38\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "D:\Dashcam Shit\Anaconda\envs\py38\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "C:\Users\spams/.cache\torch\hub\ultralytics_yolov5_master\models\common.py", line 704, in forward
y = self.model(x, augment=augment) # forward
File "D:\Dashcam Shit\Anaconda\envs\py38\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\spams/.cache\torch\hub\ultralytics_yolov5_master\models\common.py", line 514, in forward
y = self.model(im, augment=augment, visualize=visualize) if augment or visualize else self.model(im)
File "D:\Dashcam Shit\Anaconda\envs\py38\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\spams/.cache\torch\hub\ultralytics_yolov5_master\models\yolo.py", line 209, in forward
return self._forward_once(x, profile, visualize) # single-scale inference, train
File "C:\Users\spams/.cache\torch\hub\ultralytics_yolov5_master\models\yolo.py", line 121, in _forward_once
x = m(x) # run
File "D:\Dashcam Shit\Anaconda\envs\py38\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\spams/.cache\torch\hub\ultralytics_yolov5_master\models\common.py", line 167, in forward
return self.cv3(torch.cat((self.m(self.cv1(x)), self.cv2(x)), 1))
File "D:\Dashcam Shit\Anaconda\envs\py38\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\spams/.cache\torch\hub\ultralytics_yolov5_master\models\common.py", line 59, in forward_fuse
return self.act(self.conv(x))
File "D:\Dashcam Shit\Anaconda\envs\py38\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "D:\Dashcam Shit\Anaconda\envs\py38\lib\site-packages\torch\nn\modules\conv.py", line 399, in forward
return self._conv_forward(input, self.weight, self.bias)
File "D:\Dashcam Shit\Anaconda\envs\py38\lib\site-packages\torch\nn\modules\conv.py", line 395, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: CUDA out of memory. Tried to allocate 10.58 GiB (GPU 0; 24.00 GiB total capacity; 1.76 GiB already allocated; 19.60 GiB free; 1.96 GiB reserved in total by PyTorch)

And I really have no idea XD

Hope you can help. Thank you!

Cleanup tasks

  • Unify style
  • Update readme
  • Update requirements
  • Simplify GUI

Error

Fusing layers...
Model summary: 308 layers, 21041679 parameters, 0 gradients
Adding AutoShape...
Using NVIDIA GeForce RTX 2080.
Worker created
Blurrer started!
Traceback (most recent call last):
File "d:\Dashcam\D1\dashcamcleaner\src\qt_wrapper.py", line 70, in run
new_detections = self.detect_identifiable_information(frame_buffer)
File "d:\Dashcam\D1\dashcamcleaner\src\blurrer.py", line 137, in detect_identifiable_information
results_list = self.detector(images, size=scale).xyxy
File "D:\Dashcam\Anaconda\envs\P1\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "D:\Dashcam\Anaconda\envs\P1\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "C:\Users\heine/.cache\torch\hub\ultralytics_yolov5_master\models\common.py", line 683, in forward
g = max(size) / max(s) # gain
TypeError: 'float' object is not iterable

generate_training_data.py broken dependency requirements

Hi,

generate_training_data.py requires anonymizer, which in turn requires tensorflow-gpu and a myriad of other packages, as well as a lower Python version (3.6). This conflicts with DashcamCleaner's requirements, making it no longer possible to run generate_training_data.py.

Is this moving towards a more manual approach to creating training data? Or is the script still in use, but setting up its environment isn't documented?

Getting an error

I tried executing it on a video file. When I run the script, I get this error:

Traceback (most recent call last):
File "cli.py", line 218, in
cli.start_blurring()
File "cli.py", line 31, in start_blurring
blurrer.blur_video()
File "/home/ubuntu/DashcamCleaner/dashcamcleaner/src/blurrer.py", line 191, in blur_video
new_detections: List[List[Detection]] = self.detect_identifiable_information(frame_buffer)
File "/home/ubuntu/DashcamCleaner/dashcamcleaner/src/blurrer.py", line 138, in detect_identifiable_information
results_list = self.detector(images, size=scale).xyxy
File "/home/ubuntu/myenv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/ubuntu/myenv/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/home/ubuntu/.cache/torch/hub/ultralytics_yolov5_master/models/common.py", line 627, in forward
g = max(size) / max(s) # gain
TypeError: 'float' object is not iterable

No audio in output file when audio stream at index 1

When I process an mp4 video with the audio stream at index 1, the audio is missing from the final file.

I use the feature/package-installation branch and cli.py with the following parameters:

weights="1080p_medium_v8",
blur_workers=5,
blur_size=9,
threshold=0.2,
roi_multi=2,
quality=5,
feather_edges=10,
no_faces=False,
blur_memory=1,
batch_size=8,
export_mask=False,
export_colored_mask=False,
export_json=False

Tests

Test method:
ffprobe -show_streams -v quiet -show_format -print_format json PATH

Results

Summary

Format: index codec_type codec_name

Ok audio:

original file
0 video h264
1 audio aac

processed file
0 video h264
1 audio aac

Missing audio:

original file
0 audio aac
1 video h264

processed file
0 video h264
1 video h264

Detailed results

Excerpts of ffprobe

Ok audio:

"index": 0,
"codec_name": "h264",
"codec_long_name": "H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10",
"profile": "High",
"codec_type": "video",
"codec_tag_string": "avc1",

"index": 1,
"codec_name": "aac",
"codec_long_name": "AAC (Advanced Audio Coding)",
"profile": "LC",
"codec_type": "audio",
"codec_tag_string": "mp4a",

after process

"index": 0,
"codec_name": "h264",
"codec_long_name": "H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10",
"profile": "High",
"codec_type": "video",
"codec_tag_string": "avc1",

"index": 1,
"codec_name": "aac",
"codec_long_name": "AAC (Advanced Audio Coding)",
"profile": "LC",
"codec_type": "audio",
"codec_tag_string": "mp4a",

Missing audio:

"index": 0,
"codec_name": "aac",
"codec_long_name": "AAC (Advanced Audio Coding)",
"profile": "LC",
"codec_type": "audio",
"codec_tag_string": "mp4a",

"index": 1,
"codec_name": "h264",
"codec_long_name": "H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10",
"profile": "High",
"codec_type": "video",
"codec_tag_string": "avc1",

after process

"index": 0,
"codec_name": "h264",
"codec_long_name": "H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10",
"profile": "High",
"codec_type": "video",
"codec_tag_string": "avc1",

"index": 1,
"codec_name": "h264",
"codec_long_name": "H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10",
"profile": "High",
"codec_type": "video",
"codec_tag_string": "avc1",

Where is the feature of blurring previous frames in case of missed detections implemented?

According to the readme, the dashcamcleaner has a nice feature:
"Sometimes, a license plate might be missed for just one frame. This one frame, usually 1/30th of a second long, still means the license plate or face could easily be identified - a computationally very cheap (as opposed to increasing the inference scale) way to fix such false negatives can be the frame memory option. In essence, it blurs not only the detected boxes in the current frame, it also blurs regions that were detected in n frames before."

Where is it implemented, and how can I influence the number of previous frames that get blurred?

weight error

(py38) C:\Anaconda3\DashcamCleaner\dashcamcleaner>python main.py
YOLOv5 2022-12-29 Python-3.8.15 torch-1.13.1+cpu CPU

Traceback (most recent call last):
File "C:\Users\Laen/.cache\torch\hub\ultralytics_yolov5_master\hubconf.py", line 49, in _create
model = DetectMultiBackend(path, device=device, fuse=autoshape) # detection model
File "C:\Users\Laen/.cache\torch\hub\ultralytics_yolov5_master\models\common.py", line 345, in init
model = attempt_load(weights if isinstance(weights, list) else w, device=device, inplace=True, fuse=fuse)
File "C:\Users\Laen/.cache\torch\hub\ultralytics_yolov5_master\models\experimental.py", line 79, in attempt_load
ckpt = torch.load(attempt_download(w), map_location='cpu') # load
File "C:\Anaconda3\envs\py38\lib\site-packages\torch\serialization.py", line 771, in load
with _open_file_like(f, 'rb') as opened_file:
File "C:\Anaconda3\envs\py38\lib\site-packages\torch\serialization.py", line 270, in _open_file_like
return _open_file(name_or_buffer, mode)
File "C:\Anaconda3\envs\py38\lib\site-packages\torch\serialization.py", line 251, in init
super(_open_file, self).init(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: 'C:\Anaconda3\DashcamCleaner\dashcamcleaner\weights\.pt.pt'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "C:\Users\Laen/.cache\torch\hub\ultralytics_yolov5_master\hubconf.py", line 60, in _create
model = attempt_load(path, device=device, fuse=False) # arbitrary model
File "C:\Users\Laen/.cache\torch\hub\ultralytics_yolov5_master\models\experimental.py", line 79, in attempt_load
ckpt = torch.load(attempt_download(w), map_location='cpu') # load
File "C:\Anaconda3\envs\py38\lib\site-packages\torch\serialization.py", line 771, in load
with _open_file_like(f, 'rb') as opened_file:
File "C:\Anaconda3\envs\py38\lib\site-packages\torch\serialization.py", line 270, in _open_file_like
return _open_file(name_or_buffer, mode)
File "C:\Anaconda3\envs\py38\lib\site-packages\torch\serialization.py", line 251, in init
super(_open_file, self).init(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: 'C:\Anaconda3\DashcamCleaner\dashcamcleaner\weights\.pt.pt'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "main.py", line 256, in
window = MainWindow()
File "main.py", line 29, in init
self.blur_wrapper = self.setup_blurrer()
File "main.py", line 49, in setup_blurrer
blur_wrapper = qtVideoBlurWrapper(weights_name, init_params)
File "C:\Anaconda3\DashcamCleaner\dashcamcleaner\src\qt_wrapper.py", line 25, in init
VideoBlurrer.init(self, weights_name, parameters)
File "C:\Anaconda3\DashcamCleaner\dashcamcleaner\src\blurrer.py", line 33, in init
self.detector = setup_detector(weights_path)
File "C:\Anaconda3\DashcamCleaner\dashcamcleaner\src\blurrer.py", line 254, in setup_detector
model = torch.hub.load("ultralytics/yolov5", "custom", weights_path, verbose=False)
File "C:\Anaconda3\envs\py38\lib\site-packages\torch\hub.py", line 542, in load
model = _load_local(repo_or_dir, model, *args, **kwargs)
File "C:\Anaconda3\envs\py38\lib\site-packages\torch\hub.py", line 572, in _load_local
model = entry(*args, **kwargs)
File "C:\Users\Laen/.cache\torch\hub\ultralytics_yolov5_master\hubconf.py", line 83, in custom
return _create(path, autoshape=autoshape, verbose=_verbose, device=device)
File "C:\Users\Laen/.cache\torch\hub\ultralytics_yolov5_master\hubconf.py", line 78, in _create
raise Exception(s) from e
Exception: [Errno 2] No such file or directory: 'C:\Anaconda3\DashcamCleaner\dashcamcleaner\weights\.pt.pt'. Cache may be out of date, try force_reload=True or see ultralytics/yolov5#36 for help.

Output video freezes to a still image

The original video plays completely fine, but the output video freezes to a still image after the first half.
The sound keeps playing fine; it's just the video that stops.

In the console there is no error, there is only the following warning after start:
IMAGEIO FFMPEG_WRITER WARNING: input image is not divisible by macro_block_size=16, resizing from (1920, 1080) to (1920, 1088) to ensure video compatibility with most codecs and players. To prevent resizing, make your input image divisible by the macro_block_size or set the macro_block_size to 1 (risking incompatibility).

Setup of my system & environment:

  • OS: Windows 11 (latest updates installed)
  • Python: 3.8 setup with conda like you mentioned in the ReadMe
  • CPU: AMD Ryzen 3950X
  • RAM: 64GB
  • GPU: RTX 2080
  • Disk-type where source file is located: HDD (not C Drive)
  • Using GPU: yes
  • Source Video Resolution: 1080p
  • Source File-Format: mp4
  • Weights selected: 720p_medium_mosaic
  • Inference size: 720p
  • Batch size: 1
  • Output Quality: 5 (if I set it to 10, the output has no video at all)

Do you have any idea what I'm doing wrong?

Program ends with an error, but the video is created

After the video is created, following error occurs:

Traceback (most recent call last):
  File "D:\conda\DashcamCleaner\dashcamcleaner\src\blurrer.py", line 160, in run
    subprocess.run(
  File "D:\Users\marku\miniconda3\envs\py38\lib\subprocess.py", line 493, in run
    with Popen(*popenargs, **kwargs) as process:
  File "D:\Users\marku\miniconda3\envs\py38\lib\subprocess.py", line 858, in __init__
    self._execute_child(args, executable, preexec_fn, close_fds,
  File "D:\Users\marku\miniconda3\envs\py38\lib\subprocess.py", line 1251, in _execute_child
    args = list2cmdline(args)
  File "D:\Users\marku\miniconda3\envs\py38\lib\subprocess.py", line 553, in list2cmdline
    for arg in map(os.fsdecode, seq):
  File "D:\Users\marku\miniconda3\envs\py38\lib\os.py", line 818, in fsdecode
    filename = fspath(filename)  # Does type-checking of filename.
TypeError: expected str, bytes or os.PathLike object, not NoneType

But the video is fine

would it make sense to allow `frame_memory` into the future?

What is the reason for this setting?

I can only think of needing to also blur the same position in the frames around the actual detection, in case the next frame does not detect it or detects it poorly.

In that case, blurring the frames around the detection in both time directions makes more sense, I think, than only in one.

At least it could be an option if one needs it, and it is easy to implement.

No output for specific videos

While trying to blur specific videos I'm getting the following errors from ffmpeg and no output is produced.

I'm not sure but I'm thinking that this could be because the failing videos have no audio stream. Would that be an issue? Or is there anything else I could try?

This is what I'm getting while trying to blur such a video by using cli:

Using CPU.
Worker created
Blurrer started!
Worker started
Video blurred successfully in 1 minutes and 4 seconds.

Using cache found in /root/.cache/torch/hub/ultralytics_yolov5_master
YOLOv5  2022-5-23 Python-3.8.13 torch-1.8.1+cu102 CPU

Fusing layers... 
Model summary: 308 layers, 21041679 parameters, 0 gradients
Adding AutoShape... 
ffmpeg version 4.3.3-0+deb11u1 Copyright (c) 2000-2021 the FFmpeg developers
  built with gcc 10 (Debian 10.2.1-6)
  configuration: --prefix=/usr --extra-version=0+deb11u1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libdav1d --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librabbitmq --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-pocketsphinx --enable-libmfx --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
  libavutil      56. 51.100 / 56. 51.100
  libavcodec     58. 91.100 / 58. 91.100
  libavformat    58. 45.100 / 58. 45.100
  libavdevice    58. 10.100 / 58. 10.100
  libavfilter     7. 85.100 /  7. 85.100
  libavresample   4.  0.  0 /  4.  0.  0
  libswscale      5.  7.100 /  5.  7.100
  libswresample   3.  7.100 /  3.  7.100
  libpostproc    55.  7.100 / 55.  7.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/out_videos/test_out_copy.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf58.29.100
  Duration: 00:00:10.00, start: 0.000000, bitrate: 155 kb/s
    Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 480x640, 152 kb/s, 15 fps, 15 tbr, 15360 tbn, 30 tbc (default)
    Metadata:
      handler_name    : VideoHandler
Input #1, mov,mp4,m4a,3gp,3g2,mj2, from '/videos/test.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    creation_time   : 2022-07-22T08:02:10.000000Z
  Duration: 00:00:10.00, start: 0.000000, bitrate: 1009 kb/s
    Stream #1:0(eng): Video: h264 (Baseline) (avc1 / 0x31637661), yuv420p, 480x640, 1006 kb/s, 15 fps, 15 tbr, 15 tbn, 30 tbc (default)
    Metadata:
      creation_time   : 2022-07-22T08:02:10.000000Z
      encoder         : JCodec
Stream map '1:1' matches no streams.
To ignore this, add a trailing '?' to the map.

Application ends before censoring all the video

Hello, I'm trying to censor GoPro videos. The app starts without problems, but it ends at around 3% with the message that the process has completed successfully.
The .mkv output file is correctly censored, but it does not contain the entire length of the input file.

Do you have any idea of what I can do?
Thank you for any help you can provide.


Application keeps crashing

Sorry for bothering you here. Unfortunately I wasn't able to get it working on my machine.
I'm using Windows 10 and followed the steps.

When I click start, the app freezes and at some point just disappears. No output, no error.


Any help is appreciated :)

frame_memory option

Please bring back the frame_memory CLI parameter. Without this feature the videos contain unblurred frames, so the output is less usable because the blurring is not reliable enough.
I see that frame_memory > 0 might not work with batch_size > 1, but for batch_size == 1 it would be great to have this option again.

Thanks for keeping this useful tool in development!

data donation

I have ~29h of video (5x speed up, so technically ~145h), including understand.ai's labels. It is definitely a different camera than yours. I can donate that for retraining.

Since their model has some unfortunate misses or inaccurate bounding boxes, I also went ahead and started manually labeling. That dataset currently has only ~10k images, with ~18k head¹ and ~9k plate bounding boxes that are "large enough", to avoid training the model on bboxes with only a few pixels. I can also donate the model trained with these bboxes at 1280px, which proved sufficient for my use case, as the small bboxes are not discernible anyway. The model is definitely not perfect; here's an example inference that was not used for training/validation: https://vimeo.com/660127406 . The model is trained from yolov5x6 and is 1.1GB, since "deployability to someone else's computer" was not a concern, though.

¹: I use a "head + neckline" bounding box, rather than just "face" as anonymizer does. I.e. the two are incompatible.

total amount of frames in progress bar is approximated

length = int(duration * fps)
audio_present = "audio_codec" in meta
# save the video to a file
with imageio.get_writer(
    temp_output, codec="libx264", fps=fps, quality=quality
) as writer:
    with tqdm(
        total=length, desc="Processing video", unit="frames", dynamic_ncols=True

This sometimes leads to the progress bar never being closed (properly).

See https://stackoverflow.com/questions/2017843/fetch-frame-count-with-ffmpeg for a potential solution (I don't know if the current codebase can get the total frame count without looking at the file in another subprocess).

How do I open it?

Hello. I've gotten everything onto my desktop following the instructions.

Now I have an awkward problem and it's embarrassing to ask, but... how do I open the tool now?

Is there a command or an .exe that I'm not finding?
As a layperson, the Usage section gives me no usable information.

Regards, Tom

GPU not fully utilized

Hello,
I use my GPU and it is fully utilized for the first 10%.
After that it drops to a constant 10-20%.

I once had a version from June/July where the GPU was fully utilized, but somehow not in the current version.

I have already experimented with the batch size, but without much success.

DashcamCleaner hangs at 0% when using AMD GPU

Hi,

Issue: When launching DashcamCleaner with CUDA being available through ROCm the program will hang at 0% while hogging one CPU core at 100% and keeping the GPU at 99%. This is with a video that is ~7 seconds long, has 30 FPS and has a resolution of 2704x1520px. I've let the process run for 5 minutes before aborting. I tried different arguments and both the UI and the CLI version.

When disabling CUDA/ROCm the same video with the same arguments takes ~3 minutes to complete.

How I launch the program with ROCm support: HSA_OVERRIDE_GFX_VERSION=10.3.0 python ./cli.py -i ~/trip.mp4 -o ./test.mp4 -t 0.8 -nf --weights 1080p_small_v8.pt -t 0.4 -nf --weights 1080p_small_v8.pt -s 2 -q 6 -f 2 -b 10 -r 1.1

Without ROCm support: CUDA_VISIBLE_DEVICES="1" python ./cli.py -i ~/trip.mp4 -o ./test.mp4 -t 0.4 -nf --weights 1080p_small_v8.pt -s 2 -q 6 -f 2 -b 10 -r 1.1

Where it hangs is this line: results_list = self.detector(images, imgsz=[scale]) in blurrer.py:detect_identifiable_information.

Information about my system:

CPU: AMD Ryzen 5 5600X
GPU: AMD Radeon RX 5700 XT
Kernel: 6.3.1-arch2-1
Distro: Arch Linux

Python version: 3.11.3

>>> import torch
>>> torch.cuda.is_available()
True
>>> t = torch.tensor([5, 5, 5], dtype=torch.int64, device='cuda')
>>> print(str(t))
tensor([5, 5, 5], device='cuda:0')

The attached log.txt shows the output that occurs when trying to utilize my GPU.
log.txt

errors

I've tested it with Python versions 3.8 to 3.10 and always get the same error.

(P2) d:\Dashcam\D2>python main.py
Traceback (most recent call last):
File "d:\Dashcam\D2\main.py", line 261, in
window = MainWindow()
File "d:\Dashcam\D2\main.py", line 37, in init
self.blur_wrapper = self.setup_blurrer()
File "d:\Dashcam\D2\main.py", line 58, in setup_blurrer
blur_wrapper = qtVideoBlurWrapper(weights_name, init_params)
File "d:\Dashcam\D2\src\qt_wrapper.py", line 28, in init
QThread.init(self)
TypeError: VideoBlurrer.init() missing 2 required positional arguments: 'weights_name' and 'parameters'

Question Linux support + FFMPEG_BINARY

Hi,
thanks for your work. Not an issue, but two questions.

Does the UI only work on Windows, or should it work on Linux too?

Can you point out how to set the ffmpeg environment variable on Windows?

RuntimeError: Couldn't load custom C++ ops.

Blurrer started!
Worker started
Traceback (most recent call last):
  File "/home/niels/Documents/DashcamCleaner/dashcamcleaner/src/blurrer.py", line 158, in run
    new_detections = self.detect_identifiable_information(frame.copy())
  File "/home/niels/Documents/DashcamCleaner/dashcamcleaner/src/blurrer.py", line 97, in detect_identifiable_information
    results = self.detector(image, size=scale)
  File "/home/niels/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/niels/.local/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/niels/.cache/torch/hub/ultralytics_yolov5_master/models/common.py", line 606, in forward
    y = non_max_suppression(y if self.dmb else y[0],
  File "/home/niels/.cache/torch/hub/ultralytics_yolov5_master/utils/general.py", line 859, in non_max_suppression
    i = torchvision.ops.nms(boxes, scores, iou_thres)  # NMS
  File "/home/niels/.local/lib/python3.8/site-packages/torchvision/ops/boxes.py", line 35, in nms
    _assert_has_ops()
  File "/home/niels/.local/lib/python3.8/site-packages/torchvision/extension.py", line 62, in _assert_has_ops
    raise RuntimeError(
RuntimeError: Couldn't load custom C++ ops. This can happen if your PyTorch and torchvision versions are incompatible, or if you had errors while compiling torchvision from source. For further information on the compatible versions, check https://github.com/pytorch/vision#installation for the compatibility matrix. Please check your PyTorch version with torch.__version__ and your torchvision version with torchvision.__version__ and verify if they are compatible, and if not please reinstall torchvision so that it matches your PyTorch install.
./main.py:193: DeprecationWarning: 'exec_' will be removed in the future. Use 'exec' instead.
  msg_box.exec_()

Sorry, this is running on my fork, so line numbers may be wrong.

Avoid GUI

Is it possible to use your tool without the GUI, passing the input arguments through the command line? It would be great because:

  1. If one uses your repository on a Linux machine connected to via SSH from a Windows computer, the application fails.
  2. It would be easier to automate different input configurations to assess which one is best suited for a specific case.

[Question] Fish-eye

Hello, great job you did here!

I've got one question: was this model trained on fish-eye or non-fish-eye dashcam images?
Also, how do you expect it to behave on fish-eye videos from a GoPro?

Thank you.
Pedro

error with `TypeError: unsupported operand type(s) for /: 'tuple' and 'int'`

split from #61

diff --git a/dashcamcleaner/src/blurrer.py b/dashcamcleaner/src/blurrer.py
index fb0b269..fc878b4 100644
--- a/dashcamcleaner/src/blurrer.py
+++ b/dashcamcleaner/src/blurrer.py
@@ -36,7 +36,7 @@ class VideoBlurrer:
         :return: detected faces and plates
         """
         scale = self.parameters["inference_size"]
-        results_list = self.detector(images, size=(scale,)).xyxy
+        results_list = self.detector(images, size=scale).xyxy
         return [
             [
                 Detection(

results_list = self.detector(images, size=(scale,)).xyxy

b3dc71d#diff-dc15d6e41ef6c6b5e2ae45f31eb3dd289b44ea277afa9438d0d8e4cb1ee90090R140

CPU not fully utilized

I own an AMD Ryzen 3800XT 8-core CPU with 16 threads.
When blurring a video, not all of the CPU is being used.

My guess is this has something to do with the loop of reading frames, detecting info, blurring, and storing the blurred frame.
My profiling skills in Python are non-existent, so I can only guess that this is why CPU utilization is not at 100%.
Interleaving IO-bound and CPU-bound tasks so they run concurrently would be ideal, so reading and writing happen in the background while detections take place.

[feature] blur masks to soften transition between blurred areas and clear video

Maybe it is possible to blur the mask that determines where to let the blurred frame through, so that the edges are softer and far less noticeable.
This could also make the twofold blurring with two different kernels redundant.

Maybe the amount of feathering could be introduced as a new parameter, -fe --feather_edges [0, n] (0 being the current behaviour), which says how many pixels to feather the edges of the detections by.
Be careful not to blur the inside of the detections in the mask, which would make them too translucent and reveal the information that should be blurred.
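
A minimal sketch of the proposed feathering, assuming a single-channel uint8 mask and OpenCV; growing the mask first keeps the soft edge outside the detection:

    import cv2
    import numpy as np

    def feather_mask(mask: np.ndarray, feather: int) -> np.ndarray:
        # Dilate so the fade-out starts outside the detected box, then blur
        # the mask; the inside of the detection stays fully opaque.
        k = 2 * feather + 1
        grown = cv2.dilate(mask, np.ones((k, k), np.uint8))
        return cv2.GaussianBlur(grown, (k, k), 0)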

cli.py videos freeze

I tried to batch convert multiple videos with the CLI. The run completed, but unfortunately most of the cleaned videos freeze within seconds after the first blur. Processing the same videos one by one with the GUI works with the same set of parameters.

Currently I use the main branch with the latest commit from Jul 26th. The software runs on Anaconda / Windows 10 and utilizes CUDA. If needed, I can give this a try on an Ubuntu laptop, but without a GPU of course. Maybe I'll find some time to investigate this myself, but I cannot promise too much.

FFMPEG is ffmpeg version N-112250-g6f7bf64dbc-20231001 Copyright (c) 2000-2023 the FFmpeg developers (latest).
If you need the other lib versions, I can provide a full list.

self-trained detection for UK licence plates in my own fork, but not working perfectly

It seems that the training data consists mainly of German licence plates, so the program can't really identify the rear plates of British cars, which are yellow instead of white. I've trained my own model and uploaded the UK_licence_plate.pt weights file to my own fork. However, my weights don't work perfectly, as I need to manually assign the weights_name and training_inference_size variables in main.py for them to work.

Is it possible to:
guide me on how to make my UK_licence_plate.pt work better, i.e. without needing to manually assign variables?
maybe add my trained weights to the main program for British users who want to clean licence plates from their videos?

Thank you very much!

not blurring faces

I screen-recorded a clip starting from https://youtu.be/EfdmAuB7JzU?t=296 and cut it down to one second.
I have an AMD GPU, so CUDA does not work for me; CPU time for that one-second clip was 3 minutes 8 seconds.

None of the faces are blurred in the output clip.
I used the 1080p_medium_rect weights
and inference_size 1080
roi_multi of 1
quality of 10
threshold of 1
blur_size of 10
frame_memory of 1

threshold option not working

Hello,
I experimented a little with the threshold option and it seems to have no effect whatsoever.

In my final test run I created 3 files with thresholds of 0.0, 0.5 and 1.0, and every time the result and the runtime were the same.
I used cli.py and these options:

weights 1080p_medium_v8
blur_workers 5
blur_size 9
roi_multi 1
quality 5
feather_edges 10
batch_size 8
export_mask

I merged all my test files into one video, with the different clips shifted by 40px to make them all visible (otherwise they would be on top of each other).
You can see that there is always one element for each threshold value, always in a triplet.

threshold_tests.mp4

For reference, a snapshot of the video's environment is attached.
