
puzer / stylegan-encoder


This project is forked from nvlabs/stylegan.


StyleGAN Encoder - converts real images to latent space

License: Other

Languages: Jupyter Notebook 97.33%, Python 2.67%
Topics: gan, generative-adversarial-network, perceptual-losses

stylegan-encoder's People

Contributors: puzer, tkarras

stylegan-encoder's Issues

KeyError: "The name 'G_synthesis_1/_Run/concat:0' refers to a Tensor which does not exist. The operation, 'G_synthesis_1/_Run/concat', does not exist in the graph."

Getting this error after running the following with TensorFlow 1.x in Google Colab on a GPU runtime:

```
!python encode_images.py aligned_images/ generated_images/ latent_representations/ --iterations 1000
```

Full error:

```
Traceback (most recent call last):
  File "encode_images.py", line 80, in <module>
    main()
  File "encode_images.py", line 53, in main
    generator = Generator(Gs_network, args.batch_size, randomize_noise=args.randomize_noise)
  File "/content/stylegan/encoder/generator_model.py", line 35, in __init__
    self.generator_output = self.graph.get_tensor_by_name('G_synthesis_1/_Run/concat:0')
  File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/framework/ops.py", line 3783, in get_tensor_by_name
    return self.as_graph_element(name, allow_tensor=True, allow_operation=False)
  File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/framework/ops.py", line 3607, in as_graph_element
    return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
  File "/tensorflow-1.15.2/python3.6/tensorflow_core/python/framework/ops.py", line 3649, in _as_graph_element_locked
    "graph." % (repr(name), repr(op_name)))
KeyError: "The name 'G_synthesis_1/_Run/concat:0' refers to a Tensor which does not exist. The operation, 'G_synthesis_1/_Run/concat', does not exist in the graph."
```

Any help would be greatly appreciated!
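One hedged way to debug this (a sketch, not an official fix) is to list which synthesis ops actually exist in the loaded graph; if nothing under `G_synthesis_1/_Run` shows up, the run-scoped ops were never created:

```python
import tensorflow as tf

# List every operation whose name mentions G_synthesis, to compare against the
# tensor name that generator_model.py expects ('G_synthesis_1/_Run/concat:0').
graph = tf.get_default_graph()
for op in graph.get_operations():
    if 'G_synthesis' in op.name:
        print(op.name)
```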

Latent layers importance

Hey, great analysis! :)
I've learned a lot from reading it.

Just a quick comment: the W matrix produced by the mapping network contains a single vector w_v tiled across layers (so w[0] = w[1] = ... = w[n_layers - 1]).
The layer-wise affine transformations happen inside the synthesis network.

The notebook, however, operates on this tiled w, which is why you saw the surprising behavior (all the layers are identical).
I imagine that running it again over the transformed Ws would show something quite different.

P.S.: This bug also affects the results of the non-linear model.
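A minimal numpy sketch of the tiling described above (names hypothetical):

```python
import numpy as np

# The mapping network produces one 512-d vector w, which is then tiled across
# all synthesis layers, so every row of the (18, 512) W matrix is identical.
w = np.random.randn(512)
n_layers = 18
W = np.tile(w, (n_layers, 1))  # shape (18, 512)
assert all(np.array_equal(W[0], W[i]) for i in range(n_layers))
```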

Generated images color was deformed

Hi everyone,

I got the results shown below. I tried converting to RGB in a couple of ways, but they didn't work. Does anyone have an idea what causes this?

[image]
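Not a confirmed diagnosis, but a common cause of this symptom is a BGR/RGB channel mix-up between the loading and saving libraries. A quick sketch to test that hypothesis (file names hypothetical):

```python
import numpy as np
from PIL import Image

# If the colors look "deformed" (e.g. skin tones turn blue), try reversing the
# channel axis; a correct-looking result confirms a BGR/RGB mix-up upstream.
img = np.asarray(Image.open('generated_images/example.png'))
Image.fromarray(img[..., ::-1]).save('example_channels_swapped.png')
```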

Using the stylegan-encoder for the LSUN bedroom dataset

@Puzer, you did some great work here! I'm trying to apply your encoder to the bedroom network; however, the generated images are not of the same quality as the results from the FFHQ network. Initially, I changed the SGD optimizer to the Adam optimizer, because the loss decreased faster and the generated images looked more realistic. I also tried to tweak the hyperparameters, but only the number of iterations has a major impact on the result. Do you have any ideas to improve the results?

| Command | Original | Result |
| --- | --- | --- |
| `--iterations 1000 --batch_size 1 --lr 0.1` | [content 1] | [content 1-1000itr] |
| `--iterations 2000 --batch_size 4 --lr 0.1` | [content 1] | [content 1-2000itr-4batch] |
| `--iterations 10000 --batch_size 1 --lr 0.1` | [content 1] | [content 1-10000itr] |

| Command | Original | Result |
| --- | --- | --- |
| `--iterations 1000 --batch_size 1 --lr 0.1` | [style 2] | [style 2-1000itr] |
| `--iterations 2000 --batch_size 4 --lr 0.1` | [style 2] | [style 2-2000itr-4batch] |
| `--iterations 10000 --batch_size 1 --lr 0.1` | [style 2] | [style 2-10000itr] |

The following results were generated using the SGD optimizer:

| Command | Original | Result |
| --- | --- | --- |
| `--iterations 2000 --batch_size 2 --lr 1` | [bedroom2-0] | [bedroom2-2000itr-2batch] |
| `--iterations 3000 --batch_size 3 --lr 1 --randomize_noise` | [bedroom2-0] | [bedroom2-3000itr-3batch-rn] |
| `--iterations 4000 --batch_size 4 --lr 1` | [bedroom2-0] | [bedroom2-4000itr-4batch] |

Results of style transfer:

Fine style mixing

[image]

Coarse style mixing

[image]

Both results are significantly different from the original StyleGAN style-transfer results for bedrooms. Because of the lower quality of the images, slightly less stunning results were expected; however, the original images cannot even be recognized in these results.

Generate random image

I am a bit confused about how to generate a random image, as the image generation in generate_image() seems quite different from the main StyleGAN example code in pretrained_example.py.

I naively tried the following:

```python
generate_image(np.random.rand(18, 512))
```

which does not seem to work.
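For what it's worth, a hedged sketch of what likely goes wrong: `generate_image()` in the notebook expects dlatents (W space), while `pretrained_example.py` samples qlatents (Z space) and maps them through the mapping network. Assuming `Gs_network` is the loaded StyleGAN network and `generate_image` is the notebook helper, something like this should be closer:

```python
import numpy as np

# Sample a Gaussian qlatent (np.random.rand is uniform, which is also wrong here)
# and push it through the mapping network to get a (1, 18, 512) dlatent.
qlatent = np.random.randn(1, 512)
dlatent = Gs_network.components.mapping.run(qlatent, None)  # shape (1, 18, 512)
generate_image(dlatent[0])
```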

Issue with session and api

I have created an API around this repo. For the first client, everything executes perfectly and gives a good result, but the second client on the same session fails with an error like:

```
ValueError: Tensor("Const:0", shape=(3,), dtype=float32) must be from the same graph as Tensor("strided_slice:0", shape=(1, 256, 256, 3), dtype=float32)
```

I am trying to resolve the issue; any help would be appreciated.
Thanks & Regards,
Sandhya
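One hedged workaround for this class of error is to give each request its own graph and session, so tensors created for a previous client can never be mixed with the current one (a sketch, not the repo's API; `run_inference` is a hypothetical callback):

```python
import tensorflow as tf

def handle_request(run_inference):
    # Build and run everything inside a fresh graph per request; tensors
    # created here cannot collide with tensors from an earlier request.
    graph = tf.Graph()
    with graph.as_default(), tf.Session(graph=graph) as sess:
        return run_inference(sess)
```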

Error while using encode_images.py when trying to run it in Colab

```
Traceback (most recent call last):
  File "encode_images.py", line 80, in <module>
    main()
  File "encode_images.py", line 51, in main
    generator_network, discriminator_network, Gs_network = pickle.load(f)
_pickle.UnpicklingError: invalid load key, '\x0a'.
```

I duplicated the pickle file in my Google Drive because the download quota was being exceeded, but now I am getting this error. Is it because of the copying, or something else?

According to this Stack Overflow post, the file itself may have been corrupted. If that's the case, is there a mirror link for the same file that can be used safely?
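A quick, hedged sanity check (path hypothetical): a valid pickle of this model starts with the pickle protocol byte `b'\x80'`, while a leading `b'\x0a'` or `b'<'` usually means a text or HTML page (for example a Drive quota warning) was saved instead of the real file:

```python
# Print the first bytes of the downloaded file; b'\x80' indicates a real pickle,
# while b'<' or b'\n' suggests an HTML/text error page was downloaded instead.
with open('karras2019stylegan-ffhq-1024x1024.pkl', 'rb') as f:
    print(f.read(16))
```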

Error when loading images

Hi, I'm trying to use `python align_images.py raw_images/ aligned_images/` to explore the latent space of my own image data, but it threw this error: "Unknown image file format: Unable to load image in file raw_images/.ipynb_checkpoints". I run this model on Colab. Can anyone help me fix this problem? Many thanks!
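A hedged fix: Colab's autosave creates a hidden `.ipynb_checkpoints` directory inside `raw_images/`, which the script then tries to read as an image. Removing it before running the alignment usually resolves this:

```python
import shutil

# Delete the hidden checkpoint folder that Colab drops into the image directory.
shutil.rmtree('raw_images/.ipynb_checkpoints', ignore_errors=True)
```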

Join Stylegan encoder research

Dear Puzer,
I am currently working on the StyleGAN encoder for architecture-design applications. I was wondering how I can join your research lab and collaborate with you.
I am a PhD candidate at Chung-Ang University, South Korea.
Contact me at: [email protected]
Best regards,
Ahmed Khairadeen Ali

How to find the attribute direction?

Hey Puzer, you did nice work. I am working on generating more meaningful face images and controlling the attributes myself, and I found that you provide attribute directions such as smiling, age, and gender. How do I get more attribute directions, such as hair, skin color, or other facial expressions? Do you have a script or another way to do this? Thank you.
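For reference, a hedged sketch of the linear recipe used in the repo's Learn_direction_in_latent_space.ipynb, applied to a new attribute. The dlatent and label files are hypothetical; the labels (e.g. "wears glasses") would come from any external face-attribute classifier:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# dlatents: (N, 18, 512) encoded faces; labels: 0/1 per face for the attribute.
dlatents = np.load('dlatents.npy')        # hypothetical file
labels = np.load('glasses_labels.npy')    # hypothetical file

clf = LogisticRegression(class_weight='balanced', max_iter=1000)
clf.fit(dlatents.reshape((len(dlatents), -1)), labels)

# The classifier's weight vector, reshaped, serves as the latent direction.
glasses_direction = clf.coef_.reshape((18, 512))
np.save('glasses_direction.npy', glasses_direction)
```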

Interpolation between 2 faces in dlatent space not as meaningful as it is in qlatent space

Hi!

First, thanks for your work!

I tried to interpolate between two faces in the dlatent space (18, 512), and the result is not as meaningful as interpolating between two vectors in the qlatent space (512). It kind of works, but some intermediate images contain strange artifacts or do not look like valid faces. Did you notice this effect? It seems that not all points along a linear path in the dlatent space correspond to real faces, whereas in the qlatent space they do.

Just wondering whether it is somehow possible to get latent representations in the original qlatent space, to compare interpolation quality.
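A hedged sketch of the comparison (assuming `Gs_network` and the notebook's `generate_image` helper): interpolating in Z and mapping each step keeps every intermediate point on the mapping network's output manifold, unlike lerping W directly:

```python
import numpy as np

z1 = np.random.randn(1, 512)
z2 = np.random.randn(1, 512)
for t in np.linspace(0.0, 1.0, 8):
    z = (1.0 - t) * z1 + t * z2                     # linear path in qlatent space
    w = Gs_network.components.mapping.run(z, None)  # map each step to (1, 18, 512)
    img = generate_image(w[0])                      # decode the mapped dlatent
```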

Why is an optimizer used in something that is not a neural network?

Can someone explain the intuition behind applying an optimizer (gradient descent, Adam) to the latent code?

The optimizer searches for the image in latent space: the latent code is updated instead of the weights of a neural network.

Why does this work, given that it is not the weights of a neural network that are updated?

How does the optimizer find the latent code that represents the input image for the generator?
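A hedged sketch of the core idea (the names `synthesis`, `perceptual_loss`, and `target` are hypothetical stand-ins): the dlatent is declared as a TensorFlow variable, so the optimizer's gradients update it while the generator's weights stay frozen. Because the generator is differentiable end-to-end, gradient descent can search latent space exactly the way it normally searches weight space:

```python
import tensorflow as tf

# The latent code is the only trainable variable; the generator is a fixed,
# differentiable function, so gradients of the loss flow back into the latent.
dlatent = tf.get_variable('dlatent', shape=(1, 18, 512),
                          initializer=tf.zeros_initializer())
generated = synthesis(dlatent)             # frozen generator, hypothetical handle
loss = perceptual_loss(generated, target)  # hypothetical image-space loss
train_op = tf.train.AdamOptimizer(0.01).minimize(loss, var_list=[dlatent])
```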

I am trying to obtain the same result with pytorch.

I did the encoding as the author of this repo describes:
VGG = perceptual model
G = generator model
G.style() is the fully connected encoder in the generator, which gives the dlatent code.

First question:
After optimizing the perceptual loss between G and VGG, do I need to save the weights of G.style(), or pass a random vector to G.style() and save its output, which is the dlatent?

Second question:
If the first approach is right, I then proceed by getting attributes of the face using the Microsoft Cognitive API. I tried with just 860 images to begin with.
I do get a result, but it is wrong.

[image]

invalid syntax for dnnlib/__init__.py

`submit_config: SubmitConfig = None # Package level variable for SubmitConfig which is only valid when inside the run function.` fails with `SyntaxError: invalid syntax`. Should I just ignore this error? Thanks!

(For reference: this line is a PEP 526 variable annotation, which requires Python 3.6+, so the error usually means the script is being run with an older interpreter.)

Syntax error in encode_images.py

Getting this when trying to learn new vectors by running `python encode_images.py aligned_images/ generated_images/ latent_representations/`:

```
File "/content/stylegan-encoder/encode_images.py", line 73
    img.save(os.path.join(args.generated_images_dir, f'{img_name}.png'), 'PNG')
```

I have read that f-strings require Python 3.6+, but I cannot run Python 3 without TensorFlow 2.x, while this model requires TensorFlow 1.14. I am not sure how to get past this.

Can anyone please help me? Thank you!
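A hedged workaround if you are stuck on a pre-3.6 interpreter: replace the f-string with `str.format`, which behaves identically here:

```python
# Equivalent to the f-string version, but valid on older Python 3 (and Python 2).
img.save(os.path.join(args.generated_images_dir, '{}.png'.format(img_name)), 'PNG')
```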

Scaled loss / magic number

Hi! Great project :)

I would like to ask where the scale in the perceptual model loss comes from. I mean this line:

```python
self.loss = tf.losses.mean_squared_error(self.features_weight * self.ref_img_features, self.features_weight * generated_img_features) / 82890.0
```

I tried factorizing 82890.0 as 2 * 3^3 * 5 * 307 to check whether it is the size of the output features or anything similar, but apparently it is not. I also searched across GitHub for exactly the same scale in other projects, but found no good match.

Cannot load network snapshot for restarting: corrupted pickles?

I'm trying to restart a job by loading a network snapshot pickle (called network-snapshot-001283.pkl). The pickle is written during training of the previous run and loaded at the beginning of the restart by the functions at the beginning of training/misc.py.

I get the following error when trying to load the snapshot:

```
_pickle.UnpicklingError: invalid load key, '<'.
```

This error appears for all the snapshots produced by training with different data on different runs, as well as when trying to use generate_figures.py.
Either the pickle written during training is corrupted, or I'm doing something wrong, although I haven't changed any lines related to this part.
Any ideas?

Thanks

Can't download LATENT_TRAINING_DATA

I can't download LATENT_TRAINING_DATA in Learn_direction_in_latent_space.ipynb:

```
HTTPError: 404 Client Error: Not Found for url: https://drive.google.com/uc?id=1xMM3AFq0r014IIhBLiMCjKJJvbhLUQ9t
```

Thanks.

Question: running on CPU?

Do you know if it's possible to run StyleGAN on a CPU-only machine? I've been trying to run a version of StyleGAN I trained on my desktop machine, and I would like to mess around with the checkpoints while it's training, but I keep seeing that the operations are explicitly set to use the GPU.

Specifically, this error:

```
:GPU:0 but available devices are [ /job:localhost/replica:0/task:0/device:CPU:0 ]. Make sure the device specification refers to a valid device.
         [[Node: Gs/_Run/Gs/latents_in = Identity[T=DT_FLOAT, _device="/device:GPU:0"](Gs/_Run/split)]]
```
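One hedged option before loading the network is TensorFlow's soft device placement, which lets ops pinned to `/device:GPU:0` in the saved graph fall back to the CPU (whether the graph then runs acceptably fast is another matter):

```python
import tensorflow as tf

# allow_soft_placement lets TF reassign GPU-pinned ops to the devices that exist.
config = tf.ConfigProto(allow_soft_placement=True)
sess = tf.Session(config=config)
```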

encode_images.py runs out of memory on image sequence

When encoding a large number of images, the time to set reference images in the perceptual model keeps growing, and eventually the script crashes with the following error:

```
2019-02-25 21:02:48.244097: W tensorflow/core/common_runtime/bfc_allocator.cc:271] Allocator (GPU_0_bfc) ran out of memory trying to allocate 320.00MiB. Current allocation summary follows.
2019-02-25 21:02:48.252031: W tensorflow/core/common_runtime/bfc_allocator.cc:275] ****************************______
2019-02-25 21:02:48.257082: W tensorflow/core/framework/op_kernel.cc:1273] OP_REQUIRES failed at conv_grad_input_ops.cc:937 : Resource exhausted: OOM when allocating tensor with shape[5,16,1024,1024] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc

ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[5,16,1024,1024] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node gradients_10/G_synthesis_1/_Run/G_synthesis/ToRGB_lod0/Conv2D_grad/Conv2DBackpropInput}} = Conv2DBackpropInput[T=DT_FLOAT, data_format="NCHW", dilations=[1, 1, 1, 1], padding="SAME", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](gradients_10/G_synthesis_1/_Run/G_synthesis/ToRGB_lod0/Conv2D_grad/ShapeN, G_synthesis_1/_Run/G_synthesis/ToRGB_lod0/mul, gradients_10/G_synthesis_1/_Run/G_synthesis/ToRGB_lod0/add_grad/tuple/control_dependency)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
```

When training on a batch of 1 image at a time, it takes about 250 images before it crashes. When training on a batch of 5 images at a time, it crashes on the 10th batch (50 images).

Is the perceptual model holding onto previous images? Could there be a memory leak somewhere? As far as I can tell, the crash happens on the `self.sess.run` call in the optimize method. I also tried removing tqdm from the script, but it still crashes during training.
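A hedged way to confirm a graph leak (a diagnostic, not a fix): print the op count between batches. If it keeps growing, each batch is adding new nodes to the default graph (for example fresh assign ops for the reference images) until the allocator runs out of memory:

```python
import tensorflow as tf

# Call between batches; a monotonically growing count indicates graph leakage.
print('ops in graph:', len(tf.get_default_graph().get_operations()))
```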

Reverse the changes of alignment

Hi! Thank you for this project. I am using it with video frames instead of still images. I notice that during facial alignment, any tilt in the face is straightened out. Can I reverse these effects after the image has been moved along the direction of the latent vector? If yes, how?

Fast embeddings for head pose

Hey, just wondering if there is a simple way of getting fast embeddings just for the pose/coarse style (face landmarks and head pose). I was thinking of trying a real-time generator responding to movement through a webcam input. Has anyone tried this, and do you think it could be possible?

shape (1, 12, 512) to (1, 18, 512)?

How can I create a StyleGAN model with shape (1, 18, 512)? My StyleGAN model produces shape (1, 12, 512), and I cannot use the latent directions developed by Puzer because of the shape difference.

In more detail: my model produces shape (1, 12, 512) using https://github.com/NVlabs/stylegan, but the StyleGAN encoder (https://github.com/Puzer/stylegan-encoder) requires (1, 18, 512) to find the latent space. Do you have any idea how I can produce (1, 18, 512) model shapes instead of (1, 12, 512)?
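For context: in StyleGAN, the number of W layers is tied to the output resolution, n_layers = 2 * log2(resolution) - 2, so a 1024x1024 model has 18 layers while a 128x128 model has 12. A small sketch:

```python
import math

# Each resolution doubling adds two synthesis layers, starting from 4x4.
def n_dlatent_layers(resolution):
    return 2 * int(math.log2(resolution)) - 2

assert n_dlatent_layers(1024) == 18  # FFHQ models used by this encoder
assert n_dlatent_layers(128) == 12   # matches the (1, 12, 512) shape above
```

If that is right, producing (1, 18, 512) latents would require a network trained at 1024x1024.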

I just found a project that allows controlling a bunch of StyleGAN features through UI knobs:

https://github.com/SummitKwan/transparent_latent_gan

Being a total newbie at machine learning, I'm wondering: what are the main differences between Puzer's approach and transparent_latent_gan?

Another issue: transparent_latent_gan uses the smaller CelebA dataset, which might be why its features sometimes get too entangled and StyleGAN gets stuck when you try to lock and combine too many features (try adjusting the sliders to create an old, bald, non-smiling, bearded man with eyeglasses).

I'm wondering if Puzer's approach could work better. I tried the current age direction and noticed that at some point it tries to add glasses and a beard. I guess those two features got entangled with age, and I'm not sure what could be done to disentangle them; I would hope to get only wrinkles and a receding hairline from the age direction.

Also, when encoding images, I found that alignment sometimes works incorrectly, cropping away the top of the head. And for some of my images, the optimal encoder combination seems to be a learning rate of 4.0 and an image size of 512. With the default settings (learning rate 1 and image size 256), tricky images (old black-and-white photos) or complex scenarios (a large mustache over the lips) got totally corrupted, and for some less complex images enough tiny detail was lost to make the photo feel too "uncanny" to be considered an exact match, especially for younger people who don't have deep wrinkles or beards, and for images shot with lots of light, where those tiny details and shadows matter a lot.

Of course, 4.0 at size 512 can take a long time to train, and sometimes 1000 iterations are not enough. For one particularly tricky image I went as far as 4000 iterations to get satisfactory results, while for some other images such a high learning rate and iteration count led to washed-out images (overfitting?).

Originally posted by @progmars in #5 (comment)

Basic Usage Question

I apologize if I overlooked it and this isn't really an "issue", but I couldn't find the basic steps/commands needed to reproduce the change in expression shown in the teaser image:
https://raw.githubusercontent.com/Puzer/stylegan-encoder/master/teaser.png

I was able to encode an image successfully using the commands in the readme, but I can't find any documentation on modifying the expression or angle of a face after doing so. All I'm trying to do is tinker with adjustments on a custom image; am I misunderstanding the functionality of this repo?
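For what it's worth, a hedged sketch of the teaser workflow as it appears in the repo's Play_with_latent_directions notebook (paths and the `generate_image` helper are assumed from that notebook):

```python
import numpy as np

# Load an encoded face and a learned direction shipped with the repo, then
# shift the latent along the direction before decoding it back to an image.
latent = np.load('latent_representations/example_01.npy')    # (18, 512), from encode_images.py
smile = np.load('ffhq_dataset/latent_directions/smile.npy')  # (18, 512)

new_latent = latent.copy()
new_latent[:8] = (latent + 1.5 * smile)[:8]  # edit only the first 8 layers
img = generate_image(new_latent)             # notebook helper
```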

Reduce encode_images.py time by using one model instance

Hi, I am trying to decrease generation time. So far I am getting 2 minutes and 20 seconds per image (generating 10 images as output for age).
What I am seeing is that encode_images.py spends this long on each input image:

  1. Initializing generator : 7.2106 secs
  2. Creating PerceptualModel : 9.0305 secs
  3. Loading Resnet model : 23.0473 secs
  4. Loop loss : 1.0582 secs
  5. Loop loss : 0.0619 secs
  6. Loop loss : 0.0630 secs
  7. Loop loss : 0.0618 secs
  8. Loop loss : 0.0628 secs
  9. Loop loss : 0.0621 secs

So I am trying to initialize the generator, create the perceptual model, and load the ResNet model once at the beginning of my script and pass them as parameters to encode_images.py, so that steps 1-3 are not repeated for each image.

But I have no idea if that's the right way to do it. I defined an auxiliar() function instead of calling the script directly with the same flags and parameters:

New defined function:

```python
auxiliar(optimizer='lbfgs', face_mask=True, iterations=6, use_lpips_loss=0, use_discriminator_loss=0, output_video=False, src_dir='aligned_images/', generated_images_dir='generated_images/', dlatent_dir='latent_representations/')
```

Former script call:

```
python encode_images.py --optimizer=lbfgs --face_mask=True --iterations=6 --use_lpips_loss=0 --use_discriminator_loss=0 --output_video=False aligned_images/ generated_images/ latent_representations/
```

So far I am getting this error:

```
ValueError: Tensor("Const_1:0", shape=(3,), dtype=float32) must be from the same graph as Tensor("strided_slice:0", shape=(1, 256, 256, 3), dtype=float32).
```

at this point of the code that used to be in encode_images.py:

```python
perceptual_model = PerceptualModel(args, perc_model=perc_model, batch_size=batch_size)
perceptual_model.build_perceptual_model(generator, discriminator_network)
```

Question about the direction for an attribute

Does this only work for binary labels, like smile vs. no-smile or male vs. female? Or is it possible to do it for multi-class labels like ethnicity (i.e. white, black, Asian)? Thanks!
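Not an authoritative answer, but the linear-direction recipe extends to multi-class labels: a one-vs-rest logistic regression yields one coefficient vector, hence one latent direction, per class. A hedged sketch (`X` and `y` are hypothetical):

```python
from sklearn.linear_model import LogisticRegression

# X: (N, 18*512) flattened dlatents; y: integer class ids (e.g. 0, 1, 2, ...).
clf = LogisticRegression(multi_class='ovr', class_weight='balanced')
clf.fit(X, y)
directions = clf.coef_.reshape((-1, 18, 512))  # one direction per class
```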

Error while running encode_images.py

Facing this error:

```
Traceback (most recent call last):
  File "encode_images.py", line 80, in <module>
    main()
  File "encode_images.py", line 53, in main
    generator = Generator(Gs_network, args.batch_size, randomize_noise=args.randomize_noise)
  File "/content/gdrive/My Drive/RI/stylegan-encoder/encoder/generator_model.py", line 35, in __init__
    self.generator_output = self.graph.get_tensor_by_name('G_synthesis_1/_Run/concat:0')
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py", line 3783, in get_tensor_by_name
    return self.as_graph_element(name, allow_tensor=True, allow_operation=False)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py", line 3607, in as_graph_element
    return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/framework/ops.py", line 3649, in _as_graph_element_locked
    "graph." % (repr(name), repr(op_name)))
KeyError: "The name 'G_synthesis_1/_Run/concat:0' refers to a Tensor which does not exist. The operation, 'G_synthesis_1/_Run/concat', does not exist in the graph."
```

ValueError: Shape of a new variable (ref_img_features) must be fully defined, but instead was (?, 64, 64, 256)

When I run `python3 encode_images.py aligned_images/ generated_images/ latent_representations/`, I get the error:

```
Traceback (most recent call last):
  File "encode_images.py", line 86, in <module>
    main()
  File "encode_images.py", line 61, in main
    perceptual_model.build_perceptual_model(generator.generated_image)
  File "/media/data2/laixc/stylegan-encoder/encoder/perceptual_model.py", line 41, in build_perceptual_model
    dtype='float32', initializer=tf.initializers.zeros())
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/variable_scope.py", line 1328, in get_variable
    constraint=constraint)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/variable_scope.py", line 1090, in get_variable
    constraint=constraint)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/variable_scope.py", line 435, in get_variable
    constraint=constraint)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/variable_scope.py", line 404, in _true_getter
    use_resource=use_resource, constraint=constraint)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/variable_scope.py", line 764, in _get_single_variable
    "but instead was %s." % (name, shape))
ValueError: Shape of a new variable (ref_img_features) must be fully defined, but instead was (?, 64, 64, 256).
```
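A hedged reading of the error: `tf.get_variable` requires a fully defined shape, and the batch dimension of the feature tensor here is unknown (`?`). A sketch of the kind of fix (`generated_image_features` and `batch_size` follow the traceback's context, but this is not a verified patch):

```python
import tensorflow as tf

# Fix the unknown batch dimension before creating the variable.
features_shape = generated_image_features.shape.as_list()  # e.g. [None, 64, 64, 256]
features_shape[0] = batch_size                             # make the shape fully defined
ref_img_features = tf.get_variable('ref_img_features', shape=features_shape,
                                   dtype='float32',
                                   initializer=tf.initializers.zeros())
```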

tensorflow.python.framework.errors_impl.InvalidArgumentError

I am trying to run encode_images.py and get the following:

```
2 root error(s) found:
  (0) Invalid argument: You must feed a value for placeholder tensor 'G_synthesis_1/_Run/dlatents_in' with dtype float
    [[{{node G_synthesis_1/_Run/dlatents_in}}]]
  (1) Invalid argument: You must feed a value for placeholder tensor 'G_synthesis_1/_Run/dlatents_in' with dtype float
    [[{{node G_synthesis_1/_Run/dlatents_in}}]]
    [[G_synthesis_1/_Run/concat/concat/_8419]]
```

I am trying to encode my 2 images; can someone help me with this issue?

Getting black output images

Hi,

When I try to optimize the d_latent as in the code, I start from a nice image, but it transforms into a completely black image.
Adding an L2 loss between the reference and the generated image helps, but the resulting image is not similar to the reference.

Any ideas?

Thanks!

Why the latent code from GAN inversion methods can be manipulated by the boundary

Hi, thanks for sharing this great work!

I have some questions, as follows:

  1. The shape of 'donald_trump_01.npy' is (18, 512), with different values across its 18 layers, whereas your 'smile.npy' has the same values in all 18 layers. Since the layer values of 'donald_trump_01.npy' differ, why can it still be edited by the same smile boundary? Do you know why this works?

  2. In the move_and_show function, `new_latent_vector[:8] = (latent_vector + coeff*direction)[:8]`. Why are only the first eight layers edited?

Thank you!

Getting numpy array of custom feature directions

Hi,
I modified the non-linear model inputs to get different feature directions. How can I get a numpy array of those feature vectors for the non-linear model?
For the linear model, I realized it can be obtained by making minor changes here:

```python
clf = LogisticRegression(class_weight='balanced')
clf.fit(X_data.reshape((-1, 18*512)), y_gender_data)
gender_direction = clf.coef_.reshape((18, 512))
```

Thank you!
