
laplacesdemon's Introduction


LaplacesDemon

A complete environment for Bayesian inference within R

The goal of LaplacesDemon, often referred to as LD, is to provide a complete and self-contained Bayesian environment within R. For example, this package includes dozens of MCMC algorithms, Laplace Approximation, iterative quadrature, Variational Bayes, parallelization, big data, PMC, over 100 examples in the Examples vignette, dozens of additional probability distributions, numerous MCMC diagnostics, Bayes factors, posterior predictive checks, a variety of plots, elicitation, parameter and variable importance, Bayesian forms of test statistics (such as Durbin-Watson, Jarque-Bera, etc.), validation, and numerous additional utility functions, such as functions for multimodality, matrices, or timing your model specification. Other vignettes include an introduction to Bayesian inference, as well as a tutorial.

There are many plans for the growth of this package, and many are long-term plans, such as continuously stockpiling distributions, examples, samplers, and optimization algorithms. Contributions to this package are welcome.

The main function in this package is the LaplacesDemon function, and the best place to start is probably with the LaplacesDemon Tutorial vignette.

Installation


From CRAN:

install.packages("LaplacesDemon")

Using the 'devtools' package:

install.packages("devtools")
library(devtools)
install_github("LaplacesDemonR/LaplacesDemon")

Package History

LaplacesDemon was initially developed and uploaded to CRAN by Byron Hall, the owner of Statisticat, LLC. Later on, the maintainer of the package changed to Martina Hall.

The last version available on CRAN from the original authors and maintainers was version 13.03.04, which was removed from CRAN on 2013-07-16 at the request of the maintainer.

After removal from CRAN, the development of LaplacesDemon continued for some time on GitHub under the name of Statisticat LLC (presumably still run by Byron Hall). The last commit by Statisticat for LaplacesDemon on GitHub was made on 25 March 2015. After that, Statisticat deleted its GitHub account and ceased further development of the package.

As Statisticat could not be reached, neither by e-mail nor by postal mail (the latter was attempted by Rasmus Bååth), Henrik Singmann took over as maintainer of LaplacesDemon in July 2016 with the goal of resubmitting the package to CRAN (as version 16.0.x). Henrik Singmann does not actively continue the development of LaplacesDemon but only retains it on CRAN in its current state.

Note that in order to resubmit the package to CRAN all links to the now defunct website of Statisticat (formerly: http://www.bayesian-inference.com) were replaced with links to versions of this website on the web archive (https://web.archive.org/web/20141224051720/http://www.bayesian-inference.com/index).

To contribute to the development of LaplacesDemon or discuss the development please visit its new repository: https://github.com/LaplacesDemonR/LaplacesDemon

laplacesdemon's People

Contributors

benmarwick, danheck, emmanuelcharpentier, jfiksel, mansmeg, mariusbommert, quentingronau, samedii, singmann, stla, willemvandenboom


laplacesdemon's Issues

VariationalBayes() Example

As best as I can determine, the example for the VariationalBayes() function on page 384 of the pdf will not converge. Do you have an example where the fit does converge?

error in rmatrixnorm

Hello,

The rmatrixnorm function does not correctly sample from a matrix variate normal distribution.

One can see this by checking the expectation of (X-M)(X-M)'. This should be equal to U*tr(V). This is not the case:

library(LaplacesDemon)
set.seed(314)
U <- rwishart(3, diag(2))
V <- rwishart(3, diag(2))
square <- matrix(0, 2, 2)
for(i in 1:10000){
  X <- rmatrixnorm(matrix(0,2,2), U, V)
  square <- square + X%*%t(X)
}
> square/10000
          [,1]       [,2]
[1,]  2.572725 -1.1191975
[2,] -1.119198  0.7415746
> U*tr(V)
          [,1]      [,2]
[1,]  0.879866 -1.230179
[2,] -1.230179  2.471029

The correct sampler is this one:

myrmatrixnorm <- function(M, U, V){
  Z <- matrix(rnorm(nrow(M)*ncol(M)), nrow(M), ncol(M))
  M + t(chol(U)) %*% Z %*% chol(V)
}
square <- matrix(0, 2, 2)
for(i in 1:10000){
  X <- myrmatrixnorm(matrix(0,2,2), U, V)
  square <- square + X%*%t(X)
}
> square/10000
           [,1]      [,2]
[1,]  0.8815791 -1.242929
[2,] -1.2429287  2.502462
> U*tr(V)
          [,1]      [,2]
[1,]  0.879866 -1.230179
[2,] -1.230179  2.471029

The error is at the last line of rmatrixnorm:
X <- M + chol(U) %*% Z %*% chol(V)
should be
X <- M + t(chol(U)) %*% Z %*% chol(V)
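For completeness, a short note on why the transpose is needed (a sketch using the U generated above):

# R's chol(U) returns the upper triangular factor R with U = t(R) %*% R,
# so the left factor A in X = M + A %*% Z %*% B must be A = t(chol(U))
# for A %*% t(A) to equal U (the row covariance).
A <- t(chol(U))
all.equal(A %*% t(A), U)  # TRUE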

Slice sampler bug if first parameter 'Nominal' etc.

Hi, if a simple model with only one categorical parameter is used in LaplacesDemon, I get:

Fit <- LaplacesDemon(Model, Data=D, Initial.Values=IV,
Covar=NULL, Iterations=2000, Status=100, Thinning=10,
Algorithm="Slice", Specs=list(B=NULL, Bounds=c(1,2), m=100,
Type="Nominal", w=1))
...
Error in .mcmcslice(Model, Data, Iterations, Status, Thinning, Specs, :
object 'prop' not found

In .mcmcslice it looks as if prop is not initialized if the first parameter isn't 'Continuous'.
The code can easily be made to work by adding the following before the call above. However, it would be better if this were fixed in .mcmcslice itself.

Mo0 <- Model(IV, D); prop <- Mo0[["parm"]];

Also, might it be possible to add some example argument lists for categorical variables? For example, the Examples.pdf file says that a mixture model was fitted with this sampler, but does not give the example arguments for the LaplacesDemon call. For those of us who are not familiar with the w argument in particular, it would be very useful to have some guidance about arguments for categorical parameters.

Many thanks,
Michael.

dinvgaussian function is incorrect

The dinvgaussian function, the pdf of the inverse-Gaussian distribution, produces incorrect results.

It doesn't integrate to 1 like a proper pdf, but to 1.732051 (which is sqrt(3), consistent with a missing square root on lambda for lambda = 3). Its moments also deviate from their algebraic values; e.g., its expected value should equal the mu parameter, but it doesn't.


I discovered the issue. Discord user Clarinetist#7695 identified the problem to be in this piece of code:

dens <- log(lambda/(2 * pi * x^3)^0.5) - ((lambda * (x - mu)^2)/(2 * mu^2 * x))
if (log == FALSE) dens <- exp(dens)

Stating: "Taking the log of this, it should be"
"$$\log f(x) = \dfrac{1}{2}\log\left(\dfrac{\lambda}{2\pi x^3} \right) - \dfrac{\lambda (x-\mu)^2}{2\mu^2 x}$$"

dist.Matrix.Normal R file

I recently released a new R package, matrixNormal, on CRAN (see https://cran.r-hub.io/web/packages/matrixNormal/index.html), which has a CDF function as well. The random generation uses the Kronecker product of U and V, with options for other decompositions. I was exploring how matrixNormal is similar to or different from the other implementations out there.

The LaplacesDemon::dist.Matrix.Normal Rd file has a density function as well. It uses a logdet function that takes advantage of the Cholesky decomposition of positive-definite matrices (which I like). I also like that this function does not seem to constrain the random matrix X; X does not need to be positive definite (Iranmanesh et al. 2010). The density adds the scale parameters, while the Iranmanesh et al. (2010) paper subtracts them.
When LaplacesDemon::dmatrixnorm() calculates the density, the code reads
Stuff + logdet(V) * (n/2) + logdet(U) * (k/2)
But the log of the density in Iranmanesh et al. (2010) is:
Stuff - logdet(V) * (n/2) - logdet(U) * (k/2)
The full corrected code should be:

dens <- -0.5 * tr(as.inverse(V) %*% t(ss) %*% as.inverse(U) %*% ss) -
  log(2 * pi) * (n * k/2) - logdet(V) * (n/2) - logdet(U) * (k/2)

The other packages and the Wikipedia page (https://en.wikipedia.org/wiki/Matrix_normal_distribution) do not transpose chol(U), but the rmatrixnorm() function does.
You wrote:
X <- M + t(chol(U)) %*% Z %*% chol(V)
but it should be:
X <- M + chol(U) %*% Z %*% chol(V)

References
Iranmanesh, A., Arashi, M., & Tabatabaey, S. M. M. (2010). On Conditional Applications of Matrix Variate Normal Distribution. IJMSI, 5(2), 33–43. https://doi.org/10.7508/ijmsi.2010.02.004

dlnormp

I am not sure, but it seems there is an error in the documentation and also in the code: should the mu be outside of the log?
curve(dlnormp(x, mu = 5, tau = 1/2), from = 0, to = 10)
This will create NaN for x <= 5, which makes sense if the code contains log(x - mu) rather than log(x) - mu.
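A minimal sketch of what the corrected density would presumably look like, with mu outside the log (precision parameterization, tau = 1/sigma^2; the function name is illustrative):

dlnormp.fixed <- function(x, mu, tau, log=FALSE) {
  # log-density of the lognormal with log-scale mean mu and precision tau
  dens <- 0.5 * log(tau/(2 * pi)) - log(x) - 0.5 * tau * (log(x) - mu)^2
  if (log == FALSE) dens <- exp(dens)
  return(dens)
}
curve(dlnormp.fixed(x, mu = 5, tau = 1/2), from = 0.01, to = 10)  # no NaN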

Broken preconditioned Crank-Nicolson?

Hi everyone, the preconditioned Crank-Nicolson (pCN) algorithm in LaplacesDemon seems to be broken: I get a "WARNING: All proposals were rejected" message even in the case of a simple normal distribution. I have the impression that it must be a simple bug in the code, for example a counter set to zero by mistake.

In which file of the package is this algorithm defined, so that I can take a look?

Cheers!

Changing decomposition update period in AFSS algorithm

Hi all, this refers to LaplacesDemon.R

The Automated Factor Slice Sampler has an initial adaptive stage, of A iterations, during which the proposal covariance matrix is periodically updated from the scatter matrix of the observed samples. The period of these updates is given by decomp.freq. Such updates are no longer made after A iterations.

In the present version, this update period is chosen as (line 1794)
decomp.freq <- max(LIV * floor(Iterations / Thinning / 100), 10)

where LIV is the number of parameters (dimension of the sampled space), Iterations is the total number of requested MCMC iterations (before thinning), and Thinning is the thinning.

It seems strange to me that the update period is calculated based on the total number of iterations requested, rather than on the number of adaptive ones, A. Consider this case: the user wants a very large number of samples (Iterations/Thinning large), but chooses a much shorter adaptive time A, known to be sufficient. In such a case the update period, according to the formula above, would be very large, potentially insufficient. In fact, no update may ever happen if A < max(LIV * floor(Iterations / Thinning / 100), 10).

The update period should probably be calculated based on the number of parameters alone. But at least it should depend on the adaptive time A rather than the full sampling time, for the reason explained above. I propose changing line 1794 to either
decomp.freq <- max(LIV * floor(A / Thinning / 10), 10)
or even just
decomp.freq <- max(LIV, 10) or decomp.freq <- max(2*LIV, 10)
to make sure that at least as many iterations as parameters have passed. Similar changes should be made for the blockwise computation on lines 1963–1964.

I checked Tibbits et al. 2014, mentioned in the manual, but they don't seem to give any specific recommendation.

Any take on this?

Bug: dmvl

There seem to be two bugs in the code for the function dmvl: in two places, + is used instead of -. The following version of the code could be used instead of the current version to fix the bugs:

dmvl <- function(x, mu, Sigma, log=FALSE) {
    if (!is.matrix(x)) 
        x <- rbind(x)
    if (!is.matrix(mu)) 
        mu <- matrix(mu, nrow(x), ncol(x), byrow=TRUE)
    if (missing(Sigma)) 
        Sigma <- diag(ncol(x))
    if (!is.matrix(Sigma)) 
        Sigma <- matrix(Sigma)
    Sigma <- as.symmetric.matrix(Sigma)
    if (!is.positive.definite(Sigma)) 
        stop("Matrix Sigma is not positive-definite.")
    k <- nrow(Sigma)
    Omega <- as.inverse(Sigma)
    ss <- x - mu
    z <- rowSums((ss %*% Omega) * ss)
    z[which(z == 0)] <- 1e-300
    dens <- as.vector(log(2) - log(2 * pi) * (k/2) - logdet(Sigma) * 0.5 +
        (log(pi) - log(2) - log(2 * z) * 0.5) * 0.5 - sqrt(2 * z) -
        log(z/2) * 0.5 * (k/2 - 1))
    if (log == FALSE) 
        dens <- exp(dens)
    return(dens)
}

In my version of the function, -logdet(Sigma) is used instead of logdet(Sigma), and -log(2 * z) is used instead of log(2 * z) when calculating dens.

dhalfcauchy returns non-zero values for negative numbers

Hello,

I was using the dhalfcauchy function from LaplacesDemon. My understanding of the half-Cauchy distribution is that the density should be 0 for values less than 0. However, this does not appear to be the case:

dhalfcauchy(3,scale=0.1)
dhalfcauchy(-3,scale=0.1)

In both cases, the output is 0.007065702.
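For reference, a minimal sketch of a half-Cauchy density that behaves as expected (an illustrative stand-in, not the package's code):

dhalfcauchy.fixed <- function(x, scale=25, log=FALSE) {
  # half-Cauchy log-density, defined to be zero (-Inf on the log scale)
  # for negative x
  dens <- ifelse(x >= 0,
                 log(2 * scale) - log(pi * (x^2 + scale^2)),
                 -Inf)
  if (log == FALSE) dens <- exp(dens)
  return(dens)
}
dhalfcauchy.fixed(3, scale=0.1)   # 0.007065702
dhalfcauchy.fixed(-3, scale=0.1)  # 0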

Multivariate Laplace

Hi,

on line 1366 of file distributions.R, of the function dmvl(),

(log(pi) - log(2) + log(2*z)*0.5)*0.5 - log(2*z)*0.5 -

should be changed into

(log(pi) - log(2) + log(2*z)*0.5)*0.5 - sqrt(2*z) -

because the fourth term above is originally within exp().

rlnormp

I think it returns normally distributed random numbers instead of lognormal ones. Thanks for this package!
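A minimal sketch of what the generator should presumably return, consistent with the dlnormp precision parameterization (tau = 1/sigma^2; the function name is illustrative):

rlnormp.fixed <- function(n, mu, tau) {
  # exponentiate normal draws to obtain lognormal draws
  exp(rnorm(n, mean=mu, sd=1/sqrt(tau)))
}
# Quick check: the log of the draws should be normal with mean mu.
mean(log(rlnormp.fixed(1e5, mu=2, tau=4)))  # ~2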

Mistake in documentation for density of inverse Wishart distribution?

Lines 25 to 30 of https://github.com/LaplacesDemonR/LaplacesDemon/blob/master/man/dist.Inverse.Wishart.Rd are currently

Density: \eqn{p(\theta) = (2^{\nu k/2} \pi^{k(k-1)/4}
      \prod^k_{i=1} \Gamma(\frac{\nu+1-i}{2}))^{-1} |\textbf{S}|^{nu/2}
      |\Omega|^{-(nu-k-1)/2} \exp(-\frac{1}{2} tr(\textbf{S}
      \Omega^{-1}))}{p(theta) = (2^(nu*k/2) * pi^(k(k-1)/4) *
      [Gamma((nu+1-i)/2) * ... * Gamma((nu+1-k)/2)])^(-1) * |S|^(nu/2) *
      |Omega|^(-(nu-k-1)/2) * exp(-(1/2) * tr(S Omega^(-1)))}

I think these lines should be edited to

Density: \eqn{p(\theta) = (2^{\nu k/2} \pi^{k(k-1)/4}
      \prod^k_{i=1} \Gamma(\frac{\nu+1-i}{2}))^{-1} |\textbf{S}|^{\nu/2}
      |\Omega|^{-(\nu+k+1)/2} \exp(-\frac{1}{2} tr(\textbf{S}
      \Omega^{-1}))}{p(theta) = (2^(nu*k/2) * pi^(k(k-1)/4) *
      [Gamma((nu+1-i)/2) * ... * Gamma((nu+1-k)/2)])^(-1) * |S|^(nu/2) *
      |Omega|^(-(nu+k+1)/2) * exp(-(1/2) * tr(S Omega^(-1)))}

https://en.wikipedia.org/wiki/Inverse-Wishart_distribution

Also it would be clearer if the density were written with Sigma instead of Omega because the function argument is Sigma.

I think there is no mistake in the density calculation in the function itself.
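A quick numerical sketch of that check, comparing dinvwishart() against the corrected exponent -(nu+k+1)/2 (assuming the argument order dinvwishart(Sigma, nu, S)):

library(LaplacesDemon)
nu <- 5; S <- diag(2); k <- nrow(S)
Omega <- rinvwishart(nu, S)
# log numerator and log normalizing constant of the corrected density
lognum <- logdet(S) * (nu/2) - logdet(Omega) * ((nu + k + 1)/2) -
  0.5 * tr(S %*% as.inverse(Omega))
logZ <- log(2) * (nu * k/2) + log(pi) * (k * (k - 1)/4) +
  sum(lgamma((nu + 1 - 1:k)/2))
lognum - logZ                          # density from the corrected formula
dinvwishart(Omega, nu, S, log=TRUE)    # should agree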

Clearer manuals?

Dear All, thank you very much for keeping this rich package alive!

I'm exploring it and trying to use it, but I have serious problems understanding the manuals LaplacesDemonTutorial.pdf, Examples.pdf, and the package's function help (LaplacesDemon.pdf), in particular the 'predict.demonoid' function. I have two questions:

  1. Is there a mailing list or user group where one can ask usage questions about LaplacesDemon?
  2. Would it be possible to amend the manuals and make them clearer?

I'd be very happy to help with point 2 (once I understand the main functions).

Cheers.

LaplacesDemonCpp

Well thank you for keeping the project going!

I have another question: LaplacesDemonCpp also seems to be in stasis. Are there plans for that package as well? Perhaps this account would be the best home for it, too?

Release of next version to CRAN (16.1.0)

Thanks to the many contributors of bug fixes (@stla, @amawl, @danheck, & @quentingronau), the next version of LaplacesDemon is ready for CRAN. I plan to submit this version to CRAN on the 10th of December (as version 16.1.0).

If you think there are more quick bug fixes, please send pull requests before then. Or let me know if you need more time.

I am also very happy to provide write access for anyone who wants to commit regularly.

Little bug in dmatrixnorm

Hi and thanks for keeping up the package!

After noticing inconsistencies with the results provided by other packages, I spotted a little bug in dmatrixnorm, the function that evaluates the density of a matrix normal distribution. The error comes from a misplaced parenthesis in one of the lines of the function. Specifically, the line

dens <- -0.5 * tr(as.inverse(V) %*% t(ss) %*% as.inverse(U) %*% ss) - (log(2 * pi) * (n * k/2) - logdet(V) * (n/2) - logdet(U) * (k/2))

should be corrected as:

dens <- -0.5 * tr(as.inverse(V) %*% t(ss) %*% as.inverse(U) %*% ss) - log(2 * pi) * (n * k/2) - logdet(V) * (n/2) - logdet(U) * (k/2)
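As a quick sanity check of the corrected line (a sketch: for 1x1 matrices the matrix normal reduces to a univariate normal, so the corrected log-density should match dnorm()):

library(LaplacesDemon)
n <- 1; k <- 1
X <- matrix(1.3); M <- matrix(0.2); U <- matrix(2); V <- matrix(1)
ss <- X - M
dens <- -0.5 * tr(as.inverse(V) %*% t(ss) %*% as.inverse(U) %*% ss) -
  log(2 * pi) * (n * k/2) - logdet(V) * (n/2) - logdet(U) * (k/2)
dens                                    # corrected log-density
dnorm(1.3, 0.2, sd=sqrt(2), log=TRUE)   # agrees (variance U*V = 2)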

I hope this helps.

Posterior and Prior distribution plot

Dear all,

I am a new user of LaplacesDemon and have a question regarding the plot function. Is there a function to plot both the prior and posterior distribution for each parameter being estimated?

Error in inv gaussian?

Hi!

I think I found a bug in the invgaussian density (dinvgaussian). Now the code is as follows:

dens <- log(lambda/(2 * pi * x^3)^0.5) - ((lambda * (x - 
    mu)^2)/(2 * mu^2 * x))

Although it should be (see https://en.wikipedia.org/wiki/Inverse_Gaussian_distribution):

dens <- log(lambda^0.5/(2 * pi * x^3)^0.5) - ((lambda * (x - 
    mu)^2)/(2 * mu^2 * x))

i.e. the square root of lambda is missing.

LaplacesDemon.R: Possibility of regularly saving samples

Dear all,
It'd be useful if the LaplacesDemon function had the option of saving the "parm" values at regular intervals, more or less as it regularly outputs the value of "LP" (log-probability) for status checks. The "parm" values saved this way could be used to check how the Monte Carlo sampling is progressing.

It may be even better to have the possibility of fully saving all the sampled values at regular intervals. Otherwise, if the computer crashes, all the samples are lost. This is costly for Monte Carlo samplings that take days or weeks.

I can try to implement the first or both features in LaplacesDemon.R, if there's enough interest. But afterwards I'd need some help with compilation matters, which are completely opaque to me.
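In the meantime, a workaround sketch that needs no changes to the package (Model, MyData, and Initial.Values stand in for your own objects): run the sampler in short chunks and save the fit object after each one, so that a crash loses at most one chunk.

IV <- Initial.Values
for (chunk in 1:10) {
  Fit <- LaplacesDemon(Model, Data=MyData, Initial.Values=IV,
                       Iterations=10000, Status=1000, Thinning=10)
  saveRDS(Fit, sprintf("fit_chunk_%02d.rds", chunk))  # checkpoint to disk
  IV <- as.initial.values(Fit)  # restart from the last posterior draw
}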

Little bug in "burnin.R"

If the x input is a vector whose length is not an integer multiple of 10, the function will give an error. The problem is on line 16 (https://github.com/LaplacesDemonR/LaplacesDemon/blob/master/R/burnin.R): x was transformed into a 1-column matrix, but on line 16 it is inadvertently transformed back into a vector, which causes problems when BMK.diagnostic is called. Line 16 should be changed from

if(n %% 10 != 0) x2 <- x[1:(10*trunc(n/10)),]

to

if(n %% 10 != 0) x2 <- x[1:(10*trunc(n/10)), , drop=FALSE]
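The underlying pitfall in one line (a generic R illustration): subsetting a one-column matrix with [i, ] silently drops the dimension unless drop=FALSE is given.

x <- matrix(1:25, ncol=1)
is.matrix(x[1:20, ])              # FALSE: dropped to a vector
is.matrix(x[1:20, , drop=FALSE])  # TRUE: stays a 1-column matrix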

improvement of rwishartc and rinvwishart

Hello,

Let's take a number of degrees of freedom nu and a scale matrix S:

nu <- 6
p <- 3; S <- cov(matrix(rnorm(10*p),10,p))

This is the code of rwishartc:

k <- nrow(S)
Z <- matrix(0, k, k)
x <- rchisq(k, nu:(nu - k + 1))
x[which(x == 0)] <- 1e-100
diag(Z) <- sqrt(x)
if (k > 1) {
  kseq <- 1:(k - 1)
  Z[rep(k * kseq, kseq) + unlist(lapply(kseq, seq))] <- rnorm(k * (k - 1)/2)
}
chol(crossprod(Z %*% chol(S)))

There's something circular in the final line. In fact, Z %*% chol(S) is already the Cholesky factor:

> chol(crossprod(Z %*% chol(S)))
         [,1]       [,2]       [,3]
[1,] 4.250561 -0.4035846  0.8737172
[2,] 0.000000  1.4643197 -0.8089389
[3,] 0.000000  0.0000000  1.9047683
> Z %*% chol(S)
         [,1]       [,2]       [,3]
[1,] 4.250561 -0.4035846  0.8737172
[2,] 0.000000  1.4643197 -0.8089389
[3,] 0.000000  0.0000000  1.9047683

Thus rwishartc would be more efficient by replacing the last line with Z %*% chol(S).

Also note that the code would be clearer and maybe more efficient by replacing
Z[rep(k * kseq, kseq) + unlist(lapply(kseq, seq))] <-
with
Z[upper.tri(Z)] <-

And one could avoid some repeated code by defining rwishart by

rwishart <- function(nu, S){
  Z <- rwishartc(nu, S)
  crossprod(Z)
}

(currently rwishart and rwishartc have exactly the same code up to the final line).

Now, once we have an efficient implementation of rwishartc, one can improve rinvwishart. Currently, rinvwishart runs rwishart and takes the inverse with solve. It is better to get the inverse from the Cholesky factor with chol2inv, and then one can define rinvwishart like this:

rinvwishart <- function(nu, S){
  chol2inv(rwishartc(nu,solve(S)))
}
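Putting these suggestions together, a sketch of the streamlined sampler (illustrative name; the behavior should match the original up to the redundant final line):

myrwishartc <- function(nu, S) {
  k <- nrow(S)
  Z <- matrix(0, k, k)
  # Bartlett decomposition: chi-square square roots on the diagonal,
  # standard normals above it
  x <- rchisq(k, nu:(nu - k + 1))
  x[x == 0] <- 1e-100  # guard against exact zeros
  diag(Z) <- sqrt(x)
  Z[upper.tri(Z)] <- rnorm(k * (k - 1)/2)
  Z %*% chol(S)  # already the Cholesky factor; no chol(crossprod()) needed
}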

error in dmatrixgamma

Hello,

There's an error in dmatrixgamma.

As said in the help, the matrix variate gamma distribution is the same as the Wishart distribution when alpha = nu/2 and beta = 2. This equality does not hold:

> nu <- 4  # degrees of freedom (not shown in the original transcript)
> S <- rwishart(4, diag(4))
> Sigma <- rwishart(4, diag(4))
> dwishart(S, nu, Sigma, log=TRUE)
[1] -25.80533
> dmatrixgamma(S, nu/2, 2, Sigma, TRUE)
[1] -19.44854

The fix consists of replacing dens <- logdet(Omega) + ... with dens <- alpha*logdet(Omega) + ...:

mydmatrixgamma <- function(X, alpha, beta, Sigma, log=FALSE) 
{
  k <- nrow(Sigma)
  gamsum <- 0
  for (i in 1:k) gamsum <- gamsum + lgamma(alpha - 0.5 * (i - 1))
  gamsum <- gamsum + log(pi) * (k * (k - 1)/4)
  Omega <- as.inverse(Sigma)
  dens <- alpha * logdet(Omega) + logdet(X) * (alpha - 0.5 * (k + 1)) -
    (log(beta) * (k * alpha) + gamsum) + (-1/beta) * tr(Omega %*% X)
  if (log == FALSE) 
    dens <- exp(dens)
  return(dens)
}
> dwishart(S, nu, Sigma, log=TRUE)
[1] -25.80533
> mydmatrixgamma(S, nu/2, 2, Sigma, TRUE)
[1] -25.80533

By the way, I don't understand the point of this distribution. It is the same as the Wishart distribution with scale matrix beta/2*Sigma:

> beta <- 3
> dwishart(S, nu, beta/2*Sigma, log=TRUE)
[1] -28.2996
> mydmatrixgamma(S, nu/2, beta, Sigma, TRUE)
[1] -28.2996

This fact also makes it easy to implement rmatrixgamma, which is currently missing.
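Following that observation, a sketch of the missing generator (name illustrative; it relies on the Wishart equivalence demonstrated above):

myrmatrixgamma <- function(alpha, beta, Sigma) {
  # matrix gamma(alpha, beta, Sigma) is Wishart(2*alpha, beta/2 * Sigma)
  rwishart(2 * alpha, (beta/2) * Sigma)
}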
