
ae's Introduction

header


👋 Hi there! I'm an A.I. research engineer at AITok.
I am very interested in speech recognition and natural language processing.

💪 Skills

Platforms & Languages

Deep Learning Framework

Jayden's GitHub stats


ae's Issues

[Paper Explanation] Using Adversarial Autoencoders for Multi-Modal Automatic Playlist Continuation (2) - AAE

1. Adversarial Autoencoder (AAE)

  • A structure that grafts a GAN onto a VAE
  • Why build it this way?
    • A VAE assumes the prior distribution p(z) is a standard normal, and learning proceeds by fitting q(z|x) to it
    • This is because sampling is easy and the KLD is simple to compute only when the prior is a simple distribution such as the standard normal
    • However, when the real data distribution does not follow a normal distribution, or is more complex than one, VAE performance can suffer
      image

How do we actually know p(z)?

image
image

  • Given how the prior distribution is chosen (a distribution is fixed in advance and its shape adopted), isn't AAE really building, within that shape, a distribution that reconstructs x well? (my own speculation)

  • Generator

    • The VAE's encoder line plays the role of the generator, producing the parameters of the latent distribution.
    • It takes the data x, produces the parameters of the latent-variable distribution, and samples z from it.
  • Discriminator

    • The discriminator distinguishes the fake z sampled by the generator's encoder from the real z sampled directly from p(z).
  • A GAN does not need to assume any particular probability distribution: whatever distribution the data follows, it is trained to reduce the gap between the real data distribution and the distribution produced by the generator (the model)

  • Replacing the VAE's regularization term with a GAN loss allows the prior and posterior to be distributions other than the normal, widening the range of possible models

import torch

# X_dim, h_dim, z_dim (input, hidden, and latent sizes) are assumed to be defined

# Encoder
Q = torch.nn.Sequential(
    torch.nn.Linear(X_dim, h_dim),
    torch.nn.ReLU(),
    torch.nn.Linear(h_dim, z_dim))

# Decoder
P = torch.nn.Sequential(
    torch.nn.Linear(z_dim, h_dim),
    torch.nn.ReLU(),
    torch.nn.Linear(h_dim, X_dim),
    torch.nn.Sigmoid())

# Discriminator
D = torch.nn.Sequential(
    torch.nn.Linear(z_dim, h_dim),
    torch.nn.ReLU(),
    torch.nn.Linear(h_dim, 1),
    torch.nn.Sigmoid())

image

""" Step 1. Reconstruction phase """
z_sample = Q(X)  # input -> Encoder -> z
X_sample = P(z_sample)  # z -> Decoder -> x'
recon_loss = F.binary_cross_entropy(X_sample, X)  # compare X with x' (F is torch.nn.functional)

image

""" Step 2. loss for discriminator """
# Discriminator
z_real = torch.randn(mb_size, z_dim)   # real z drawn from the prior p(z)
z_fake = Q(X)   # fake z sampled by the encoder
D_real = D(z_real)  # discriminator output (probability) for z_real
D_fake = D(z_fake)  # discriminator output (probability) for z_fake
D_loss = -torch.mean(torch.log(D_real) + torch.log(1 - D_fake))  # take logs, then average
# If D_real is 1 (judged real), log 1 == 0; if D_fake is 0 (judged fake), log(1 - 0) == 0
# If D_real is 0.5, log 0.5 ≈ -0.6931; if D_fake is 0.5, log(1 - 0.5) ≈ -0.6931 => hence the minus sign

image

""" Step 3. Generator """
z_fake = Q(X)   # input -> Encoder -> z
D_fake = D(z_fake)  # z_fake -> Discriminator -> probability
G_loss = -torch.mean(torch.log(D_fake))  # measures how convincingly the encoder fools the discriminator
# If D_fake is predicted as 1 (real), the loss approaches 0; the closer D_fake is to 0 (fake),
# the larger the loss, e.g. -log(0.00001) ≈ 11.5
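The three phases above can be combined into a minimal end-to-end training sketch. This is an illustrative reconstruction, not the paper's exact training code: the sizes (X_dim, h_dim, z_dim, mb_size), the optimizers, and the dummy batch X are all assumptions for the sake of a runnable example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sizes (assumed, not from the original post)
X_dim, h_dim, z_dim, mb_size = 784, 128, 5, 32

Q = nn.Sequential(nn.Linear(X_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, z_dim))                 # Encoder
P = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, X_dim), nn.Sigmoid())   # Decoder
D = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, 1), nn.Sigmoid())       # Discriminator

opt_ae = torch.optim.Adam(list(Q.parameters()) + list(P.parameters()), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(Q.parameters(), lr=1e-3)

X = torch.rand(mb_size, X_dim)  # dummy batch in [0, 1] so BCE is valid
eps = 1e-8                      # guard against log(0)

# Step 1. Reconstruction phase: train encoder + decoder to rebuild X
recon_loss = F.binary_cross_entropy(P(Q(X)), X)
opt_ae.zero_grad(); recon_loss.backward(); opt_ae.step()

# Step 2. Discriminator phase: tell real z ~ p(z) from the encoder's z
z_real = torch.randn(mb_size, z_dim)
z_fake = Q(X).detach()  # detach so only D is updated here
D_loss = -torch.mean(torch.log(D(z_real) + eps) + torch.log(1 - D(z_fake) + eps))
opt_d.zero_grad(); D_loss.backward(); opt_d.step()

# Step 3. Generator phase: train the encoder to fool the discriminator
G_loss = -torch.mean(torch.log(D(Q(X)) + eps))
opt_g.zero_grad(); G_loss.backward(); opt_g.step()

print(recon_loss.item(), D_loss.item(), G_loss.item())
```

In a real run these three steps repeat per mini-batch; detaching `z_fake` in step 2 keeps the discriminator update from leaking gradients into the encoder.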

References

[Paper Explanation] Using Adversarial Autoencoders for Multi-Modal Automatic Playlist Continuation (1) - VAE

1. Purpose of the Variational AutoEncoder (VAE)

  • As with an AutoEncoder, the goal is to find a latent vector z that explains the given data well.
  • The difference is that a VAE looks for the distribution of z rather than a fixed z.
  • Simply put, an AE's z is the single value that best represents the original x, while a VAE looks for the distribution that best represents the original x.

๊ฐ’์ด ์•„๋‹Œ ๋ถ„ํฌ๋ฅผ ํ•ด์„œ ์–ป์„ ์ˆ˜ ์žˆ๋Š” ์ด์ ์€ ๋ญ˜๊นŒ?

  • Feeding in a random variable with no assumptions at all and asking the model to reconstruct the data would be far too hard and unlikely to work, so this method arose from asking whether we could instead construct a random variable that reconstructs the data well
  • That said, this does not make a VAE strictly better than an AE; training is simply said to be more stable.

image

  • To define a distribution we first need to define its parameters; here the distribution is assumed to be normal, so its parameters are the mean and the standard deviation.
  • The Encoder takes the training data (x) as input and outputs the parameters of the probability distribution of the latent variable (z)
  • The Decoder takes a vector sampled from the latent-variable distribution p(z) as input and uses it to reconstruct the original data.

2. Encoder

  • Encoder์˜ ์—ญํ• ์€ ๋ฐ์ดํ„ฐ๊ฐ€ ์ฃผ์–ด์กŒ์„ ๋•Œ Decoder๊ฐ€ ์›๋ž˜์˜ ๋ฐ์ดํ„ฐ๋กœ ์ž˜ ๋ณต์›ํ•  ์ˆ˜ ์žˆ๋Š” z๋ฅผ ์ƒ˜ํ”Œ๋งํ•  ์ˆ˜ ์žˆ๋Š” ์ด์ƒ์ ์ธ p(z|x)๋ฅผ ์ฐพ๋Š” ๊ฒƒ
  • ๊ทธ๋Ÿฌ๋‚˜ ์ด์ƒ์ ์ธ p(z|x)๊ฐ€ ๋ฌด์—‡์ธ์ง€ ์•„๋ฌด๋„ ๋ชจ๋ฅธ๋‹ค
  • ์ด๋ฅผ ์•Œ๊ธฐ ์œ„ํ•œ ๋ฐฉ๋ฒ•์œผ๋กœ Variational inference๋ฅผ ์‚ฌ์šฉํ•œ๋‹ค

2-1 Variational inference

  • When we do not know the ideal probability distribution, we assume a tractable distribution, vary its parameters to make it approximate the ideal one, and use this approximation in its place
  • If we call this tractable distribution qφ, the Encoder varies the parameters φ to push the distribution qφ(z|x) toward the ideal distribution p(z|x)
    image
  • The Encoder's output is the parameters of the distribution qφ(z|x). Since we assumed a normal distribution above, those parameters are the mean and the variance.
  • Sampling directly from the mean and variance makes back propagation impossible, so the Reparameterization trick is used instead.

Why is back propagation impossible?

  • In the figure below, back propagation works through deterministic nodes but not through random nodes: back propagation traces a result backward through the equations that produced it, and a randomly drawn result cannot be traced back through any equation (my own interpretation)
    image
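This can be seen directly in PyTorch (an illustrative check, not from the original post): a value drawn with `.sample()` is cut off from the computation graph, while the reparameterized `.rsample()` keeps the gradient path to the distribution's parameters.

```python
import torch
from torch.distributions import Normal

mu = torch.zeros(3, requires_grad=True)
std = torch.ones(3)

z_sampled = Normal(mu, std).sample()    # plain sampling: the graph is broken
z_reparam = Normal(mu, std).rsample()   # reparameterized: z = mu + std * eps

print(z_sampled.requires_grad)  # False: backprop to mu is impossible
print(z_reparam.requires_grad)  # True: the gradient flows through mu
```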

2-2 Reparameterization trick

  • When we want to draw samples from a Gaussian normal distribution, it means sampling via the formula below
  • It preserves the stochastic properties while making back propagation possible
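Written out (a reconstruction of the formula the images below show), the trick isolates the randomness in an auxiliary noise variable so the gradient can flow through μ and σ:

```latex
z = \mu(x) + \sigma(x) \odot \epsilon, \qquad \epsilon \sim \mathcal{N}(0, I)
```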

image

image

import torch
import torch.nn as nn

# Fragment of an nn.Module; hidden_size and batch_size are assumed to be defined

def __init__(self):
    super().__init__()
    self.fc1_1 = nn.Linear(784, hidden_size)
    self.fc1_2 = nn.Linear(784, hidden_size)
    self.relu = nn.ReLU()

def encode(self, x):
    x = x.view(batch_size, -1)
    mu = self.relu(self.fc1_1(x))       # mean vector μ(x)
    log_var = self.relu(self.fc1_2(x))  # log variance log σ²(x)
    return mu, log_var

def reparametrize(self, mu, logvar):  # Reparameterization trick
    std = logvar.mul(0.5).exp_()      # σ(x) = exp(0.5 · log σ²(x))
    eps = torch.randn_like(std)       # ε ~ N(0, 1)
    return eps.mul(std).add_(mu)      # z = μ(x) + σ(x) × ε
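A standalone check (illustrative, mirroring the `reparametrize` logic above) that gradients really do flow back to the distribution parameters, because the randomness is isolated in `eps`:

```python
import torch

mu = torch.zeros(4, requires_grad=True)
logvar = torch.zeros(4, requires_grad=True)

std = logvar.mul(0.5).exp()   # sigma = exp(0.5 * log sigma^2)
eps = torch.randn_like(std)   # eps ~ N(0, 1)
z = eps.mul(std).add(mu)      # z = mu + sigma * eps

z.sum().backward()
print(mu.grad)  # all ones: d(sum z)/d(mu_i) = 1 for every component
```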

3. Decoder

  • The Decoder takes the drawn sample (z) as input and reconstructs the original data. It is identical to the Decoder of a standard AutoEncoder

4. Loss Function - Evidence Lower Bound (ELBO)

  • What we want to know is the evidence p(x).

4-1 Jensen's Inequality

image

image
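Reconstructing the derivation the images above illustrate: by Jensen's inequality (the logarithm is concave, so log E[X] ≥ E[log X]), the log-evidence is bounded from below by the ELBO:

```latex
\log p(x) = \log \mathbb{E}_{q_\phi(z \mid x)}\!\left[\frac{p(x \mid z)\,p(z)}{q_\phi(z \mid x)}\right]
\;\geq\; \mathbb{E}_{q_\phi(z \mid x)}\big[\log p(x \mid z)\big]
- D_{\mathrm{KL}}\big(q_\phi(z \mid x)\,\|\,p(z)\big) = \mathrm{ELBO}
```

Maximizing the ELBO therefore trades off reconstruction quality (the first term) against staying close to the prior (the KL term).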

import torch
import torch.nn as nn

# the Binary Cross Entropy between the target and the output
reconstruction_function = nn.BCELoss(reduction='sum')  # size_average=False is deprecated

def loss_function(recon_x, x, mu, logvar):
    BCE = reconstruction_function(recon_x, x)
    KLD_element = mu.pow(2).add_(logvar.exp()).mul_(-1).add_(1).add_(logvar)
    KLD = torch.sum(KLD_element).mul_(-0.5)
    return BCE + KLD

  • BCE compares the input x with the reconstructed output to obtain the reconstruction loss
  • KLD measures the divergence between the learned distribution of z and the normal prior.
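The KLD_element/KLD lines in the code are the closed-form KL divergence between the diagonal Gaussian q(z|x) = N(μ, σ²) and the standard normal prior:

```latex
D_{\mathrm{KL}}\big(\mathcal{N}(\mu, \sigma^{2}) \,\|\, \mathcal{N}(0, 1)\big)
= -\frac{1}{2} \sum_{j} \big(1 + \log \sigma_j^{2} - \mu_j^{2} - \sigma_j^{2}\big)
```

Term by term this matches the code: `mu.pow(2)` is μ², `logvar.exp()` is σ², `logvar` is log σ², and the final `.mul_(-0.5)` supplies the -1/2 factor.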

References
